| column | dtype | stats |
|---|---|---|
| problem_id | string | lengths 18–22 |
| source | string | 1 distinct value |
| task_type | string | 1 distinct value |
| in_source_id | string | lengths 13–58 |
| prompt | string | lengths 1.71k–18.9k |
| golden_diff | string | lengths 145–5.13k |
| verification_info | string | lengths 465–23.6k |
| num_tokens_prompt | int64 | 556–4.1k |
| num_tokens_diff | int64 | 47–1.02k |
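Records with this schema can be pulled down and inspected with the Hugging Face `datasets` library; the sketch below assumes the dataset id `rasdani/github-patches` (taken from the `source` column) and a `train` split, neither of which is confirmed by this preview.

```python
# Sketch only: the dataset id and split name are assumptions, not confirmed here.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")  # hypothetical id/split

row = ds[0]
print(row["problem_id"], row["in_source_id"])   # e.g. gh_patches_debug_13485, pre-commit__pre-commit-1814
print(row["prompt"][:300])        # issue text plus the partial code base
print(row["golden_diff"][:300])   # reference patch used for verification
```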
gh_patches_debug_13485 | rasdani/github-patches | git_diff | pre-commit__pre-commit-1814 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pre-commit install fails on Windows network mount drive
Hi, I'm trying to help some team members set up pre-commit on a Windows network mount drive, and they're encountering an issue with paths. Seems related to [this comment](https://github.com/pre-commit/pre-commit/issues/1610#issuecomment-719774326) in #1610, which seems to be a different issue than the original one, as we are on the most recent versions of pre-commit and git, and the fix that was merged in #1727 doesn't seem to address this issue. I also tried other versions of Git for Windows <2.25, and the issue still persisted.
I tested the solution that @christopherdoyle proposed, and that does seem to fix this issue with network mount drives, though I saw that @asottile would prefer not to use `pathlib`. I am not able to propose a fix right now, but I wanted to raise this as an issue that still exists.
Full error below:
```
### version information
```
pre-commit version: 2.10.1
sys.version:
3.9.1 | packaged by conda-forge | (default, Jan 26 2021, 01:29:07) [MSC v.1916 64 bit (AMD64)]
sys.executable: C:\Users\roderick\.conda\envs\nmt\python.exe
os.name: nt
sys.platform: win32
```
### error information
```
An unexpected error has occurred: ValueError: path is on mount 'S:', start on mount '\\\\MyServer\Directory'
```
```
Traceback (most recent call last):
File "C:\Users\roderick\.conda\envs\nmt\lib\site-packages\pre_commit\error_handler.py", line 65, in error_handler
yield
File "C:\Users\roderick\.conda\envs\nmt\lib\site-packages\pre_commit\main.py", line 333, in main
_adjust_args_and_chdir(args)
File "C:\Users\roderick\.conda\envs\nmt\lib\site-packages\pre_commit\main.py", line 153, in _adjust_args_and_chdir
args.config = os.path.relpath(args.config)
File "C:\Users\roderick\.conda\envs\nmt\lib\ntpath.py", line 703, in relpath
raise ValueError("path is on mount %r, start on mount %r" % (
ValueError: path is on mount 'S:', start on mount '\\\\MyServer\Directory'
```
```
</issue>
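The `ValueError` in the traceback comes from `ntpath.relpath`, which refuses to relate two paths that sit on different mounts; below is a minimal, self-contained sketch that reproduces just that error (the paths are invented placeholders for the drive-letter and UNC views of one share).

```python
# Minimal reproduction of the relpath failure described above; the concrete
# paths are invented placeholders, not taken from the report.
import ntpath

config = r"S:\repo\.pre-commit-config.yaml"  # path still expressed via the mapped drive
start = r"\\MyServer\Directory\repo"         # UNC form a mapped drive can resolve to

try:
    print(ntpath.relpath(config, start=start))
except ValueError as exc:
    print(exc)  # path is on mount 'S:', start on mount '\\\\MyServer\\Directory'
```

The reference diff for this record swaps `os.path.realpath` for `os.path.abspath` in `get_root()`, which avoids resolving the mapped drive to its UNC form and so sidesteps this mount mismatch.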
<code>
[start of pre_commit/git.py]
1 import logging
2 import os.path
3 import sys
4 from typing import Dict
5 from typing import List
6 from typing import MutableMapping
7 from typing import Optional
8 from typing import Set
9
10 from pre_commit.errors import FatalError
11 from pre_commit.util import CalledProcessError
12 from pre_commit.util import cmd_output
13 from pre_commit.util import cmd_output_b
14
15
16 logger = logging.getLogger(__name__)
17
18
19 def zsplit(s: str) -> List[str]:
20 s = s.strip('\0')
21 if s:
22 return s.split('\0')
23 else:
24 return []
25
26
27 def no_git_env(
28 _env: Optional[MutableMapping[str, str]] = None,
29 ) -> Dict[str, str]:
30 # Too many bugs dealing with environment variables and GIT:
31 # https://github.com/pre-commit/pre-commit/issues/300
32 # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running
33 # pre-commit hooks
34 # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE
35 # while running pre-commit hooks in submodules.
36 # GIT_DIR: Causes git clone to clone wrong thing
37 # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit
38 _env = _env if _env is not None else os.environ
39 return {
40 k: v for k, v in _env.items()
41 if not k.startswith('GIT_') or
42 k in {
43 'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO',
44 'GIT_SSL_NO_VERIFY',
45 }
46 }
47
48
49 def get_root() -> str:
50 # Git 2.25 introduced a change to "rev-parse --show-toplevel" that exposed
51 # underlying volumes for Windows drives mapped with SUBST. We use
52 # "rev-parse --show-cdup" to get the appropriate path, but must perform
53 # an extra check to see if we are in the .git directory.
54 try:
55 root = os.path.realpath(
56 cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),
57 )
58 git_dir = os.path.realpath(get_git_dir())
59 except CalledProcessError:
60 raise FatalError(
61 'git failed. Is it installed, and are you in a Git repository '
62 'directory?',
63 )
64 if os.path.samefile(root, git_dir):
65 raise FatalError(
66 'git toplevel unexpectedly empty! make sure you are not '
67 'inside the `.git` directory of your repository.',
68 )
69 return root
70
71
72 def get_git_dir(git_root: str = '.') -> str:
73 opts = ('--git-common-dir', '--git-dir')
74 _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)
75 for line, opt in zip(out.splitlines(), opts):
76 if line != opt: # pragma: no branch (git < 2.5)
77 return os.path.normpath(os.path.join(git_root, line))
78 else:
79 raise AssertionError('unreachable: no git dir')
80
81
82 def get_remote_url(git_root: str) -> str:
83 _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)
84 return out.strip()
85
86
87 def is_in_merge_conflict() -> bool:
88 git_dir = get_git_dir('.')
89 return (
90 os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and
91 os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))
92 )
93
94
95 def parse_merge_msg_for_conflicts(merge_msg: bytes) -> List[str]:
96 # Conflicted files start with tabs
97 return [
98 line.lstrip(b'#').strip().decode()
99 for line in merge_msg.splitlines()
100 # '#\t' for git 2.4.1
101 if line.startswith((b'\t', b'#\t'))
102 ]
103
104
105 def get_conflicted_files() -> Set[str]:
106 logger.info('Checking merge-conflict files only.')
107 # Need to get the conflicted files from the MERGE_MSG because they could
108 # have resolved the conflict by choosing one side or the other
109 with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:
110 merge_msg = f.read()
111 merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)
112
113 # This will get the rest of the changes made after the merge.
114 # If they resolved the merge conflict by choosing a mesh of both sides
115 # this will also include the conflicted files
116 tree_hash = cmd_output('git', 'write-tree')[1].strip()
117 merge_diff_filenames = zsplit(
118 cmd_output(
119 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
120 '-m', tree_hash, 'HEAD', 'MERGE_HEAD',
121 )[1],
122 )
123 return set(merge_conflict_filenames) | set(merge_diff_filenames)
124
125
126 def get_staged_files(cwd: Optional[str] = None) -> List[str]:
127 return zsplit(
128 cmd_output(
129 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',
130 # Everything except for D
131 '--diff-filter=ACMRTUXB',
132 cwd=cwd,
133 )[1],
134 )
135
136
137 def intent_to_add_files() -> List[str]:
138 _, stdout, _ = cmd_output(
139 'git', 'status', '--ignore-submodules', '--porcelain', '-z',
140 )
141 parts = list(reversed(zsplit(stdout)))
142 intent_to_add = []
143 while parts:
144 line = parts.pop()
145 status, filename = line[:3], line[3:]
146 if status[0] in {'C', 'R'}: # renames / moves have an additional arg
147 parts.pop()
148 if status[1] == 'A':
149 intent_to_add.append(filename)
150 return intent_to_add
151
152
153 def get_all_files() -> List[str]:
154 return zsplit(cmd_output('git', 'ls-files', '-z')[1])
155
156
157 def get_changed_files(old: str, new: str) -> List[str]:
158 return zsplit(
159 cmd_output(
160 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
161 f'{old}...{new}',
162 )[1],
163 )
164
165
166 def head_rev(remote: str) -> str:
167 _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')
168 return out.split()[0]
169
170
171 def has_diff(*args: str, repo: str = '.') -> bool:
172 cmd = ('git', 'diff', '--quiet', '--no-ext-diff', *args)
173 return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1
174
175
176 def has_core_hookpaths_set() -> bool:
177 _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)
178 return bool(out.strip())
179
180
181 def init_repo(path: str, remote: str) -> None:
182 if os.path.isdir(remote):
183 remote = os.path.abspath(remote)
184
185 env = no_git_env()
186 # avoid the user's template so that hooks do not recurse
187 cmd_output_b('git', 'init', '--template=', path, env=env)
188 cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)
189
190
191 def commit(repo: str = '.') -> None:
192 env = no_git_env()
193 name, email = 'pre-commit', '[email protected]'
194 env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name
195 env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email
196 cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')
197 cmd_output_b(*cmd, cwd=repo, env=env)
198
199
200 def git_path(name: str, repo: str = '.') -> str:
201 _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)
202 return os.path.join(repo, out.strip())
203
204
205 def check_for_cygwin_mismatch() -> None:
206 """See https://github.com/pre-commit/pre-commit/issues/354"""
207 if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)
208 is_cygwin_python = sys.platform == 'cygwin'
209 try:
210 toplevel = get_root()
211 except FatalError: # skip the check if we're not in a git repo
212 return
213 is_cygwin_git = toplevel.startswith('/')
214
215 if is_cygwin_python ^ is_cygwin_git:
216 exe_type = {True: '(cygwin)', False: '(windows)'}
217 logger.warn(
218 f'pre-commit has detected a mix of cygwin python / git\n'
219 f'This combination is not supported, it is likely you will '
220 f'receive an error later in the program.\n'
221 f'Make sure to use cygwin git+python while using cygwin\n'
222 f'These can be installed through the cygwin installer.\n'
223 f' - python {exe_type[is_cygwin_python]}\n'
224 f' - git {exe_type[is_cygwin_git]}\n',
225 )
226
[end of pre_commit/git.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -52,10 +52,10 @@
# "rev-parse --show-cdup" to get the appropriate path, but must perform
# an extra check to see if we are in the .git directory.
try:
- root = os.path.realpath(
+ root = os.path.abspath(
cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),
)
- git_dir = os.path.realpath(get_git_dir())
+ git_dir = os.path.abspath(get_git_dir())
except CalledProcessError:
raise FatalError(
'git failed. Is it installed, and are you in a Git repository '
| {"golden_diff": "diff --git a/pre_commit/git.py b/pre_commit/git.py\n--- a/pre_commit/git.py\n+++ b/pre_commit/git.py\n@@ -52,10 +52,10 @@\n # \"rev-parse --show-cdup\" to get the appropriate path, but must perform\n # an extra check to see if we are in the .git directory.\n try:\n- root = os.path.realpath(\n+ root = os.path.abspath(\n cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),\n )\n- git_dir = os.path.realpath(get_git_dir())\n+ git_dir = os.path.abspath(get_git_dir())\n except CalledProcessError:\n raise FatalError(\n 'git failed. Is it installed, and are you in a Git repository '\n", "issue": "Pre-commit install fails on Windows network mount drive\nHi, I'm trying to help some team members set up pre-commit on a Windows network mount drive, and they're encountering an issue with paths. Seems related to [this comment](https://github.com/pre-commit/pre-commit/issues/1610#issuecomment-719774326) in #1610, which seems to be a different issue than the one the original issue, as we are on the most recent version of pre-commit and git, and the fix that was merged in #1727 for that doesn't seem to address this issue. I also tried other versions of Git for Windows <2.25, and the issue still seemed to persist.\r\n\r\nI tested the solution that @christopherdoyle proposed, and that does seems to fix this issue with network mount drives, though I saw that @asottile would prefer not to use `pathlib`. I am not able to propose a fix right now, but I wanted to raise this as an issue that still exists.\r\n\r\nFull error below:\r\n\r\n```\r\n ### version information\r\n \r\n ```\r\n pre-commit version: 2.10.1\r\n sys.version:\r\n 3.9.1 | packaged by conda-forge | (default, Jan 26 2021, 01:29:07) [MSC v.1916 64 bit (AMD64)]\r\n sys.executable: C:\\Users\\roderick\\.conda\\envs\\nmt\\python.exe\r\n os.name: nt\r\n sys.platform: win32\r\n ```\r\n \r\n ### error information\r\n \r\n ```\r\n An unexpected error has occurred: ValueError: path is on mount 'S:', start on mount '\\\\\\\\MyServer\\Directory'\r\n ```\r\n \r\n ```\r\n Traceback (most recent call last):\r\n File \"C:\\Users\\roderick\\.conda\\envs\\nmt\\lib\\site-packages\\pre_commit\\error_handler.py\", line 65, in error_handler\r\n yield\r\n File \"C:\\Users\\roderick\\.conda\\envs\\nmt\\lib\\site-packages\\pre_commit\\main.py\", line 333, in main\r\n _adjust_args_and_chdir(args)\r\n File \"C:\\Users\\roderick\\.conda\\envs\\nmt\\lib\\site-packages\\pre_commit\\main.py\", line 153, in _adjust_args_and_chdir\r\n args.config = os.path.relpath(args.config)\r\n File \"C:\\Users\\roderick\\.conda\\envs\\nmt\\lib\\ntpath.py\", line 703, in relpath\r\n raise ValueError(\"path is on mount %r, start on mount %r\" % (\r\n ValueError: path is on mount 'S:', start on mount '\\\\\\\\MyServer\\Directory'\r\n```\r\n\r\n```\n", "before_files": [{"content": "import logging\nimport os.path\nimport sys\nfrom typing import Dict\nfrom typing import List\nfrom typing import MutableMapping\nfrom typing import Optional\nfrom typing import Set\n\nfrom pre_commit.errors import FatalError\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef zsplit(s: str) -> List[str]:\n s = s.strip('\\0')\n if s:\n return s.split('\\0')\n else:\n return []\n\n\ndef no_git_env(\n _env: Optional[MutableMapping[str, str]] = None,\n) -> Dict[str, str]:\n # Too many bugs dealing with environment variables and GIT:\n # 
https://github.com/pre-commit/pre-commit/issues/300\n # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running\n # pre-commit hooks\n # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE\n # while running pre-commit hooks in submodules.\n # GIT_DIR: Causes git clone to clone wrong thing\n # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit\n _env = _env if _env is not None else os.environ\n return {\n k: v for k, v in _env.items()\n if not k.startswith('GIT_') or\n k in {\n 'GIT_EXEC_PATH', 'GIT_SSH', 'GIT_SSH_COMMAND', 'GIT_SSL_CAINFO',\n 'GIT_SSL_NO_VERIFY',\n }\n }\n\n\ndef get_root() -> str:\n # Git 2.25 introduced a change to \"rev-parse --show-toplevel\" that exposed\n # underlying volumes for Windows drives mapped with SUBST. We use\n # \"rev-parse --show-cdup\" to get the appropriate path, but must perform\n # an extra check to see if we are in the .git directory.\n try:\n root = os.path.realpath(\n cmd_output('git', 'rev-parse', '--show-cdup')[1].strip(),\n )\n git_dir = os.path.realpath(get_git_dir())\n except CalledProcessError:\n raise FatalError(\n 'git failed. Is it installed, and are you in a Git repository '\n 'directory?',\n )\n if os.path.samefile(root, git_dir):\n raise FatalError(\n 'git toplevel unexpectedly empty! make sure you are not '\n 'inside the `.git` directory of your repository.',\n )\n return root\n\n\ndef get_git_dir(git_root: str = '.') -> str:\n opts = ('--git-common-dir', '--git-dir')\n _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)\n for line, opt in zip(out.splitlines(), opts):\n if line != opt: # pragma: no branch (git < 2.5)\n return os.path.normpath(os.path.join(git_root, line))\n else:\n raise AssertionError('unreachable: no git dir')\n\n\ndef get_remote_url(git_root: str) -> str:\n _, out, _ = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)\n return out.strip()\n\n\ndef is_in_merge_conflict() -> bool:\n git_dir = get_git_dir('.')\n return (\n os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and\n os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))\n )\n\n\ndef parse_merge_msg_for_conflicts(merge_msg: bytes) -> List[str]:\n # Conflicted files start with tabs\n return [\n line.lstrip(b'#').strip().decode()\n for line in merge_msg.splitlines()\n # '#\\t' for git 2.4.1\n if line.startswith((b'\\t', b'#\\t'))\n ]\n\n\ndef get_conflicted_files() -> Set[str]:\n logger.info('Checking merge-conflict files only.')\n # Need to get the conflicted files from the MERGE_MSG because they could\n # have resolved the conflict by choosing one side or the other\n with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:\n merge_msg = f.read()\n merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)\n\n # This will get the rest of the changes made after the merge.\n # If they resolved the merge conflict by choosing a mesh of both sides\n # this will also include the conflicted files\n tree_hash = cmd_output('git', 'write-tree')[1].strip()\n merge_diff_filenames = zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '-m', tree_hash, 'HEAD', 'MERGE_HEAD',\n )[1],\n )\n return set(merge_conflict_filenames) | set(merge_diff_filenames)\n\n\ndef get_staged_files(cwd: Optional[str] = None) -> List[str]:\n return zsplit(\n cmd_output(\n 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',\n # Everything except for D\n '--diff-filter=ACMRTUXB',\n cwd=cwd,\n )[1],\n )\n\n\ndef intent_to_add_files() -> List[str]:\n _, stdout, _ = 
cmd_output(\n 'git', 'status', '--ignore-submodules', '--porcelain', '-z',\n )\n parts = list(reversed(zsplit(stdout)))\n intent_to_add = []\n while parts:\n line = parts.pop()\n status, filename = line[:3], line[3:]\n if status[0] in {'C', 'R'}: # renames / moves have an additional arg\n parts.pop()\n if status[1] == 'A':\n intent_to_add.append(filename)\n return intent_to_add\n\n\ndef get_all_files() -> List[str]:\n return zsplit(cmd_output('git', 'ls-files', '-z')[1])\n\n\ndef get_changed_files(old: str, new: str) -> List[str]:\n return zsplit(\n cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n f'{old}...{new}',\n )[1],\n )\n\n\ndef head_rev(remote: str) -> str:\n _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')\n return out.split()[0]\n\n\ndef has_diff(*args: str, repo: str = '.') -> bool:\n cmd = ('git', 'diff', '--quiet', '--no-ext-diff', *args)\n return cmd_output_b(*cmd, cwd=repo, retcode=None)[0] == 1\n\n\ndef has_core_hookpaths_set() -> bool:\n _, out, _ = cmd_output_b('git', 'config', 'core.hooksPath', retcode=None)\n return bool(out.strip())\n\n\ndef init_repo(path: str, remote: str) -> None:\n if os.path.isdir(remote):\n remote = os.path.abspath(remote)\n\n env = no_git_env()\n # avoid the user's template so that hooks do not recurse\n cmd_output_b('git', 'init', '--template=', path, env=env)\n cmd_output_b('git', 'remote', 'add', 'origin', remote, cwd=path, env=env)\n\n\ndef commit(repo: str = '.') -> None:\n env = no_git_env()\n name, email = 'pre-commit', '[email protected]'\n env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name\n env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email\n cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')\n cmd_output_b(*cmd, cwd=repo, env=env)\n\n\ndef git_path(name: str, repo: str = '.') -> str:\n _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)\n return os.path.join(repo, out.strip())\n\n\ndef check_for_cygwin_mismatch() -> None:\n \"\"\"See https://github.com/pre-commit/pre-commit/issues/354\"\"\"\n if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)\n is_cygwin_python = sys.platform == 'cygwin'\n try:\n toplevel = get_root()\n except FatalError: # skip the check if we're not in a git repo\n return\n is_cygwin_git = toplevel.startswith('/')\n\n if is_cygwin_python ^ is_cygwin_git:\n exe_type = {True: '(cygwin)', False: '(windows)'}\n logger.warn(\n f'pre-commit has detected a mix of cygwin python / git\\n'\n f'This combination is not supported, it is likely you will '\n f'receive an error later in the program.\\n'\n f'Make sure to use cygwin git+python while using cygwin\\n'\n f'These can be installed through the cygwin installer.\\n'\n f' - python {exe_type[is_cygwin_python]}\\n'\n f' - git {exe_type[is_cygwin_git]}\\n',\n )\n", "path": "pre_commit/git.py"}]} | 3,795 | 170 |
gh_patches_debug_23270 | rasdani/github-patches | git_diff | spyder-ide__spyder-11838 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Activation of kernel env fails when using anaconda shortcut (FileNotFound error kernel start 4.1.0)
<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->
## Problem Description
Spyder 4.1.0 shows an IPython kernel startup error.
Closing Spyder and restarting does not help; the problem persists.
### What steps reproduce the problem?
After upgrading Spyder from 4.0.1 to 4.1.0, an IPython kernel startup error occurs when starting Spyder.
The following command was used to upgrade in a virtual environment:
conda install spyder=4.1.0
```python-traceback
An error occurred while starting the IPython kernel
Error:
Traceback (most recent call last):
File "C:\Users\Admin\Anaconda3\envs\python37\lib\site‑packages\spyder\plugins\ipythonconsole\plugin.py", line 1209, in create_kernel_manager_and_kernel_client
kernel_manager.start_kernel(stderr=stderr_handle, **kwargs)
File "C:\Users\Admin\Anaconda3\envs\python37\lib\site‑packages\jupyter_client\manager.py", line 259, in start_kernel
**kw)
File "C:\Users\Admin\Anaconda3\envs\python37\lib\site‑packages\jupyter_client\manager.py", line 204, in _launch_kernel
return launch_kernel(kernel_cmd, **kw)
File "C:\Users\Admin\Anaconda3\envs\python37\lib\site‑packages\jupyter_client\launcher.py", line 138, in launch_kernel
proc = Popen(cmd, **kwargs)
File "C:\Users\Admin\Anaconda3\envs\python37\lib\subprocess.py", line 800, in __init__
restore_signals, start_new_session)
File "C:\Users\Admin\Anaconda3\envs\python37\lib\subprocess.py", line 1207, in _execute_child
startupinfo)
FileNotFoundError: [WinError 2] The system cannot find the file specified.
```
## Versions
<!--- You can get this information from Help > About Spyder...
or (if Spyder won't launch) the "conda list" command
from the Anaconda Prompt/Terminal/command line. --->
* Spyder version: 4.1.0
* Python version: Python3.7.5
* Qt version: 5.9.6
* PyQt version: 5.9.2
* Operating System name/version: win10
</issue>
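The reference patch at the end of this record extends `EXTLIST` in `setup.py`, which suggests the missing file is a helper script (for example a `.bat`) that `get_package_data()` filtered out of the installed package; below is a minimal sketch of that extension filter, with invented file names.

```python
# Sketch of the extension whitelist used by get_package_data() below;
# EXTLIST here is shortened and the file names are invented for illustration.
import os.path as osp

EXTLIST = ['.svg', '.png', '.ini', '.txt']   # note: no '.bat' or '.sh'
files = [
    'spyder/images/icon.png',                              # hypothetical path
    'spyder/plugins/ipythonconsole/scripts/activate.bat',  # hypothetical path
]

shipped = [f for f in files if osp.splitext(f)[1] in EXTLIST]
print(shipped)  # the .bat helper is dropped, so launching it later fails with FileNotFoundError
```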
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """
8 Spyder
9 ======
10
11 The Scientific Python Development Environment
12
13 Spyder is a powerful scientific environment written in Python, for Python,
14 and designed by and for scientists, engineers and data analysts.
15
16 It features a unique combination of the advanced editing, analysis, debugging
17 and profiling functionality of a comprehensive development tool with the data
18 exploration, interactive execution, deep inspection and beautiful visualization
19 capabilities of a scientific package.
20 """
21
22 from __future__ import print_function
23
24 import io
25 import os
26 import os.path as osp
27 import subprocess
28 import sys
29 import shutil
30
31 from distutils.core import setup
32 from distutils.command.install_data import install_data
33
34
35 #==============================================================================
36 # Check for Python 3
37 #==============================================================================
38 PY3 = sys.version_info[0] == 3
39
40
41 #==============================================================================
42 # Minimal Python version sanity check
43 # Taken from the notebook setup.py -- Modified BSD License
44 #==============================================================================
45 v = sys.version_info
46 if v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 5)):
47 error = "ERROR: Spyder requires Python version 2.7 or 3.5 and above."
48 print(error, file=sys.stderr)
49 sys.exit(1)
50
51
52 #==============================================================================
53 # Constants
54 #==============================================================================
55 NAME = 'spyder'
56 LIBNAME = 'spyder'
57 from spyder import __version__, __website_url__ #analysis:ignore
58
59
60 #==============================================================================
61 # Auxiliary functions
62 #==============================================================================
63 def get_package_data(name, extlist):
64 """Return data files for package *name* with extensions in *extlist*"""
65 flist = []
66 # Workaround to replace os.path.relpath (not available until Python 2.6):
67 offset = len(name)+len(os.pathsep)
68 for dirpath, _dirnames, filenames in os.walk(name):
69 if 'tests' not in dirpath:
70 for fname in filenames:
71 if (not fname.startswith('.') and
72 osp.splitext(fname)[1] in extlist):
73 flist.append(osp.join(dirpath, fname)[offset:])
74 return flist
75
76
77 def get_subpackages(name):
78 """Return subpackages of package *name*"""
79 splist = []
80 for dirpath, _dirnames, _filenames in os.walk(name):
81 if 'tests' not in dirpath:
82 if osp.isfile(osp.join(dirpath, '__init__.py')):
83 splist.append(".".join(dirpath.split(os.sep)))
84 return splist
85
86
87 def get_data_files():
88 """Return data_files in a platform dependent manner"""
89 if sys.platform.startswith('linux'):
90 if PY3:
91 data_files = [('share/applications', ['scripts/spyder3.desktop']),
92 ('share/icons', ['img_src/spyder3.png']),
93 ('share/metainfo', ['scripts/spyder3.appdata.xml'])]
94 else:
95 data_files = [('share/applications', ['scripts/spyder.desktop']),
96 ('share/icons', ['img_src/spyder.png'])]
97 elif os.name == 'nt':
98 data_files = [('scripts', ['img_src/spyder.ico',
99 'img_src/spyder_reset.ico'])]
100 else:
101 data_files = []
102 return data_files
103
104
105 def get_packages():
106 """Return package list"""
107 packages = get_subpackages(LIBNAME)
108 return packages
109
110
111 #==============================================================================
112 # Make Linux detect Spyder desktop file
113 #==============================================================================
114 class MyInstallData(install_data):
115 def run(self):
116 install_data.run(self)
117 if sys.platform.startswith('linux'):
118 try:
119 subprocess.call(['update-desktop-database'])
120 except:
121 print("ERROR: unable to update desktop database",
122 file=sys.stderr)
123 CMDCLASS = {'install_data': MyInstallData}
124
125
126 #==============================================================================
127 # Main scripts
128 #==============================================================================
129 # NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows
130 # platforms due to a bug in pip installation process
131 # See spyder-ide/spyder#1158.
132 SCRIPTS = ['%s_win_post_install.py' % NAME]
133 if PY3 and sys.platform.startswith('linux'):
134 SCRIPTS.append('spyder3')
135 else:
136 SCRIPTS.append('spyder')
137
138
139 #==============================================================================
140 # Files added to the package
141 #==============================================================================
142 EXTLIST = ['.pot', '.po', '.mo', '.svg', '.png', '.css', '.html', '.js',
143 '.ini', '.txt', '.qss', '.ttf', '.json', '.rst', '.bloom']
144 if os.name == 'nt':
145 SCRIPTS += ['spyder.bat']
146 EXTLIST += ['.ico']
147
148
149 #==============================================================================
150 # Use Readme for long description
151 #==============================================================================
152 with io.open('README.md', encoding='utf-8') as f:
153 LONG_DESCRIPTION = f.read()
154
155
156 #==============================================================================
157 # Setup arguments
158 #==============================================================================
159 setup_args = dict(
160 name=NAME,
161 version=__version__,
162 description='The Scientific Python Development Environment',
163 long_description=LONG_DESCRIPTION,
164 long_description_content_type='text/markdown',
165 download_url=__website_url__ + "#fh5co-download",
166 author="The Spyder Project Contributors",
167 author_email="[email protected]",
168 url=__website_url__,
169 license='MIT',
170 keywords='PyQt5 editor console widgets IDE science data analysis IPython',
171 platforms=["Windows", "Linux", "Mac OS-X"],
172 packages=get_packages(),
173 package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST)},
174 scripts=[osp.join('scripts', fname) for fname in SCRIPTS],
175 data_files=get_data_files(),
176 classifiers=['License :: OSI Approved :: MIT License',
177 'Operating System :: MacOS',
178 'Operating System :: Microsoft :: Windows',
179 'Operating System :: POSIX :: Linux',
180 'Programming Language :: Python :: 2',
181 'Programming Language :: Python :: 2.7',
182 'Programming Language :: Python :: 3',
183 'Programming Language :: Python :: 3.4',
184 'Programming Language :: Python :: 3.5',
185 'Programming Language :: Python :: 3.6',
186 'Programming Language :: Python :: 3.7',
187 'Development Status :: 5 - Production/Stable',
188 'Intended Audience :: Education',
189 'Intended Audience :: Science/Research',
190 'Intended Audience :: Developers',
191 'Topic :: Scientific/Engineering',
192 'Topic :: Software Development :: Widget Sets'],
193 cmdclass=CMDCLASS)
194
195
196 #==============================================================================
197 # Setuptools deps
198 #==============================================================================
199 if any(arg == 'bdist_wheel' for arg in sys.argv):
200 import setuptools # analysis:ignore
201
202 install_requires = [
203 'applaunchservices>=0.1.7;platform_system=="Darwin"',
204 'atomicwrites>=1.2.0',
205 'chardet>=2.0.0',
206 'cloudpickle>=0.5.0',
207 'diff-match-patch>=20181111',
208 'intervaltree',
209 'ipython>=4.0',
210 # This is here until Jedi 0.15+ fixes completions for
211 # Numpy and Pandas
212 'jedi==0.15.2',
213 # Don't require keyring for Python 2 and Linux
214 # because it depends on system packages
215 'keyring;sys_platform!="linux2"',
216 'nbconvert>=4.0',
217 'numpydoc>=0.6.0',
218 # Required to get SSH connections to remote kernels
219 'paramiko>=2.4.0;platform_system=="Windows"',
220 'parso==0.5.2',
221 'pexpect>=4.4.0',
222 'pickleshare>=0.4',
223 'psutil>=5.3',
224 'pygments>=2.0',
225 'pylint>=0.25',
226 'pyqt5<5.13;python_version>="3"',
227 'pyqtwebengine<5.13;python_version>="3"',
228 'python-language-server[all]>=0.31.2,<0.32.0',
229 'pyxdg>=0.26;platform_system=="Linux"',
230 'pyzmq>=17',
231 'qdarkstyle>=2.8',
232 'qtawesome>=0.5.7',
233 'qtconsole>=4.6.0',
234 'qtpy>=1.5.0',
235 'sphinx>=0.6.6',
236 'spyder-kernels>=1.9.0,<1.10.0',
237 'watchdog',
238 ]
239
240 extras_require = {
241 'test:python_version == "2.7"': ['mock'],
242 'test:platform_system == "Linux"': ['pytest-xvfb'],
243 'test:platform_system == "Windows"': ['pywin32'],
244 'test': [
245 'coverage<5.0',
246 'cython',
247 'flaky',
248 'matplotlib',
249 'mock',
250 'pandas',
251 'pillow',
252 'pytest<5.0',
253 'pytest-cov',
254 'pytest-faulthandler<2.0',
255 'pytest-lazy-fixture',
256 'pytest-mock',
257 'pytest-ordering',
258 'pytest-qt',
259 'pyyaml',
260 'scipy',
261 'sympy',
262 ],
263 }
264
265 if 'setuptools' in sys.modules:
266 setup_args['install_requires'] = install_requires
267 setup_args['extras_require'] = extras_require
268
269 setup_args['entry_points'] = {
270 'gui_scripts': [
271 '{} = spyder.app.start:main'.format(
272 'spyder3' if PY3 else 'spyder')
273 ]
274 }
275
276 setup_args.pop('scripts', None)
277
278
279 #==============================================================================
280 # Main setup
281 #==============================================================================
282 setup(**setup_args)
283
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -130,20 +130,21 @@
# platforms due to a bug in pip installation process
# See spyder-ide/spyder#1158.
SCRIPTS = ['%s_win_post_install.py' % NAME]
+
if PY3 and sys.platform.startswith('linux'):
SCRIPTS.append('spyder3')
else:
SCRIPTS.append('spyder')
+if os.name == 'nt':
+ SCRIPTS += ['spyder.bat']
#==============================================================================
# Files added to the package
#==============================================================================
EXTLIST = ['.pot', '.po', '.mo', '.svg', '.png', '.css', '.html', '.js',
- '.ini', '.txt', '.qss', '.ttf', '.json', '.rst', '.bloom']
-if os.name == 'nt':
- SCRIPTS += ['spyder.bat']
- EXTLIST += ['.ico']
+ '.ini', '.txt', '.qss', '.ttf', '.json', '.rst', '.bloom',
+ '.ico', '.gif', '.mp3', '.ogg', '.sfd', '.bat', '.sh']
#==============================================================================
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -130,20 +130,21 @@\n # platforms due to a bug in pip installation process\n # See spyder-ide/spyder#1158.\n SCRIPTS = ['%s_win_post_install.py' % NAME]\n+\n if PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\n else:\n SCRIPTS.append('spyder')\n \n+if os.name == 'nt':\n+ SCRIPTS += ['spyder.bat']\n \n #==============================================================================\n # Files added to the package\n #==============================================================================\n EXTLIST = ['.pot', '.po', '.mo', '.svg', '.png', '.css', '.html', '.js',\n- '.ini', '.txt', '.qss', '.ttf', '.json', '.rst', '.bloom']\n-if os.name == 'nt':\n- SCRIPTS += ['spyder.bat']\n- EXTLIST += ['.ico']\n+ '.ini', '.txt', '.qss', '.ttf', '.json', '.rst', '.bloom',\n+ '.ico', '.gif', '.mp3', '.ogg', '.sfd', '.bat', '.sh']\n \n \n #==============================================================================\n", "issue": "Activation of kernel env fails when using anaconda shortcut (FileNotFound error kernel start 4.1.0)\n<!--- **PLEASE READ:** When submitting here, please ensure you've completed the following checklist and checked the boxes to confirm. Issue reports without it may be closed. Thanks! --->\r\n## Problem Description\r\n\r\nSpyder4.1.0 IPython kernel startup error\r\nClose spyder and restart, the problem persists.\r\n### What steps reproduce the problem?\r\n\r\nAfter upgrading spyder4.0.1 to 4.1.0, IPython kernel startup error occurs when starting spyder.\r\nThe following command is used to upgrade in a virtual environment:\r\nconda install spyder=4.1.0\r\n```python-traceback\r\n\u542f\u52a8 IPython \u5185\u6838\u65f6\u53d1\u751f\u9519\u8bef\uff08An error occurred while starting the IPython kernel\uff09\r\n\u9519\u8bef\uff1a\uff08Error\uff09\r\nTraceback (most recent call last):\r\nFile \"C:\\Users\\Admin\\Anaconda3\\envs\\python37\\lib\\site\u2011packages\\spyder\\plugins\\ipythonconsole\\plugin.py\", line 1209, in create_kernel_manager_and_kernel_client\r\nkernel_manager.start_kernel(stderr=stderr_handle, **kwargs)\r\nFile \"C:\\Users\\Admin\\Anaconda3\\envs\\python37\\lib\\site\u2011packages\\jupyter_client\\manager.py\", line 259, in start_kernel\r\n**kw)\r\nFile \"C:\\Users\\Admin\\Anaconda3\\envs\\python37\\lib\\site\u2011packages\\jupyter_client\\manager.py\", line 204, in _launch_kernel\r\nreturn launch_kernel(kernel_cmd, **kw)\r\nFile \"C:\\Users\\Admin\\Anaconda3\\envs\\python37\\lib\\site\u2011packages\\jupyter_client\\launcher.py\", line 138, in launch_kernel\r\nproc = Popen(cmd, **kwargs)\r\nFile \"C:\\Users\\Admin\\Anaconda3\\envs\\python37\\lib\\subprocess.py\", line 800, in __init__\r\nrestore_signals, start_new_session)\r\nFile \"C:\\Users\\Admin\\Anaconda3\\envs\\python37\\lib\\subprocess.py\", line 1207, in _execute_child\r\nstartupinfo)\r\nFileNotFoundError: [WinError 2] \u7cfb\u7edf\u627e\u4e0d\u5230\u6307\u5b9a\u7684\u6587\u4ef6\uff08The system can not find the file specified\uff09\u3002\r\n```\r\n\r\n## Versions\r\n<!--- You can get this information from Help > About Spyder...\r\nor (if Spyder won't launch) the \"conda list\" command\r\nfrom the Anaconda Prompt/Terminal/command line. 
--->\r\n\r\n* Spyder version: 4.1.0\r\n* Python version: Python3.7.5\r\n* Qt version: 5.9.6\r\n* PyQt version: 5.9.2\r\n* Operating System name/version: win10\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright \u00a9 Spyder Project Contributors\n# Licensed under the terms of the MIT License\n# (see spyder/__init__.py for details)\n\n\"\"\"\nSpyder\n======\n\nThe Scientific Python Development Environment\n\nSpyder is a powerful scientific environment written in Python, for Python,\nand designed by and for scientists, engineers and data analysts.\n\nIt features a unique combination of the advanced editing, analysis, debugging\nand profiling functionality of a comprehensive development tool with the data\nexploration, interactive execution, deep inspection and beautiful visualization\ncapabilities of a scientific package.\n\"\"\"\n\nfrom __future__ import print_function\n\nimport io\nimport os\nimport os.path as osp\nimport subprocess\nimport sys\nimport shutil\n\nfrom distutils.core import setup\nfrom distutils.command.install_data import install_data\n\n\n#==============================================================================\n# Check for Python 3\n#==============================================================================\nPY3 = sys.version_info[0] == 3\n\n\n#==============================================================================\n# Minimal Python version sanity check\n# Taken from the notebook setup.py -- Modified BSD License\n#==============================================================================\nv = sys.version_info\nif v[:2] < (2, 7) or (v[0] >= 3 and v[:2] < (3, 5)):\n error = \"ERROR: Spyder requires Python version 2.7 or 3.5 and above.\"\n print(error, file=sys.stderr)\n sys.exit(1)\n\n\n#==============================================================================\n# Constants\n#==============================================================================\nNAME = 'spyder'\nLIBNAME = 'spyder'\nfrom spyder import __version__, __website_url__ #analysis:ignore\n\n\n#==============================================================================\n# Auxiliary functions\n#==============================================================================\ndef get_package_data(name, extlist):\n \"\"\"Return data files for package *name* with extensions in *extlist*\"\"\"\n flist = []\n # Workaround to replace os.path.relpath (not available until Python 2.6):\n offset = len(name)+len(os.pathsep)\n for dirpath, _dirnames, filenames in os.walk(name):\n if 'tests' not in dirpath:\n for fname in filenames:\n if (not fname.startswith('.') and\n osp.splitext(fname)[1] in extlist):\n flist.append(osp.join(dirpath, fname)[offset:])\n return flist\n\n\ndef get_subpackages(name):\n \"\"\"Return subpackages of package *name*\"\"\"\n splist = []\n for dirpath, _dirnames, _filenames in os.walk(name):\n if 'tests' not in dirpath:\n if osp.isfile(osp.join(dirpath, '__init__.py')):\n splist.append(\".\".join(dirpath.split(os.sep)))\n return splist\n\n\ndef get_data_files():\n \"\"\"Return data_files in a platform dependent manner\"\"\"\n if sys.platform.startswith('linux'):\n if PY3:\n data_files = [('share/applications', ['scripts/spyder3.desktop']),\n ('share/icons', ['img_src/spyder3.png']),\n ('share/metainfo', ['scripts/spyder3.appdata.xml'])]\n else:\n data_files = [('share/applications', ['scripts/spyder.desktop']),\n ('share/icons', ['img_src/spyder.png'])]\n elif os.name == 'nt':\n data_files = [('scripts', ['img_src/spyder.ico',\n 
'img_src/spyder_reset.ico'])]\n else:\n data_files = []\n return data_files\n\n\ndef get_packages():\n \"\"\"Return package list\"\"\"\n packages = get_subpackages(LIBNAME)\n return packages\n\n\n#==============================================================================\n# Make Linux detect Spyder desktop file\n#==============================================================================\nclass MyInstallData(install_data):\n def run(self):\n install_data.run(self)\n if sys.platform.startswith('linux'):\n try:\n subprocess.call(['update-desktop-database'])\n except:\n print(\"ERROR: unable to update desktop database\",\n file=sys.stderr)\nCMDCLASS = {'install_data': MyInstallData}\n\n\n#==============================================================================\n# Main scripts\n#==============================================================================\n# NOTE: the '[...]_win_post_install.py' script is installed even on non-Windows\n# platforms due to a bug in pip installation process\n# See spyder-ide/spyder#1158.\nSCRIPTS = ['%s_win_post_install.py' % NAME]\nif PY3 and sys.platform.startswith('linux'):\n SCRIPTS.append('spyder3')\nelse:\n SCRIPTS.append('spyder')\n\n\n#==============================================================================\n# Files added to the package\n#==============================================================================\nEXTLIST = ['.pot', '.po', '.mo', '.svg', '.png', '.css', '.html', '.js',\n '.ini', '.txt', '.qss', '.ttf', '.json', '.rst', '.bloom']\nif os.name == 'nt':\n SCRIPTS += ['spyder.bat']\n EXTLIST += ['.ico']\n\n\n#==============================================================================\n# Use Readme for long description\n#==============================================================================\nwith io.open('README.md', encoding='utf-8') as f:\n LONG_DESCRIPTION = f.read()\n\n\n#==============================================================================\n# Setup arguments\n#==============================================================================\nsetup_args = dict(\n name=NAME,\n version=__version__,\n description='The Scientific Python Development Environment',\n long_description=LONG_DESCRIPTION,\n long_description_content_type='text/markdown',\n download_url=__website_url__ + \"#fh5co-download\",\n author=\"The Spyder Project Contributors\",\n author_email=\"[email protected]\",\n url=__website_url__,\n license='MIT',\n keywords='PyQt5 editor console widgets IDE science data analysis IPython',\n platforms=[\"Windows\", \"Linux\", \"Mac OS-X\"],\n packages=get_packages(),\n package_data={LIBNAME: get_package_data(LIBNAME, EXTLIST)},\n scripts=[osp.join('scripts', fname) for fname in SCRIPTS],\n data_files=get_data_files(),\n classifiers=['License :: OSI Approved :: MIT License',\n 'Operating System :: MacOS',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Education',\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering',\n 'Topic :: Software Development :: Widget Sets'],\n 
cmdclass=CMDCLASS)\n\n\n#==============================================================================\n# Setuptools deps\n#==============================================================================\nif any(arg == 'bdist_wheel' for arg in sys.argv):\n import setuptools # analysis:ignore\n\ninstall_requires = [\n 'applaunchservices>=0.1.7;platform_system==\"Darwin\"',\n 'atomicwrites>=1.2.0',\n 'chardet>=2.0.0',\n 'cloudpickle>=0.5.0',\n 'diff-match-patch>=20181111',\n 'intervaltree',\n 'ipython>=4.0',\n # This is here until Jedi 0.15+ fixes completions for\n # Numpy and Pandas\n 'jedi==0.15.2',\n # Don't require keyring for Python 2 and Linux\n # because it depends on system packages\n 'keyring;sys_platform!=\"linux2\"',\n 'nbconvert>=4.0',\n 'numpydoc>=0.6.0',\n # Required to get SSH connections to remote kernels\n 'paramiko>=2.4.0;platform_system==\"Windows\"',\n 'parso==0.5.2',\n 'pexpect>=4.4.0',\n 'pickleshare>=0.4',\n 'psutil>=5.3',\n 'pygments>=2.0',\n 'pylint>=0.25',\n 'pyqt5<5.13;python_version>=\"3\"',\n 'pyqtwebengine<5.13;python_version>=\"3\"',\n 'python-language-server[all]>=0.31.2,<0.32.0',\n 'pyxdg>=0.26;platform_system==\"Linux\"',\n 'pyzmq>=17',\n 'qdarkstyle>=2.8',\n 'qtawesome>=0.5.7',\n 'qtconsole>=4.6.0',\n 'qtpy>=1.5.0',\n 'sphinx>=0.6.6',\n 'spyder-kernels>=1.9.0,<1.10.0',\n 'watchdog',\n]\n\nextras_require = {\n 'test:python_version == \"2.7\"': ['mock'],\n 'test:platform_system == \"Linux\"': ['pytest-xvfb'],\n 'test:platform_system == \"Windows\"': ['pywin32'],\n 'test': [\n 'coverage<5.0',\n 'cython',\n 'flaky',\n 'matplotlib',\n 'mock',\n 'pandas',\n 'pillow',\n 'pytest<5.0',\n 'pytest-cov',\n 'pytest-faulthandler<2.0',\n 'pytest-lazy-fixture',\n 'pytest-mock',\n 'pytest-ordering',\n 'pytest-qt',\n 'pyyaml',\n 'scipy',\n 'sympy',\n ],\n}\n\nif 'setuptools' in sys.modules:\n setup_args['install_requires'] = install_requires\n setup_args['extras_require'] = extras_require\n\n setup_args['entry_points'] = {\n 'gui_scripts': [\n '{} = spyder.app.start:main'.format(\n 'spyder3' if PY3 else 'spyder')\n ]\n }\n\n setup_args.pop('scripts', None)\n\n\n#==============================================================================\n# Main setup\n#==============================================================================\nsetup(**setup_args)\n", "path": "setup.py"}]} | 4,048 | 274 |
gh_patches_debug_36472 | rasdani/github-patches | git_diff | elastic__apm-agent-python-621 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception when using `quote_ident` in psycopg2
**Describe the bug**:
If you make use of the function `psycopg2.extensions.quote_ident` [docs](http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.quote_ident), a `TypeError` exception is thrown. This is because the cursor object, when under instrumentation from ES-APM, is an instance of `PGCursorProxy`, not the actual cursor, and `quote_ident` does not allow this because the type is checked in the C code [link](https://github.com/psycopg/psycopg2/blob/2_7_6_1/psycopg/psycopgmodule.c#L181), with the error message being `TypeError: argument 2 must be a connection or a cursor`. Inspecting the `cur` object at a debug breakpoint, we can see it is the proxy object:
```
>>> cur
<PGCursorProxy at 0x7fd7f70f9a88 for NamedTupleCursor at 0x7fd7f70f0148>
>>> type(cur)
<class 'elasticapm.instrumentation.packages.psycopg2.PGCursorProxy'>
>>> type(cur.__wrapped__)
<class 'psycopg2.extras.NamedTupleCursor'>
```
**To Reproduce**
```python
from psycopg2.extensions import quote_ident
....
....
with psycopg2.connect(DSN) as conn:
    with conn.cursor() as curs:
        ident = quote_ident("column_name", curs)
        curs.execute(f"SELECT {ident} FROM data.table;")
        data = curs.fetchall()
```
Passing the underlying wrapped cursor works:
```python
from psycopg2.extensions import quote_ident
....
....
with psycopg2.connect(DSN) as conn:
    with conn.cursor() as curs:
        ident = quote_ident("column_name", curs.__wrapped__)
        curs.execute(f"SELECT {ident} FROM data.table;")
        data = curs.fetchall()
```
**Environment (please complete the following information)**
- OS: Linux
- Python version: 3.6.4
- Agent version: 5.2.2
**Additional context**
Looks like the same problem was encountered here https://github.com/DataDog/dd-trace-py/issues/474, and was fixed by also patching quote_ident to pass the `__wrapped__` object. Testing this out with a basic in-module proxy function worked, but obviously a patch right at the top level from the apm module would sort it out.
```python
def quote_ident(string, cursor):
return psycopg2.extensions.quote_ident(string, cursor.__wrapped__)
```
</issue>
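Following the unwrapping pattern that `Psycopg2RegisterTypeInstrumentation` already uses in the module below, `quote_ident` itself could be instrumented so that the real cursor reaches psycopg2's C-level type check. The class below is a hypothetical sketch (its name and registration are invented), not the actual patch; in the real agent it would also have to be listed in the registry shown in `register.py`.

```python
# Hypothetical sketch modelled on Psycopg2RegisterTypeInstrumentation below:
# unwrap proxied cursors/connections before psycopg2's C-level type check.
from elasticapm.instrumentation.packages.dbapi2 import DbApi2Instrumentation


class Psycopg2QuoteIdentInstrumentation(DbApi2Instrumentation):  # invented name
    name = "psycopg2-quote-ident"

    instrument_list = [("psycopg2.extensions", "quote_ident")]

    def call(self, module, method, wrapped, instance, args, kwargs):
        # quote_ident(string, conn_or_curs): unwrap the second argument if proxied
        if len(args) == 2 and hasattr(args[1], "__wrapped__"):
            args = (args[0], args[1].__wrapped__)
        return wrapped(*args, **kwargs)
```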
<code>
[start of elasticapm/instrumentation/register.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31
32 from elasticapm.utils.module_import import import_string
33
34 _cls_register = {
35 "elasticapm.instrumentation.packages.botocore.BotocoreInstrumentation",
36 "elasticapm.instrumentation.packages.jinja2.Jinja2Instrumentation",
37 "elasticapm.instrumentation.packages.psycopg2.Psycopg2Instrumentation",
38 "elasticapm.instrumentation.packages.psycopg2.Psycopg2RegisterTypeInstrumentation",
39 "elasticapm.instrumentation.packages.mysql.MySQLInstrumentation",
40 "elasticapm.instrumentation.packages.pylibmc.PyLibMcInstrumentation",
41 "elasticapm.instrumentation.packages.pymongo.PyMongoInstrumentation",
42 "elasticapm.instrumentation.packages.pymongo.PyMongoBulkInstrumentation",
43 "elasticapm.instrumentation.packages.pymongo.PyMongoCursorInstrumentation",
44 "elasticapm.instrumentation.packages.python_memcached.PythonMemcachedInstrumentation",
45 "elasticapm.instrumentation.packages.redis.RedisInstrumentation",
46 "elasticapm.instrumentation.packages.redis.RedisPipelineInstrumentation",
47 "elasticapm.instrumentation.packages.requests.RequestsInstrumentation",
48 "elasticapm.instrumentation.packages.sqlite.SQLiteInstrumentation",
49 "elasticapm.instrumentation.packages.urllib3.Urllib3Instrumentation",
50 "elasticapm.instrumentation.packages.elasticsearch.ElasticsearchConnectionInstrumentation",
51 "elasticapm.instrumentation.packages.elasticsearch.ElasticsearchInstrumentation",
52 "elasticapm.instrumentation.packages.cassandra.CassandraInstrumentation",
53 "elasticapm.instrumentation.packages.pymssql.PyMSSQLInstrumentation",
54 "elasticapm.instrumentation.packages.pyodbc.PyODBCInstrumentation",
55 "elasticapm.instrumentation.packages.django.template.DjangoTemplateInstrumentation",
56 "elasticapm.instrumentation.packages.django.template.DjangoTemplateSourceInstrumentation",
57 "elasticapm.instrumentation.packages.urllib.UrllibInstrumentation",
58 }
59
60
61 def register(cls):
62 _cls_register.add(cls)
63
64
65 _instrumentation_singletons = {}
66
67
68 def get_instrumentation_objects():
69 for cls_str in _cls_register:
70 if cls_str not in _instrumentation_singletons:
71 cls = import_string(cls_str)
72 _instrumentation_singletons[cls_str] = cls()
73
74 obj = _instrumentation_singletons[cls_str]
75 yield obj
76
[end of elasticapm/instrumentation/register.py]
[start of elasticapm/instrumentation/packages/psycopg2.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 from elasticapm.instrumentation.packages.dbapi2 import (
32 ConnectionProxy,
33 CursorProxy,
34 DbApi2Instrumentation,
35 extract_signature,
36 )
37 from elasticapm.traces import capture_span
38 from elasticapm.utils import default_ports
39
40
41 class PGCursorProxy(CursorProxy):
42 provider_name = "postgresql"
43
44 def _bake_sql(self, sql):
45 # if this is a Composable object, use its `as_string` method
46 # see http://initd.org/psycopg/docs/sql.html
47 if hasattr(sql, "as_string"):
48 return sql.as_string(self.__wrapped__)
49 return sql
50
51 def extract_signature(self, sql):
52 return extract_signature(sql)
53
54 def __enter__(self):
55 return PGCursorProxy(self.__wrapped__.__enter__())
56
57
58 class PGConnectionProxy(ConnectionProxy):
59 cursor_proxy = PGCursorProxy
60
61 def __enter__(self):
62 return PGConnectionProxy(self.__wrapped__.__enter__())
63
64
65 class Psycopg2Instrumentation(DbApi2Instrumentation):
66 name = "psycopg2"
67
68 instrument_list = [("psycopg2", "connect")]
69
70 def call(self, module, method, wrapped, instance, args, kwargs):
71 signature = "psycopg2.connect"
72
73 host = kwargs.get("host")
74 if host:
75 signature += " " + str(host)
76
77 port = kwargs.get("port")
78 if port:
79 port = str(port)
80 if int(port) != default_ports.get("postgresql"):
81 signature += ":" + port
82 else:
83 # Parse connection string and extract host/port
84 pass
85
86 with capture_span(signature, span_type="db", span_subtype="postgresql", span_action="connect"):
87 return PGConnectionProxy(wrapped(*args, **kwargs))
88
89
90 class Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):
91 name = "psycopg2-register-type"
92
93 instrument_list = [
94 ("psycopg2.extensions", "register_type"),
95 # specifically instrument `register_json` as it bypasses `register_type`
96 ("psycopg2._json", "register_json"),
97 ]
98
99 def call(self, module, method, wrapped, instance, args, kwargs):
100 if "conn_or_curs" in kwargs and hasattr(kwargs["conn_or_curs"], "__wrapped__"):
101 kwargs["conn_or_curs"] = kwargs["conn_or_curs"].__wrapped__
102 # register_type takes the connection as second argument
103 elif len(args) == 2 and hasattr(args[1], "__wrapped__"):
104 args = (args[0], args[1].__wrapped__)
105 # register_json takes the connection as first argument, and can have
106 # several more arguments
107 elif method == "register_json":
108 if args and hasattr(args[0], "__wrapped__"):
109 args = (args[0].__wrapped__,) + args[1:]
110
111 return wrapped(*args, **kwargs)
112
[end of elasticapm/instrumentation/packages/psycopg2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/instrumentation/packages/psycopg2.py b/elasticapm/instrumentation/packages/psycopg2.py
--- a/elasticapm/instrumentation/packages/psycopg2.py
+++ b/elasticapm/instrumentation/packages/psycopg2.py
@@ -87,13 +87,21 @@
return PGConnectionProxy(wrapped(*args, **kwargs))
-class Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):
- name = "psycopg2-register-type"
+class Psycopg2ExtensionsInstrumentation(DbApi2Instrumentation):
+ """
+ Some extensions do a type check on the Connection/Cursor in C-code, which our
+ proxy fails. For these extensions, we need to ensure that the unwrapped
+ Connection/Cursor is passed.
+ """
+
+ name = "psycopg2"
instrument_list = [
("psycopg2.extensions", "register_type"),
# specifically instrument `register_json` as it bypasses `register_type`
("psycopg2._json", "register_json"),
+ ("psycopg2.extensions", "quote_ident"),
+ ("psycopg2.extensions", "encrypt_password"),
]
def call(self, module, method, wrapped, instance, args, kwargs):
@@ -108,4 +116,11 @@
if args and hasattr(args[0], "__wrapped__"):
args = (args[0].__wrapped__,) + args[1:]
+ elif method == "encrypt_password":
+ # connection/cursor is either 3rd argument, or "scope" keyword argument
+ if len(args) >= 3 and hasattr(args[2], "__wrapped__"):
+ args = args[:2] + (args[2].__wrapped__,) + args[3:]
+ elif "scope" in kwargs and hasattr(kwargs["scope"], "__wrapped__"):
+ kwargs["scope"] = kwargs["scope"].__wrapped__
+
return wrapped(*args, **kwargs)
diff --git a/elasticapm/instrumentation/register.py b/elasticapm/instrumentation/register.py
--- a/elasticapm/instrumentation/register.py
+++ b/elasticapm/instrumentation/register.py
@@ -35,7 +35,7 @@
"elasticapm.instrumentation.packages.botocore.BotocoreInstrumentation",
"elasticapm.instrumentation.packages.jinja2.Jinja2Instrumentation",
"elasticapm.instrumentation.packages.psycopg2.Psycopg2Instrumentation",
- "elasticapm.instrumentation.packages.psycopg2.Psycopg2RegisterTypeInstrumentation",
+ "elasticapm.instrumentation.packages.psycopg2.Psycopg2ExtensionsInstrumentation",
"elasticapm.instrumentation.packages.mysql.MySQLInstrumentation",
"elasticapm.instrumentation.packages.pylibmc.PyLibMcInstrumentation",
"elasticapm.instrumentation.packages.pymongo.PyMongoInstrumentation",
| {"golden_diff": "diff --git a/elasticapm/instrumentation/packages/psycopg2.py b/elasticapm/instrumentation/packages/psycopg2.py\n--- a/elasticapm/instrumentation/packages/psycopg2.py\n+++ b/elasticapm/instrumentation/packages/psycopg2.py\n@@ -87,13 +87,21 @@\n return PGConnectionProxy(wrapped(*args, **kwargs))\n \n \n-class Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):\n- name = \"psycopg2-register-type\"\n+class Psycopg2ExtensionsInstrumentation(DbApi2Instrumentation):\n+ \"\"\"\n+ Some extensions do a type check on the Connection/Cursor in C-code, which our\n+ proxy fails. For these extensions, we need to ensure that the unwrapped\n+ Connection/Cursor is passed.\n+ \"\"\"\n+\n+ name = \"psycopg2\"\n \n instrument_list = [\n (\"psycopg2.extensions\", \"register_type\"),\n # specifically instrument `register_json` as it bypasses `register_type`\n (\"psycopg2._json\", \"register_json\"),\n+ (\"psycopg2.extensions\", \"quote_ident\"),\n+ (\"psycopg2.extensions\", \"encrypt_password\"),\n ]\n \n def call(self, module, method, wrapped, instance, args, kwargs):\n@@ -108,4 +116,11 @@\n if args and hasattr(args[0], \"__wrapped__\"):\n args = (args[0].__wrapped__,) + args[1:]\n \n+ elif method == \"encrypt_password\":\n+ # connection/cursor is either 3rd argument, or \"scope\" keyword argument\n+ if len(args) >= 3 and hasattr(args[2], \"__wrapped__\"):\n+ args = args[:2] + (args[2].__wrapped__,) + args[3:]\n+ elif \"scope\" in kwargs and hasattr(kwargs[\"scope\"], \"__wrapped__\"):\n+ kwargs[\"scope\"] = kwargs[\"scope\"].__wrapped__\n+\n return wrapped(*args, **kwargs)\ndiff --git a/elasticapm/instrumentation/register.py b/elasticapm/instrumentation/register.py\n--- a/elasticapm/instrumentation/register.py\n+++ b/elasticapm/instrumentation/register.py\n@@ -35,7 +35,7 @@\n \"elasticapm.instrumentation.packages.botocore.BotocoreInstrumentation\",\n \"elasticapm.instrumentation.packages.jinja2.Jinja2Instrumentation\",\n \"elasticapm.instrumentation.packages.psycopg2.Psycopg2Instrumentation\",\n- \"elasticapm.instrumentation.packages.psycopg2.Psycopg2RegisterTypeInstrumentation\",\n+ \"elasticapm.instrumentation.packages.psycopg2.Psycopg2ExtensionsInstrumentation\",\n \"elasticapm.instrumentation.packages.mysql.MySQLInstrumentation\",\n \"elasticapm.instrumentation.packages.pylibmc.PyLibMcInstrumentation\",\n \"elasticapm.instrumentation.packages.pymongo.PyMongoInstrumentation\",\n", "issue": "Exception when using `quote_ident` in psycopg2\n**Describe the bug**:\r\nIf you make use of the function `psycopg2.extensions.quote_ident` [docs](http://initd.org/psycopg/docs/extensions.html#psycopg2.extensions.quote_ident), a `TypeError` exception is thrown. This is because the cursor object, when under instrumentation from ES-APM, is an instance of `PGCursorProxy`, not the actual cursor, and `quote_ident` does not allow this because the type is checked in the C code [link](https://github.com/psycopg/psycopg2/blob/2_7_6_1/psycopg/psycopgmodule.c#L181). With the error message saying `TypeError: argument 2 must be a connection or a cursor`. 
Inspecting the cur object at a debug breakpoint we can see it is the proxy object:\r\n\r\n```\r\n>>> cur\r\n<PGCursorProxy at 0x7fd7f70f9a88 for NamedTupleCursor at 0x7fd7f70f0148>\r\n>>> type(cur)\r\n<class 'elasticapm.instrumentation.packages.psycopg2.PGCursorProxy'>\r\n>>> type(cur.__wrapped__)\r\n<class 'psycopg2.extras.NamedTupleCursor'>\r\n```\r\n\r\n**To Reproduce**\r\n\r\n```python\r\nfrom psycopg2.extensions import quote_ident\r\n....\r\n....\r\nwith psycopg2.connect(DSN) as conn:\r\n with conn.cursor() as curs:\r\n ident = quote_ident(\"column_name\", cur)\r\n curs.execute(f\"SELECT {column_name} FROM data.table;\")\r\n data = curs.fetchall()\r\n```\r\npassing the underlying wrapped cursor works:\r\n```python\r\nfrom psycopg2.extensions import quote_ident\r\n....\r\n....\r\nwith psycopg2.connect(DSN) as conn:\r\n with conn.cursor() as curs:\r\n ident = quote_ident(\"column_name\", cur.__wrapped__)\r\n curs.execute(f\"SELECT {column_name} FROM data.table;\")\r\n data = curs.fetchall()\r\n```\r\n\r\n**Environment (please complete the following information)**\r\n- OS: Linux\r\n- Python version: 3.6.4\r\n- Agent version: 5.2.2\r\n\r\n\r\n**Additional context**\r\nLooks like the same problem was encountered here https://github.com/DataDog/dd-trace-py/issues/474, and was fixed by also patching quote_ident to pass the `__wrapped__` object. Testing this out with a basic in-module proxy function worked, but obviously a patch right at the top level from the apm module would sort it out.\r\n\r\n```python\r\ndef quote_ident(string, cursor):\r\n return psycopg2.extensions.quote_ident(string, cursor.__wrapped__)\r\n```\r\n\r\n\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\nfrom elasticapm.utils.module_import import import_string\n\n_cls_register = {\n \"elasticapm.instrumentation.packages.botocore.BotocoreInstrumentation\",\n \"elasticapm.instrumentation.packages.jinja2.Jinja2Instrumentation\",\n \"elasticapm.instrumentation.packages.psycopg2.Psycopg2Instrumentation\",\n \"elasticapm.instrumentation.packages.psycopg2.Psycopg2RegisterTypeInstrumentation\",\n \"elasticapm.instrumentation.packages.mysql.MySQLInstrumentation\",\n \"elasticapm.instrumentation.packages.pylibmc.PyLibMcInstrumentation\",\n \"elasticapm.instrumentation.packages.pymongo.PyMongoInstrumentation\",\n \"elasticapm.instrumentation.packages.pymongo.PyMongoBulkInstrumentation\",\n \"elasticapm.instrumentation.packages.pymongo.PyMongoCursorInstrumentation\",\n \"elasticapm.instrumentation.packages.python_memcached.PythonMemcachedInstrumentation\",\n \"elasticapm.instrumentation.packages.redis.RedisInstrumentation\",\n \"elasticapm.instrumentation.packages.redis.RedisPipelineInstrumentation\",\n \"elasticapm.instrumentation.packages.requests.RequestsInstrumentation\",\n \"elasticapm.instrumentation.packages.sqlite.SQLiteInstrumentation\",\n \"elasticapm.instrumentation.packages.urllib3.Urllib3Instrumentation\",\n \"elasticapm.instrumentation.packages.elasticsearch.ElasticsearchConnectionInstrumentation\",\n \"elasticapm.instrumentation.packages.elasticsearch.ElasticsearchInstrumentation\",\n \"elasticapm.instrumentation.packages.cassandra.CassandraInstrumentation\",\n \"elasticapm.instrumentation.packages.pymssql.PyMSSQLInstrumentation\",\n \"elasticapm.instrumentation.packages.pyodbc.PyODBCInstrumentation\",\n \"elasticapm.instrumentation.packages.django.template.DjangoTemplateInstrumentation\",\n \"elasticapm.instrumentation.packages.django.template.DjangoTemplateSourceInstrumentation\",\n \"elasticapm.instrumentation.packages.urllib.UrllibInstrumentation\",\n}\n\n\ndef register(cls):\n _cls_register.add(cls)\n\n\n_instrumentation_singletons = {}\n\n\ndef get_instrumentation_objects():\n for cls_str in _cls_register:\n if cls_str not in _instrumentation_singletons:\n cls = import_string(cls_str)\n _instrumentation_singletons[cls_str] = cls()\n\n obj = _instrumentation_singletons[cls_str]\n yield obj\n", "path": "elasticapm/instrumentation/register.py"}, {"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or 
promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nfrom elasticapm.instrumentation.packages.dbapi2 import (\n ConnectionProxy,\n CursorProxy,\n DbApi2Instrumentation,\n extract_signature,\n)\nfrom elasticapm.traces import capture_span\nfrom elasticapm.utils import default_ports\n\n\nclass PGCursorProxy(CursorProxy):\n provider_name = \"postgresql\"\n\n def _bake_sql(self, sql):\n # if this is a Composable object, use its `as_string` method\n # see http://initd.org/psycopg/docs/sql.html\n if hasattr(sql, \"as_string\"):\n return sql.as_string(self.__wrapped__)\n return sql\n\n def extract_signature(self, sql):\n return extract_signature(sql)\n\n def __enter__(self):\n return PGCursorProxy(self.__wrapped__.__enter__())\n\n\nclass PGConnectionProxy(ConnectionProxy):\n cursor_proxy = PGCursorProxy\n\n def __enter__(self):\n return PGConnectionProxy(self.__wrapped__.__enter__())\n\n\nclass Psycopg2Instrumentation(DbApi2Instrumentation):\n name = \"psycopg2\"\n\n instrument_list = [(\"psycopg2\", \"connect\")]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n signature = \"psycopg2.connect\"\n\n host = kwargs.get(\"host\")\n if host:\n signature += \" \" + str(host)\n\n port = kwargs.get(\"port\")\n if port:\n port = str(port)\n if int(port) != default_ports.get(\"postgresql\"):\n signature += \":\" + port\n else:\n # Parse connection string and extract host/port\n pass\n\n with capture_span(signature, span_type=\"db\", span_subtype=\"postgresql\", span_action=\"connect\"):\n return PGConnectionProxy(wrapped(*args, **kwargs))\n\n\nclass Psycopg2RegisterTypeInstrumentation(DbApi2Instrumentation):\n name = \"psycopg2-register-type\"\n\n instrument_list = [\n (\"psycopg2.extensions\", \"register_type\"),\n # specifically instrument `register_json` as it bypasses `register_type`\n (\"psycopg2._json\", \"register_json\"),\n ]\n\n def call(self, module, method, wrapped, instance, args, kwargs):\n if \"conn_or_curs\" in kwargs and hasattr(kwargs[\"conn_or_curs\"], \"__wrapped__\"):\n kwargs[\"conn_or_curs\"] = kwargs[\"conn_or_curs\"].__wrapped__\n # register_type takes the connection as second argument\n elif len(args) == 2 and hasattr(args[1], \"__wrapped__\"):\n args = (args[0], args[1].__wrapped__)\n # register_json takes the connection as first argument, and can have\n # several more arguments\n elif method == \"register_json\":\n if args and hasattr(args[0], \"__wrapped__\"):\n args = (args[0].__wrapped__,) + args[1:]\n\n return wrapped(*args, **kwargs)\n", "path": "elasticapm/instrumentation/packages/psycopg2.py"}]} | 3,324 | 651 |
gh_patches_debug_34730 | rasdani/github-patches | git_diff | bokeh__bokeh-8435 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
text_align attribute in NumberFormatter not doing anything
When making a `datatable`, I want to right-align numerical values in the table, but when I set the `text_align` attribute in `NumberFormatter`, the values remain left-aligned. Here is my environment:
* Python 2.7.12 :: Anaconda 4.1.1 (64-bit)
* numpy==1.11.1
* pandas==0.18.1
* bokeh==0.12.4
* Windows 7, Chrome
And here is a code snippet:
```
import pandas as pd
import numpy as np
from bokeh.models import ColumnDataSource
from bokeh.models.widgets import DataTable, NumberFormatter, TableColumn
from bokeh.plotting import show
df = []
for ii in range(1, 11):
df.append({'x': ii, 'y': 1000 * np.random.rand()})
df = pd.DataFrame(df)
source = ColumnDataSource(data=df)
columns = [
TableColumn(field='x', title='Col 1'),
TableColumn(field='y', title='Col 2',
formatter=NumberFormatter(format='$0,0.00',
text_align='right')),
]
dt = DataTable(source=source, columns=columns, width=500, height=200, row_headers=False)
show(dt)
```
Here is the output I am getting in my Jupyter Notebook:

I would expect that the dollar amounts in `Col 2` would be right aligned, but they aren't.
</issue>
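Editor's note: for reference (not part of the original report), the sketch below shows the formatter-based way column alignment is meant to be controlled in a Bokeh DataTable, the same pattern the example updates in the golden diff further down exercise. It assumes a Bokeh release in which `text_align` is actually honoured by the table widget.

```
# Minimal sketch of formatter-based alignment in a Bokeh DataTable.
from bokeh.models import (ColumnDataSource, DataTable, NumberFormatter,
                          StringFormatter, TableColumn)
from bokeh.plotting import show

source = ColumnDataSource(data=dict(
    x=list(range(1, 11)),
    y=[1000.0 * i / 7 for i in range(1, 11)],
))

columns = [
    TableColumn(field="x", title="Col 1",
                formatter=StringFormatter(text_align="center")),
    TableColumn(field="y", title="Col 2",
                formatter=NumberFormatter(format="$0,0.00", text_align="right")),
]

show(DataTable(source=source, columns=columns, width=500, height=200))
```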
<code>
[start of examples/integration/widgets/data_table_customization.py]
1 from bokeh.io import save
2 from bokeh.models import ColumnDataSource
3 from bokeh.models.widgets import DataTable, TableColumn, HTMLTemplateFormatter
4
5 from bokeh.sampledata.periodic_table import elements
6
7 elements['name_lower'] = elements['name'].str.lower()
8 source = ColumnDataSource(elements)
9
10 html_font_template = '<font color="<%= CPK %>"><%= value %></font>'
11 html_image_template = """
12 <a href="http://images-of-elements.com/<%= value %>.php" target="_blank">
13 <img src="http://images-of-elements.com/<%= value %>.jpg" style="width:40px;height:40px;border:0">
14 </a>
15 """
16 columns = [
17 TableColumn(field='atomic number', title='Atomic Number'),
18 TableColumn(field='symbol', title='Symbol'),
19 TableColumn(field='name', title='Name',
20 formatter=HTMLTemplateFormatter(template=html_font_template)),
21 TableColumn(field='name_lower', title='Image',
22 formatter=HTMLTemplateFormatter(template=html_image_template))
23 ]
24 data_table = DataTable(source=source, columns=columns, editable=False, row_height=45)
25
26 save(data_table)
27
[end of examples/integration/widgets/data_table_customization.py]
[start of examples/app/dash/main.py]
1 from collections import Counter
2 from math import pi
3
4 import numpy as np
5 import pandas as pd
6
7 from bokeh.io import curdoc
8 from bokeh.layouts import column
9 from bokeh.models import ColumnDataSource, DataTable, RangeTool, TableColumn
10 from bokeh.palettes import Spectral11
11 from bokeh.plotting import figure
12 from bokeh.transform import cumsum
13 from bokeh.sampledata.autompg2 import autompg2 as mpg
14 from bokeh.sampledata.stocks import AAPL
15
16 # Timeseries
17
18 dates = np.array(AAPL['date'], dtype=np.datetime64)
19 source = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))
20
21 p = figure(plot_height=110, tools="", toolbar_location=None, #name="line",
22 x_axis_type="datetime", x_range=(dates[1500], dates[2500]), sizing_mode="scale_width")
23
24 p.line('date', 'close', source=source, line_width=2, alpha=0.7)
25 p.yaxis.axis_label = 'Traffic'
26 p.background_fill_color="#f5f5f5"
27 p.grid.grid_line_color="white"
28
29 select = figure(plot_height=50, plot_width=800, y_range=p.y_range,
30 x_axis_type="datetime", y_axis_type=None,
31 tools="", toolbar_location=None, sizing_mode="scale_width")
32
33 range_rool = RangeTool(x_range=p.x_range)
34 range_rool.overlay.fill_color = "navy"
35 range_rool.overlay.fill_alpha = 0.2
36
37 select.line('date', 'close', source=source)
38 select.ygrid.grid_line_color = None
39 select.add_tools(range_rool)
40 select.toolbar.active_multi = range_rool
41 select.background_fill_color="#f5f5f5"
42 select.grid.grid_line_color="white"
43 select.x_range.range_padding = 0.01
44
45 layout = column(p, select, sizing_mode="scale_width", name="line")
46
47 curdoc().add_root(layout)
48
49 # Donut chart
50
51 x = Counter({ 'United States': 157, 'United Kingdom': 93, 'Japan': 89, 'China': 63,
52 'Germany': 44, 'India': 42, 'Italy': 40, 'Australia': 35, 'Brazil': 32,
53 'France': 31, 'Taiwan': 31 })
54
55 data = pd.DataFrame.from_dict(dict(x), orient='index').reset_index().rename(index=str, columns={0:'value', 'index':'country'})
56 data['angle'] = data['value']/sum(x.values()) * 2*pi
57 data['color'] = Spectral11
58
59 region = figure(plot_height=350, toolbar_location=None, outline_line_color=None, sizing_mode="scale_both", name="region", x_range=(-0.4, 1))
60
61 region.annular_wedge(x=-0, y=1, inner_radius=0.2, outer_radius=0.32,
62 start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'),
63 line_color="white", fill_color='color', legend='country', source=data)
64
65 region.axis.axis_label=None
66 region.axis.visible=False
67 region.grid.grid_line_color = None
68 region.legend.label_text_font_size = "0.7em"
69 region.legend.spacing = 1
70 region.legend.glyph_height = 15
71 region.legend.label_height = 15
72
73 curdoc().add_root(region)
74
75 # Bar chart
76
77 plats = ("IOS", "Android", "OSX", "Windows", "Other")
78 values = (35, 22, 13, 26, 4)
79 platform = figure(plot_height=350, toolbar_location=None, outline_line_color=None, sizing_mode="scale_both", name="platform",
80 y_range=list(reversed(plats)), x_axis_location="above")
81 platform.x_range.start = 0
82 platform.ygrid.grid_line_color = None
83 platform.axis.minor_tick_line_color = None
84 platform.outline_line_color = None
85
86 platform.hbar(left=0, right=values, y=plats, height=0.8)
87
88 curdoc().add_root(platform)
89
90 # Table
91
92 source = ColumnDataSource(data=mpg[:6])
93 columns = [
94 TableColumn(field="cyl", title="Counts"),
95 TableColumn(field="cty", title="Uniques"),
96 TableColumn(field="hwy", title="Rating"),
97 ]
98 table = DataTable(source=source, columns=columns, height=210, width=330, name="table", sizing_mode="scale_both")
99
100 curdoc().add_root(table)
101
102 # Setup
103
104 curdoc().title = "Bokeh Dashboard"
105 curdoc().template_variables['stats_names'] = ['users', 'new_users', 'time', 'sessions', 'sales']
106 curdoc().template_variables['stats'] = {
107 'users' : {'icon': 'user', 'value': 11200, 'change': 4 , 'label': 'Total Users'},
108 'new_users' : {'icon': 'user', 'value': 350, 'change': 1.2 , 'label': 'New Users'},
109 'time' : {'icon': 'clock-o', 'value': 5.6, 'change': -2.3 , 'label': 'Total Time'},
110 'sessions' : {'icon': 'user', 'value': 27300, 'change': 0.5 , 'label': 'Total Sessions'},
111 'sales' : {'icon': 'dollar-sign', 'value': 8700, 'change': -0.2 , 'label': 'Average Sales'},
112 }
113
[end of examples/app/dash/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/app/dash/main.py b/examples/app/dash/main.py
--- a/examples/app/dash/main.py
+++ b/examples/app/dash/main.py
@@ -6,7 +6,8 @@
from bokeh.io import curdoc
from bokeh.layouts import column
-from bokeh.models import ColumnDataSource, DataTable, RangeTool, TableColumn
+from bokeh.models import ColumnDataSource, DataTable, RangeTool, TableColumn, \
+ NumberFormatter, StringFormatter
from bokeh.palettes import Spectral11
from bokeh.plotting import figure
from bokeh.transform import cumsum
@@ -92,8 +93,10 @@
source = ColumnDataSource(data=mpg[:6])
columns = [
TableColumn(field="cyl", title="Counts"),
- TableColumn(field="cty", title="Uniques"),
- TableColumn(field="hwy", title="Rating"),
+ TableColumn(field="cty", title="Uniques",
+ formatter=StringFormatter(text_align="center")),
+ TableColumn(field="hwy", title="Rating",
+ formatter=NumberFormatter(text_align="right")),
]
table = DataTable(source=source, columns=columns, height=210, width=330, name="table", sizing_mode="scale_both")
diff --git a/examples/integration/widgets/data_table_customization.py b/examples/integration/widgets/data_table_customization.py
--- a/examples/integration/widgets/data_table_customization.py
+++ b/examples/integration/widgets/data_table_customization.py
@@ -1,5 +1,5 @@
from bokeh.io import save
-from bokeh.models import ColumnDataSource
+from bokeh.models import ColumnDataSource, NumberFormatter, StringFormatter
from bokeh.models.widgets import DataTable, TableColumn, HTMLTemplateFormatter
from bokeh.sampledata.periodic_table import elements
@@ -14,8 +14,10 @@
</a>
"""
columns = [
- TableColumn(field='atomic number', title='Atomic Number'),
- TableColumn(field='symbol', title='Symbol'),
+ TableColumn(field='atomic number', title='Atomic Number',
+ formatter=NumberFormatter(text_align="right")),
+ TableColumn(field='symbol', title='Symbol',
+ formatter=StringFormatter(text_align="center")),
TableColumn(field='name', title='Name',
formatter=HTMLTemplateFormatter(template=html_font_template)),
TableColumn(field='name_lower', title='Image',
| {"golden_diff": "diff --git a/examples/app/dash/main.py b/examples/app/dash/main.py\n--- a/examples/app/dash/main.py\n+++ b/examples/app/dash/main.py\n@@ -6,7 +6,8 @@\n \n from bokeh.io import curdoc\n from bokeh.layouts import column\n-from bokeh.models import ColumnDataSource, DataTable, RangeTool, TableColumn\n+from bokeh.models import ColumnDataSource, DataTable, RangeTool, TableColumn, \\\n+ NumberFormatter, StringFormatter\n from bokeh.palettes import Spectral11\n from bokeh.plotting import figure\n from bokeh.transform import cumsum\n@@ -92,8 +93,10 @@\n source = ColumnDataSource(data=mpg[:6])\n columns = [\n TableColumn(field=\"cyl\", title=\"Counts\"),\n- TableColumn(field=\"cty\", title=\"Uniques\"),\n- TableColumn(field=\"hwy\", title=\"Rating\"),\n+ TableColumn(field=\"cty\", title=\"Uniques\",\n+ formatter=StringFormatter(text_align=\"center\")),\n+ TableColumn(field=\"hwy\", title=\"Rating\",\n+ formatter=NumberFormatter(text_align=\"right\")),\n ]\n table = DataTable(source=source, columns=columns, height=210, width=330, name=\"table\", sizing_mode=\"scale_both\")\n \ndiff --git a/examples/integration/widgets/data_table_customization.py b/examples/integration/widgets/data_table_customization.py\n--- a/examples/integration/widgets/data_table_customization.py\n+++ b/examples/integration/widgets/data_table_customization.py\n@@ -1,5 +1,5 @@\n from bokeh.io import save\n-from bokeh.models import ColumnDataSource\n+from bokeh.models import ColumnDataSource, NumberFormatter, StringFormatter\n from bokeh.models.widgets import DataTable, TableColumn, HTMLTemplateFormatter\n \n from bokeh.sampledata.periodic_table import elements\n@@ -14,8 +14,10 @@\n </a>\n \"\"\"\n columns = [\n- TableColumn(field='atomic number', title='Atomic Number'),\n- TableColumn(field='symbol', title='Symbol'),\n+ TableColumn(field='atomic number', title='Atomic Number',\n+ formatter=NumberFormatter(text_align=\"right\")),\n+ TableColumn(field='symbol', title='Symbol',\n+ formatter=StringFormatter(text_align=\"center\")),\n TableColumn(field='name', title='Name',\n formatter=HTMLTemplateFormatter(template=html_font_template)),\n TableColumn(field='name_lower', title='Image',\n", "issue": "text_align attribute in NumberFormatter not doing anything\nWhen making a `datatable`, I want to right align numerical values in the table, but when I set the `text_align` attribute in `NumberFormatter`, the values continue to remain left aligned. 
Here is my enviornment:\r\n* Python 2.7.12 :: Anaconda 4.1.1 (64-bit)\r\n* numpy==1.11.1\r\n* pandas==0.18.1\r\n* bokeh==0.12.4\r\n* Windows 7, Chrome\r\n\r\nAnd here is a code snippet:\r\n```\r\nimport pandas as pd\r\nimport numpy as np\r\nfrom bokeh.models import ColumnDataSource\r\nfrom bokeh.models.widgets import DataTable, NumberFormatter, TableColumn\r\nfrom bokeh.plotting import show\r\n\r\ndf = []\r\nfor ii in range(1, 11):\r\n df.append({'x': ii, 'y': 1000 * np.random.rand()})\r\ndf = pd.DataFrame(df)\r\n\r\nsource = ColumnDataSource(data=df)\r\n\r\ncolumns = [\r\n TableColumn(field='x', title='Col 1'),\r\n TableColumn(field='y', title='Col 2',\r\n formatter=NumberFormatter(format='$0,0.00',\r\n text_align='right')),\r\n]\r\n\r\ndt = DataTable(source=source, columns=columns, width=500, height=200, row_headers=False)\r\n\r\nshow(dt)\r\n```\r\n\r\nHere is the output I am getting in my Jupyter Notebook:\r\n\r\n\r\nI would expect that the dollar amounts in `Col 2` would be right aligned, but they aren't.\n", "before_files": [{"content": "from bokeh.io import save\nfrom bokeh.models import ColumnDataSource\nfrom bokeh.models.widgets import DataTable, TableColumn, HTMLTemplateFormatter\n\nfrom bokeh.sampledata.periodic_table import elements\n\nelements['name_lower'] = elements['name'].str.lower()\nsource = ColumnDataSource(elements)\n\nhtml_font_template = '<font color=\"<%= CPK %>\"><%= value %></font>'\nhtml_image_template = \"\"\"\n<a href=\"http://images-of-elements.com/<%= value %>.php\" target=\"_blank\">\n <img src=\"http://images-of-elements.com/<%= value %>.jpg\" style=\"width:40px;height:40px;border:0\">\n</a>\n\"\"\"\ncolumns = [\n TableColumn(field='atomic number', title='Atomic Number'),\n TableColumn(field='symbol', title='Symbol'),\n TableColumn(field='name', title='Name',\n formatter=HTMLTemplateFormatter(template=html_font_template)),\n TableColumn(field='name_lower', title='Image',\n formatter=HTMLTemplateFormatter(template=html_image_template))\n]\ndata_table = DataTable(source=source, columns=columns, editable=False, row_height=45)\n\nsave(data_table)\n", "path": "examples/integration/widgets/data_table_customization.py"}, {"content": "from collections import Counter\nfrom math import pi\n\nimport numpy as np\nimport pandas as pd\n\nfrom bokeh.io import curdoc\nfrom bokeh.layouts import column\nfrom bokeh.models import ColumnDataSource, DataTable, RangeTool, TableColumn\nfrom bokeh.palettes import Spectral11\nfrom bokeh.plotting import figure\nfrom bokeh.transform import cumsum\nfrom bokeh.sampledata.autompg2 import autompg2 as mpg\nfrom bokeh.sampledata.stocks import AAPL\n\n# Timeseries\n\ndates = np.array(AAPL['date'], dtype=np.datetime64)\nsource = ColumnDataSource(data=dict(date=dates, close=AAPL['adj_close']))\n\np = figure(plot_height=110, tools=\"\", toolbar_location=None, #name=\"line\",\n x_axis_type=\"datetime\", x_range=(dates[1500], dates[2500]), sizing_mode=\"scale_width\")\n\np.line('date', 'close', source=source, line_width=2, alpha=0.7)\np.yaxis.axis_label = 'Traffic'\np.background_fill_color=\"#f5f5f5\"\np.grid.grid_line_color=\"white\"\n\nselect = figure(plot_height=50, plot_width=800, y_range=p.y_range,\n x_axis_type=\"datetime\", y_axis_type=None,\n tools=\"\", toolbar_location=None, sizing_mode=\"scale_width\")\n\nrange_rool = RangeTool(x_range=p.x_range)\nrange_rool.overlay.fill_color = \"navy\"\nrange_rool.overlay.fill_alpha = 0.2\n\nselect.line('date', 'close', source=source)\nselect.ygrid.grid_line_color = 
None\nselect.add_tools(range_rool)\nselect.toolbar.active_multi = range_rool\nselect.background_fill_color=\"#f5f5f5\"\nselect.grid.grid_line_color=\"white\"\nselect.x_range.range_padding = 0.01\n\nlayout = column(p, select, sizing_mode=\"scale_width\", name=\"line\")\n\ncurdoc().add_root(layout)\n\n# Donut chart\n\nx = Counter({ 'United States': 157, 'United Kingdom': 93, 'Japan': 89, 'China': 63,\n 'Germany': 44, 'India': 42, 'Italy': 40, 'Australia': 35, 'Brazil': 32,\n 'France': 31, 'Taiwan': 31 })\n\ndata = pd.DataFrame.from_dict(dict(x), orient='index').reset_index().rename(index=str, columns={0:'value', 'index':'country'})\ndata['angle'] = data['value']/sum(x.values()) * 2*pi\ndata['color'] = Spectral11\n\nregion = figure(plot_height=350, toolbar_location=None, outline_line_color=None, sizing_mode=\"scale_both\", name=\"region\", x_range=(-0.4, 1))\n\nregion.annular_wedge(x=-0, y=1, inner_radius=0.2, outer_radius=0.32,\n start_angle=cumsum('angle', include_zero=True), end_angle=cumsum('angle'),\n line_color=\"white\", fill_color='color', legend='country', source=data)\n\nregion.axis.axis_label=None\nregion.axis.visible=False\nregion.grid.grid_line_color = None\nregion.legend.label_text_font_size = \"0.7em\"\nregion.legend.spacing = 1\nregion.legend.glyph_height = 15\nregion.legend.label_height = 15\n\ncurdoc().add_root(region)\n\n# Bar chart\n\nplats = (\"IOS\", \"Android\", \"OSX\", \"Windows\", \"Other\")\nvalues = (35, 22, 13, 26, 4)\nplatform = figure(plot_height=350, toolbar_location=None, outline_line_color=None, sizing_mode=\"scale_both\", name=\"platform\",\n y_range=list(reversed(plats)), x_axis_location=\"above\")\nplatform.x_range.start = 0\nplatform.ygrid.grid_line_color = None\nplatform.axis.minor_tick_line_color = None\nplatform.outline_line_color = None\n\nplatform.hbar(left=0, right=values, y=plats, height=0.8)\n\ncurdoc().add_root(platform)\n\n# Table\n\nsource = ColumnDataSource(data=mpg[:6])\ncolumns = [\n TableColumn(field=\"cyl\", title=\"Counts\"),\n TableColumn(field=\"cty\", title=\"Uniques\"),\n TableColumn(field=\"hwy\", title=\"Rating\"),\n]\ntable = DataTable(source=source, columns=columns, height=210, width=330, name=\"table\", sizing_mode=\"scale_both\")\n\ncurdoc().add_root(table)\n\n# Setup\n\ncurdoc().title = \"Bokeh Dashboard\"\ncurdoc().template_variables['stats_names'] = ['users', 'new_users', 'time', 'sessions', 'sales']\ncurdoc().template_variables['stats'] = {\n 'users' : {'icon': 'user', 'value': 11200, 'change': 4 , 'label': 'Total Users'},\n 'new_users' : {'icon': 'user', 'value': 350, 'change': 1.2 , 'label': 'New Users'},\n 'time' : {'icon': 'clock-o', 'value': 5.6, 'change': -2.3 , 'label': 'Total Time'},\n 'sessions' : {'icon': 'user', 'value': 27300, 'change': 0.5 , 'label': 'Total Sessions'},\n 'sales' : {'icon': 'dollar-sign', 'value': 8700, 'change': -0.2 , 'label': 'Average Sales'},\n}\n", "path": "examples/app/dash/main.py"}]} | 2,733 | 510 |
gh_patches_debug_27250 | rasdani/github-patches | git_diff | nilearn__nilearn-3710 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation builder failure on main
https://github.com/nilearn/nilearn/actions/workflows/build-docs.yml
The failures started occurring after merging #3698 (though, given the content of that PR, it seems unlikely to be related):
https://github.com/nilearn/nilearn/actions/runs/4741116007
</issue>
<code>
[start of nilearn/datasets/__init__.py]
1 """Helper functions to download NeuroImaging datasets."""
2
3 from .atlas import (
4 fetch_atlas_aal,
5 fetch_atlas_allen_2011,
6 fetch_atlas_basc_multiscale_2015,
7 fetch_atlas_craddock_2012,
8 fetch_atlas_destrieux_2009,
9 fetch_atlas_difumo,
10 fetch_atlas_harvard_oxford,
11 fetch_atlas_juelich,
12 fetch_atlas_msdl,
13 fetch_atlas_schaefer_2018,
14 fetch_atlas_smith_2009,
15 fetch_atlas_surf_destrieux,
16 fetch_atlas_talairach,
17 fetch_atlas_yeo_2011,
18 fetch_coords_dosenbach_2010,
19 fetch_coords_power_2011,
20 fetch_coords_seitzman_2018,
21 )
22 from .func import (
23 fetch_abide_pcp,
24 fetch_adhd,
25 fetch_bids_langloc_dataset,
26 fetch_development_fmri,
27 fetch_fiac_first_level,
28 fetch_haxby,
29 fetch_language_localizer_demo_dataset,
30 fetch_localizer_button_task,
31 fetch_localizer_calculation_task,
32 fetch_localizer_contrasts,
33 fetch_localizer_first_level,
34 fetch_megatrawls_netmats,
35 fetch_mixed_gambles,
36 fetch_miyawaki2008,
37 fetch_openneuro_dataset,
38 fetch_openneuro_dataset_index,
39 fetch_spm_auditory,
40 fetch_spm_multimodal_fmri,
41 fetch_surf_nki_enhanced,
42 patch_openneuro_dataset,
43 select_from_index,
44 )
45 from .neurovault import (
46 fetch_neurovault,
47 fetch_neurovault_auditory_computation_task,
48 fetch_neurovault_ids,
49 fetch_neurovault_motor_task,
50 )
51 from .struct import (
52 GM_MNI152_FILE_PATH,
53 MNI152_FILE_PATH,
54 WM_MNI152_FILE_PATH,
55 fetch_icbm152_2009,
56 fetch_icbm152_brain_gm_mask,
57 fetch_oasis_vbm,
58 fetch_surf_fsaverage,
59 load_mni152_brain_mask,
60 load_mni152_gm_mask,
61 load_mni152_gm_template,
62 load_mni152_template,
63 load_mni152_wm_mask,
64 load_mni152_wm_template,
65 )
66 from .utils import get_data_dirs, load_sample_motor_activation_image
67
68 __all__ = [
69 "MNI152_FILE_PATH",
70 "GM_MNI152_FILE_PATH",
71 "WM_MNI152_FILE_PATH",
72 "fetch_icbm152_2009",
73 "load_mni152_template",
74 "load_mni152_gm_template",
75 "load_mni152_wm_template",
76 "fetch_oasis_vbm",
77 "fetch_haxby",
78 "fetch_adhd",
79 "fetch_miyawaki2008",
80 "fetch_localizer_contrasts",
81 "fetch_localizer_button_task",
82 "fetch_abide_pcp",
83 "fetch_localizer_calculation_task",
84 "fetch_atlas_craddock_2012",
85 "fetch_atlas_destrieux_2009",
86 "fetch_atlas_juelich",
87 "fetch_atlas_harvard_oxford",
88 "fetch_atlas_msdl",
89 "fetch_atlas_schaefer_2018",
90 "fetch_coords_power_2011",
91 "fetch_coords_seitzman_2018",
92 "fetch_atlas_smith_2009",
93 "fetch_atlas_allen_2011",
94 "fetch_atlas_yeo_2011",
95 "fetch_mixed_gambles",
96 "fetch_atlas_aal",
97 "fetch_atlas_difumo",
98 "fetch_megatrawls_netmats",
99 "fetch_surf_nki_enhanced",
100 "fetch_development_fmri",
101 "fetch_surf_fsaverage",
102 "fetch_atlas_basc_multiscale_2015",
103 "fetch_coords_dosenbach_2010",
104 "fetch_neurovault",
105 "fetch_neurovault_ids",
106 "fetch_neurovault_motor_task",
107 "fetch_neurovault_auditory_computation_task",
108 "load_mni152_brain_mask",
109 "load_mni152_gm_mask",
110 "load_mni152_wm_mask",
111 "fetch_icbm152_brain_gm_mask",
112 "fetch_atlas_surf_destrieux",
113 "fetch_atlas_talairach",
114 "get_data_dirs",
115 "load_sample_motor_activation_image",
116 "fetch_language_localizer_demo_dataset",
117 "fetch_bids_langloc_dataset",
118 "fetch_openneuro_dataset_index",
119 "select_from_index",
120 "patch_openneuro_dataset",
121 "fetch_openneuro_dataset",
122 "fetch_localizer_first_level",
123 "fetch_spm_auditory",
124 "fetch_spm_multimodal_fmri",
125 "fetch_fiac_first_level",
126 ]
127
[end of nilearn/datasets/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nilearn/datasets/__init__.py b/nilearn/datasets/__init__.py
--- a/nilearn/datasets/__init__.py
+++ b/nilearn/datasets/__init__.py
@@ -10,6 +10,7 @@
fetch_atlas_harvard_oxford,
fetch_atlas_juelich,
fetch_atlas_msdl,
+ fetch_atlas_pauli_2017,
fetch_atlas_schaefer_2018,
fetch_atlas_smith_2009,
fetch_atlas_surf_destrieux,
@@ -24,6 +25,7 @@
fetch_adhd,
fetch_bids_langloc_dataset,
fetch_development_fmri,
+ fetch_ds000030_urls,
fetch_fiac_first_level,
fetch_haxby,
fetch_language_localizer_demo_dataset,
@@ -86,6 +88,7 @@
"fetch_atlas_juelich",
"fetch_atlas_harvard_oxford",
"fetch_atlas_msdl",
+ "fetch_atlas_pauli_2017",
"fetch_atlas_schaefer_2018",
"fetch_coords_power_2011",
"fetch_coords_seitzman_2018",
@@ -98,6 +101,7 @@
"fetch_megatrawls_netmats",
"fetch_surf_nki_enhanced",
"fetch_development_fmri",
+ "fetch_ds000030_urls",
"fetch_surf_fsaverage",
"fetch_atlas_basc_multiscale_2015",
"fetch_coords_dosenbach_2010",
| {"golden_diff": "diff --git a/nilearn/datasets/__init__.py b/nilearn/datasets/__init__.py\n--- a/nilearn/datasets/__init__.py\n+++ b/nilearn/datasets/__init__.py\n@@ -10,6 +10,7 @@\n fetch_atlas_harvard_oxford,\n fetch_atlas_juelich,\n fetch_atlas_msdl,\n+ fetch_atlas_pauli_2017,\n fetch_atlas_schaefer_2018,\n fetch_atlas_smith_2009,\n fetch_atlas_surf_destrieux,\n@@ -24,6 +25,7 @@\n fetch_adhd,\n fetch_bids_langloc_dataset,\n fetch_development_fmri,\n+ fetch_ds000030_urls,\n fetch_fiac_first_level,\n fetch_haxby,\n fetch_language_localizer_demo_dataset,\n@@ -86,6 +88,7 @@\n \"fetch_atlas_juelich\",\n \"fetch_atlas_harvard_oxford\",\n \"fetch_atlas_msdl\",\n+ \"fetch_atlas_pauli_2017\",\n \"fetch_atlas_schaefer_2018\",\n \"fetch_coords_power_2011\",\n \"fetch_coords_seitzman_2018\",\n@@ -98,6 +101,7 @@\n \"fetch_megatrawls_netmats\",\n \"fetch_surf_nki_enhanced\",\n \"fetch_development_fmri\",\n+ \"fetch_ds000030_urls\",\n \"fetch_surf_fsaverage\",\n \"fetch_atlas_basc_multiscale_2015\",\n \"fetch_coords_dosenbach_2010\",\n", "issue": "Documentation builder failure on main\nhttps://github.com/nilearn/nilearn/actions/workflows/build-docs.yml\r\n\r\nstarted occurring after merging #3698 (doubt it is related given the content of the PR)\r\nhttps://github.com/nilearn/nilearn/actions/runs/4741116007\r\n\r\n\n", "before_files": [{"content": "\"\"\"Helper functions to download NeuroImaging datasets.\"\"\"\n\nfrom .atlas import (\n fetch_atlas_aal,\n fetch_atlas_allen_2011,\n fetch_atlas_basc_multiscale_2015,\n fetch_atlas_craddock_2012,\n fetch_atlas_destrieux_2009,\n fetch_atlas_difumo,\n fetch_atlas_harvard_oxford,\n fetch_atlas_juelich,\n fetch_atlas_msdl,\n fetch_atlas_schaefer_2018,\n fetch_atlas_smith_2009,\n fetch_atlas_surf_destrieux,\n fetch_atlas_talairach,\n fetch_atlas_yeo_2011,\n fetch_coords_dosenbach_2010,\n fetch_coords_power_2011,\n fetch_coords_seitzman_2018,\n)\nfrom .func import (\n fetch_abide_pcp,\n fetch_adhd,\n fetch_bids_langloc_dataset,\n fetch_development_fmri,\n fetch_fiac_first_level,\n fetch_haxby,\n fetch_language_localizer_demo_dataset,\n fetch_localizer_button_task,\n fetch_localizer_calculation_task,\n fetch_localizer_contrasts,\n fetch_localizer_first_level,\n fetch_megatrawls_netmats,\n fetch_mixed_gambles,\n fetch_miyawaki2008,\n fetch_openneuro_dataset,\n fetch_openneuro_dataset_index,\n fetch_spm_auditory,\n fetch_spm_multimodal_fmri,\n fetch_surf_nki_enhanced,\n patch_openneuro_dataset,\n select_from_index,\n)\nfrom .neurovault import (\n fetch_neurovault,\n fetch_neurovault_auditory_computation_task,\n fetch_neurovault_ids,\n fetch_neurovault_motor_task,\n)\nfrom .struct import (\n GM_MNI152_FILE_PATH,\n MNI152_FILE_PATH,\n WM_MNI152_FILE_PATH,\n fetch_icbm152_2009,\n fetch_icbm152_brain_gm_mask,\n fetch_oasis_vbm,\n fetch_surf_fsaverage,\n load_mni152_brain_mask,\n load_mni152_gm_mask,\n load_mni152_gm_template,\n load_mni152_template,\n load_mni152_wm_mask,\n load_mni152_wm_template,\n)\nfrom .utils import get_data_dirs, load_sample_motor_activation_image\n\n__all__ = [\n \"MNI152_FILE_PATH\",\n \"GM_MNI152_FILE_PATH\",\n \"WM_MNI152_FILE_PATH\",\n \"fetch_icbm152_2009\",\n \"load_mni152_template\",\n \"load_mni152_gm_template\",\n \"load_mni152_wm_template\",\n \"fetch_oasis_vbm\",\n \"fetch_haxby\",\n \"fetch_adhd\",\n \"fetch_miyawaki2008\",\n \"fetch_localizer_contrasts\",\n \"fetch_localizer_button_task\",\n \"fetch_abide_pcp\",\n \"fetch_localizer_calculation_task\",\n \"fetch_atlas_craddock_2012\",\n \"fetch_atlas_destrieux_2009\",\n 
\"fetch_atlas_juelich\",\n \"fetch_atlas_harvard_oxford\",\n \"fetch_atlas_msdl\",\n \"fetch_atlas_schaefer_2018\",\n \"fetch_coords_power_2011\",\n \"fetch_coords_seitzman_2018\",\n \"fetch_atlas_smith_2009\",\n \"fetch_atlas_allen_2011\",\n \"fetch_atlas_yeo_2011\",\n \"fetch_mixed_gambles\",\n \"fetch_atlas_aal\",\n \"fetch_atlas_difumo\",\n \"fetch_megatrawls_netmats\",\n \"fetch_surf_nki_enhanced\",\n \"fetch_development_fmri\",\n \"fetch_surf_fsaverage\",\n \"fetch_atlas_basc_multiscale_2015\",\n \"fetch_coords_dosenbach_2010\",\n \"fetch_neurovault\",\n \"fetch_neurovault_ids\",\n \"fetch_neurovault_motor_task\",\n \"fetch_neurovault_auditory_computation_task\",\n \"load_mni152_brain_mask\",\n \"load_mni152_gm_mask\",\n \"load_mni152_wm_mask\",\n \"fetch_icbm152_brain_gm_mask\",\n \"fetch_atlas_surf_destrieux\",\n \"fetch_atlas_talairach\",\n \"get_data_dirs\",\n \"load_sample_motor_activation_image\",\n \"fetch_language_localizer_demo_dataset\",\n \"fetch_bids_langloc_dataset\",\n \"fetch_openneuro_dataset_index\",\n \"select_from_index\",\n \"patch_openneuro_dataset\",\n \"fetch_openneuro_dataset\",\n \"fetch_localizer_first_level\",\n \"fetch_spm_auditory\",\n \"fetch_spm_multimodal_fmri\",\n \"fetch_fiac_first_level\",\n]\n", "path": "nilearn/datasets/__init__.py"}]} | 2,036 | 387 |
gh_patches_debug_27595 | rasdani/github-patches | git_diff | netbox-community__netbox-14870 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Simple condition (without and/or) does not work in event rule
### Deployment Type
Self-hosted
### NetBox Version
v3.7.0
### Python Version
3.11
### Steps to Reproduce
1. Create webhook: Name = Test, URL = http://127.0.0.1:9000 (doesn't matter in this case, it won't be triggered but is required to configure event rule)
2. Go to **Event rules - Add**:
- Name = Test
- Content types = Circuit
- select Updates
- set Conditions:
```
{
"attr": "status.value",
"value": "active"
}
```
- Action type = Webhook
- Webhook = Test
- **Create**
### Expected Behavior
Event rule is created
### Observed Behavior
Error is shown about the condition:
**Ruleset must have exactly one logical operator (found 2)**
The examples in https://docs.netbox.dev/en/stable/reference/conditions/ look the same: simple JSON object with attributes `attr` and `value`.
</issue>
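Editor's note: for readers comparing the two shapes involved here, the snippet below (illustrative only, not from the report or the repository) writes them side by side. Before the fix, `ConditionSet` only accepted the second, operator-wrapped form.

```
# The bare, single-condition form entered in the UI (rejected before the fix).
simple = {
    "attr": "status.value",
    "value": "active",
}

# The equivalent ruleset form the pre-fix code required: exactly one logical
# operator ("and"/"or") mapping to a list of condition rules.
wrapped = {
    "and": [
        {"attr": "status.value", "value": "active"},
    ],
}

print(simple, wrapped, sep="\n")
```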
<code>
[start of netbox/extras/conditions.py]
1 import functools
2 import re
3 from django.utils.translation import gettext as _
4
5 __all__ = (
6 'Condition',
7 'ConditionSet',
8 )
9
10
11 AND = 'and'
12 OR = 'or'
13
14
15 def is_ruleset(data):
16 """
17 Determine whether the given dictionary looks like a rule set.
18 """
19 return type(data) is dict and len(data) == 1 and list(data.keys())[0] in (AND, OR)
20
21
22 class Condition:
23 """
24 An individual conditional rule that evaluates a single attribute and its value.
25
26 :param attr: The name of the attribute being evaluated
27 :param value: The value being compared
28 :param op: The logical operation to use when evaluating the value (default: 'eq')
29 """
30 EQ = 'eq'
31 GT = 'gt'
32 GTE = 'gte'
33 LT = 'lt'
34 LTE = 'lte'
35 IN = 'in'
36 CONTAINS = 'contains'
37 REGEX = 'regex'
38
39 OPERATORS = (
40 EQ, GT, GTE, LT, LTE, IN, CONTAINS, REGEX
41 )
42
43 TYPES = {
44 str: (EQ, CONTAINS, REGEX),
45 bool: (EQ, CONTAINS),
46 int: (EQ, GT, GTE, LT, LTE, CONTAINS),
47 float: (EQ, GT, GTE, LT, LTE, CONTAINS),
48 list: (EQ, IN, CONTAINS),
49 type(None): (EQ,)
50 }
51
52 def __init__(self, attr, value, op=EQ, negate=False):
53 if op not in self.OPERATORS:
54 raise ValueError(_("Unknown operator: {op}. Must be one of: {operators}").format(
55 op=op, operators=', '.join(self.OPERATORS)
56 ))
57 if type(value) not in self.TYPES:
58 raise ValueError(_("Unsupported value type: {value}").format(value=type(value)))
59 if op not in self.TYPES[type(value)]:
60 raise ValueError(_("Invalid type for {op} operation: {value}").format(op=op, value=type(value)))
61
62 self.attr = attr
63 self.value = value
64 self.eval_func = getattr(self, f'eval_{op}')
65 self.negate = negate
66
67 def eval(self, data):
68 """
69 Evaluate the provided data to determine whether it matches the condition.
70 """
71 def _get(obj, key):
72 if isinstance(obj, list):
73 return [dict.get(i, key) for i in obj]
74
75 return dict.get(obj, key)
76
77 try:
78 value = functools.reduce(_get, self.attr.split('.'), data)
79 except TypeError:
80 # Invalid key path
81 value = None
82 result = self.eval_func(value)
83
84 if self.negate:
85 return not result
86 return result
87
88 # Equivalency
89
90 def eval_eq(self, value):
91 return value == self.value
92
93 def eval_neq(self, value):
94 return value != self.value
95
96 # Numeric comparisons
97
98 def eval_gt(self, value):
99 return value > self.value
100
101 def eval_gte(self, value):
102 return value >= self.value
103
104 def eval_lt(self, value):
105 return value < self.value
106
107 def eval_lte(self, value):
108 return value <= self.value
109
110 # Membership
111
112 def eval_in(self, value):
113 return value in self.value
114
115 def eval_contains(self, value):
116 return self.value in value
117
118 # Regular expressions
119
120 def eval_regex(self, value):
121 return re.match(self.value, value) is not None
122
123
124 class ConditionSet:
125 """
126 A set of one or more Condition to be evaluated per the prescribed logic (AND or OR). Example:
127
128 {"and": [
129 {"attr": "foo", "op": "eq", "value": 1},
130 {"attr": "bar", "op": "eq", "value": 2, "negate": true}
131 ]}
132
133 :param ruleset: A dictionary mapping a logical operator to a list of conditional rules
134 """
135 def __init__(self, ruleset):
136 if type(ruleset) is not dict:
137 raise ValueError(_("Ruleset must be a dictionary, not {ruleset}.").format(ruleset=type(ruleset)))
138 if len(ruleset) != 1:
139 raise ValueError(_("Ruleset must have exactly one logical operator (found {ruleset})").format(
140 ruleset=len(ruleset)))
141
142 # Determine the logic type
143 logic = list(ruleset.keys())[0]
144 if type(logic) is not str or logic.lower() not in (AND, OR):
145 raise ValueError(_("Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')").format(
146 logic=logic, op_and=AND, op_or=OR
147 ))
148 self.logic = logic.lower()
149
150 # Compile the set of Conditions
151 self.conditions = [
152 ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)
153 for rule in ruleset[self.logic]
154 ]
155
156 def eval(self, data):
157 """
158 Evaluate the provided data to determine whether it matches this set of conditions.
159 """
160 func = any if self.logic == 'or' else all
161 return func(d.eval(data) for d in self.conditions)
162
[end of netbox/extras/conditions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netbox/extras/conditions.py b/netbox/extras/conditions.py
--- a/netbox/extras/conditions.py
+++ b/netbox/extras/conditions.py
@@ -135,23 +135,23 @@
def __init__(self, ruleset):
if type(ruleset) is not dict:
raise ValueError(_("Ruleset must be a dictionary, not {ruleset}.").format(ruleset=type(ruleset)))
- if len(ruleset) != 1:
- raise ValueError(_("Ruleset must have exactly one logical operator (found {ruleset})").format(
- ruleset=len(ruleset)))
-
- # Determine the logic type
- logic = list(ruleset.keys())[0]
- if type(logic) is not str or logic.lower() not in (AND, OR):
- raise ValueError(_("Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')").format(
- logic=logic, op_and=AND, op_or=OR
- ))
- self.logic = logic.lower()
- # Compile the set of Conditions
- self.conditions = [
- ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)
- for rule in ruleset[self.logic]
- ]
+ if len(ruleset) == 1:
+ self.logic = (list(ruleset.keys())[0]).lower()
+ if self.logic not in (AND, OR):
+ raise ValueError(_("Invalid logic type: must be 'AND' or 'OR'. Please check documentation."))
+
+ # Compile the set of Conditions
+ self.conditions = [
+ ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)
+ for rule in ruleset[self.logic]
+ ]
+ else:
+ try:
+ self.logic = None
+ self.conditions = [Condition(**ruleset)]
+ except TypeError:
+ raise ValueError(_("Incorrect key(s) informed. Please check documentation."))
def eval(self, data):
"""
| {"golden_diff": "diff --git a/netbox/extras/conditions.py b/netbox/extras/conditions.py\n--- a/netbox/extras/conditions.py\n+++ b/netbox/extras/conditions.py\n@@ -135,23 +135,23 @@\n def __init__(self, ruleset):\n if type(ruleset) is not dict:\n raise ValueError(_(\"Ruleset must be a dictionary, not {ruleset}.\").format(ruleset=type(ruleset)))\n- if len(ruleset) != 1:\n- raise ValueError(_(\"Ruleset must have exactly one logical operator (found {ruleset})\").format(\n- ruleset=len(ruleset)))\n-\n- # Determine the logic type\n- logic = list(ruleset.keys())[0]\n- if type(logic) is not str or logic.lower() not in (AND, OR):\n- raise ValueError(_(\"Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')\").format(\n- logic=logic, op_and=AND, op_or=OR\n- ))\n- self.logic = logic.lower()\n \n- # Compile the set of Conditions\n- self.conditions = [\n- ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n- for rule in ruleset[self.logic]\n- ]\n+ if len(ruleset) == 1:\n+ self.logic = (list(ruleset.keys())[0]).lower()\n+ if self.logic not in (AND, OR):\n+ raise ValueError(_(\"Invalid logic type: must be 'AND' or 'OR'. Please check documentation.\"))\n+\n+ # Compile the set of Conditions\n+ self.conditions = [\n+ ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n+ for rule in ruleset[self.logic]\n+ ]\n+ else:\n+ try:\n+ self.logic = None\n+ self.conditions = [Condition(**ruleset)]\n+ except TypeError:\n+ raise ValueError(_(\"Incorrect key(s) informed. Please check documentation.\"))\n \n def eval(self, data):\n \"\"\"\n", "issue": "Simple condition (without and/or) does not work in event rule\n### Deployment Type\n\nSelf-hosted\n\n### NetBox Version\n\nv3.7.0\n\n### Python Version\n\n3.11\n\n### Steps to Reproduce\n\n1. Create webhook: Name = Test, URL = http://127.0.0.1:9000 (doesn't matter in this case, it won't be triggered but is required to configure event rule)\r\n2. 
Go to **Event rules - Add**:\r\n- Name = Test\r\n- Content types = Circuit\r\n- select Updates\r\n- set Conditions:\r\n```\r\n{\r\n \"attr\": \"status.value\",\r\n \"value\": \"active\"\r\n}\r\n```\r\n\r\n- Action type = Webhook\r\n- Webhook = Test\r\n- **Create**\r\n\n\n### Expected Behavior\n\nEvent rule is created\n\n### Observed Behavior\n\nError is shown about the condition:\r\n\r\n**Ruleset must have exactly one logical operator (found 2)** \r\n\r\nThe examples in https://docs.netbox.dev/en/stable/reference/conditions/ look the same: simple JSON object with attributes `attr` and `value`.\n", "before_files": [{"content": "import functools\nimport re\nfrom django.utils.translation import gettext as _\n\n__all__ = (\n 'Condition',\n 'ConditionSet',\n)\n\n\nAND = 'and'\nOR = 'or'\n\n\ndef is_ruleset(data):\n \"\"\"\n Determine whether the given dictionary looks like a rule set.\n \"\"\"\n return type(data) is dict and len(data) == 1 and list(data.keys())[0] in (AND, OR)\n\n\nclass Condition:\n \"\"\"\n An individual conditional rule that evaluates a single attribute and its value.\n\n :param attr: The name of the attribute being evaluated\n :param value: The value being compared\n :param op: The logical operation to use when evaluating the value (default: 'eq')\n \"\"\"\n EQ = 'eq'\n GT = 'gt'\n GTE = 'gte'\n LT = 'lt'\n LTE = 'lte'\n IN = 'in'\n CONTAINS = 'contains'\n REGEX = 'regex'\n\n OPERATORS = (\n EQ, GT, GTE, LT, LTE, IN, CONTAINS, REGEX\n )\n\n TYPES = {\n str: (EQ, CONTAINS, REGEX),\n bool: (EQ, CONTAINS),\n int: (EQ, GT, GTE, LT, LTE, CONTAINS),\n float: (EQ, GT, GTE, LT, LTE, CONTAINS),\n list: (EQ, IN, CONTAINS),\n type(None): (EQ,)\n }\n\n def __init__(self, attr, value, op=EQ, negate=False):\n if op not in self.OPERATORS:\n raise ValueError(_(\"Unknown operator: {op}. Must be one of: {operators}\").format(\n op=op, operators=', '.join(self.OPERATORS)\n ))\n if type(value) not in self.TYPES:\n raise ValueError(_(\"Unsupported value type: {value}\").format(value=type(value)))\n if op not in self.TYPES[type(value)]:\n raise ValueError(_(\"Invalid type for {op} operation: {value}\").format(op=op, value=type(value)))\n\n self.attr = attr\n self.value = value\n self.eval_func = getattr(self, f'eval_{op}')\n self.negate = negate\n\n def eval(self, data):\n \"\"\"\n Evaluate the provided data to determine whether it matches the condition.\n \"\"\"\n def _get(obj, key):\n if isinstance(obj, list):\n return [dict.get(i, key) for i in obj]\n\n return dict.get(obj, key)\n\n try:\n value = functools.reduce(_get, self.attr.split('.'), data)\n except TypeError:\n # Invalid key path\n value = None\n result = self.eval_func(value)\n\n if self.negate:\n return not result\n return result\n\n # Equivalency\n\n def eval_eq(self, value):\n return value == self.value\n\n def eval_neq(self, value):\n return value != self.value\n\n # Numeric comparisons\n\n def eval_gt(self, value):\n return value > self.value\n\n def eval_gte(self, value):\n return value >= self.value\n\n def eval_lt(self, value):\n return value < self.value\n\n def eval_lte(self, value):\n return value <= self.value\n\n # Membership\n\n def eval_in(self, value):\n return value in self.value\n\n def eval_contains(self, value):\n return self.value in value\n\n # Regular expressions\n\n def eval_regex(self, value):\n return re.match(self.value, value) is not None\n\n\nclass ConditionSet:\n \"\"\"\n A set of one or more Condition to be evaluated per the prescribed logic (AND or OR). 
Example:\n\n {\"and\": [\n {\"attr\": \"foo\", \"op\": \"eq\", \"value\": 1},\n {\"attr\": \"bar\", \"op\": \"eq\", \"value\": 2, \"negate\": true}\n ]}\n\n :param ruleset: A dictionary mapping a logical operator to a list of conditional rules\n \"\"\"\n def __init__(self, ruleset):\n if type(ruleset) is not dict:\n raise ValueError(_(\"Ruleset must be a dictionary, not {ruleset}.\").format(ruleset=type(ruleset)))\n if len(ruleset) != 1:\n raise ValueError(_(\"Ruleset must have exactly one logical operator (found {ruleset})\").format(\n ruleset=len(ruleset)))\n\n # Determine the logic type\n logic = list(ruleset.keys())[0]\n if type(logic) is not str or logic.lower() not in (AND, OR):\n raise ValueError(_(\"Invalid logic type: {logic} (must be '{op_and}' or '{op_or}')\").format(\n logic=logic, op_and=AND, op_or=OR\n ))\n self.logic = logic.lower()\n\n # Compile the set of Conditions\n self.conditions = [\n ConditionSet(rule) if is_ruleset(rule) else Condition(**rule)\n for rule in ruleset[self.logic]\n ]\n\n def eval(self, data):\n \"\"\"\n Evaluate the provided data to determine whether it matches this set of conditions.\n \"\"\"\n func = any if self.logic == 'or' else all\n return func(d.eval(data) for d in self.conditions)\n", "path": "netbox/extras/conditions.py"}]} | 2,308 | 447 |
gh_patches_debug_51300 | rasdani/github-patches | git_diff | translate__pootle-5619 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Priority column is missing
Since the column reordering, we've lost the priority column in the vfolders table.
</issue>
<code>
[start of pootle/apps/virtualfolder/views.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 from django import forms
10 from django.http import Http404
11 from django.shortcuts import get_object_or_404
12 from django.urls import reverse
13 from django.utils.functional import cached_property
14
15 from pootle.core.browser import get_table_headings
16 from pootle.core.delegate import search_backend
17 from pootle.core.exceptions import Http400
18 from pootle.core.http import JsonResponse
19 from pootle.core.url_helpers import get_path_parts, split_pootle_path
20 from pootle.i18n.gettext import ugettext as _
21 from pootle_misc.util import ajax_required
22 from pootle_store.forms import UnitSearchForm
23 from pootle_store.unit.results import GroupedResults
24 from pootle_translationproject.views import TPTranslateView
25
26 from .delegate import vfolders_data_tool
27 from .models import VirtualFolder
28
29
30 def make_vfolder_dict(context, vf, stats):
31 lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]
32 base_url = reverse(
33 "pootle-vfolder-tp-translate",
34 kwargs=dict(
35 vfolder_name=vf,
36 language_code=lang_code,
37 project_code=proj_code))
38 return {
39 'href_translate': base_url,
40 'title': stats["title"],
41 'code': vf,
42 'priority': stats.get("priority"),
43 'is_grayed': not stats["isVisible"],
44 'stats': stats,
45 'icon': 'vfolder'}
46
47
48 class VFolderTPTranslateView(TPTranslateView):
49 display_vfolder_priority = False
50
51 @cached_property
52 def check_data(self):
53 return self.vfolders_data_view.vfolder_data_tool.get_checks(
54 user=self.request.user).get(self.vfolder_pk, {})
55
56 @cached_property
57 def vfolder(self):
58 return VirtualFolder.objects.get(name=self.kwargs["vfolder_name"])
59
60 @property
61 def vfolder_pk(self):
62 return self.vfolder.pk
63
64 def get_context_data(self, *args, **kwargs):
65 ctx = super(
66 VFolderTPTranslateView,
67 self).get_context_data(*args, **kwargs)
68 ctx["unit_api_root"] = reverse(
69 "vfolder-pootle-xhr-units",
70 kwargs=dict(vfolder_name=self.vfolder.name))
71 ctx["resource_path"] = (
72 "/".join(
73 ["++vfolder",
74 self.vfolder.name,
75 self.object.pootle_path.replace(self.ctx_path, "")]))
76 ctx["resource_path_parts"] = get_path_parts(ctx["resource_path"])
77 return ctx
78
79
80 @ajax_required
81 def get_vfolder_units(request, **kwargs):
82 """Gets source and target texts and its metadata.
83
84 :return: A JSON-encoded string containing the source and target texts
85 grouped by the store they belong to.
86
87 The optional `count` GET parameter defines the chunk size to
88 consider. The user's preference will be used by default.
89
90 When the `initial` GET parameter is present, a sorted list of
91 the result set ids will be returned too.
92 """
93 search_form = UnitSearchForm(request.GET, user=request.user)
94
95 vfolder = get_object_or_404(
96 VirtualFolder,
97 name=kwargs.get("vfolder_name"))
98
99 if not search_form.is_valid():
100 errors = search_form.errors.as_data()
101 if "path" in errors:
102 for error in errors["path"]:
103 if error.code == "max_length":
104 raise Http400(_('Path too long.'))
105 elif error.code == "required":
106 raise Http400(_('Arguments missing.'))
107 raise Http404(forms.ValidationError(search_form.errors).messages)
108
109 search_form.cleaned_data["vfolder"] = vfolder
110 backend = search_backend.get(VirtualFolder)(
111 request.user, **search_form.cleaned_data)
112 total, start, end, units_qs = backend.search()
113 return JsonResponse(
114 {'start': start,
115 'end': end,
116 'total': total,
117 'unitGroups': GroupedResults(units_qs).data})
118
119
120 class VFoldersDataView(object):
121
122 _table_fields = (
123 'name', 'progress', 'activity',
124 'total', 'need-translation',
125 'suggestions', 'critical')
126
127 def __init__(self, context, user, has_admin_access=False):
128 self.context = context
129 self.user = user
130 self.has_admin_access = has_admin_access
131
132 @property
133 def vfolder_data_tool(self):
134 return vfolders_data_tool.get(self.context.__class__)(self.context)
135
136 @property
137 def table_fields(self):
138 fields = self._table_fields
139 if self.has_admin_access:
140 fields += ('last-updated', )
141 return fields
142
143 @cached_property
144 def table_data(self):
145 ctx = {}
146 if len(self.all_stats) > 0:
147 ctx.update({
148 'children': {
149 'id': 'vfolders',
150 'fields': self.table_fields,
151 'headings': get_table_headings(self.table_fields),
152 'rows': self.table_items}})
153 return ctx
154
155 @cached_property
156 def all_stats(self):
157 return self.vfolder_data_tool.get_stats(user=self.user)
158
159 @cached_property
160 def stats(self):
161 return dict(children=self.all_stats)
162
163 @property
164 def table_items(self):
165 return [
166 make_vfolder_dict(self.context, *vf)
167 for vf
168 in self.all_stats.items()]
169
170 @cached_property
171 def has_data(self):
172 return (
173 self.vfolder_data_tool.all_stat_data.exists()
174 if self.vfolder_data_tool.show_all_to(self.user)
175 else self.vfolder_data_tool.stat_data.exists())
176
[end of pootle/apps/virtualfolder/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/virtualfolder/views.py b/pootle/apps/virtualfolder/views.py
--- a/pootle/apps/virtualfolder/views.py
+++ b/pootle/apps/virtualfolder/views.py
@@ -122,7 +122,7 @@
_table_fields = (
'name', 'progress', 'activity',
'total', 'need-translation',
- 'suggestions', 'critical')
+ 'suggestions', 'critical', 'priority')
def __init__(self, context, user, has_admin_access=False):
self.context = context
| {"golden_diff": "diff --git a/pootle/apps/virtualfolder/views.py b/pootle/apps/virtualfolder/views.py\n--- a/pootle/apps/virtualfolder/views.py\n+++ b/pootle/apps/virtualfolder/views.py\n@@ -122,7 +122,7 @@\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n- 'suggestions', 'critical')\n+ 'suggestions', 'critical', 'priority')\n \n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n", "issue": "Priority column is missing\nSince the column reordering we've lost the priority column in the vfolders table\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nfrom django import forms\nfrom django.http import Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.urls import reverse\nfrom django.utils.functional import cached_property\n\nfrom pootle.core.browser import get_table_headings\nfrom pootle.core.delegate import search_backend\nfrom pootle.core.exceptions import Http400\nfrom pootle.core.http import JsonResponse\nfrom pootle.core.url_helpers import get_path_parts, split_pootle_path\nfrom pootle.i18n.gettext import ugettext as _\nfrom pootle_misc.util import ajax_required\nfrom pootle_store.forms import UnitSearchForm\nfrom pootle_store.unit.results import GroupedResults\nfrom pootle_translationproject.views import TPTranslateView\n\nfrom .delegate import vfolders_data_tool\nfrom .models import VirtualFolder\n\n\ndef make_vfolder_dict(context, vf, stats):\n lang_code, proj_code = split_pootle_path(context.pootle_path)[:2]\n base_url = reverse(\n \"pootle-vfolder-tp-translate\",\n kwargs=dict(\n vfolder_name=vf,\n language_code=lang_code,\n project_code=proj_code))\n return {\n 'href_translate': base_url,\n 'title': stats[\"title\"],\n 'code': vf,\n 'priority': stats.get(\"priority\"),\n 'is_grayed': not stats[\"isVisible\"],\n 'stats': stats,\n 'icon': 'vfolder'}\n\n\nclass VFolderTPTranslateView(TPTranslateView):\n display_vfolder_priority = False\n\n @cached_property\n def check_data(self):\n return self.vfolders_data_view.vfolder_data_tool.get_checks(\n user=self.request.user).get(self.vfolder_pk, {})\n\n @cached_property\n def vfolder(self):\n return VirtualFolder.objects.get(name=self.kwargs[\"vfolder_name\"])\n\n @property\n def vfolder_pk(self):\n return self.vfolder.pk\n\n def get_context_data(self, *args, **kwargs):\n ctx = super(\n VFolderTPTranslateView,\n self).get_context_data(*args, **kwargs)\n ctx[\"unit_api_root\"] = reverse(\n \"vfolder-pootle-xhr-units\",\n kwargs=dict(vfolder_name=self.vfolder.name))\n ctx[\"resource_path\"] = (\n \"/\".join(\n [\"++vfolder\",\n self.vfolder.name,\n self.object.pootle_path.replace(self.ctx_path, \"\")]))\n ctx[\"resource_path_parts\"] = get_path_parts(ctx[\"resource_path\"])\n return ctx\n\n\n@ajax_required\ndef get_vfolder_units(request, **kwargs):\n \"\"\"Gets source and target texts and its metadata.\n\n :return: A JSON-encoded string containing the source and target texts\n grouped by the store they belong to.\n\n The optional `count` GET parameter defines the chunk size to\n consider. 
The user's preference will be used by default.\n\n When the `initial` GET parameter is present, a sorted list of\n the result set ids will be returned too.\n \"\"\"\n search_form = UnitSearchForm(request.GET, user=request.user)\n\n vfolder = get_object_or_404(\n VirtualFolder,\n name=kwargs.get(\"vfolder_name\"))\n\n if not search_form.is_valid():\n errors = search_form.errors.as_data()\n if \"path\" in errors:\n for error in errors[\"path\"]:\n if error.code == \"max_length\":\n raise Http400(_('Path too long.'))\n elif error.code == \"required\":\n raise Http400(_('Arguments missing.'))\n raise Http404(forms.ValidationError(search_form.errors).messages)\n\n search_form.cleaned_data[\"vfolder\"] = vfolder\n backend = search_backend.get(VirtualFolder)(\n request.user, **search_form.cleaned_data)\n total, start, end, units_qs = backend.search()\n return JsonResponse(\n {'start': start,\n 'end': end,\n 'total': total,\n 'unitGroups': GroupedResults(units_qs).data})\n\n\nclass VFoldersDataView(object):\n\n _table_fields = (\n 'name', 'progress', 'activity',\n 'total', 'need-translation',\n 'suggestions', 'critical')\n\n def __init__(self, context, user, has_admin_access=False):\n self.context = context\n self.user = user\n self.has_admin_access = has_admin_access\n\n @property\n def vfolder_data_tool(self):\n return vfolders_data_tool.get(self.context.__class__)(self.context)\n\n @property\n def table_fields(self):\n fields = self._table_fields\n if self.has_admin_access:\n fields += ('last-updated', )\n return fields\n\n @cached_property\n def table_data(self):\n ctx = {}\n if len(self.all_stats) > 0:\n ctx.update({\n 'children': {\n 'id': 'vfolders',\n 'fields': self.table_fields,\n 'headings': get_table_headings(self.table_fields),\n 'rows': self.table_items}})\n return ctx\n\n @cached_property\n def all_stats(self):\n return self.vfolder_data_tool.get_stats(user=self.user)\n\n @cached_property\n def stats(self):\n return dict(children=self.all_stats)\n\n @property\n def table_items(self):\n return [\n make_vfolder_dict(self.context, *vf)\n for vf\n in self.all_stats.items()]\n\n @cached_property\n def has_data(self):\n return (\n self.vfolder_data_tool.all_stat_data.exists()\n if self.vfolder_data_tool.show_all_to(self.user)\n else self.vfolder_data_tool.stat_data.exists())\n", "path": "pootle/apps/virtualfolder/views.py"}]} | 2,280 | 131 |
gh_patches_debug_1413 | rasdani/github-patches | git_diff | gratipay__gratipay.com-1314 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
reset.css doesn't load sometimes
@clone1018 saw this when we first started caching static assets. It's why I turned off static caching initially. Now static caching is back with #1245 and indeed we're seeing this again. :(

</issue>
<code>
[start of gittip/cache_static.py]
1 """
2 Handles caching of static resources.
3 """
4 import os
5 from calendar import timegm
6 from email.utils import parsedate
7 from wsgiref.handlers import format_date_time
8
9 from aspen import Response
10
11
12 def version_is_available(request):
13 """Return a boolean, whether we have the version they asked for.
14 """
15 path = request.line.uri.path
16 version = request.website.version
17 return path['version'] == version if 'version' in path else True
18
19
20 def version_is_dash(request):
21 """Return a boolean, whether the version they asked for is -.
22 """
23 return request.line.uri.path.get('version') == '-'
24
25
26 def get_last_modified(fs_path):
27 """Get the last modified time, as int, of the file pointed to by fs_path.
28 """
29 return int(os.path.getctime(fs_path))
30
31
32 def inbound(request):
33 """Try to serve a 304 for resources under assets/.
34 """
35 uri = request.line.uri
36
37 if not uri.startswith('/assets/'):
38
39 # Only apply to the assets/ directory.
40
41 return request
42
43 if version_is_dash(request):
44
45 # Special-case a version of '-' to never 304/404 here.
46
47 return request
48
49 if not version_is_available(request):
50
51 # Don't serve one version of a file as if it were another.
52
53 raise Response(404)
54
55 ims = request.headers.get('If-Modified-Since')
56 if not ims:
57
58 # This client doesn't care about when the file was modified.
59
60 return request
61
62 if request.fs.endswith('.spt'):
63
64 # This is a requests for a dynamic resource. Perhaps in the future
65 # we'll delegate to such resources to compute a sensible Last-Modified
66 # or E-Tag, but for now we punt. This is okay, because we expect to
67 # put our dynamic assets behind a CDN in production.
68
69 return request
70
71
72 try:
73 ims = timegm(parsedate(ims))
74 except:
75
76 # Malformed If-Modified-Since header. Proceed with the request.
77
78 return request
79
80 last_modified = get_last_modified(request.fs)
81 if ims < last_modified:
82
83 # The file has been modified since. Serve the whole thing.
84
85 return request
86
87
88 # Huzzah!
89 # =======
90 # We can serve a 304! :D
91
92 response = Response(304)
93 response.headers['Last-Modified'] = format_date_time(last_modified)
94 response.headers['Cache-Control'] = 'no-cache'
95 raise response
96
97
98 def outbound(response):
99 """Set caching headers for resources under assets/.
100 """
101 request = response.request
102 website = request.website
103 uri = request.line.uri
104
105 version = website.version
106 response.headers['X-Gittip-Version'] = version
107
108 if not uri.startswith('/assets/'):
109 return response
110
111 response.headers.cookie.clear()
112
113 if response.code == 304:
114 return response
115
116 if website.cache_static:
117
118 # https://developers.google.com/speed/docs/best-practices/caching
119 response.headers['Cache-Control'] = 'public'
120 response.headers['Vary'] = 'accept-encoding'
121
122 if 'version' in uri.path:
123 # This specific asset is versioned, so it's fine to cache it.
124 response.headers['Expires'] = 'Sun, 17 Jan 2038 19:14:07 GMT'
125 else:
126 # Asset is not versioned. Don't cache it, but set Last-Modified.
127 last_modified = get_last_modified(request.fs)
128 response.headers['Last-Modified'] = format_date_time(last_modified)
129
[end of gittip/cache_static.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gittip/cache_static.py b/gittip/cache_static.py
--- a/gittip/cache_static.py
+++ b/gittip/cache_static.py
@@ -111,6 +111,10 @@
response.headers.cookie.clear()
if response.code == 304:
+
+ # https://github.com/gittip/www.gittip.com/issues/1308
+ del response.headers['Content-Type']
+
return response
if website.cache_static:
| {"golden_diff": "diff --git a/gittip/cache_static.py b/gittip/cache_static.py\n--- a/gittip/cache_static.py\n+++ b/gittip/cache_static.py\n@@ -111,6 +111,10 @@\n response.headers.cookie.clear()\n \n if response.code == 304:\n+\n+ # https://github.com/gittip/www.gittip.com/issues/1308\n+ del response.headers['Content-Type']\n+\n return response\n \n if website.cache_static:\n", "issue": "reset.css doesn't load sometimes\n@clone1018 saw this when we first started caching static assets. It's why I turned off static caching initially. Now static caching is back with #1245 and indeed we're seeing this again. :(\n\n\n\n", "before_files": [{"content": "\"\"\"\nHandles caching of static resources.\n\"\"\"\nimport os\nfrom calendar import timegm\nfrom email.utils import parsedate\nfrom wsgiref.handlers import format_date_time\n\nfrom aspen import Response\n\n\ndef version_is_available(request):\n \"\"\"Return a boolean, whether we have the version they asked for.\n \"\"\"\n path = request.line.uri.path\n version = request.website.version\n return path['version'] == version if 'version' in path else True\n\n\ndef version_is_dash(request):\n \"\"\"Return a boolean, whether the version they asked for is -.\n \"\"\"\n return request.line.uri.path.get('version') == '-'\n\n\ndef get_last_modified(fs_path):\n \"\"\"Get the last modified time, as int, of the file pointed to by fs_path.\n \"\"\"\n return int(os.path.getctime(fs_path))\n\n\ndef inbound(request):\n \"\"\"Try to serve a 304 for resources under assets/.\n \"\"\"\n uri = request.line.uri\n\n if not uri.startswith('/assets/'):\n\n # Only apply to the assets/ directory.\n\n return request\n\n if version_is_dash(request):\n\n # Special-case a version of '-' to never 304/404 here.\n\n return request\n\n if not version_is_available(request):\n\n # Don't serve one version of a file as if it were another.\n\n raise Response(404)\n\n ims = request.headers.get('If-Modified-Since')\n if not ims:\n\n # This client doesn't care about when the file was modified.\n\n return request\n\n if request.fs.endswith('.spt'):\n\n # This is a requests for a dynamic resource. Perhaps in the future\n # we'll delegate to such resources to compute a sensible Last-Modified\n # or E-Tag, but for now we punt. This is okay, because we expect to\n # put our dynamic assets behind a CDN in production.\n\n return request\n\n\n try:\n ims = timegm(parsedate(ims))\n except:\n\n # Malformed If-Modified-Since header. Proceed with the request.\n\n return request\n\n last_modified = get_last_modified(request.fs)\n if ims < last_modified:\n\n # The file has been modified since. Serve the whole thing.\n\n return request\n\n\n # Huzzah!\n # =======\n # We can serve a 304! 
:D\n\n response = Response(304)\n response.headers['Last-Modified'] = format_date_time(last_modified)\n response.headers['Cache-Control'] = 'no-cache'\n raise response\n\n\ndef outbound(response):\n \"\"\"Set caching headers for resources under assets/.\n \"\"\"\n request = response.request\n website = request.website\n uri = request.line.uri\n\n version = website.version\n response.headers['X-Gittip-Version'] = version\n\n if not uri.startswith('/assets/'):\n return response\n\n response.headers.cookie.clear()\n\n if response.code == 304:\n return response\n\n if website.cache_static:\n\n # https://developers.google.com/speed/docs/best-practices/caching\n response.headers['Cache-Control'] = 'public'\n response.headers['Vary'] = 'accept-encoding'\n\n if 'version' in uri.path:\n # This specific asset is versioned, so it's fine to cache it.\n response.headers['Expires'] = 'Sun, 17 Jan 2038 19:14:07 GMT'\n else:\n # Asset is not versioned. Don't cache it, but set Last-Modified.\n last_modified = get_last_modified(request.fs)\n response.headers['Last-Modified'] = format_date_time(last_modified)\n", "path": "gittip/cache_static.py"}]} | 1,763 | 111 |
gh_patches_debug_28903 | rasdani/github-patches | git_diff | crytic__slither-252 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect source mappings because of certain (Unicode?) characters in comments
Certain characters (or scripts) in Solidity comments appear to cause incorrect source mappings.
For example, in `0x06012c8cf97bead5deae237070f9587f8e7a266d_KittyCore.sol`, the symbol that looks like an underscore in "email_protected":
```
/// @author Dieter Shirley <<a href="/cdn-cgi/l/email-protection" class="__cf_email__" data-cfemail="6004051405200118090f0d1a050e4e030f">[email_protected]</a>> (https://github.com/dete)
```
Similarly, the Asian characters in the comments below, from `0x5d0d76787d9d564061dd23f8209f804a3b8ad2f2_FoMo3Dlong.sol`, also cause source mapping problems:
```
struct Round {
uint256 plyr; // pID of player in lead, lead领导吗?
uint256 team; // tID of team in lead
uint256 end; // time ends/ended
bool ended; // has round end function been ran 这个开关值得研究下
uint256 strt; // time round started
uint256 keys; // keys
uint256 eth; // total eth in
uint256 pot; // eth to pot (during round) / final amount paid to winner (after round ends)
uint256 mask; // global mask
uint256 ico; // total eth sent in during ICO phase
uint256 icoGen; // total eth for gen during ICO phase
uint256 icoAvg; // average key price for ICO phase
}
```
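
A likely explanation, consistent with the eventual fix (which encodes the source to UTF-8 before measuring lengths), is that the compiler reports byte offsets while `_compute_line` counts characters, so any multi-byte text shifts the mapping. A minimal, standalone illustration of the mismatch:

```python
# Character count vs. UTF-8 byte count for a comment fragment from the report.
comment = "lead领导吗"
print(len(comment))                  # 7 characters
print(len(comment.encode("utf-8")))  # 13 bytes -- byte-based offsets diverge here
```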
</issue>
<code>
[start of slither/core/source_mapping/source_mapping.py]
1 import re
2 import os
3 from slither.core.context.context import Context
4
5 class SourceMapping(Context):
6
7 def __init__(self):
8 super(SourceMapping, self).__init__()
9 self._source_mapping = None
10
11 @property
12 def source_mapping(self):
13 return self._source_mapping
14
15 @staticmethod
16 def _compute_line(source_code, start, length):
17 """
18 Compute line(s) numbers and starting/ending columns
19 from a start/end offset. All numbers start from 1.
20
21 Not done in an efficient way
22 """
23 total_length = len(source_code)
24 source_code = source_code.splitlines(True)
25 counter = 0
26 i = 0
27 lines = []
28 starting_column = None
29 ending_column = None
30 while counter < total_length:
31 # Determine the length of the line, and advance the line number
32 lineLength = len(source_code[i])
33 i = i + 1
34
35 # Determine our column numbers.
36 if starting_column is None and counter + lineLength > start:
37 starting_column = (start - counter) + 1
38 if starting_column is not None and ending_column is None and counter + lineLength > start + length:
39 ending_column = ((start + length) - counter) + 1
40
41 # Advance the current position counter, and determine line numbers.
42 counter += lineLength
43 if counter > start:
44 lines.append(i)
45
46 # If our advanced position for the next line is out of range, stop.
47 if counter > start + length:
48 break
49
50 return (lines, starting_column, ending_column)
51
52 @staticmethod
53 def _convert_source_mapping(offset, slither):
54 '''
55 Convert a text offset to a real offset
56 see https://solidity.readthedocs.io/en/develop/miscellaneous.html#source-mappings
57 Returns:
58 (dict): {'start':0, 'length':0, 'filename': 'file.sol'}
59 '''
60 sourceUnits = slither.source_units
61
62 position = re.findall('([0-9]*):([0-9]*):([-]?[0-9]*)', offset)
63 if len(position) != 1:
64 return {}
65
66 s, l, f = position[0]
67 s = int(s)
68 l = int(l)
69 f = int(f)
70
71 if f not in sourceUnits:
72 return {'start':s, 'length':l}
73 filename_used = sourceUnits[f]
74 filename_absolute = None
75 filename_relative = None
76 filename_short = None
77
78 lines = []
79
80 # If possible, convert the filename to its absolute/relative version
81 if slither.crytic_compile:
82 filenames = slither.crytic_compile.filename_lookup(filename_used)
83 filename_absolute = filenames.absolute
84 filename_relative = filenames.relative
85 filename_short = filenames.short
86
87 if filename_absolute in slither.source_code:
88 filename = filename_absolute
89 elif filename_relative in slither.source_code:
90 filename = filename_relative
91 elif filename_short in slither.source_code:
92 filename = filename_short
93 else:#
94 filename = filename_used.used
95 else:
96 filename = filename_used
97
98 if filename in slither.source_code:
99 source_code = slither.source_code[filename]
100 (lines, starting_column, ending_column) = SourceMapping._compute_line(source_code,
101 s,
102 l)
103 else:
104 (lines, starting_column, ending_column) = ([], None, None)
105
106
107 return {'start':s,
108 'length':l,
109 'filename_used': filename_used,
110 'filename_relative': filename_relative,
111 'filename_absolute': filename_absolute,
112 'filename_short': filename_short,
113 'lines' : lines,
114 'starting_column': starting_column,
115 'ending_column': ending_column
116 }
117
118 def set_offset(self, offset, slither):
119 if isinstance(offset, dict):
120 self._source_mapping = offset
121 else:
122 self._source_mapping = self._convert_source_mapping(offset, slither)
123
124
125 @property
126 def source_mapping_str(self):
127
128 lines = self.source_mapping.get('lines', None)
129 if not lines:
130 lines = ''
131 elif len(lines) == 1:
132 lines = '#{}'.format(lines[0])
133 else:
134 lines = '#{}-{}'.format(lines[0], lines[-1])
135 return '{}{}'.format(self.source_mapping['filename_short'], lines)
136
137
[end of slither/core/source_mapping/source_mapping.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/core/source_mapping/source_mapping.py b/slither/core/source_mapping/source_mapping.py
--- a/slither/core/source_mapping/source_mapping.py
+++ b/slither/core/source_mapping/source_mapping.py
@@ -20,6 +20,7 @@
Not done in an efficient way
"""
+ source_code = source_code.encode('utf-8')
total_length = len(source_code)
source_code = source_code.splitlines(True)
counter = 0
@@ -29,17 +30,18 @@
ending_column = None
while counter < total_length:
# Determine the length of the line, and advance the line number
- lineLength = len(source_code[i])
+ line_content = source_code[i]
+ line_length = len(line_content)
i = i + 1
# Determine our column numbers.
- if starting_column is None and counter + lineLength > start:
+ if starting_column is None and counter + line_length > start:
starting_column = (start - counter) + 1
- if starting_column is not None and ending_column is None and counter + lineLength > start + length:
+ if starting_column is not None and ending_column is None and counter + line_length > start + length:
ending_column = ((start + length) - counter) + 1
# Advance the current position counter, and determine line numbers.
- counter += lineLength
+ counter += line_length
if counter > start:
lines.append(i)
| {"golden_diff": "diff --git a/slither/core/source_mapping/source_mapping.py b/slither/core/source_mapping/source_mapping.py\n--- a/slither/core/source_mapping/source_mapping.py\n+++ b/slither/core/source_mapping/source_mapping.py\n@@ -20,6 +20,7 @@\n \n Not done in an efficient way\n \"\"\"\n+ source_code = source_code.encode('utf-8')\n total_length = len(source_code)\n source_code = source_code.splitlines(True)\n counter = 0\n@@ -29,17 +30,18 @@\n ending_column = None\n while counter < total_length:\n # Determine the length of the line, and advance the line number\n- lineLength = len(source_code[i])\n+ line_content = source_code[i]\n+ line_length = len(line_content)\n i = i + 1\n \n # Determine our column numbers.\n- if starting_column is None and counter + lineLength > start:\n+ if starting_column is None and counter + line_length > start:\n starting_column = (start - counter) + 1\n- if starting_column is not None and ending_column is None and counter + lineLength > start + length:\n+ if starting_column is not None and ending_column is None and counter + line_length > start + length:\n ending_column = ((start + length) - counter) + 1\n \n # Advance the current position counter, and determine line numbers.\n- counter += lineLength\n+ counter += line_length\n if counter > start:\n lines.append(i)\n", "issue": "Incorrect source mappings because of certain (Unicode?) characters in comments\nCertain characters (or scripts) in Solidity comments appear to cause incorrect source mappings.\r\n\r\nFor example, in `0x06012c8cf97bead5deae237070f9587f8e7a266d_KittyCore.sol`, the symbol that looks like underscore in \"email_protected\":\r\n```\r\n/// @author Dieter Shirley <<a href=\"/cdn-cgi/l/email-protection\" class=\"__cf_email__\" data-cfemail=\"6004051405200118090f0d1a050e4e030f\">[email_protected]</a>> (https://github.com/dete) \r\n```\r\nSimilarly, the Asian characters in below comments from `0x5d0d76787d9d564061dd23f8209f804a3b8ad2f2_FoMo3Dlong.sol` also cause source mapping problems:\r\n\r\n```\r\nstruct Round {\r\n uint256 plyr; // pID of player in lead\uff0c lead\u9886\u5bfc\u5417\uff1f \r\n uint256 team; // tID of team in lead \r\n uint256 end; // time ends/ended \r\n bool ended; // has round end function been ran \u8fd9\u4e2a\u5f00\u5173\u503c\u5f97\u7814\u7a76\u4e0b \r\n\tuint256 strt; // time round started \r\n\tuint256 keys; // keys \r\n\tuint256 eth; // total eth in \r\n\tuint256 pot; // eth to pot (during round) / final amount paid to winner (after round ends) \r\n uint256 mask; // global mask \r\n uint256 ico; // total eth sent in during ICO phase \r\n uint256 icoGen; // total eth for gen during ICO phase \r\n uint256 icoAvg; // average key price for ICO phase \r\n }\r\n```\n", "before_files": [{"content": "import re\nimport os\nfrom slither.core.context.context import Context\n\nclass SourceMapping(Context):\n\n def __init__(self):\n super(SourceMapping, self).__init__()\n self._source_mapping = None\n\n @property\n def source_mapping(self):\n return self._source_mapping\n\n @staticmethod\n def _compute_line(source_code, start, length):\n \"\"\"\n Compute line(s) numbers and starting/ending columns\n from a start/end offset. 
All numbers start from 1.\n\n Not done in an efficient way\n \"\"\"\n total_length = len(source_code)\n source_code = source_code.splitlines(True)\n counter = 0\n i = 0\n lines = []\n starting_column = None\n ending_column = None\n while counter < total_length:\n # Determine the length of the line, and advance the line number\n lineLength = len(source_code[i])\n i = i + 1\n\n # Determine our column numbers.\n if starting_column is None and counter + lineLength > start:\n starting_column = (start - counter) + 1\n if starting_column is not None and ending_column is None and counter + lineLength > start + length:\n ending_column = ((start + length) - counter) + 1\n\n # Advance the current position counter, and determine line numbers.\n counter += lineLength\n if counter > start:\n lines.append(i)\n\n # If our advanced position for the next line is out of range, stop.\n if counter > start + length:\n break\n\n return (lines, starting_column, ending_column)\n\n @staticmethod\n def _convert_source_mapping(offset, slither):\n '''\n Convert a text offset to a real offset\n see https://solidity.readthedocs.io/en/develop/miscellaneous.html#source-mappings\n Returns:\n (dict): {'start':0, 'length':0, 'filename': 'file.sol'}\n '''\n sourceUnits = slither.source_units\n\n position = re.findall('([0-9]*):([0-9]*):([-]?[0-9]*)', offset)\n if len(position) != 1:\n return {}\n\n s, l, f = position[0]\n s = int(s)\n l = int(l)\n f = int(f)\n\n if f not in sourceUnits:\n return {'start':s, 'length':l}\n filename_used = sourceUnits[f]\n filename_absolute = None\n filename_relative = None\n filename_short = None\n\n lines = []\n\n # If possible, convert the filename to its absolute/relative version\n if slither.crytic_compile:\n filenames = slither.crytic_compile.filename_lookup(filename_used)\n filename_absolute = filenames.absolute\n filename_relative = filenames.relative\n filename_short = filenames.short\n\n if filename_absolute in slither.source_code:\n filename = filename_absolute\n elif filename_relative in slither.source_code:\n filename = filename_relative\n elif filename_short in slither.source_code:\n filename = filename_short\n else:#\n filename = filename_used.used\n else:\n filename = filename_used\n\n if filename in slither.source_code:\n source_code = slither.source_code[filename]\n (lines, starting_column, ending_column) = SourceMapping._compute_line(source_code,\n s,\n l)\n else:\n (lines, starting_column, ending_column) = ([], None, None)\n\n\n return {'start':s,\n 'length':l,\n 'filename_used': filename_used,\n 'filename_relative': filename_relative,\n 'filename_absolute': filename_absolute,\n 'filename_short': filename_short,\n 'lines' : lines,\n 'starting_column': starting_column,\n 'ending_column': ending_column\n }\n\n def set_offset(self, offset, slither):\n if isinstance(offset, dict):\n self._source_mapping = offset\n else:\n self._source_mapping = self._convert_source_mapping(offset, slither)\n\n\n @property\n def source_mapping_str(self):\n\n lines = self.source_mapping.get('lines', None)\n if not lines:\n lines = ''\n elif len(lines) == 1:\n lines = '#{}'.format(lines[0])\n else:\n lines = '#{}-{}'.format(lines[0], lines[-1])\n return '{}{}'.format(self.source_mapping['filename_short'], lines)\n\n", "path": "slither/core/source_mapping/source_mapping.py"}]} | 2,265 | 332 |
gh_patches_debug_30152 | rasdani/github-patches | git_diff | wagtail__wagtail-9973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Setting WAGTAILIMAGES_RENDITION_STORAGE generates a migration in wagtailimages
### Issue Summary
Running `./manage.py makemigrations` while WAGTAILIMAGES_RENDITION_STORAGE is set to something other than the default storage causes a migration to be generated within the wagtailimages app
### Steps to Reproduce
1. (for example) Start a new project with `wagtail start myproject`
2. Run `./manage.py migrate` and `./manage.py makemigrations`; this outputs "No changes detected"
3. `pip install django-storages`
4. Add the line `WAGTAILIMAGES_RENDITION_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"` to myproject/settings/base.py
5. Run `./manage.py makemigrations`; this generates a migration `wagtail/images/migrations/0026_alter_rendition_file.py` that adds a `storage` argument to the Rendition.file field.
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes
### Technical details
- Python version: 3.8.0
- Django version: 4.1.3
- Wagtail version: main (4.2a0, 4b770784ca68f22d5ea58ecbd01e5c8c13882a3d)
</issue>
<code>
[start of wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py]
1 # Generated by Django 4.0.7 on 2022-08-10 16:26
2
3 from django.db import migrations
4 import wagtail.images.models
5
6
7 class Migration(migrations.Migration):
8
9 dependencies = [
10 ("wagtailimages", "0024_index_image_file_hash"),
11 ]
12
13 operations = [
14 migrations.AlterField(
15 model_name="image",
16 name="file",
17 field=wagtail.images.models.WagtailImageField(
18 height_field="height",
19 upload_to=wagtail.images.models.get_upload_to,
20 verbose_name="file",
21 width_field="width",
22 ),
23 ),
24 migrations.AlterField(
25 model_name="rendition",
26 name="file",
27 field=wagtail.images.models.WagtailImageField(
28 height_field="height",
29 upload_to=wagtail.images.models.get_rendition_upload_to,
30 width_field="width",
31 ),
32 ),
33 ]
34
[end of wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py
--- a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py
+++ b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py
@@ -1,5 +1,6 @@
# Generated by Django 4.0.7 on 2022-08-10 16:26
+from django import VERSION as DJANGO_VERSION
from django.db import migrations
import wagtail.images.models
@@ -10,6 +11,19 @@
("wagtailimages", "0024_index_image_file_hash"),
]
+ rendition_file_options = {
+ "height_field": "height",
+ "upload_to": wagtail.images.models.get_rendition_upload_to,
+ "width_field": "width",
+ }
+ # See https://code.djangoproject.com/ticket/34192 - prior to Django 4.2, a callable storage
+ # argument that returns default_storage would be incorrectly omitted from the deconstructed
+ # field. We need to match that behaviour and include/omit it accordingly to prevent
+ # makemigrations from seeing a difference and generating a spurious migration in
+ # wagtail.images.
+ if DJANGO_VERSION >= (4, 2):
+ rendition_file_options["storage"] = wagtail.images.models.get_rendition_storage
+
operations = [
migrations.AlterField(
model_name="image",
@@ -24,10 +38,6 @@
migrations.AlterField(
model_name="rendition",
name="file",
- field=wagtail.images.models.WagtailImageField(
- height_field="height",
- upload_to=wagtail.images.models.get_rendition_upload_to,
- width_field="width",
- ),
+ field=wagtail.images.models.WagtailImageField(**rendition_file_options),
),
]
| {"golden_diff": "diff --git a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py\n--- a/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py\n+++ b/wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py\n@@ -1,5 +1,6 @@\n # Generated by Django 4.0.7 on 2022-08-10 16:26\r\n \r\n+from django import VERSION as DJANGO_VERSION\r\n from django.db import migrations\r\n import wagtail.images.models\r\n \r\n@@ -10,6 +11,19 @@\n (\"wagtailimages\", \"0024_index_image_file_hash\"),\r\n ]\r\n \r\n+ rendition_file_options = {\r\n+ \"height_field\": \"height\",\r\n+ \"upload_to\": wagtail.images.models.get_rendition_upload_to,\r\n+ \"width_field\": \"width\",\r\n+ }\r\n+ # See https://code.djangoproject.com/ticket/34192 - prior to Django 4.2, a callable storage\r\n+ # argument that returns default_storage would be incorrectly omitted from the deconstructed\r\n+ # field. We need to match that behaviour and include/omit it accordingly to prevent\r\n+ # makemigrations from seeing a difference and generating a spurious migration in\r\n+ # wagtail.images.\r\n+ if DJANGO_VERSION >= (4, 2):\r\n+ rendition_file_options[\"storage\"] = wagtail.images.models.get_rendition_storage\r\n+\r\n operations = [\r\n migrations.AlterField(\r\n model_name=\"image\",\r\n@@ -24,10 +38,6 @@\n migrations.AlterField(\r\n model_name=\"rendition\",\r\n name=\"file\",\r\n- field=wagtail.images.models.WagtailImageField(\r\n- height_field=\"height\",\r\n- upload_to=wagtail.images.models.get_rendition_upload_to,\r\n- width_field=\"width\",\r\n- ),\r\n+ field=wagtail.images.models.WagtailImageField(**rendition_file_options),\r\n ),\r\n ]\n", "issue": "Setting WAGTAILIMAGES_RENDITION_STORAGE generates a migration in wagtailimages\n### Issue Summary\r\n\r\nRunning `./manage.py makemigrations` while WAGTAILIMAGES_RENDITION_STORAGE is set to something other than the default storage causes a migration to be generated within the wagtailimages app\r\n\r\n### Steps to Reproduce\r\n\r\n1. (for example) Start a new project with `wagtail start myproject`\r\n2. Run `./manage.py migrate` and `./manage.py makemigrations`; this outputs \"No changes detected\"\r\n3. `pip install django-storages`\r\n4. Add the line `WAGTAILIMAGES_RENDITION_STORAGE = \"storages.backends.s3boto3.S3Boto3Storage\"` to myproject/settings/base.py\r\n5. 
Run `./manage.py makemigrations`; this generates a migration `wagtail/images/migrations/0026_alter_rendition_file.py` that adds a `storage` argument to the Rendition.file field.\r\n\r\n- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: yes\r\n\r\n### Technical details\r\n\r\n- Python version: 3.8.0\r\n- Django version: 4.1.3\r\n- Wagtail version: main (4.2a0, 4b770784ca68f22d5ea58ecbd01e5c8c13882a3d)\r\n\n", "before_files": [{"content": "# Generated by Django 4.0.7 on 2022-08-10 16:26\r\n\r\nfrom django.db import migrations\r\nimport wagtail.images.models\r\n\r\n\r\nclass Migration(migrations.Migration):\r\n\r\n dependencies = [\r\n (\"wagtailimages\", \"0024_index_image_file_hash\"),\r\n ]\r\n\r\n operations = [\r\n migrations.AlterField(\r\n model_name=\"image\",\r\n name=\"file\",\r\n field=wagtail.images.models.WagtailImageField(\r\n height_field=\"height\",\r\n upload_to=wagtail.images.models.get_upload_to,\r\n verbose_name=\"file\",\r\n width_field=\"width\",\r\n ),\r\n ),\r\n migrations.AlterField(\r\n model_name=\"rendition\",\r\n name=\"file\",\r\n field=wagtail.images.models.WagtailImageField(\r\n height_field=\"height\",\r\n upload_to=wagtail.images.models.get_rendition_upload_to,\r\n width_field=\"width\",\r\n ),\r\n ),\r\n ]\r\n", "path": "wagtail/images/migrations/0025_alter_image_file_alter_rendition_file.py"}]} | 1,148 | 482 |
gh_patches_debug_30208 | rasdani/github-patches | git_diff | microsoft__DeepSpeed-3348 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Size of saved model checkpoint becomes much larger after deepspeed.initialize when using ZeRO-2
**Describe the bug**
Originally reported [here](https://github.com/huggingface/transformers/issues/22822). @stas00 @tjruwase
For some models, the size of model checkpoints saved by `model.save_pretrained()` becomes much larger after calling `deepspeed.initialize`. See examples below.
**To Reproduce**
```python
from transformers import AutoModelForCausalLM
import deepspeed
ds_config = {
"optimizer": {
"type": "AdamW",
},
"zero_optimization": {
"stage": 2,
"offload_optimizer": {
"device": "cpu",
"pin_memory": True
},
"allgather_partitions": True,
"allgather_bucket_size": 2e8,
"overlap_comm": True,
"reduce_scatter": True,
"reduce_bucket_size": 2e8,
"contiguous_gradients": True
},
"offload_optimizer": {
"device": "cpu",
"pin_memory": True
},
"train_batch_size": 1,
"train_micro_batch_size_per_gpu": 1
}
model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
model.save_pretrained("before")
deepspeed_engine, _, _, _ = deepspeed.initialize(model=model, config_params=ds_config)
deepspeed_engine.module.save_pretrained("after")
```
File sizes:
```bash
du -a -h --max-depth=1 before/
512 before/config.json
32K before/pytorch_model.bin.index.json
9.2G before/pytorch_model-00001-of-00003.bin
9.3G before/pytorch_model-00002-of-00003.bin
6.7G before/pytorch_model-00003-of-00003.bin
512 before/generation_config.json
26G before/
du -a -h --max-depth=1 after/
512 after/config.json
32K after/pytorch_model.bin.index.json
26G after/pytorch_model-00001-of-00003.bin
26G after/pytorch_model-00002-of-00003.bin
26G after/pytorch_model-00003-of-00003.bin
512 after/generation_config.json
76G after/
```
This issue does not always occur; for example, `gpt2` does not have this problem, but `decapoda-research/llama-7b-hf` and `decapoda-research/llama-13b-hf`, which I tested, do have this issue.
This can be fixed by re-cloning the state dict tensors before saving:
```python
state_dict = deepspeed_engine.module.state_dict()
state_dict = type(state_dict)(
{k: v.clone()
for k,
v in state_dict.items()})
deepspeed_engine.module.save_pretrained("after_fixed", state_dict=state_dict)
```
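
The cloning matters because `torch.save()` serializes a tensor's entire underlying storage, and DeepSpeed often hands out parameters as views into large flattened buffers, so each shard ends up embedding far more data than its own elements. A minimal, self-contained illustration of that serialization behavior (file names are placeholders):

```python
import os
import torch

buffer = torch.zeros(1_000_000)       # large flat buffer
view = buffer[:10]                    # small tensor sharing that storage

torch.save(view, "view.pt")           # writes the whole 1M-element storage
torch.save(view.clone(), "clone.pt")  # writes only the 10 cloned elements

print(os.path.getsize("view.pt"), os.path.getsize("clone.pt"))
```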
**Expected behavior**
The saved model size should be unchanged after `deepspeed.initialize`
**System info (please complete the following information):**
- deepspeed: 0.8.3
- transformers version: 4.28.0.dev0
- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.13.3
- Safetensors version: not installed
- PyTorch version (GPU?): 1.12.1+cu116 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
</issue>
<code>
[start of deepspeed/checkpoint/utils.py]
1 # Copyright (c) Microsoft Corporation.
2 # SPDX-License-Identifier: Apache-2.0
3
4 # DeepSpeed Team
5
6 import os
7 from .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)
8
9
10 def get_model_ckpt_name_for_rank(base_folder, mp_rank_str):
11 ckpt_name = os.path.join(
12 base_folder,
13 MODEL_FILE_PREFIX + mp_rank_str + MODEL_FILE_SUFFIX,
14 )
15 return ckpt_name
16
17
18 def get_zero_ckpt_name_for_rank(base_folder, dp_rank, mp_rank):
19 zero_prefix = f'{ZERO_FILE_PREFIX}{dp_rank}'
20 mp_rank_string = f'_{MODEL_FILE_PREFIX}{mp_rank:02d}'
21 zero_ckpt_name = os.path.join(
22 base_folder,
23 zero_prefix + mp_rank_string + OPTIM_FILE_SUFFIX,
24 )
25 return zero_ckpt_name
26
27
28 def get_layer_ckpt_name_for_rank(base_folder, layer_id, tp_rank):
29 ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'
30 ckpt_path = os.path.join(base_folder, ckpt_file)
31 return ckpt_path
32
[end of deepspeed/checkpoint/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/deepspeed/checkpoint/utils.py b/deepspeed/checkpoint/utils.py
--- a/deepspeed/checkpoint/utils.py
+++ b/deepspeed/checkpoint/utils.py
@@ -4,6 +4,7 @@
# DeepSpeed Team
import os
+import torch
from .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)
@@ -29,3 +30,33 @@
ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'
ckpt_path = os.path.join(base_folder, ckpt_file)
return ckpt_path
+
+
+# We pass cloned tensors to torch.save() to avoid checkpoint bloat that occurs when torch.save()
+# saves the underlying storage rather than the slice of the storage corresponding to individual tensors.
+# This is a problem in DeepSpeed because we often allocate tensors using slices of large flattened buffers.
+# Tensor cloning helps to avoid this problem because the storage of cloned tensors are closer to the true size.
+# It is expected that the garbage collector will reclaim the cloned tensor storage to avoid memory bloat.
+# See https://pytorch.org/docs/stable/notes/serialization.html#preserve-storage-sharing
+def clone_tensors_for_torch_save(item, device=torch.device('cpu')):
+ """
+ Returns a copy of ``item`` with all enclosed tensors replaced by clones on a specified device.
+ Works on individual tensors, and tensors contained/nested in lists, tuples, and dicts.
+
+ Parameters:
+ - ``item``: tensor to clone or (possibly nested) container of tensors to clone.
+ - ``device``: target device (defaults to 'cpu')
+
+ Returns:
+ - copy of ``item`` with cloned tensors on target device
+ """
+ if torch.is_tensor(item):
+ return item.detach().clone().to(device)
+ elif isinstance(item, list):
+ return [clone_tensors_for_torch_save(v, device) for v in item]
+ elif isinstance(item, tuple):
+ return tuple([clone_tensors_for_torch_save(v, device) for v in item])
+ elif isinstance(item, dict):
+ return type(item)({k: clone_tensors_for_torch_save(v, device) for k, v in item.items()})
+ else:
+ return item
| {"golden_diff": "diff --git a/deepspeed/checkpoint/utils.py b/deepspeed/checkpoint/utils.py\n--- a/deepspeed/checkpoint/utils.py\n+++ b/deepspeed/checkpoint/utils.py\n@@ -4,6 +4,7 @@\n # DeepSpeed Team\n \n import os\n+import torch\n from .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)\n \n \n@@ -29,3 +30,33 @@\n ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'\n ckpt_path = os.path.join(base_folder, ckpt_file)\n return ckpt_path\n+\n+\n+# We pass cloned tensors to torch.save() to avoid checkpoint bloat that occurs when torch.save()\n+# saves the underlying storage rather than the slice of the storage corresponding to individual tensors.\n+# This is a problem in DeepSpeed because we often allocate tensors using slices of large flattened buffers.\n+# Tensor cloning helps to avoid this problem because the storage of cloned tensors are closer to the true size.\n+# It is expected that the garbage collector will reclaim the cloned tensor storage to avoid memory bloat.\n+# See https://pytorch.org/docs/stable/notes/serialization.html#preserve-storage-sharing\n+def clone_tensors_for_torch_save(item, device=torch.device('cpu')):\n+ \"\"\"\n+ Returns a copy of ``item`` with all enclosed tensors replaced by clones on a specified device.\n+ Works on individual tensors, and tensors contained/nested in lists, tuples, and dicts.\n+\n+ Parameters:\n+ - ``item``: tensor to clone or (possibly nested) container of tensors to clone.\n+ - ``device``: target device (defaults to 'cpu')\n+\n+ Returns:\n+ - copy of ``item`` with cloned tensors on target device\n+ \"\"\"\n+ if torch.is_tensor(item):\n+ return item.detach().clone().to(device)\n+ elif isinstance(item, list):\n+ return [clone_tensors_for_torch_save(v, device) for v in item]\n+ elif isinstance(item, tuple):\n+ return tuple([clone_tensors_for_torch_save(v, device) for v in item])\n+ elif isinstance(item, dict):\n+ return type(item)({k: clone_tensors_for_torch_save(v, device) for k, v in item.items()})\n+ else:\n+ return item\n", "issue": "[BUG] Size of saved model checkpoint becomes much larger after deepspeed.initialize when using ZeRO-2\n**Describe the bug**\r\nOriginally reported [here](https://github.com/huggingface/transformers/issues/22822). @stas00 @tjruwase\r\n\r\nFor some models, the size of model checkpoints saved by `model.save_prtrained()` becomes much larger after calling `deepspeed.initialize`. 
See examples below.\r\n\r\n\r\n**To Reproduce**\r\n```python\r\nfrom transformers import AutoModelForCausalLM\r\nimport deepspeed\r\n\r\nds_config = {\r\n \"optimizer\": {\r\n \"type\": \"AdamW\",\r\n },\r\n \"zero_optimization\": {\r\n \"stage\": 2,\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": True\r\n },\r\n \"allgather_partitions\": True,\r\n \"allgather_bucket_size\": 2e8,\r\n \"overlap_comm\": True,\r\n \"reduce_scatter\": True,\r\n \"reduce_bucket_size\": 2e8,\r\n \"contiguous_gradients\": True\r\n },\r\n \"offload_optimizer\": {\r\n \"device\": \"cpu\",\r\n \"pin_memory\": True\r\n },\r\n \"train_batch_size\": 1,\r\n \"train_micro_batch_size_per_gpu\": 1\r\n}\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"decapoda-research/llama-7b-hf\")\r\nmodel.save_pretrained(\"before\")\r\ndeepspeed_engine, _, _, _ = deepspeed.initialize(model=model, config_params=ds_config)\r\ndeepspeed_engine.module.save_pretrained(\"after\")\r\n```\r\n\r\nFile sizes:\r\n\r\n```bash\r\ndu -a -h --max-depth=1 before/\r\n512 before/config.json\r\n32K before/pytorch_model.bin.index.json\r\n9.2G before/pytorch_model-00001-of-00003.bin\r\n9.3G before/pytorch_model-00002-of-00003.bin\r\n6.7G before/pytorch_model-00003-of-00003.bin\r\n512 before/generation_config.json\r\n26G before/\r\n\r\ndu -a -h --max-depth=1 after/\r\n512 after/config.json\r\n32K after/pytorch_model.bin.index.json\r\n26G after/pytorch_model-00001-of-00003.bin\r\n26G after/pytorch_model-00002-of-00003.bin\r\n26G after/pytorch_model-00003-of-00003.bin\r\n512 after/generation_config.json\r\n76G after/\r\n```\r\n\r\nThis issue is not always occurred, for example, `gpt2` does not have this problem. But I tested `decapoda-research/llama-7b-hf`, and `decapoda-research/llama-13b-hf` have this issue.\r\n\r\nThis can be fixed by re-clone states before the saving:\r\n```python\r\nstate_dict = deepspeed_engine.module.state_dict()\r\nstate_dict = type(state_dict)(\r\n {k: v.clone()\r\n for k,\r\n v in state_dict.items()})\r\ndeepspeed_engine.module.save_pretrained(\"after_fixed\", state_dict=state_dict)\r\n```\r\n\r\n**Expected behavior**\r\nThe saved model size should be unchanged after `deepspeed.initialize`\r\n\r\n**System info (please complete the following information):**\r\n- deepspeed: 0.8.3\r\n- transformers version: 4.28.0.dev0\r\n- Platform: Linux-4.18.0-372.32.1.el8_6.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.16\r\n- Huggingface_hub version: 0.13.3\r\n- Safetensors version: not installed\r\n- PyTorch version (GPU?): 1.12.1+cu116 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: yes\r\n- Using distributed or parallel set-up in script?: yes\r\n\r\n\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n# SPDX-License-Identifier: Apache-2.0\n\n# DeepSpeed Team\n\nimport os\nfrom .constants import (MODEL_FILE_PREFIX, MODEL_FILE_SUFFIX, OPTIM_FILE_SUFFIX, ZERO_FILE_PREFIX)\n\n\ndef get_model_ckpt_name_for_rank(base_folder, mp_rank_str):\n ckpt_name = os.path.join(\n base_folder,\n MODEL_FILE_PREFIX + mp_rank_str + MODEL_FILE_SUFFIX,\n )\n return ckpt_name\n\n\ndef get_zero_ckpt_name_for_rank(base_folder, dp_rank, mp_rank):\n zero_prefix = f'{ZERO_FILE_PREFIX}{dp_rank}'\n mp_rank_string = f'_{MODEL_FILE_PREFIX}{mp_rank:02d}'\n zero_ckpt_name = os.path.join(\n base_folder,\n zero_prefix + mp_rank_string + OPTIM_FILE_SUFFIX,\n )\n 
return zero_ckpt_name\n\n\ndef get_layer_ckpt_name_for_rank(base_folder, layer_id, tp_rank):\n ckpt_file = f'{layer_id}-model_{tp_rank:02d}{MODEL_FILE_SUFFIX}'\n ckpt_path = os.path.join(base_folder, ckpt_file)\n return ckpt_path\n", "path": "deepspeed/checkpoint/utils.py"}]} | 1,792 | 504 |
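A minimal usage sketch of the `clone_tensors_for_torch_save` helper added in the diff above; the flat buffer, tensor names and output path are hypothetical, and it assumes the patched `deepspeed.checkpoint.utils` module is importable.

```python
import torch

from deepspeed.checkpoint.utils import clone_tensors_for_torch_save

# A flat buffer whose slices back several "parameter" tensors, mirroring the
# storage-sharing situation described in the helper's docstring.
flat_buffer = torch.zeros(1024)
state = {
    "layer.weight": flat_buffer[:512].view(16, 32),
    "layer.bias": flat_buffer[512:528],
}

# Cloning detaches each tensor from the shared storage, so torch.save() only
# serializes the data each entry actually references.
slim_state = clone_tensors_for_torch_save(state, device=torch.device("cpu"))
torch.save(slim_state, "slim_checkpoint.pt")  # hypothetical output path
```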
gh_patches_debug_7862 | rasdani/github-patches | git_diff | coala__coala-bears-2136 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set setup.py url = http://coala.io/
difficulty/newcomer
Opened by @jayvdb at [Gitter](https://gitter.im/coala/coala?at=5a1181aff257ad9109b396a0)
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2
3 import locale
4 import sys
5 from subprocess import call
6
7 import setuptools.command.build_py
8 from bears import Constants
9 from setuptools import find_packages, setup
10 from setuptools.command.test import test as TestCommand
11
12 try:
13 locale.getlocale()
14 except (ValueError, UnicodeError):
15 locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')
16
17
18 class PyTestCommand(TestCommand):
19
20 def run_tests(self):
21 # import here, cause outside the eggs aren't loaded
22 import pytest
23 errno = pytest.main([])
24 sys.exit(errno)
25
26
27 class BuildDocsCommand(setuptools.command.build_py.build_py):
28 apidoc_command = ('sphinx-apidoc', '-f', '-o', 'docs/API',
29 'bears')
30 make_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')
31
32 def run(self):
33 err_no = call(self.apidoc_command)
34 if not err_no:
35 err_no = call(self.make_command)
36 sys.exit(err_no)
37
38
39 with open('requirements.txt') as requirements:
40 required = requirements.read().splitlines()
41 required.remove('-r bear-requirements.txt')
42
43 with open('bear-requirements.txt') as requirements:
44 bear_required = requirements.read().splitlines()
45
46 with open('test-requirements.txt') as requirements:
47 test_required = requirements.read().splitlines()
48
49 with open('ignore.txt') as ignore:
50 ignore_requirements = ignore.read().splitlines()
51
52 with open('README.rst') as readme:
53 long_description = readme.read()
54
55 extras_require = {
56 'alldeps': bear_required,
57 }
58
59 # For the average user we leave out some of the more complicated requirements,
60 # e.g. language-check (needs java).
61 required += [req for req in bear_required
62 if not any(req.startswith(ignore)
63 for ignore in ignore_requirements)]
64
65
66 if __name__ == '__main__':
67 setup(name='coala-bears',
68 version=Constants.VERSION,
69 description='Bears for coala (Code Analysis Application)',
70 author='The coala developers',
71 maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\xfcger',
72 maintainer_email=('[email protected], '
73 '[email protected], '
74 '[email protected]'),
75 url='http://coala.rtfd.org/',
76 platforms='any',
77 packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),
78 install_requires=required,
79 extras_require=extras_require,
80 tests_require=test_required,
81 package_data={'bears': ['VERSION'],
82 'bears.java': ['checkstyle.jar', 'google_checks.xml'],
83 'bears.scala': ['scalastyle.jar',
84 'scalastyle_config.xml']},
85 license='AGPL-3.0',
86 long_description=long_description,
87 entry_points={'coalabears': ['coala_official_bears = bears']},
88 # from http://pypi.python.org/pypi?%3Aaction=list_classifiers
89 classifiers=[
90 'Development Status :: 4 - Beta',
91
92 'Environment :: Plugins',
93 'Environment :: MacOS X',
94 'Environment :: Win32 (MS Windows)',
95 'Environment :: X11 Applications :: Gnome',
96
97 'Intended Audience :: Science/Research',
98 'Intended Audience :: Developers',
99
100 'License :: OSI Approved :: GNU Affero General Public License '
101 'v3 or later (AGPLv3+)',
102
103 'Operating System :: OS Independent',
104
105 'Programming Language :: Python :: Implementation :: CPython',
106 'Programming Language :: Python :: 3.4',
107 'Programming Language :: Python :: 3.5',
108 'Programming Language :: Python :: 3 :: Only',
109
110 'Topic :: Scientific/Engineering :: Information Analysis',
111 'Topic :: Software Development :: Quality Assurance',
112 'Topic :: Text Processing :: Linguistic'],
113 cmdclass={'docs': BuildDocsCommand,
114 'test': PyTestCommand})
115
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -72,7 +72,7 @@
maintainer_email=('[email protected], '
'[email protected], '
'[email protected]'),
- url='http://coala.rtfd.org/',
+ url='http://coala.io/',
platforms='any',
packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),
install_requires=required,
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -72,7 +72,7 @@\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n- url='http://coala.rtfd.org/',\n+ url='http://coala.io/',\n platforms='any',\n packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),\n install_requires=required,\n", "issue": "Set setup.py url = http://coala.io/\ndifficulty/newcomer\nOpened by @jayvdb at [Gitter](https://gitter.im/coala/coala?at=5a1181aff257ad9109b396a0)\n", "before_files": [{"content": "#!/usr/bin/env python3\n\nimport locale\nimport sys\nfrom subprocess import call\n\nimport setuptools.command.build_py\nfrom bears import Constants\nfrom setuptools import find_packages, setup\nfrom setuptools.command.test import test as TestCommand\n\ntry:\n locale.getlocale()\nexcept (ValueError, UnicodeError):\n locale.setlocale(locale.LC_ALL, 'en_US.UTF-8')\n\n\nclass PyTestCommand(TestCommand):\n\n def run_tests(self):\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main([])\n sys.exit(errno)\n\n\nclass BuildDocsCommand(setuptools.command.build_py.build_py):\n apidoc_command = ('sphinx-apidoc', '-f', '-o', 'docs/API',\n 'bears')\n make_command = ('make', '-C', 'docs', 'html', 'SPHINXOPTS=-W')\n\n def run(self):\n err_no = call(self.apidoc_command)\n if not err_no:\n err_no = call(self.make_command)\n sys.exit(err_no)\n\n\nwith open('requirements.txt') as requirements:\n required = requirements.read().splitlines()\n required.remove('-r bear-requirements.txt')\n\nwith open('bear-requirements.txt') as requirements:\n bear_required = requirements.read().splitlines()\n\nwith open('test-requirements.txt') as requirements:\n test_required = requirements.read().splitlines()\n\nwith open('ignore.txt') as ignore:\n ignore_requirements = ignore.read().splitlines()\n\nwith open('README.rst') as readme:\n long_description = readme.read()\n\nextras_require = {\n 'alldeps': bear_required,\n}\n\n# For the average user we leave out some of the more complicated requirements,\n# e.g. 
language-check (needs java).\nrequired += [req for req in bear_required\n if not any(req.startswith(ignore)\n for ignore in ignore_requirements)]\n\n\nif __name__ == '__main__':\n setup(name='coala-bears',\n version=Constants.VERSION,\n description='Bears for coala (Code Analysis Application)',\n author='The coala developers',\n maintainer='Lasse Schuirmann, Fabian Neuschmidt, Mischa Kr\\xfcger',\n maintainer_email=('[email protected], '\n '[email protected], '\n '[email protected]'),\n url='http://coala.rtfd.org/',\n platforms='any',\n packages=find_packages(exclude=('build.*', 'tests', 'tests.*')),\n install_requires=required,\n extras_require=extras_require,\n tests_require=test_required,\n package_data={'bears': ['VERSION'],\n 'bears.java': ['checkstyle.jar', 'google_checks.xml'],\n 'bears.scala': ['scalastyle.jar',\n 'scalastyle_config.xml']},\n license='AGPL-3.0',\n long_description=long_description,\n entry_points={'coalabears': ['coala_official_bears = bears']},\n # from http://pypi.python.org/pypi?%3Aaction=list_classifiers\n classifiers=[\n 'Development Status :: 4 - Beta',\n\n 'Environment :: Plugins',\n 'Environment :: MacOS X',\n 'Environment :: Win32 (MS Windows)',\n 'Environment :: X11 Applications :: Gnome',\n\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n\n 'License :: OSI Approved :: GNU Affero General Public License '\n 'v3 or later (AGPLv3+)',\n\n 'Operating System :: OS Independent',\n\n 'Programming Language :: Python :: Implementation :: CPython',\n 'Programming Language :: Python :: 3.4',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3 :: Only',\n\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Text Processing :: Linguistic'],\n cmdclass={'docs': BuildDocsCommand,\n 'test': PyTestCommand})\n", "path": "setup.py"}]} | 1,697 | 116 |
gh_patches_debug_34413 | rasdani/github-patches | git_diff | bentoml__BentoML-4212 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug: OpenAPI Schema components from mounted ASGI apps are not being included
### Describe the bug
If a mounted ASGI app has an OpenAPI spec that defines schema components, these are not included in the bento's generated OpenAPI spec.
The following service file reproduces the issue:
```python
import bentoml
import pydantic
from fastapi import FastAPI
svc = bentoml.Service(name="test", runners=[])
fastapi_app = FastAPI()
svc.mount_asgi_app(fastapi_app)
class TestSchema(pydantic.BaseModel):
text_field: str
@fastapi_app.get("/metadata")
def metadata() -> TestSchema:
return TestSchema(text_field="Hello world")
```
If I serve this bento and navigate to the OpenAPI docs, the following error is raised:
```
Could not resolve reference: Could not resolve pointer: /components/schemas/TestSchema does not exist in document
```
This is happening because the OpenAPI path components are being pulled in from the mounted app, but the schema components (the TestSchema class in this case) are not being pulled in. I've got a fix ready for this and will open a PR shortly.
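A rough, self-contained sketch of the missing merge step, using the same `deepmerge.Merger` configuration the service module already uses; the `openapi` dict below is a stand-in for the output of `fastapi.openapi.utils.get_openapi(...)`, and this is only one possible direction, not necessarily the exact fix:

```python
from deepmerge import Merger

merger = Merger([(dict, "merge")], ["override"], ["override"])

# Stand-in for the dict returned by fastapi.openapi.utils.get_openapi(...)
openapi = {
    "paths": {"/metadata": {"get": {}}},
    "components": {"schemas": {"TestSchema": {"type": "object"}}},
}

schema_components: dict = {}
if "components" in openapi:
    # Pull the mounted app's schema definitions into the spec being built,
    # so $ref pointers like #/components/schemas/TestSchema can resolve.
    merger.merge(schema_components, openapi["components"])

print(schema_components["schemas"].keys())  # dict_keys(['TestSchema'])
```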
### To reproduce
Requires fastapi and pydantic:
`pip install fastapi pydantic`
This service file reproduces the issue:
```python
import bentoml
import pydantic
from fastapi import FastAPI
svc = bentoml.Service(name="test", runners=[])
fastapi_app = FastAPI()
svc.mount_asgi_app(fastapi_app)
class TestSchema(pydantic.BaseModel):
text_field: str
@fastapi_app.get("/metadata")
def metadata() -> TestSchema:
return TestSchema(text_field="Hello world")
```
`bentoml serve service.py:svc`
### Expected behavior
The FastAPI app's schema definitions should be included in the generated OpenAPI spec.
### Environment
#### Environment variable
```bash
BENTOML_DEBUG=''
BENTOML_QUIET=''
BENTOML_BUNDLE_LOCAL_BUILD=''
BENTOML_DO_NOT_TRACK=''
BENTOML_CONFIG=''
BENTOML_CONFIG_OPTIONS=''
BENTOML_PORT=''
BENTOML_HOST=''
BENTOML_API_WORKERS=''
```
#### System information
`bentoml`: 1.1.6
`python`: 3.10.12
`platform`: Linux-6.2.0-33-generic-x86_64-with-glibc2.35
`uid_gid`: 1000:1000
<details><summary><code>pip_packages</code></summary>
<br>
```
aiohttp==3.8.5
aiosignal==1.3.1
annotated-types==0.5.0
anyio==3.7.1
appdirs==1.4.4
asgiref==3.7.2
async-timeout==4.0.3
attrs==23.1.0
bentoml==1.1.6
build==1.0.3
cattrs==23.1.2
certifi==2023.7.22
charset-normalizer==3.2.0
circus==0.18.0
click==8.1.7
click-option-group==0.5.6
cloudpickle==2.2.1
contextlib2==21.6.0
deepmerge==1.1.0
Deprecated==1.2.14
exceptiongroup==1.1.3
fastapi==0.103.1
frozenlist==1.4.0
fs==2.4.16
h11==0.14.0
idna==3.4
importlib-metadata==6.0.1
inflection==0.5.1
Jinja2==3.1.2
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
markdown-it-py==3.0.0
MarkupSafe==2.1.3
mdurl==0.1.2
multidict==6.0.4
numpy==1.26.0
openapi==1.1.0
opentelemetry-api==1.18.0
opentelemetry-instrumentation==0.39b0
opentelemetry-instrumentation-aiohttp-client==0.39b0
opentelemetry-instrumentation-asgi==0.39b0
opentelemetry-sdk==1.18.0
opentelemetry-semantic-conventions==0.39b0
opentelemetry-util-http==0.39b0
packaging==23.1
pathspec==0.11.2
pip-requirements-parser==32.0.1
pip-tools==7.3.0
prometheus-client==0.17.1
psutil==5.9.5
pydantic==2.4.1
pydantic_core==2.10.1
Pygments==2.16.1
pynvml==11.5.0
pyparsing==3.1.1
pyproject_hooks==1.0.0
python-dateutil==2.8.2
python-json-logger==2.0.7
python-multipart==0.0.6
PyYAML==6.0.1
pyzmq==25.1.1
referencing==0.30.2
requests==2.31.0
rich==13.5.3
rpds-py==0.10.3
schema==0.7.5
simple-di==0.1.5
six==1.16.0
sniffio==1.3.0
starlette==0.27.0
tomli==2.0.1
tornado==6.3.3
typing_extensions==4.8.0
urllib3==2.0.5
uvicorn==0.23.2
watchfiles==0.20.0
wrapt==1.15.0
yarl==1.9.2
zipp==3.17.0
```
</details>
</issue>
<code>
[start of src/bentoml/_internal/service/openapi/__init__.py]
1 from __future__ import annotations
2
3 import typing as t
4 from functools import lru_cache
5 from http import HTTPStatus
6 from typing import TYPE_CHECKING
7
8 from deepmerge.merger import Merger
9
10 from bentoml.exceptions import InternalServerError
11 from bentoml.exceptions import InvalidArgument
12 from bentoml.exceptions import NotFound
13
14 from ...types import LazyType
15 from ...utils import bentoml_cattr
16 from .specification import Components
17 from .specification import Contact
18 from .specification import Info
19 from .specification import MediaType
20 from .specification import OpenAPISpecification
21 from .specification import Operation
22 from .specification import PathItem
23 from .specification import Reference
24 from .specification import Response
25 from .specification import Tag
26 from .utils import REF_PREFIX
27 from .utils import exception_components_schema
28 from .utils import exception_schema
29
30 if TYPE_CHECKING:
31 from .. import Service
32 from ..inference_api import InferenceAPI
33
34 SUCCESS_DESCRIPTION = "Successful Response"
35
36 INFRA_DECRIPTION = {
37 "/healthz": "Health check endpoint. Expecting an empty response with status code <code>200</code> when the service is in health state. The <code>/healthz</code> endpoint is <b>deprecated</b>. (since Kubernetes v1.16)",
38 "/livez": "Health check endpoint for Kubernetes. Healthy endpoint responses with a <code>200</code> OK status.",
39 "/readyz": "A <code>200</code> OK status from <code>/readyz</code> endpoint indicated the service is ready to accept traffic. From that point and onward, Kubernetes will use <code>/livez</code> endpoint to perform periodic health checks.",
40 "/metrics": "Prometheus metrics endpoint. The <code>/metrics</code> responses with a <code>200</code>. The output can then be used by a Prometheus sidecar to scrape the metrics of the service.",
41 }
42
43 __all__ = ["generate_spec"]
44
45 INFRA_TAG = Tag(
46 name="Infrastructure",
47 description="Common infrastructure endpoints for observability.",
48 )
49 APP_TAG = Tag(
50 name="Service APIs", description="BentoML Service API endpoints for inference."
51 )
52
53 merger = Merger(
54 # merge dicts
55 [(dict, "merge")],
56 # override all other types
57 ["override"],
58 # override conflicting types
59 ["override"],
60 )
61
62
63 def make_api_path(api: InferenceAPI[t.Any]) -> str:
64 return api.route if api.route.startswith("/") else f"/{api.route}"
65
66
67 @lru_cache(maxsize=1)
68 def make_infra_endpoints() -> dict[str, PathItem]:
69 return {
70 endpoint: PathItem(
71 get=Operation(
72 responses={"200": Response(description=SUCCESS_DESCRIPTION)},
73 tags=[INFRA_TAG.name],
74 description=INFRA_DECRIPTION[endpoint],
75 )
76 )
77 for endpoint in INFRA_DECRIPTION
78 }
79
80
81 def generate_service_components(svc: Service) -> Components:
82 components: dict[str, t.Any] = {}
83 for api in svc.apis.values():
84 api_components = {}
85 input_components = api.input.openapi_components()
86 if input_components:
87 merger.merge(api_components, input_components)
88 output_components = api.output.openapi_components()
89 if output_components:
90 merger.merge(api_components, output_components)
91
92 merger.merge(components, api_components)
93
94 # merge exception at last
95 merger.merge(components, {"schemas": exception_components_schema()})
96
97 return Components(**components)
98
99
100 def generate_spec(svc: Service, *, openapi_version: str = "3.0.2"):
101 """Generate a OpenAPI specification for a service."""
102 mounted_app_paths = {}
103
104 for app, _, _ in svc.mount_apps:
105 if LazyType["fastapi.FastAPI"]("fastapi.FastAPI").isinstance(app):
106 from fastapi.openapi.utils import get_openapi
107
108 openapi = get_openapi(
109 title=app.title,
110 version=app.version,
111 routes=app.routes,
112 )
113
114 mounted_app_paths.update(
115 {
116 k: bentoml_cattr.structure(v, PathItem)
117 for k, v in openapi["paths"].items()
118 }
119 )
120
121 return OpenAPISpecification(
122 openapi=openapi_version,
123 tags=[APP_TAG, INFRA_TAG],
124 components=generate_service_components(svc),
125 info=Info(
126 title=svc.name,
127 description=svc.doc,
128 version=svc.tag.version if svc.tag and svc.tag.version else "None",
129 contact=Contact(name="BentoML Team", email="[email protected]"),
130 ),
131 servers=[{"url": "."}],
132 paths={
133 # setup infra endpoints
134 **make_infra_endpoints(),
135 # setup inference endpoints
136 **{
137 make_api_path(api): PathItem(
138 post={
139 "responses": {
140 HTTPStatus.OK.value: api.output.openapi_responses(),
141 **{
142 ex.error_code.value: Response(
143 description=filled.description,
144 content={
145 "application/json": MediaType(
146 schema=Reference(
147 f"{REF_PREFIX}{filled.title}"
148 )
149 )
150 },
151 )
152 for ex in [
153 InvalidArgument,
154 NotFound,
155 InternalServerError,
156 ]
157 for filled in exception_schema(ex)
158 },
159 },
160 "tags": [APP_TAG.name],
161 "consumes": [api.input.mime_type],
162 "produces": [api.output.mime_type],
163 "x-bentoml-name": api.name,
164 "summary": str(api),
165 "description": api.doc or "",
166 "requestBody": api.input.openapi_request_body(),
167 "operationId": f"{svc.name}__{api.name}",
168 },
169 )
170 for api in svc.apis.values()
171 },
172 **mounted_app_paths,
173 },
174 )
175
[end of src/bentoml/_internal/service/openapi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/bentoml/_internal/service/openapi/__init__.py b/src/bentoml/_internal/service/openapi/__init__.py
--- a/src/bentoml/_internal/service/openapi/__init__.py
+++ b/src/bentoml/_internal/service/openapi/__init__.py
@@ -13,7 +13,6 @@
from ...types import LazyType
from ...utils import bentoml_cattr
-from .specification import Components
from .specification import Contact
from .specification import Info
from .specification import MediaType
@@ -78,7 +77,7 @@
}
-def generate_service_components(svc: Service) -> Components:
+def generate_service_components(svc: Service) -> dict[str, t.Any]:
components: dict[str, t.Any] = {}
for api in svc.apis.values():
api_components = {}
@@ -92,14 +91,13 @@
merger.merge(components, api_components)
# merge exception at last
- merger.merge(components, {"schemas": exception_components_schema()})
-
- return Components(**components)
+ return merger.merge(components, {"schemas": exception_components_schema()})
def generate_spec(svc: Service, *, openapi_version: str = "3.0.2"):
"""Generate a OpenAPI specification for a service."""
mounted_app_paths = {}
+ schema_components = {}
for app, _, _ in svc.mount_apps:
if LazyType["fastapi.FastAPI"]("fastapi.FastAPI").isinstance(app):
@@ -118,10 +116,15 @@
}
)
+ if "components" in openapi:
+ merger.merge(schema_components, openapi["components"])
+
+ merger.merge(schema_components, generate_service_components(svc))
+
return OpenAPISpecification(
openapi=openapi_version,
tags=[APP_TAG, INFRA_TAG],
- components=generate_service_components(svc),
+ components=schema_components,
info=Info(
title=svc.name,
description=svc.doc,
| {"golden_diff": "diff --git a/src/bentoml/_internal/service/openapi/__init__.py b/src/bentoml/_internal/service/openapi/__init__.py\n--- a/src/bentoml/_internal/service/openapi/__init__.py\n+++ b/src/bentoml/_internal/service/openapi/__init__.py\n@@ -13,7 +13,6 @@\n \n from ...types import LazyType\n from ...utils import bentoml_cattr\n-from .specification import Components\n from .specification import Contact\n from .specification import Info\n from .specification import MediaType\n@@ -78,7 +77,7 @@\n }\n \n \n-def generate_service_components(svc: Service) -> Components:\n+def generate_service_components(svc: Service) -> dict[str, t.Any]:\n components: dict[str, t.Any] = {}\n for api in svc.apis.values():\n api_components = {}\n@@ -92,14 +91,13 @@\n merger.merge(components, api_components)\n \n # merge exception at last\n- merger.merge(components, {\"schemas\": exception_components_schema()})\n-\n- return Components(**components)\n+ return merger.merge(components, {\"schemas\": exception_components_schema()})\n \n \n def generate_spec(svc: Service, *, openapi_version: str = \"3.0.2\"):\n \"\"\"Generate a OpenAPI specification for a service.\"\"\"\n mounted_app_paths = {}\n+ schema_components = {}\n \n for app, _, _ in svc.mount_apps:\n if LazyType[\"fastapi.FastAPI\"](\"fastapi.FastAPI\").isinstance(app):\n@@ -118,10 +116,15 @@\n }\n )\n \n+ if \"components\" in openapi:\n+ merger.merge(schema_components, openapi[\"components\"])\n+\n+ merger.merge(schema_components, generate_service_components(svc))\n+\n return OpenAPISpecification(\n openapi=openapi_version,\n tags=[APP_TAG, INFRA_TAG],\n- components=generate_service_components(svc),\n+ components=schema_components,\n info=Info(\n title=svc.name,\n description=svc.doc,\n", "issue": "bug: OpenAPI Schema components from mounted ASGI apps are not being included \n### Describe the bug\n\nIf a mounted ASGI app has an OpenAPI spec that defines schema components, these are not included in the bento's generated OpenAPI spec.\r\n\r\nThe following service file reproduces the issue:\r\n\r\n```python\r\nimport bentoml\r\nimport pydantic\r\nfrom fastapi import FastAPI\r\n\r\nsvc = bentoml.Service(name=\"test\", runners=[])\r\n\r\nfastapi_app = FastAPI()\r\nsvc.mount_asgi_app(fastapi_app)\r\n\r\n\r\nclass TestSchema(pydantic.BaseModel):\r\n text_field: str\r\n\r\n\r\n@fastapi_app.get(\"/metadata\")\r\ndef metadata() -> TestSchema:\r\n return TestSchema(text_field=\"Hello world\")\r\n\r\n```\r\n\r\nIf I serve this bento and navigate to the OpenAPI docs, the following error is raised: \r\n```\r\nCould not resolve reference: Could not resolve pointer: /components/schemas/TestSchema does not exist in document\r\n```\r\n\r\nThis is happening because the OpenAPI path components are being pulled in from the mounted app, but the schema component (the TestSchema class in this case) are not being pulled in. 
I've got a fix ready for this and will open a PR shortly\n\n### To reproduce\n\nRequires fastapi and pydantic:\r\n`pip install fastapi pydantic`\r\n\r\nThis service file reproduces the issue:\r\n```python\r\nimport bentoml\r\nimport pydantic\r\nfrom fastapi import FastAPI\r\n\r\nsvc = bentoml.Service(name=\"test\", runners=[])\r\n\r\nfastapi_app = FastAPI()\r\nsvc.mount_asgi_app(fastapi_app)\r\n\r\n\r\nclass TestSchema(pydantic.BaseModel):\r\n text_field: str\r\n\r\n\r\n@fastapi_app.get(\"/metadata\")\r\ndef metadata() -> TestSchema:\r\n return TestSchema(text_field=\"Hello world\")\r\n\r\n```\r\n\r\n`bentoml serve service.py:svc`\n\n### Expected behavior\n\nThe FastAPI app's schema definitions should be included in the generated OpenAPI spec.\n\n### Environment\n\n#### Environment variable\r\n\r\n```bash\r\nBENTOML_DEBUG=''\r\nBENTOML_QUIET=''\r\nBENTOML_BUNDLE_LOCAL_BUILD=''\r\nBENTOML_DO_NOT_TRACK=''\r\nBENTOML_CONFIG=''\r\nBENTOML_CONFIG_OPTIONS=''\r\nBENTOML_PORT=''\r\nBENTOML_HOST=''\r\nBENTOML_API_WORKERS=''\r\n```\r\n\r\n#### System information\r\n\r\n`bentoml`: 1.1.6\r\n`python`: 3.10.12\r\n`platform`: Linux-6.2.0-33-generic-x86_64-with-glibc2.35\r\n`uid_gid`: 1000:1000\r\n<details><summary><code>pip_packages</code></summary>\r\n\r\n<br>\r\n\r\n```\r\naiohttp==3.8.5\r\naiosignal==1.3.1\r\nannotated-types==0.5.0\r\nanyio==3.7.1\r\nappdirs==1.4.4\r\nasgiref==3.7.2\r\nasync-timeout==4.0.3\r\nattrs==23.1.0\r\nbentoml==1.1.6\r\nbuild==1.0.3\r\ncattrs==23.1.2\r\ncertifi==2023.7.22\r\ncharset-normalizer==3.2.0\r\ncircus==0.18.0\r\nclick==8.1.7\r\nclick-option-group==0.5.6\r\ncloudpickle==2.2.1\r\ncontextlib2==21.6.0\r\ndeepmerge==1.1.0\r\nDeprecated==1.2.14\r\nexceptiongroup==1.1.3\r\nfastapi==0.103.1\r\nfrozenlist==1.4.0\r\nfs==2.4.16\r\nh11==0.14.0\r\nidna==3.4\r\nimportlib-metadata==6.0.1\r\ninflection==0.5.1\r\nJinja2==3.1.2\r\njsonschema==4.19.1\r\njsonschema-specifications==2023.7.1\r\nmarkdown-it-py==3.0.0\r\nMarkupSafe==2.1.3\r\nmdurl==0.1.2\r\nmultidict==6.0.4\r\nnumpy==1.26.0\r\nopenapi==1.1.0\r\nopentelemetry-api==1.18.0\r\nopentelemetry-instrumentation==0.39b0\r\nopentelemetry-instrumentation-aiohttp-client==0.39b0\r\nopentelemetry-instrumentation-asgi==0.39b0\r\nopentelemetry-sdk==1.18.0\r\nopentelemetry-semantic-conventions==0.39b0\r\nopentelemetry-util-http==0.39b0\r\npackaging==23.1\r\npathspec==0.11.2\r\npip-requirements-parser==32.0.1\r\npip-tools==7.3.0\r\nprometheus-client==0.17.1\r\npsutil==5.9.5\r\npydantic==2.4.1\r\npydantic_core==2.10.1\r\nPygments==2.16.1\r\npynvml==11.5.0\r\npyparsing==3.1.1\r\npyproject_hooks==1.0.0\r\npython-dateutil==2.8.2\r\npython-json-logger==2.0.7\r\npython-multipart==0.0.6\r\nPyYAML==6.0.1\r\npyzmq==25.1.1\r\nreferencing==0.30.2\r\nrequests==2.31.0\r\nrich==13.5.3\r\nrpds-py==0.10.3\r\nschema==0.7.5\r\nsimple-di==0.1.5\r\nsix==1.16.0\r\nsniffio==1.3.0\r\nstarlette==0.27.0\r\ntomli==2.0.1\r\ntornado==6.3.3\r\ntyping_extensions==4.8.0\r\nurllib3==2.0.5\r\nuvicorn==0.23.2\r\nwatchfiles==0.20.0\r\nwrapt==1.15.0\r\nyarl==1.9.2\r\nzipp==3.17.0\r\n```\r\n\r\n</details>\r\n\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport typing as t\nfrom functools import lru_cache\nfrom http import HTTPStatus\nfrom typing import TYPE_CHECKING\n\nfrom deepmerge.merger import Merger\n\nfrom bentoml.exceptions import InternalServerError\nfrom bentoml.exceptions import InvalidArgument\nfrom bentoml.exceptions import NotFound\n\nfrom ...types import LazyType\nfrom ...utils import bentoml_cattr\nfrom .specification import 
Components\nfrom .specification import Contact\nfrom .specification import Info\nfrom .specification import MediaType\nfrom .specification import OpenAPISpecification\nfrom .specification import Operation\nfrom .specification import PathItem\nfrom .specification import Reference\nfrom .specification import Response\nfrom .specification import Tag\nfrom .utils import REF_PREFIX\nfrom .utils import exception_components_schema\nfrom .utils import exception_schema\n\nif TYPE_CHECKING:\n from .. import Service\n from ..inference_api import InferenceAPI\n\nSUCCESS_DESCRIPTION = \"Successful Response\"\n\nINFRA_DECRIPTION = {\n \"/healthz\": \"Health check endpoint. Expecting an empty response with status code <code>200</code> when the service is in health state. The <code>/healthz</code> endpoint is <b>deprecated</b>. (since Kubernetes v1.16)\",\n \"/livez\": \"Health check endpoint for Kubernetes. Healthy endpoint responses with a <code>200</code> OK status.\",\n \"/readyz\": \"A <code>200</code> OK status from <code>/readyz</code> endpoint indicated the service is ready to accept traffic. From that point and onward, Kubernetes will use <code>/livez</code> endpoint to perform periodic health checks.\",\n \"/metrics\": \"Prometheus metrics endpoint. The <code>/metrics</code> responses with a <code>200</code>. The output can then be used by a Prometheus sidecar to scrape the metrics of the service.\",\n}\n\n__all__ = [\"generate_spec\"]\n\nINFRA_TAG = Tag(\n name=\"Infrastructure\",\n description=\"Common infrastructure endpoints for observability.\",\n)\nAPP_TAG = Tag(\n name=\"Service APIs\", description=\"BentoML Service API endpoints for inference.\"\n)\n\nmerger = Merger(\n # merge dicts\n [(dict, \"merge\")],\n # override all other types\n [\"override\"],\n # override conflicting types\n [\"override\"],\n)\n\n\ndef make_api_path(api: InferenceAPI[t.Any]) -> str:\n return api.route if api.route.startswith(\"/\") else f\"/{api.route}\"\n\n\n@lru_cache(maxsize=1)\ndef make_infra_endpoints() -> dict[str, PathItem]:\n return {\n endpoint: PathItem(\n get=Operation(\n responses={\"200\": Response(description=SUCCESS_DESCRIPTION)},\n tags=[INFRA_TAG.name],\n description=INFRA_DECRIPTION[endpoint],\n )\n )\n for endpoint in INFRA_DECRIPTION\n }\n\n\ndef generate_service_components(svc: Service) -> Components:\n components: dict[str, t.Any] = {}\n for api in svc.apis.values():\n api_components = {}\n input_components = api.input.openapi_components()\n if input_components:\n merger.merge(api_components, input_components)\n output_components = api.output.openapi_components()\n if output_components:\n merger.merge(api_components, output_components)\n\n merger.merge(components, api_components)\n\n # merge exception at last\n merger.merge(components, {\"schemas\": exception_components_schema()})\n\n return Components(**components)\n\n\ndef generate_spec(svc: Service, *, openapi_version: str = \"3.0.2\"):\n \"\"\"Generate a OpenAPI specification for a service.\"\"\"\n mounted_app_paths = {}\n\n for app, _, _ in svc.mount_apps:\n if LazyType[\"fastapi.FastAPI\"](\"fastapi.FastAPI\").isinstance(app):\n from fastapi.openapi.utils import get_openapi\n\n openapi = get_openapi(\n title=app.title,\n version=app.version,\n routes=app.routes,\n )\n\n mounted_app_paths.update(\n {\n k: bentoml_cattr.structure(v, PathItem)\n for k, v in openapi[\"paths\"].items()\n }\n )\n\n return OpenAPISpecification(\n openapi=openapi_version,\n tags=[APP_TAG, INFRA_TAG],\n components=generate_service_components(svc),\n 
info=Info(\n title=svc.name,\n description=svc.doc,\n version=svc.tag.version if svc.tag and svc.tag.version else \"None\",\n contact=Contact(name=\"BentoML Team\", email=\"[email protected]\"),\n ),\n servers=[{\"url\": \".\"}],\n paths={\n # setup infra endpoints\n **make_infra_endpoints(),\n # setup inference endpoints\n **{\n make_api_path(api): PathItem(\n post={\n \"responses\": {\n HTTPStatus.OK.value: api.output.openapi_responses(),\n **{\n ex.error_code.value: Response(\n description=filled.description,\n content={\n \"application/json\": MediaType(\n schema=Reference(\n f\"{REF_PREFIX}{filled.title}\"\n )\n )\n },\n )\n for ex in [\n InvalidArgument,\n NotFound,\n InternalServerError,\n ]\n for filled in exception_schema(ex)\n },\n },\n \"tags\": [APP_TAG.name],\n \"consumes\": [api.input.mime_type],\n \"produces\": [api.output.mime_type],\n \"x-bentoml-name\": api.name,\n \"summary\": str(api),\n \"description\": api.doc or \"\",\n \"requestBody\": api.input.openapi_request_body(),\n \"operationId\": f\"{svc.name}__{api.name}\",\n },\n )\n for api in svc.apis.values()\n },\n **mounted_app_paths,\n },\n )\n", "path": "src/bentoml/_internal/service/openapi/__init__.py"}]} | 3,625 | 457 |
gh_patches_debug_5637 | rasdani/github-patches | git_diff | marshmallow-code__webargs-537 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Errors while validating arguments in headers result in a flask crash
If you make a view with header arguments `@bp.arguments(someschema, location='headers')`
Then feed it headers that are not defined in the schema: it will (rightfully) cause a schema validation error. However, the error created includes the entire header tuple as a dictionary key instead of just the key (tuple position 0), which causes Flask to error out while trying to convert the response to a valid JSON response.
This is the response returned (grabbed this with a pydb)
```python
{
"code": 422,
"status": "Unprocessable Entity",
"errors": {"headers": {("Someheader", "someval"): ["Unknown field."]}},
}
```
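The tuple key alone is enough to break JSON serialization; a minimal reproduction of that failure, independent of Flask, is:

```python
import json

error_payload = {"errors": {"headers": {("Someheader", "someval"): ["Unknown field."]}}}
json.dumps(error_payload)
# TypeError: keys must be str, int, float, bool or None, not tuple
```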
This is the stack trace Flask produces; I have included it so that people searching for it will hopefully find their way here.
```python
../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:1006: in get
return self.open(*args, **kw)
nomitall/api/tests/_client.py:37: in open
return super().open(*args, **kwargs)
../../../../miniconda3/lib/python3.7/site-packages/flask/testing.py:227: in open
follow_redirects=follow_redirects,
../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:970: in open
response = self.run_wsgi_app(environ.copy(), buffered=buffered)
../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:861: in run_wsgi_app
rv = run_wsgi_app(self.application, environ, buffered=buffered)
../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:1096: in run_wsgi_app
app_rv = app(environ, start_response)
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2463: in __call__
return self.wsgi_app(environ, start_response)
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2449: in wsgi_app
response = self.handle_exception(e)
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:1866: in handle_exception
reraise(exc_type, exc_value, tb)
../../../../miniconda3/lib/python3.7/site-packages/flask/_compat.py:39: in reraise
raise value
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2446: in wsgi_app
response = self.full_dispatch_request()
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:1952: in full_dispatch_request
return self.finalize_request(rv)
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:1967: in finalize_request
response = self.make_response(rv)
../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2111: in make_response
rv = jsonify(rv)
../../../../miniconda3/lib/python3.7/site-packages/flask/json/__init__.py:370: in jsonify
dumps(data, indent=indent, separators=separators) + "\n",
../../../../miniconda3/lib/python3.7/site-packages/flask/json/__init__.py:211: in dumps
rv = _json.dumps(obj, **kwargs)
../../../../miniconda3/lib/python3.7/site-packages/simplejson/__init__.py:412: in dumps
**kw).encode(obj)
../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:298: in encode
chunks = list(chunks)
../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:696: in _iterencode
for chunk in _iterencode_dict(o, _current_indent_level):
../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:652: in _iterencode_dict
for chunk in chunks:
../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:652: in _iterencode_dict
for chunk in chunks:
../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:598: in _iterencode_dict
k = _stringify_key(k)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
key = ("Someheader", "someval")
def _stringify_key(key):
if isinstance(key, string_types): # pragma: no cover
pass
elif _PY3 and isinstance(key, bytes) and _encoding is not None:
key = str(key, _encoding)
elif isinstance(key, float):
key = _floatstr(key)
elif key is True:
key = 'true'
elif key is False:
key = 'false'
elif key is None:
key = 'null'
elif isinstance(key, integer_types):
if type(key) not in integer_types:
# See marshmallow-code/flask-smorest#118, do not trust custom str/repr
key = int(key)
key = str(key)
elif _use_decimal and isinstance(key, Decimal):
key = str(key)
elif _skipkeys:
key = None
else:
raise TypeError('keys must be str, int, float, bool or None, '
> 'not %s' % key.__class__.__name__)
E TypeError: keys must be str, int, float, bool or None, not tuple
../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:568: TypeError
```
I'm not sure there is a good workaround apart from disabling schema validation.
If this issue is unclear, please ask for further explanation and I'll sink some time into creating some self-contained reproduction code.
</issue>
<code>
[start of src/webargs/multidictproxy.py]
1 from collections.abc import Mapping
2
3 from webargs.compat import MARSHMALLOW_VERSION_INFO
4 from webargs.core import missing, is_multiple
5
6
7 class MultiDictProxy(Mapping):
8 """
9 A proxy object which wraps multidict types along with a matching schema
10 Whenever a value is looked up, it is checked against the schema to see if
11 there is a matching field where `is_multiple` is True. If there is, then
12 the data should be loaded as a list or tuple.
13
14 In all other cases, __getitem__ proxies directly to the input multidict.
15 """
16
17 def __init__(self, multidict, schema):
18 self.data = multidict
19 self.multiple_keys = self._collect_multiple_keys(schema)
20
21 @staticmethod
22 def _collect_multiple_keys(schema):
23 result = set()
24 for name, field in schema.fields.items():
25 if not is_multiple(field):
26 continue
27 if MARSHMALLOW_VERSION_INFO[0] < 3:
28 result.add(field.load_from if field.load_from is not None else name)
29 else:
30 result.add(field.data_key if field.data_key is not None else name)
31 return result
32
33 def __getitem__(self, key):
34 val = self.data.get(key, missing)
35 if val is missing or key not in self.multiple_keys:
36 return val
37 if hasattr(self.data, "getlist"):
38 return self.data.getlist(key)
39 if hasattr(self.data, "getall"):
40 return self.data.getall(key)
41 if isinstance(val, (list, tuple)):
42 return val
43 if val is None:
44 return None
45 return [val]
46
47 def __str__(self): # str(proxy) proxies to str(proxy.data)
48 return str(self.data)
49
50 def __repr__(self):
51 return "MultiDictProxy(data={!r}, multiple_keys={!r})".format(
52 self.data, self.multiple_keys
53 )
54
55 def __delitem__(self, key):
56 del self.data[key]
57
58 def __setitem__(self, key, value):
59 self.data[key] = value
60
61 def __getattr__(self, name):
62 return getattr(self.data, name)
63
64 def __iter__(self):
65 return iter(self.data)
66
67 def __contains__(self, x):
68 return x in self.data
69
70 def __len__(self):
71 return len(self.data)
72
73 def __eq__(self, other):
74 return self.data == other
75
76 def __ne__(self, other):
77 return self.data != other
78
[end of src/webargs/multidictproxy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/webargs/multidictproxy.py b/src/webargs/multidictproxy.py
--- a/src/webargs/multidictproxy.py
+++ b/src/webargs/multidictproxy.py
@@ -62,7 +62,13 @@
return getattr(self.data, name)
def __iter__(self):
- return iter(self.data)
+ for x in iter(self.data):
+ # special case for header dicts which produce an iterator of tuples
+ # instead of an iterator of strings
+ if isinstance(x, tuple):
+ yield x[0]
+ else:
+ yield x
def __contains__(self, x):
return x in self.data
| {"golden_diff": "diff --git a/src/webargs/multidictproxy.py b/src/webargs/multidictproxy.py\n--- a/src/webargs/multidictproxy.py\n+++ b/src/webargs/multidictproxy.py\n@@ -62,7 +62,13 @@\n return getattr(self.data, name)\n \n def __iter__(self):\n- return iter(self.data)\n+ for x in iter(self.data):\n+ # special case for header dicts which produce an iterator of tuples\n+ # instead of an iterator of strings\n+ if isinstance(x, tuple):\n+ yield x[0]\n+ else:\n+ yield x\n \n def __contains__(self, x):\n return x in self.data\n", "issue": "Errors while validating arguments in headers result in a flask crash\nIf you make a view with header arguments `@bp.arguments(someschema, location='headers')`\r\nThen feed it headers that are not defined in the schema, it will (rightfully) cause a schema validation error, however the error created includes the entire header tuple as a dictionary key, instead of just the 'key' (tuple position 0). This causes flask to error out while trying to convert the response to a valid JSON response.\r\n\r\nThis is the response returned (grabbed this with a pydb)\r\n```python\r\n{\r\n \"code\": 422,\r\n \"status\": \"Unprocessable Entity\",\r\n \"errors\": {\"headers\": {(\"Someheader\", \"someval\"): [\"Unknown field.\"]}},\r\n}\r\n```\r\nThis is the stack trace flask produces, which I have included so people searching it will hopefully find their way here.\r\n\r\n```python\r\n../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:1006: in get\r\n return self.open(*args, **kw)\r\nnomitall/api/tests/_client.py:37: in open\r\n return super().open(*args, **kwargs)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/testing.py:227: in open\r\n follow_redirects=follow_redirects,\r\n../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:970: in open\r\n response = self.run_wsgi_app(environ.copy(), buffered=buffered)\r\n../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:861: in run_wsgi_app\r\n rv = run_wsgi_app(self.application, environ, buffered=buffered)\r\n../../../../miniconda3/lib/python3.7/site-packages/werkzeug/test.py:1096: in run_wsgi_app\r\n app_rv = app(environ, start_response)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2463: in __call__\r\n return self.wsgi_app(environ, start_response)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2449: in wsgi_app\r\n response = self.handle_exception(e)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:1866: in handle_exception\r\n reraise(exc_type, exc_value, tb)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/_compat.py:39: in reraise\r\n raise value\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2446: in wsgi_app\r\n response = self.full_dispatch_request()\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:1952: in full_dispatch_request\r\n return self.finalize_request(rv)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:1967: in finalize_request\r\n response = self.make_response(rv)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/app.py:2111: in make_response\r\n rv = jsonify(rv)\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/json/__init__.py:370: in jsonify\r\n dumps(data, indent=indent, separators=separators) + \"\\n\",\r\n../../../../miniconda3/lib/python3.7/site-packages/flask/json/__init__.py:211: in dumps\r\n rv = _json.dumps(obj, 
**kwargs)\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/__init__.py:412: in dumps\r\n **kw).encode(obj)\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:298: in encode\r\n chunks = list(chunks)\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:696: in _iterencode\r\n for chunk in _iterencode_dict(o, _current_indent_level):\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:652: in _iterencode_dict\r\n for chunk in chunks:\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:652: in _iterencode_dict\r\n for chunk in chunks:\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:598: in _iterencode_dict\r\n k = _stringify_key(k)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nkey = (\"Someheader\", \"someval\")\r\n\r\n def _stringify_key(key):\r\n if isinstance(key, string_types): # pragma: no cover\r\n pass\r\n elif _PY3 and isinstance(key, bytes) and _encoding is not None:\r\n key = str(key, _encoding)\r\n elif isinstance(key, float):\r\n key = _floatstr(key)\r\n elif key is True:\r\n key = 'true'\r\n elif key is False:\r\n key = 'false'\r\n elif key is None:\r\n key = 'null'\r\n elif isinstance(key, integer_types):\r\n if type(key) not in integer_types:\r\n # See marshmallow-code/flask-smorest#118, do not trust custom str/repr\r\n key = int(key)\r\n key = str(key)\r\n elif _use_decimal and isinstance(key, Decimal):\r\n key = str(key)\r\n elif _skipkeys:\r\n key = None\r\n else:\r\n raise TypeError('keys must be str, int, float, bool or None, '\r\n> 'not %s' % key.__class__.__name__)\r\nE TypeError: keys must be str, int, float, bool or None, not tuple\r\n\r\n../../../../miniconda3/lib/python3.7/site-packages/simplejson/encoder.py:568: TypeError\r\n```\r\n\r\nI'm not sure there is a good workaround apart from disabling schema validation.\r\n\r\nIf this issue is unclear please ask for further explanation and i'll sink some time into creating some self contained reproduction code.\n", "before_files": [{"content": "from collections.abc import Mapping\n\nfrom webargs.compat import MARSHMALLOW_VERSION_INFO\nfrom webargs.core import missing, is_multiple\n\n\nclass MultiDictProxy(Mapping):\n \"\"\"\n A proxy object which wraps multidict types along with a matching schema\n Whenever a value is looked up, it is checked against the schema to see if\n there is a matching field where `is_multiple` is True. 
If there is, then\n the data should be loaded as a list or tuple.\n\n In all other cases, __getitem__ proxies directly to the input multidict.\n \"\"\"\n\n def __init__(self, multidict, schema):\n self.data = multidict\n self.multiple_keys = self._collect_multiple_keys(schema)\n\n @staticmethod\n def _collect_multiple_keys(schema):\n result = set()\n for name, field in schema.fields.items():\n if not is_multiple(field):\n continue\n if MARSHMALLOW_VERSION_INFO[0] < 3:\n result.add(field.load_from if field.load_from is not None else name)\n else:\n result.add(field.data_key if field.data_key is not None else name)\n return result\n\n def __getitem__(self, key):\n val = self.data.get(key, missing)\n if val is missing or key not in self.multiple_keys:\n return val\n if hasattr(self.data, \"getlist\"):\n return self.data.getlist(key)\n if hasattr(self.data, \"getall\"):\n return self.data.getall(key)\n if isinstance(val, (list, tuple)):\n return val\n if val is None:\n return None\n return [val]\n\n def __str__(self): # str(proxy) proxies to str(proxy.data)\n return str(self.data)\n\n def __repr__(self):\n return \"MultiDictProxy(data={!r}, multiple_keys={!r})\".format(\n self.data, self.multiple_keys\n )\n\n def __delitem__(self, key):\n del self.data[key]\n\n def __setitem__(self, key, value):\n self.data[key] = value\n\n def __getattr__(self, name):\n return getattr(self.data, name)\n\n def __iter__(self):\n return iter(self.data)\n\n def __contains__(self, x):\n return x in self.data\n\n def __len__(self):\n return len(self.data)\n\n def __eq__(self, other):\n return self.data == other\n\n def __ne__(self, other):\n return self.data != other\n", "path": "src/webargs/multidictproxy.py"}]} | 2,639 | 158 |
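A small, self-contained sketch of why the tuple special case in the diff above is needed; `HeaderLikeDict` is a hypothetical stand-in for a headers mapping whose iterator yields `(key, value)` tuples:

```python
class HeaderLikeDict(dict):
    """Toy stand-in for a headers mapping that iterates as (key, value) tuples."""

    def __iter__(self):
        return iter(self.items())


def iter_keys(data):
    # Mirrors the patched MultiDictProxy.__iter__: unwrap tuple items to keys.
    for x in iter(data):
        yield x[0] if isinstance(x, tuple) else x


headers = HeaderLikeDict({"Someheader": "someval"})
print(list(iter_keys(headers)))  # ['Someheader']
```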
gh_patches_debug_15199 | rasdani/github-patches | git_diff | qtile__qtile-3205 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CheckUpdates widget swallows crashes and shows as no updates
As per the title, it is not clear whether the update-check command is working, because any error in the command results in the widget treating it as 'no updates'.
This makes debugging impossible.
</issue>
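One possible direction, sketched only for illustration (the project's actual change is not shown above, and `count_updates` is a hypothetical helper): surface the failure through the widget's logger instead of silently treating it as zero updates.

```python
from subprocess import CalledProcessError, check_output

from libqtile.log_utils import logger


def count_updates(cmd: str) -> int:
    """Sketch: run the distro's update command and log failures loudly."""
    try:
        output = check_output(cmd, shell=True, text=True)
    except CalledProcessError as error:
        # Make the broken command visible instead of reporting "no updates".
        logger.exception("CheckUpdates command %r failed: %s", cmd, error)
        return 0
    return len(output.splitlines())
```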
<code>
[start of libqtile/widget/check_updates.py]
1 # Copyright (c) 2015 Ali Mousavi
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
19 # SOFTWARE.
20
21 import os
22 from subprocess import CalledProcessError, Popen
23
24 from libqtile.log_utils import logger
25 from libqtile.widget import base
26
27
28 class CheckUpdates(base.ThreadPoolText):
29 """Shows number of pending updates in different unix systems"""
30
31 defaults = [
32 ("distro", "Arch", "Name of your distribution"),
33 (
34 "custom_command",
35 None,
36 "Custom shell command for checking updates (counts the lines of the output)",
37 ),
38 (
39 "custom_command_modify",
40 (lambda x: x),
41 "Lambda function to modify line count from custom_command",
42 ),
43 ("update_interval", 60, "Update interval in seconds."),
44 ("execute", None, "Command to execute on click"),
45 ("display_format", "Updates: {updates}", "Display format if updates available"),
46 ("colour_no_updates", "ffffff", "Colour when there's no updates."),
47 ("colour_have_updates", "ffffff", "Colour when there are updates."),
48 ("restart_indicator", "", "Indicator to represent reboot is required. (Ubuntu only)"),
49 ("no_update_string", "", "String to display if no updates available"),
50 ]
51
52 def __init__(self, **config):
53 base.ThreadPoolText.__init__(self, "", **config)
54 self.add_defaults(CheckUpdates.defaults)
55
56 # Helpful to have this as a variable as we can shorten it for testing
57 self.execute_polling_interval = 1
58
59 # format: "Distro": ("cmd", "number of lines to subtract from output")
60 self.cmd_dict = {
61 "Arch": ("pacman -Qu", 0),
62 "Arch_checkupdates": ("checkupdates", 0),
63 "Arch_Sup": ("pacman -Sup", 0),
64 "Arch_paru": ("paru -Qu", 0),
65 "Arch_paru_Sup": ("paru -Sup", 0),
66 "Arch_yay": ("yay -Qu", 0),
67 "Debian": ("apt-show-versions -u -b", 0),
68 "Gentoo_eix": ("EIX_LIMIT=0 eix -u# --world", 0),
69 "Ubuntu": ("aptitude search ~U", 0),
70 "Fedora": ("dnf list updates -q", 1),
71 "FreeBSD": ("pkg_version -I -l '<'", 0),
72 "Mandriva": ("urpmq --auto-select", 0),
73 }
74
75 if self.custom_command:
76 # Use custom_command
77 self.cmd = self.custom_command
78
79 else:
80 # Check if distro name is valid.
81 try:
82 self.cmd = self.cmd_dict[self.distro][0]
83 self.custom_command_modify = lambda x: x - self.cmd_dict[self.distro][1]
84 except KeyError:
85 distros = sorted(self.cmd_dict.keys())
86 logger.error(
87 self.distro
88 + " is not a valid distro name. "
89 + "Use one of the list: "
90 + str(distros)
91 + "."
92 )
93 self.cmd = None
94
95 if self.execute:
96 self.add_callbacks({"Button1": self.do_execute})
97
98 def _check_updates(self):
99 # type: () -> str
100 try:
101 updates = self.call_process(self.cmd, shell=True)
102 except CalledProcessError:
103 updates = ""
104 num_updates = self.custom_command_modify(len(updates.splitlines()))
105
106 if num_updates < 0:
107 num_updates = 0
108 if num_updates == 0:
109 self.layout.colour = self.colour_no_updates
110 return self.no_update_string
111 num_updates = str(num_updates)
112
113 if self.restart_indicator and os.path.exists("/var/run/reboot-required"):
114 num_updates += self.restart_indicator
115
116 self.layout.colour = self.colour_have_updates
117 return self.display_format.format(**{"updates": num_updates})
118
119 def poll(self):
120 # type: () -> str
121 if not self.cmd:
122 return "N/A"
123 return self._check_updates()
124
125 def do_execute(self):
126 self._process = Popen(self.execute, shell=True)
127 self.timeout_add(self.execute_polling_interval, self._refresh_count)
128
129 def _refresh_count(self):
130 if self._process.poll() is None:
131 self.timeout_add(self.execute_polling_interval, self._refresh_count)
132
133 else:
134 self.timer_setup()
135
[end of libqtile/widget/check_updates.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/widget/check_updates.py b/libqtile/widget/check_updates.py
--- a/libqtile/widget/check_updates.py
+++ b/libqtile/widget/check_updates.py
@@ -26,7 +26,21 @@
class CheckUpdates(base.ThreadPoolText):
- """Shows number of pending updates in different unix systems"""
+ """
+ Shows number of pending updates in different unix systems.
+
+ .. note::
+
+ It is common for package managers to return a non-zero code when there are no
+ updates. As a result, the widget will treat *any* error as if there are no updates.
+ If you are using a custom commmand/script, you should therefore ensure that it
+ returns zero when it completes if you wish to see the output of your command.
+
+ In addition, as no errors are recorded to the log, if the widget is showing no
+ updates and you believe that to be incorrect, you should run the appropriate
+ command in a terminal to view any error messages.
+
+ """
defaults = [
("distro", "Arch", "Name of your distribution"),
| {"golden_diff": "diff --git a/libqtile/widget/check_updates.py b/libqtile/widget/check_updates.py\n--- a/libqtile/widget/check_updates.py\n+++ b/libqtile/widget/check_updates.py\n@@ -26,7 +26,21 @@\n \n \n class CheckUpdates(base.ThreadPoolText):\n- \"\"\"Shows number of pending updates in different unix systems\"\"\"\n+ \"\"\"\n+ Shows number of pending updates in different unix systems.\n+\n+ .. note::\n+\n+ It is common for package managers to return a non-zero code when there are no\n+ updates. As a result, the widget will treat *any* error as if there are no updates.\n+ If you are using a custom commmand/script, you should therefore ensure that it\n+ returns zero when it completes if you wish to see the output of your command.\n+\n+ In addition, as no errors are recorded to the log, if the widget is showing no\n+ updates and you believe that to be incorrect, you should run the appropriate\n+ command in a terminal to view any error messages.\n+\n+ \"\"\"\n \n defaults = [\n (\"distro\", \"Arch\", \"Name of your distribution\"),\n", "issue": "CheckUpdates widget swallows crashes and shows as no updates\nAs per title, it's not clear if the check update command is working as any error in the command results in the widget treating it as no updates. \r\n\r\nThis makes debugging impossible.\n", "before_files": [{"content": "# Copyright (c) 2015 Ali Mousavi\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n# SOFTWARE.\n\nimport os\nfrom subprocess import CalledProcessError, Popen\n\nfrom libqtile.log_utils import logger\nfrom libqtile.widget import base\n\n\nclass CheckUpdates(base.ThreadPoolText):\n \"\"\"Shows number of pending updates in different unix systems\"\"\"\n\n defaults = [\n (\"distro\", \"Arch\", \"Name of your distribution\"),\n (\n \"custom_command\",\n None,\n \"Custom shell command for checking updates (counts the lines of the output)\",\n ),\n (\n \"custom_command_modify\",\n (lambda x: x),\n \"Lambda function to modify line count from custom_command\",\n ),\n (\"update_interval\", 60, \"Update interval in seconds.\"),\n (\"execute\", None, \"Command to execute on click\"),\n (\"display_format\", \"Updates: {updates}\", \"Display format if updates available\"),\n (\"colour_no_updates\", \"ffffff\", \"Colour when there's no updates.\"),\n (\"colour_have_updates\", \"ffffff\", \"Colour when there are updates.\"),\n (\"restart_indicator\", \"\", \"Indicator to represent reboot is required. 
(Ubuntu only)\"),\n (\"no_update_string\", \"\", \"String to display if no updates available\"),\n ]\n\n def __init__(self, **config):\n base.ThreadPoolText.__init__(self, \"\", **config)\n self.add_defaults(CheckUpdates.defaults)\n\n # Helpful to have this as a variable as we can shorten it for testing\n self.execute_polling_interval = 1\n\n # format: \"Distro\": (\"cmd\", \"number of lines to subtract from output\")\n self.cmd_dict = {\n \"Arch\": (\"pacman -Qu\", 0),\n \"Arch_checkupdates\": (\"checkupdates\", 0),\n \"Arch_Sup\": (\"pacman -Sup\", 0),\n \"Arch_paru\": (\"paru -Qu\", 0),\n \"Arch_paru_Sup\": (\"paru -Sup\", 0),\n \"Arch_yay\": (\"yay -Qu\", 0),\n \"Debian\": (\"apt-show-versions -u -b\", 0),\n \"Gentoo_eix\": (\"EIX_LIMIT=0 eix -u# --world\", 0),\n \"Ubuntu\": (\"aptitude search ~U\", 0),\n \"Fedora\": (\"dnf list updates -q\", 1),\n \"FreeBSD\": (\"pkg_version -I -l '<'\", 0),\n \"Mandriva\": (\"urpmq --auto-select\", 0),\n }\n\n if self.custom_command:\n # Use custom_command\n self.cmd = self.custom_command\n\n else:\n # Check if distro name is valid.\n try:\n self.cmd = self.cmd_dict[self.distro][0]\n self.custom_command_modify = lambda x: x - self.cmd_dict[self.distro][1]\n except KeyError:\n distros = sorted(self.cmd_dict.keys())\n logger.error(\n self.distro\n + \" is not a valid distro name. \"\n + \"Use one of the list: \"\n + str(distros)\n + \".\"\n )\n self.cmd = None\n\n if self.execute:\n self.add_callbacks({\"Button1\": self.do_execute})\n\n def _check_updates(self):\n # type: () -> str\n try:\n updates = self.call_process(self.cmd, shell=True)\n except CalledProcessError:\n updates = \"\"\n num_updates = self.custom_command_modify(len(updates.splitlines()))\n\n if num_updates < 0:\n num_updates = 0\n if num_updates == 0:\n self.layout.colour = self.colour_no_updates\n return self.no_update_string\n num_updates = str(num_updates)\n\n if self.restart_indicator and os.path.exists(\"/var/run/reboot-required\"):\n num_updates += self.restart_indicator\n\n self.layout.colour = self.colour_have_updates\n return self.display_format.format(**{\"updates\": num_updates})\n\n def poll(self):\n # type: () -> str\n if not self.cmd:\n return \"N/A\"\n return self._check_updates()\n\n def do_execute(self):\n self._process = Popen(self.execute, shell=True)\n self.timeout_add(self.execute_polling_interval, self._refresh_count)\n\n def _refresh_count(self):\n if self._process.poll() is None:\n self.timeout_add(self.execute_polling_interval, self._refresh_count)\n\n else:\n self.timer_setup()\n", "path": "libqtile/widget/check_updates.py"}]} | 2,071 | 250 |
gh_patches_debug_15946 | rasdani/github-patches | git_diff | microsoft__Qcodes-485 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Keithley 2600 "resolution"
@MerlinSmiles right now we are limiting the set to 8 digits (https://github.com/QCoDeS/Qcodes/blob/master/qcodes/instrument_drivers/tektronix/Keithley_2600.py#L23)
Afaik it can go up to 12 digits. Do you confirm?
</issue>
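A minimal sketch of the change being discussed (widening the set-command precision from 8 to 12 decimal digits), assuming the instrument really does accept 12-digit values:

```python
# Hypothetical adjustment inside Keithley_2600.__init__ (illustration only):
self.add_parameter('volt', get_cmd='measure.v()',
                   get_parser=float, set_cmd='source.levelv={:.12f}',
                   label='Voltage', unit='V')
self.add_parameter('curr', get_cmd='measure.i()',
                   get_parser=float, set_cmd='source.leveli={:.12f}',
                   label='Current', unit='A')
```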
<code>
[start of qcodes/instrument_drivers/tektronix/Keithley_2600.py]
1 from qcodes import VisaInstrument
2
3
4 class Keithley_2600(VisaInstrument):
5 """
6 channel: use channel 'a' or 'b'
7
8 This is the qcodes driver for the Keithley_2600 Source-Meter series,
9 tested with Keithley_2614B
10
11 Status: beta-version.
12 TODO:
13 - Add all parameters that are in the manual
14 - range and limit should be set according to mode
15 - add ramping and such stuff
16
17 """
18 def __init__(self, name, address, channel, **kwargs):
19 super().__init__(name, address, terminator='\n', **kwargs)
20 self._channel = channel
21
22 self.add_parameter('volt', get_cmd='measure.v()',
23 get_parser=float, set_cmd='source.levelv={:.8f}',
24 label='Voltage',
25 unit='V')
26 self.add_parameter('curr', get_cmd='measure.i()',
27 get_parser=float, set_cmd='source.leveli={:.8f}',
28 label='Current',
29 unit='A')
30 self.add_parameter('mode',
31 get_cmd='source.func',
32 set_cmd='source.func={:d}',
33 val_mapping={'current': 0, 'voltage': 1})
34 self.add_parameter('output',
35 get_cmd='source.output',
36 set_cmd='source.output={:d}',
37 val_mapping={'on': 1, 'off': 0})
38 # Source range
39 # needs get after set
40 self.add_parameter('rangev',
41 get_cmd='source.rangev',
42 get_parser=float,
43 set_cmd='source.rangev={:.4f}',
44 unit='V')
45 # Measure range
46 # needs get after set
47 self.add_parameter('rangei',
48 get_cmd='source.rangei',
49 get_parser=float,
50 set_cmd='source.rangei={:.4f}',
51 unit='A')
52 # Compliance limit
53 self.add_parameter('limitv',
54 get_cmd='source.limitv',
55 get_parser=float,
56 set_cmd='source.limitv={:.4f}',
57 unit='V')
58 # Compliance limit
59 self.add_parameter('limiti',
60 get_cmd='source.limiti',
61 get_parser=float,
62 set_cmd='source.limiti={:.4f}',
63 unit='A')
64
65 self.connect_message()
66
67 def get_idn(self):
68 IDN = self.ask_raw('*IDN?')
69 vendor, model, serial, firmware = map(str.strip, IDN.split(','))
70 model = model[6:]
71
72 IDN = {'vendor': vendor, 'model': model,
73 'serial': serial, 'firmware': firmware}
74 return IDN
75
76 def reset(self):
77 self.write('reset()')
78
79 def ask(self, cmd):
80 return super().ask('print(smu{:s}.{:s})'.format(self._channel, cmd))
81
82 def write(self, cmd):
83 super().write('smu{:s}.{:s}'.format(self._channel, cmd))
84
[end of qcodes/instrument_drivers/tektronix/Keithley_2600.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qcodes/instrument_drivers/tektronix/Keithley_2600.py b/qcodes/instrument_drivers/tektronix/Keithley_2600.py
--- a/qcodes/instrument_drivers/tektronix/Keithley_2600.py
+++ b/qcodes/instrument_drivers/tektronix/Keithley_2600.py
@@ -20,11 +20,11 @@
self._channel = channel
self.add_parameter('volt', get_cmd='measure.v()',
- get_parser=float, set_cmd='source.levelv={:.8f}',
+ get_parser=float, set_cmd='source.levelv={:.12f}',
label='Voltage',
unit='V')
self.add_parameter('curr', get_cmd='measure.i()',
- get_parser=float, set_cmd='source.leveli={:.8f}',
+ get_parser=float, set_cmd='source.leveli={:.12f}',
label='Current',
unit='A')
self.add_parameter('mode',
| {"golden_diff": "diff --git a/qcodes/instrument_drivers/tektronix/Keithley_2600.py b/qcodes/instrument_drivers/tektronix/Keithley_2600.py\n--- a/qcodes/instrument_drivers/tektronix/Keithley_2600.py\n+++ b/qcodes/instrument_drivers/tektronix/Keithley_2600.py\n@@ -20,11 +20,11 @@\n self._channel = channel\n \n self.add_parameter('volt', get_cmd='measure.v()',\n- get_parser=float, set_cmd='source.levelv={:.8f}',\n+ get_parser=float, set_cmd='source.levelv={:.12f}',\n label='Voltage',\n unit='V')\n self.add_parameter('curr', get_cmd='measure.i()',\n- get_parser=float, set_cmd='source.leveli={:.8f}',\n+ get_parser=float, set_cmd='source.leveli={:.12f}',\n label='Current',\n unit='A')\n self.add_parameter('mode',\n", "issue": "Keithely 2600 \"resolution\"\n@MerlinSmiles right now we are limiting the set to 8 digits (https://github.com/QCoDeS/Qcodes/blob/master/qcodes/instrument_drivers/tektronix/Keithley_2600.py#L23)\r\nAfaik it can go to to 12 digits. Do you confirm ? \r\n\n", "before_files": [{"content": "from qcodes import VisaInstrument\n\n\nclass Keithley_2600(VisaInstrument):\n \"\"\"\n channel: use channel 'a' or 'b'\n\n This is the qcodes driver for the Keithley_2600 Source-Meter series,\n tested with Keithley_2614B\n\n Status: beta-version.\n TODO:\n - Add all parameters that are in the manual\n - range and limit should be set according to mode\n - add ramping and such stuff\n\n \"\"\"\n def __init__(self, name, address, channel, **kwargs):\n super().__init__(name, address, terminator='\\n', **kwargs)\n self._channel = channel\n\n self.add_parameter('volt', get_cmd='measure.v()',\n get_parser=float, set_cmd='source.levelv={:.8f}',\n label='Voltage',\n unit='V')\n self.add_parameter('curr', get_cmd='measure.i()',\n get_parser=float, set_cmd='source.leveli={:.8f}',\n label='Current',\n unit='A')\n self.add_parameter('mode',\n get_cmd='source.func',\n set_cmd='source.func={:d}',\n val_mapping={'current': 0, 'voltage': 1})\n self.add_parameter('output',\n get_cmd='source.output',\n set_cmd='source.output={:d}',\n val_mapping={'on': 1, 'off': 0})\n # Source range\n # needs get after set\n self.add_parameter('rangev',\n get_cmd='source.rangev',\n get_parser=float,\n set_cmd='source.rangev={:.4f}',\n unit='V')\n # Measure range\n # needs get after set\n self.add_parameter('rangei',\n get_cmd='source.rangei',\n get_parser=float,\n set_cmd='source.rangei={:.4f}',\n unit='A')\n # Compliance limit\n self.add_parameter('limitv',\n get_cmd='source.limitv',\n get_parser=float,\n set_cmd='source.limitv={:.4f}',\n unit='V')\n # Compliance limit\n self.add_parameter('limiti',\n get_cmd='source.limiti',\n get_parser=float,\n set_cmd='source.limiti={:.4f}',\n unit='A')\n\n self.connect_message()\n\n def get_idn(self):\n IDN = self.ask_raw('*IDN?')\n vendor, model, serial, firmware = map(str.strip, IDN.split(','))\n model = model[6:]\n\n IDN = {'vendor': vendor, 'model': model,\n 'serial': serial, 'firmware': firmware}\n return IDN\n\n def reset(self):\n self.write('reset()')\n\n def ask(self, cmd):\n return super().ask('print(smu{:s}.{:s})'.format(self._channel, cmd))\n\n def write(self, cmd):\n super().write('smu{:s}.{:s}'.format(self._channel, cmd))\n", "path": "qcodes/instrument_drivers/tektronix/Keithley_2600.py"}]} | 1,463 | 234 |
gh_patches_debug_5064 | rasdani/github-patches | git_diff | pytorch__pytorch-255 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
batch_first broken in AutogradRNN
The last line here fails on CPU or when CUDNN is otherwise unavailable:
```python
l, b, t, x, h = 2, 3, 5, 10, 20
rnn = nn.LSTM(x, h, l, batch_first=True)
inpt = Variable(torch.randn(b, t, x))
h0 = Variable(torch.randn(l, b, h))
c0 = Variable(torch.randn(l, b, h))
output, hn = rnn(inpt, (h0, c0))
```
This is because `AutogradRNN.forward` accidentally assumes `Tensor`'s in-place `transpose` semantics rather than the functional semantics of `Variable` (`cudnn.rnn.forward` gets it right):
```python
def forward(input, weight, hidden):
if batch_first:
input.transpose(0, 1)
nexth, output = func(input, hidden, weight)
if batch_first:
output.transpose(0, 1)
```
I can push a PR that fixes this, or one of the devs can put it in the next bugfix PR:
```python
def forward(input, weight, hidden):
if batch_first:
input = input.transpose(0, 1)
nexth, output = func(input, hidden, weight)
if batch_first:
output = output.transpose(0, 1)
```
</issue>
<code>
[start of torch/nn/functions/rnn.py]
1 from torch.autograd import Function, NestedIOFunction, Variable
2 from torch._thnn import type2backend
3 import torch.backends.cudnn as cudnn
4 try:
5 import torch.backends.cudnn.rnn
6 except ImportError:
7 pass
8
9
10 # FIXME: write a proper function library
11 from .thnn import Tanh, Sigmoid, Threshold
12 from .linear import Linear
13 from .dropout import Dropout
14
15
16 def _wrap(fn, *args):
17 def inner(*inner_args):
18 return fn(*args)(*inner_args)
19 return inner
20 tanh = _wrap(Tanh)
21 sigmoid = _wrap(Sigmoid)
22 ReLU = _wrap(Threshold, 0, 0, False)
23
24
25 # get around autograd's lack of None-handling
26 def linear(input, w, b):
27 if b is not None:
28 return Linear()(input, w, b)
29 else:
30 return Linear()(input, w)
31
32
33 def RNNReLUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):
34 hy = ReLU(linear(input, w_ih, b_ih) + linear(hidden, w_hh, b_hh))
35 return hy
36
37
38 def RNNTanhCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):
39 hy = tanh(linear(input, w_ih, b_ih) + linear(hidden, w_hh, b_hh))
40 return hy
41
42
43 def LSTMCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):
44 hx, cx = hidden
45 gates = linear(input, w_ih, b_ih) + linear(hx, w_hh, b_hh)
46 ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
47
48 ingate = sigmoid(ingate)
49 forgetgate = sigmoid(forgetgate)
50 cellgate = tanh(cellgate)
51 outgate = sigmoid(outgate)
52
53 cy = (forgetgate * cx) + (ingate * cellgate)
54 hy = outgate * tanh(cy)
55
56 return hy, cy
57
58
59 def GRUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):
60 gi = linear(input, w_ih, b_ih)
61 gh = linear(hidden, w_hh, b_hh)
62 i_r, i_i, i_n = gi.chunk(3, 1)
63 h_r, h_i, h_n = gh.chunk(3, 1)
64
65 resetgate = sigmoid(i_r + h_r)
66 inputgate = sigmoid(i_i + h_i)
67 newgate = tanh(i_n + resetgate * h_n)
68 hy = newgate + inputgate * (hidden - newgate)
69
70 return hy
71
72
73 def StackedRNN(inners, num_layers, lstm=False, dropout=0, train=True):
74
75 num_directions = len(inners)
76 total_layers = num_layers * num_directions
77
78 def forward(input, hidden, weight):
79 assert(len(weight) == total_layers)
80 next_hidden = []
81
82 if lstm:
83 hidden = list(zip(*hidden))
84
85 for i in range(num_layers):
86 all_output = []
87 for j, inner in enumerate(inners):
88 l = i * num_directions + j
89
90 hy, output = inner(input, hidden[l], weight[l])
91 next_hidden.append(hy)
92 all_output.append(output)
93
94 input = torch.cat(all_output, 2)
95
96 if dropout != 0 and i < num_layers - 1:
97 input = Dropout(p=dropout, train=train, inplace=False)(input)
98
99 if lstm:
100 next_h, next_c = zip(*next_hidden)
101 next_hidden = (
102 torch.cat(next_h, 0).view(total_layers, *next_h[0].size()),
103 torch.cat(next_c, 0).view(total_layers, *next_c[0].size())
104 )
105 else:
106 next_hidden = torch.cat(next_hidden, 0).view(
107 total_layers, *next_hidden[0].size())
108
109 return next_hidden, input
110
111 return forward
112
113 def Recurrent(inner, reverse=False):
114 def forward(input, hidden, weight):
115 output = []
116 steps = range(input.size(0) - 1, -1, -1) if reverse else range(input.size(0))
117 for i in steps:
118 hidden = inner(input[i], hidden, *weight)
119 # hack to handle LSTM
120 output.append(isinstance(hidden, tuple) and hidden[0] or hidden)
121
122 if reverse:
123 output.reverse()
124 output = torch.cat(output, 0).view(input.size(0), *output[0].size())
125
126 return hidden, output
127
128 return forward
129
130
131 def AutogradRNN(mode, input_size, hidden_size, num_layers=1, batch_first=False, dropout=0, train=True, bidirectional=False):
132
133 if mode == 'RNN_RELU':
134 cell = RNNReLUCell
135 elif mode == 'RNN_TANH':
136 cell = RNNTanhCell
137 elif mode == 'LSTM':
138 cell = LSTMCell
139 elif mode == 'GRU':
140 cell = GRUCell
141 else:
142 raise Exception('Unknown mode: {}'.format(mode))
143
144 if bidirectional:
145 layer = (Recurrent(cell), Recurrent(cell, reverse=True))
146 else:
147 layer = (Recurrent(cell),)
148
149 func = StackedRNN(layer,
150 num_layers,
151 (mode == 'LSTM'),
152 dropout=dropout,
153 train=train)
154
155 def forward(input, weight, hidden):
156 if batch_first:
157 input.transpose(0, 1)
158
159 nexth, output = func(input, hidden, weight)
160
161 if batch_first:
162 output.transpose(0, 1)
163
164 return output, nexth
165
166 return forward
167
168
169 class CudnnRNN(NestedIOFunction):
170 def __init__(self, mode, input_size, hidden_size, num_layers=1, batch_first=False, dropout=0, train=True, bidirectional=False):
171 super(CudnnRNN, self).__init__()
172 self.mode = cudnn.rnn.get_cudnn_mode(mode)
173 self.input_mode = cudnn.CUDNN_LINEAR_INPUT
174 self.input_size = input_size
175 self.hidden_size = hidden_size
176 self.num_layers = num_layers
177 self.batch_first = batch_first
178 self.dropout = dropout
179 self.train = train
180 self.bidirectional = 1 if bidirectional else 0
181 self.num_directions = 2 if bidirectional else 1
182 self.seed = torch.IntTensor(1).random_()[0]
183
184 def forward_extended(self, input, weight, hx):
185
186 assert(cudnn.is_acceptable(input))
187
188 output = input.new()
189
190 if torch.is_tensor(hx):
191 hy = hx.new()
192 else:
193 hy = tuple(h.new() for h in hx)
194
195 cudnn.rnn.forward(self, input, hx, weight, output, hy)
196
197 self.save_for_backward(input, hx, weight, output)
198 return output, hy
199
200
201 def backward_extended(self, grad_output, grad_hy):
202 input, hx, weight, output = self.saved_tensors
203
204 grad_input, grad_weight, grad_hx = None, None, None
205
206 assert(cudnn.is_acceptable(input))
207
208 grad_input = input.new()
209 grad_weight = input.new()
210 grad_hx = input.new()
211 if torch.is_tensor(hx):
212 grad_hx = input.new()
213 else:
214 grad_hx = tuple(h.new() for h in hx)
215
216 cudnn.rnn.backward_grad(
217 self,
218 input,
219 hx,
220 weight,
221 output,
222 grad_output,
223 grad_hy,
224 grad_input,
225 grad_hx)
226
227 if self.needs_input_grad[1]:
228 grad_weight = [tuple(w.new().resize_as_(w).zero_() for w in layer_weight) for layer_weight in weight]
229 cudnn.rnn.backward_weight(
230 self,
231 input,
232 hx,
233 output,
234 weight,
235 grad_weight)
236
237 return grad_input, grad_weight, grad_hx
238
239
240 def RNN(*args, **kwargs):
241 def forward(input, *fargs, **fkwargs):
242 if cudnn.is_acceptable(input.data):
243 func = CudnnRNN(*args, **kwargs)
244 else:
245 func = AutogradRNN(*args, **kwargs)
246 return func(input, *fargs, **fkwargs)
247
248 return forward
249
[end of torch/nn/functions/rnn.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torch/nn/functions/rnn.py b/torch/nn/functions/rnn.py
--- a/torch/nn/functions/rnn.py
+++ b/torch/nn/functions/rnn.py
@@ -154,12 +154,12 @@
def forward(input, weight, hidden):
if batch_first:
- input.transpose(0, 1)
+ input = input.transpose(0, 1)
nexth, output = func(input, hidden, weight)
if batch_first:
- output.transpose(0, 1)
+ output = output.transpose(0, 1)
return output, nexth
| {"golden_diff": "diff --git a/torch/nn/functions/rnn.py b/torch/nn/functions/rnn.py\n--- a/torch/nn/functions/rnn.py\n+++ b/torch/nn/functions/rnn.py\n@@ -154,12 +154,12 @@\n \n def forward(input, weight, hidden):\n if batch_first:\n- input.transpose(0, 1)\n+ input = input.transpose(0, 1)\n \n nexth, output = func(input, hidden, weight)\n \n if batch_first:\n- output.transpose(0, 1)\n+ output = output.transpose(0, 1)\n \n return output, nexth\n", "issue": "batch_first broken in AutogradRNN\nThe last line here fails on CPU or when CUDNN is otherwise unavailable:\r\n\r\n```python\r\nl, b, t, x, h = 2, 3, 5, 10, 20\r\n\r\nrnn = nn.LSTM(x, h, l, batch_first=True)\r\ninpt = Variable(torch.randn(b, t, x))\r\nh0 = Variable(torch.randn(l, b, h))\r\nc0 = Variable(torch.randn(l, b, h))\r\noutput, hn = rnn(inpt, (h0, c0))\r\n```\r\n\r\nThis is because `AutogradRNN.forward` accidentally assumes `Tensor`'s in-place `transpose` semantics rather than the functional semantics of `Variable` (`cudnn.rnn.forward` gets it right):\r\n\r\n```python\r\ndef forward(input, weight, hidden):\r\n if batch_first:\r\n input.transpose(0, 1)\r\n nexth, output = func(input, hidden, weight)\r\n if batch_first:\r\n output.transpose(0, 1)\r\n```\r\n\r\nI can push a PR that fixes this, or one of the devs can put it in the next bugfix PR:\r\n```python\r\ndef forward(input, weight, hidden):\r\n if batch_first:\r\n input = input.transpose(0, 1)\r\n nexth, output = func(input, hidden, weight)\r\n if batch_first:\r\n output = output.transpose(0, 1)\r\n```\n", "before_files": [{"content": "from torch.autograd import Function, NestedIOFunction, Variable\nfrom torch._thnn import type2backend\nimport torch.backends.cudnn as cudnn\ntry:\n import torch.backends.cudnn.rnn\nexcept ImportError:\n pass\n\n\n# FIXME: write a proper function library\nfrom .thnn import Tanh, Sigmoid, Threshold\nfrom .linear import Linear\nfrom .dropout import Dropout\n\n\ndef _wrap(fn, *args):\n def inner(*inner_args):\n return fn(*args)(*inner_args)\n return inner\ntanh = _wrap(Tanh)\nsigmoid = _wrap(Sigmoid)\nReLU = _wrap(Threshold, 0, 0, False)\n\n\n# get around autograd's lack of None-handling\ndef linear(input, w, b):\n if b is not None:\n return Linear()(input, w, b)\n else:\n return Linear()(input, w)\n\n\ndef RNNReLUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):\n hy = ReLU(linear(input, w_ih, b_ih) + linear(hidden, w_hh, b_hh))\n return hy\n\n\ndef RNNTanhCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):\n hy = tanh(linear(input, w_ih, b_ih) + linear(hidden, w_hh, b_hh))\n return hy\n\n\ndef LSTMCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):\n hx, cx = hidden\n gates = linear(input, w_ih, b_ih) + linear(hx, w_hh, b_hh)\n ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)\n\n ingate = sigmoid(ingate)\n forgetgate = sigmoid(forgetgate)\n cellgate = tanh(cellgate)\n outgate = sigmoid(outgate)\n\n cy = (forgetgate * cx) + (ingate * cellgate)\n hy = outgate * tanh(cy)\n\n return hy, cy\n\n\ndef GRUCell(input, hidden, w_ih, w_hh, b_ih=None, b_hh=None):\n gi = linear(input, w_ih, b_ih)\n gh = linear(hidden, w_hh, b_hh)\n i_r, i_i, i_n = gi.chunk(3, 1)\n h_r, h_i, h_n = gh.chunk(3, 1)\n\n resetgate = sigmoid(i_r + h_r)\n inputgate = sigmoid(i_i + h_i)\n newgate = tanh(i_n + resetgate * h_n)\n hy = newgate + inputgate * (hidden - newgate)\n\n return hy\n\n\ndef StackedRNN(inners, num_layers, lstm=False, dropout=0, train=True):\n\n num_directions = len(inners)\n total_layers = num_layers * num_directions\n\n def forward(input, 
hidden, weight):\n assert(len(weight) == total_layers)\n next_hidden = []\n\n if lstm:\n hidden = list(zip(*hidden))\n\n for i in range(num_layers):\n all_output = []\n for j, inner in enumerate(inners):\n l = i * num_directions + j\n\n hy, output = inner(input, hidden[l], weight[l])\n next_hidden.append(hy)\n all_output.append(output)\n\n input = torch.cat(all_output, 2)\n\n if dropout != 0 and i < num_layers - 1:\n input = Dropout(p=dropout, train=train, inplace=False)(input)\n\n if lstm:\n next_h, next_c = zip(*next_hidden)\n next_hidden = (\n torch.cat(next_h, 0).view(total_layers, *next_h[0].size()),\n torch.cat(next_c, 0).view(total_layers, *next_c[0].size())\n )\n else:\n next_hidden = torch.cat(next_hidden, 0).view(\n total_layers, *next_hidden[0].size())\n\n return next_hidden, input\n\n return forward\n\ndef Recurrent(inner, reverse=False):\n def forward(input, hidden, weight):\n output = []\n steps = range(input.size(0) - 1, -1, -1) if reverse else range(input.size(0))\n for i in steps:\n hidden = inner(input[i], hidden, *weight)\n # hack to handle LSTM\n output.append(isinstance(hidden, tuple) and hidden[0] or hidden)\n\n if reverse:\n output.reverse()\n output = torch.cat(output, 0).view(input.size(0), *output[0].size())\n\n return hidden, output\n\n return forward\n\n\ndef AutogradRNN(mode, input_size, hidden_size, num_layers=1, batch_first=False, dropout=0, train=True, bidirectional=False):\n\n if mode == 'RNN_RELU':\n cell = RNNReLUCell\n elif mode == 'RNN_TANH':\n cell = RNNTanhCell\n elif mode == 'LSTM':\n cell = LSTMCell\n elif mode == 'GRU':\n cell = GRUCell\n else:\n raise Exception('Unknown mode: {}'.format(mode))\n\n if bidirectional:\n layer = (Recurrent(cell), Recurrent(cell, reverse=True))\n else:\n layer = (Recurrent(cell),)\n\n func = StackedRNN(layer,\n num_layers,\n (mode == 'LSTM'),\n dropout=dropout,\n train=train)\n\n def forward(input, weight, hidden):\n if batch_first:\n input.transpose(0, 1)\n\n nexth, output = func(input, hidden, weight)\n\n if batch_first:\n output.transpose(0, 1)\n\n return output, nexth\n\n return forward\n\n\nclass CudnnRNN(NestedIOFunction):\n def __init__(self, mode, input_size, hidden_size, num_layers=1, batch_first=False, dropout=0, train=True, bidirectional=False):\n super(CudnnRNN, self).__init__()\n self.mode = cudnn.rnn.get_cudnn_mode(mode)\n self.input_mode = cudnn.CUDNN_LINEAR_INPUT\n self.input_size = input_size\n self.hidden_size = hidden_size\n self.num_layers = num_layers\n self.batch_first = batch_first\n self.dropout = dropout\n self.train = train\n self.bidirectional = 1 if bidirectional else 0\n self.num_directions = 2 if bidirectional else 1\n self.seed = torch.IntTensor(1).random_()[0]\n\n def forward_extended(self, input, weight, hx):\n\n assert(cudnn.is_acceptable(input))\n\n output = input.new()\n\n if torch.is_tensor(hx):\n hy = hx.new()\n else:\n hy = tuple(h.new() for h in hx)\n\n cudnn.rnn.forward(self, input, hx, weight, output, hy)\n\n self.save_for_backward(input, hx, weight, output)\n return output, hy\n\n\n def backward_extended(self, grad_output, grad_hy):\n input, hx, weight, output = self.saved_tensors\n\n grad_input, grad_weight, grad_hx = None, None, None\n\n assert(cudnn.is_acceptable(input))\n\n grad_input = input.new()\n grad_weight = input.new()\n grad_hx = input.new()\n if torch.is_tensor(hx):\n grad_hx = input.new()\n else:\n grad_hx = tuple(h.new() for h in hx)\n\n cudnn.rnn.backward_grad(\n self,\n input,\n hx,\n weight,\n output,\n grad_output,\n grad_hy,\n grad_input,\n grad_hx)\n\n if 
self.needs_input_grad[1]:\n grad_weight = [tuple(w.new().resize_as_(w).zero_() for w in layer_weight) for layer_weight in weight]\n cudnn.rnn.backward_weight(\n self,\n input,\n hx,\n output,\n weight,\n grad_weight)\n\n return grad_input, grad_weight, grad_hx\n\n\ndef RNN(*args, **kwargs):\n def forward(input, *fargs, **fkwargs):\n if cudnn.is_acceptable(input.data):\n func = CudnnRNN(*args, **kwargs)\n else:\n func = AutogradRNN(*args, **kwargs)\n return func(input, *fargs, **fkwargs)\n\n return forward\n", "path": "torch/nn/functions/rnn.py"}]} | 3,372 | 147 |
gh_patches_debug_4901 | rasdani/github-patches | git_diff | certbot__certbot-6349 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
KeyError handle_modules with 0.27.0 on openSUSE
## My operating system is (include version):
openSUSE Leap 42.1
## I installed Certbot with (certbot-auto, OS package manager, pip, etc):
certbot-auto
## I ran this command and it produced this output:
````
kevdev36:~ # certbot-auto --version
Upgrading certbot-auto 0.26.1 to 0.27.0...
Replacing certbot-auto...
Creating virtual environment...
Installing Python packages...
Installation succeeded.
An unexpected error occurred:
KeyError: 'handle_modules'
Please see the logfile '/tmp/tmpMAZJox' for more details.
````
## Certbot's behavior differed from what I expected because:
It did not print the version.
## Here is a Certbot log showing the issue (if available):
/tmp/tmpMAZJox
````
2018-09-06 09:59:58,652:DEBUG:certbot.main:certbot version: 0.27.0
2018-09-06 09:59:58,652:DEBUG:certbot.main:Arguments: ['--version']
2018-09-06 09:59:58,653:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#apache,PluginEntryPoint#manual,PluginEntryPoint#nginx,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)
2018-09-06 09:59:58,660:DEBUG:certbot.log:Exiting abnormally:
Traceback (most recent call last):
File "/opt/eff.org/certbot/venv/bin/letsencrypt", line 11, in <module>
sys.exit(main())
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/main.py", line 1345, in main
args = cli.prepare_and_parse_args(plugins, cli_args)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py", line 1243, in prepare_and_parse_args
_plugins_parsing(helpful, plugins)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py", line 1458, in _plugins_parsing
helpful.add_plugin_args(plugins)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py", line 840, in add_plugin_args
plugin_ep.plugin_cls.inject_parser_options(parser_or_group, name)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/plugins/common.py", line 81, in inject_parser_options
return cls.add_parser_arguments(add)
File "/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot_apache/configurator.py", line 159, in add_parser_arguments
add("handle-modules", default=cls.OS_DEFAULTS["handle_modules"],
KeyError: 'handle_modules'
2018-09-06 09:59:58,660:ERROR:certbot.log:An unexpected error occurred:
````
## Workaround
Downgrade to 0.26.1 and use `certbot-auto` with `--no-self-upgrade`.
````
kevdev36:~ # wget https://raw.githubusercontent.com/certbot/certbot/v0.26.1/certbot-auto
kevdev36:~ # chmod +x certbot-auto
kevdev36:~ # /opt/eff.org/certbot/venv/bin/pip install certbot==0.26.1 certbot-apache==0.26.1 certbot-nginx==0.26.1
kevdev36:~ # ./certbot-auto --no-self-upgrade --version
certbot 0.26.1
````
</issue>
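The traceback comes down to a key-name mismatch: `add_parser_arguments` in the Apache configurator looks up `OS_DEFAULTS["handle_modules"]`, while the OpenSUSE override below still defines the old `handle_mods` key. A condensed illustration of the failing lookup, with names taken from the traceback and the file below:

```python
# The OpenSUSE override ships the old key name...
OS_DEFAULTS = dict(
    handle_mods=False,    # old spelling
    handle_sites=False,
)

# ...while the base configurator asks for the new one, as seen in the traceback:
OS_DEFAULTS["handle_modules"]   # -> KeyError: 'handle_modules'
```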
<code>
[start of certbot-apache/certbot_apache/override_suse.py]
1 """ Distribution specific override class for OpenSUSE """
2 import pkg_resources
3
4 import zope.interface
5
6 from certbot import interfaces
7
8 from certbot_apache import configurator
9
10 @zope.interface.provider(interfaces.IPluginFactory)
11 class OpenSUSEConfigurator(configurator.ApacheConfigurator):
12 """OpenSUSE specific ApacheConfigurator override class"""
13
14 OS_DEFAULTS = dict(
15 server_root="/etc/apache2",
16 vhost_root="/etc/apache2/vhosts.d",
17 vhost_files="*.conf",
18 logs_root="/var/log/apache2",
19 ctl="apache2ctl",
20 version_cmd=['apache2ctl', '-v'],
21 restart_cmd=['apache2ctl', 'graceful'],
22 conftest_cmd=['apache2ctl', 'configtest'],
23 enmod="a2enmod",
24 dismod="a2dismod",
25 le_vhost_ext="-le-ssl.conf",
26 handle_mods=False,
27 handle_sites=False,
28 challenge_location="/etc/apache2/vhosts.d",
29 MOD_SSL_CONF_SRC=pkg_resources.resource_filename(
30 "certbot_apache", "options-ssl-apache.conf")
31 )
32
[end of certbot-apache/certbot_apache/override_suse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/certbot-apache/certbot_apache/override_suse.py b/certbot-apache/certbot_apache/override_suse.py
--- a/certbot-apache/certbot_apache/override_suse.py
+++ b/certbot-apache/certbot_apache/override_suse.py
@@ -23,7 +23,7 @@
enmod="a2enmod",
dismod="a2dismod",
le_vhost_ext="-le-ssl.conf",
- handle_mods=False,
+ handle_modules=False,
handle_sites=False,
challenge_location="/etc/apache2/vhosts.d",
MOD_SSL_CONF_SRC=pkg_resources.resource_filename(
| {"golden_diff": "diff --git a/certbot-apache/certbot_apache/override_suse.py b/certbot-apache/certbot_apache/override_suse.py\n--- a/certbot-apache/certbot_apache/override_suse.py\n+++ b/certbot-apache/certbot_apache/override_suse.py\n@@ -23,7 +23,7 @@\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n- handle_mods=False,\n+ handle_modules=False,\n handle_sites=False,\n challenge_location=\"/etc/apache2/vhosts.d\",\n MOD_SSL_CONF_SRC=pkg_resources.resource_filename(\n", "issue": "KeyError handle_modules with 0.27.0 on openSUSE\n## My operating system is (include version):\r\n\r\nopenSUSE Leap 42.1\r\n\r\n## I installed Certbot with (certbot-auto, OS package manager, pip, etc):\r\n\r\ncertbot-auto\r\n\r\n## I ran this command and it produced this output:\r\n\r\n````\r\nkevdev36:~ # certbot-auto --version\r\nUpgrading certbot-auto 0.26.1 to 0.27.0...\r\nReplacing certbot-auto...\r\nCreating virtual environment...\r\nInstalling Python packages...\r\nInstallation succeeded.\r\nAn unexpected error occurred:\r\nKeyError: 'handle_modules'\r\nPlease see the logfile '/tmp/tmpMAZJox' for more details.\r\n````\r\n\r\n## Certbot's behavior differed from what I expected because:\r\n\r\nIt did not print the version.\r\n\r\n## Here is a Certbot log showing the issue (if available):\r\n\r\n/tmp/tmpMAZJox\r\n\r\n````\r\n2018-09-06 09:59:58,652:DEBUG:certbot.main:certbot version: 0.27.0\r\n2018-09-06 09:59:58,652:DEBUG:certbot.main:Arguments: ['--version']\r\n2018-09-06 09:59:58,653:DEBUG:certbot.main:Discovered plugins: PluginsRegistry(PluginEntryPoint#apache,PluginEntryPoint#manual,PluginEntryPoint#nginx,PluginEntryPoint#null,PluginEntryPoint#standalone,PluginEntryPoint#webroot)\r\n2018-09-06 09:59:58,660:DEBUG:certbot.log:Exiting abnormally:\r\nTraceback (most recent call last):\r\n File \"/opt/eff.org/certbot/venv/bin/letsencrypt\", line 11, in <module>\r\n sys.exit(main())\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/main.py\", line 1345, in main\r\n args = cli.prepare_and_parse_args(plugins, cli_args)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py\", line 1243, in prepare_and_parse_args\r\n _plugins_parsing(helpful, plugins)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py\", line 1458, in _plugins_parsing\r\n helpful.add_plugin_args(plugins)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/cli.py\", line 840, in add_plugin_args\r\n plugin_ep.plugin_cls.inject_parser_options(parser_or_group, name)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot/plugins/common.py\", line 81, in inject_parser_options\r\n return cls.add_parser_arguments(add)\r\n File \"/opt/eff.org/certbot/venv/lib/python2.7/site-packages/certbot_apache/configurator.py\", line 159, in add_parser_arguments\r\n add(\"handle-modules\", default=cls.OS_DEFAULTS[\"handle_modules\"],\r\nKeyError: 'handle_modules'\r\n2018-09-06 09:59:58,660:ERROR:certbot.log:An unexpected error occurred:\r\n````\r\n\r\n## Workaround\r\n\r\nDowngrade to 0.26.1 and use `certbot-auto` with `--no-self-upgrade`.\r\n\r\n````\r\nkevdev36:~ # wget https://raw.githubusercontent.com/certbot/certbot/v0.26.1/certbot-auto\r\nkevdev36:~ # chmod +x certbot-auto\r\nkevdev36:~ # /opt/eff.org/certbot/venv/bin/pip install certbot==0.26.1 certbot-apache==0.26.1 certbot-nginx==0.26.1\r\nkevdev36:~ # ./certbot-auto --no-self-upgrade --version\r\ncertbot 0.26.1\r\n````\n", "before_files": [{"content": "\"\"\" 
Distribution specific override class for OpenSUSE \"\"\"\nimport pkg_resources\n\nimport zope.interface\n\nfrom certbot import interfaces\n\nfrom certbot_apache import configurator\n\[email protected](interfaces.IPluginFactory)\nclass OpenSUSEConfigurator(configurator.ApacheConfigurator):\n \"\"\"OpenSUSE specific ApacheConfigurator override class\"\"\"\n\n OS_DEFAULTS = dict(\n server_root=\"/etc/apache2\",\n vhost_root=\"/etc/apache2/vhosts.d\",\n vhost_files=\"*.conf\",\n logs_root=\"/var/log/apache2\",\n ctl=\"apache2ctl\",\n version_cmd=['apache2ctl', '-v'],\n restart_cmd=['apache2ctl', 'graceful'],\n conftest_cmd=['apache2ctl', 'configtest'],\n enmod=\"a2enmod\",\n dismod=\"a2dismod\",\n le_vhost_ext=\"-le-ssl.conf\",\n handle_mods=False,\n handle_sites=False,\n challenge_location=\"/etc/apache2/vhosts.d\",\n MOD_SSL_CONF_SRC=pkg_resources.resource_filename(\n \"certbot_apache\", \"options-ssl-apache.conf\")\n )\n", "path": "certbot-apache/certbot_apache/override_suse.py"}]} | 1,782 | 154 |
gh_patches_debug_21031 | rasdani/github-patches | git_diff | spack__spack-15252 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
py-pyfftw import issue with scipy.fftpack
Hi,
Sorry to bother you all.
After loading the spack modules via:
```console
spack load -r [email protected]
spack load py-h5py
spack load py-scipy
spack load py-pyfftw
spack load py-mpi4py
```
When, in the Python code I am using, I try to do `import scipy.fftpack`, I get an error message that ends with:
### Error Message
```python
from scipy.fftpack import (dct, idct, dst, idst, diff, tilbert, itilbert,
ImportError: cannot import name '_fftpack' from 'scipy.fftpack'
```
The full error output is in [error.txt](https://github.com/spack/spack/files/4252499/error.txt).
I think that error is solved in the recent version of pyFFTW (https://github.com/pyFFTW/pyFFTW/pull/265 and https://github.com/pyFFTW/pyFFTW/issues/279).
But on my machine I still get that error.
I am not sure if I am installing py-pyfftw or py-scipy incorrectly, or making another mistake.
Or if I would just need to add an equivalent line to:
```vim
version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')
```
but for version `0.12.0`, to spack's py-pyfftw `package.py`.
Do you have any suggestion on how I can fix this issue and correctly import the library?
Thank you,
Diana
### System
1. macOS Catalina - %[email protected] (but with [email protected] fortran compilers - see compilers.yaml below)
2. spack installed python (@3.7.6)
3. spack installed py-scipy (@1.4.1)
 4. spack installed py-pyfftw (@0.11.1)
-----
**compilers.yaml**
```vim
compilers:
- compiler:
spec: [email protected]
paths:
cc: /usr/bin/clang
cxx: /usr/bin/clang++
f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
flags: {}
operating_system: catalina
target: x86_64
modules: []
environment: {}
extra_rpaths: []
- compiler:
spec: [email protected]
paths:
cc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gcc
cxx: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/g++
f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran
flags: {}
operating_system: catalina
target: x86_64
modules: []
environment: {}
extra_rpaths: []
```
-----
**packages.yaml**
```vim
packages:
all:
providers:
mpi: [mpich, openmpi]
```
</issue>
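For reference, the reporter's suggestion of adding a `0.12.0` entry to the recipe would look roughly like the line below; the checksum is a placeholder and would normally be generated with `spack checksum py-pyfftw` rather than written by hand:

```python
# Hypothetical addition to var/spack/repos/builtin/packages/py-pyfftw/package.py:
version('0.12.0', sha256='<sha256 of the pyFFTW-0.12.0 tarball>')  # placeholder value
```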
<code>
[start of var/spack/repos/builtin/packages/py-pyfftw/package.py]
1 # Copyright 2013-2020 Lawrence Livermore National Security, LLC and other
2 # Spack Project Developers. See the top-level COPYRIGHT file for details.
3 #
4 # SPDX-License-Identifier: (Apache-2.0 OR MIT)
5
6 from spack import *
7
8
9 class PyPyfftw(PythonPackage):
10 """A pythonic wrapper around FFTW, the FFT library,
11 presenting a unified interface for all the supported transforms."""
12
13 homepage = "http://hgomersall.github.com/pyFFTW"
14 url = "https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz"
15
16 version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')
17 version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')
18
19 depends_on('fftw')
20 depends_on('py-setuptools', type='build')
21 depends_on('py-cython', type='build')
22 depends_on('[email protected]:', type=('build', 'run'))
23 depends_on('[email protected]:', type=('build', 'run'))
24
[end of var/spack/repos/builtin/packages/py-pyfftw/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/py-pyfftw/package.py b/var/spack/repos/builtin/packages/py-pyfftw/package.py
--- a/var/spack/repos/builtin/packages/py-pyfftw/package.py
+++ b/var/spack/repos/builtin/packages/py-pyfftw/package.py
@@ -13,11 +13,12 @@
homepage = "http://hgomersall.github.com/pyFFTW"
url = "https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz"
+ version('0.12.0', sha256='60988e823ca75808a26fd79d88dbae1de3699e72a293f812aa4534f8a0a58cb0')
version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')
version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')
depends_on('fftw')
- depends_on('py-setuptools', type='build')
- depends_on('py-cython', type='build')
- depends_on('[email protected]:', type=('build', 'run'))
- depends_on('[email protected]:', type=('build', 'run'))
+ depends_on('py-setuptools', type='build')
+ depends_on('[email protected]:0.999', type='build')
+ depends_on('[email protected]:', type=('build', 'run'), when='@:0.10.4')
+ depends_on('[email protected]:1.999', type=('build', 'run'), when='@0.11.0:')
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/py-pyfftw/package.py b/var/spack/repos/builtin/packages/py-pyfftw/package.py\n--- a/var/spack/repos/builtin/packages/py-pyfftw/package.py\n+++ b/var/spack/repos/builtin/packages/py-pyfftw/package.py\n@@ -13,11 +13,12 @@\n homepage = \"http://hgomersall.github.com/pyFFTW\"\n url = \"https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz\"\n \n+ version('0.12.0', sha256='60988e823ca75808a26fd79d88dbae1de3699e72a293f812aa4534f8a0a58cb0')\n version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\n version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')\n \n depends_on('fftw')\n- depends_on('py-setuptools', type='build')\n- depends_on('py-cython', type='build')\n- depends_on('[email protected]:', type=('build', 'run'))\n- depends_on('[email protected]:', type=('build', 'run'))\n+ depends_on('py-setuptools', type='build')\n+ depends_on('[email protected]:0.999', type='build')\n+ depends_on('[email protected]:', type=('build', 'run'), when='@:0.10.4')\n+ depends_on('[email protected]:1.999', type=('build', 'run'), when='@0.11.0:')\n", "issue": "py-pyfftw import issue with scipy.fftpack\nHi,\r\nSorry to bother you all.\r\nAfter loading the spack modules via:\r\n```console\r\n spack load -r [email protected]\r\n spack load py-h5py\r\n spack load py-scipy\r\n spack load py-pyfftw\r\n spack load py-mpi4py\r\n```\r\nWhen in the python code I am using I try to do `import spicy_fftpack`, I have been getting an error message that ends with:\r\n\r\n### Error Message\r\n```python\r\nfrom scipy.fftpack import (dct, idct, dst, idst, diff, tilbert, itilbert,\r\nImportError: cannot import name '_fftpack' from 'scipy.fftpack'\r\n```\r\nThe full error output is in [error.txt](https://github.com/spack/spack/files/4252499/error.txt).\r\n\r\nI think that that error is solved in the recent version of pfftw (https://github.com/pyFFTW/pyFFTW/pull/265 and https://github.com/pyFFTW/pyFFTW/issues/279).\r\n\r\nBut in my machine I still get that error.\r\nI am not sure if I am installing py-pyfftw or py-scipy incorrectly, or making another mistake.\r\nOr if I would just need to add an equivalent line to:\r\n```vim\r\nversion('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\r\n```\r\nbut for version `0.12.0`, to the package.py of py-pyfftw of spack.\r\n\r\nDo you have any suggestion on how I can fix this issue and correctly import the library?\r\n\r\nThank you,\r\nDiana\r\n\r\n### System\r\n\r\n 1. macOS Catalina - %[email protected] (but with [email protected] fortran compilers - see compilers.yaml below)\r\n 2. spack installed python (@3.7.6)\r\n 3. spack installed py-scipy (@1.4.1)\r\n 4. 
spack installed py-pfftw (@0.11.1)\r\n\r\n-----\r\n\r\n**compilers.yaml**\r\n```vim\r\ncompilers:\r\n- compiler:\r\n spec: [email protected]\r\n paths:\r\n cc: /usr/bin/clang\r\n cxx: /usr/bin/clang++\r\n f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n flags: {}\r\n operating_system: catalina\r\n target: x86_64\r\n modules: []\r\n environment: {}\r\n extra_rpaths: []\r\n- compiler:\r\n spec: [email protected]\r\n paths:\r\n cc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gcc\r\n cxx: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/g++\r\n f77: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n fc: /Users/LDianaAmorim/Documents/opt/spack/opt/spack/darwin-catalina-x86_64/clang-11.0.0-apple/gcc-9.2.0-exw25ccpcwqlkcvuwn266kvwqzxbyelp/bin/gfortran\r\n flags: {}\r\n operating_system: catalina\r\n target: x86_64\r\n modules: []\r\n environment: {}\r\n extra_rpaths: []\r\n```\r\n-----\r\n\r\n**packages.yaml**\r\n```vim\r\npackages:\r\n all:\r\n providers:\r\n mpi: [mpich, openmpi]\r\n```\n", "before_files": [{"content": "# Copyright 2013-2020 Lawrence Livermore National Security, LLC and other\n# Spack Project Developers. See the top-level COPYRIGHT file for details.\n#\n# SPDX-License-Identifier: (Apache-2.0 OR MIT)\n\nfrom spack import *\n\n\nclass PyPyfftw(PythonPackage):\n \"\"\"A pythonic wrapper around FFTW, the FFT library,\n presenting a unified interface for all the supported transforms.\"\"\"\n\n homepage = \"http://hgomersall.github.com/pyFFTW\"\n url = \"https://pypi.io/packages/source/p/pyFFTW/pyFFTW-0.10.4.tar.gz\"\n\n version('0.11.1', sha256='05ea28dede4c3aaaf5c66f56eb0f71849d0d50f5bc0f53ca0ffa69534af14926')\n version('0.10.4', sha256='739b436b7c0aeddf99a48749380260364d2dc027cf1d5f63dafb5f50068ede1a')\n\n depends_on('fftw')\n depends_on('py-setuptools', type='build')\n depends_on('py-cython', type='build')\n depends_on('[email protected]:', type=('build', 'run'))\n depends_on('[email protected]:', type=('build', 'run'))\n", "path": "var/spack/repos/builtin/packages/py-pyfftw/package.py"}]} | 2,075 | 530 |
gh_patches_debug_11159 | rasdani/github-patches | git_diff | mozilla__kitsune-3192 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve performance of _get_creator_counts util function
`kitsune.community.utils._get_creator_counts` util function is DB heavy and takes a lot of time to execute. Evaluate its usefulness and provide a way to optimize the query and/or cache the results. 
This issue is related to the degraded performance SUMO experienced on Fri March 30th ([NR Error](https://rpm.newrelic.com/accounts/1299394/applications/45097089/downtime/34422892))
</issue>
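
As an illustrative sketch (not part of the original report): one way to reduce the repeated database work described above is to memoize the helper's output for a short period, keyed on the query and pagination parameters. The sketch assumes the Django cache API and hashlib key-hashing already used elsewhere in this module; the wrapper name `_cached_creator_counts` is hypothetical.

```python
import hashlib

from django.core.cache import cache


def _cached_creator_counts(query, count, page, timeout=60 * 15):
    """Hypothetical wrapper: cache _get_creator_counts results for 15 minutes."""
    # Build a stable cache key from the queryset's SQL plus the pagination args.
    raw_key = u'{}_{}_{}'.format(str(query.query), count, page)
    key = 'creator_counts_{}'.format(
        hashlib.sha1(raw_key.encode('utf-8')).hexdigest())

    cached = cache.get(key)
    if cached is not None:
        return cached

    results = _get_creator_counts(query, count, page)  # existing helper below
    cache.set(key, results, timeout)
    return results
```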
<code>
[start of kitsune/community/utils.py]
1 import hashlib
2
3 from datetime import datetime, date, timedelta
4 from django.conf import settings
5 from django.core.cache import cache
6 from django.db.models import Count, F
7
8 from kitsune.products.models import Product
9 from kitsune.questions.models import Answer
10 from kitsune.users.models import User, UserMappingType
11 from kitsune.wiki.models import Revision
12
13
14 def top_contributors_questions(start=None, end=None, locale=None, product=None,
15 count=10, page=1, use_cache=True):
16 """Get the top Support Forum contributors."""
17 if use_cache:
18 cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)
19 cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()
20 cache_key = 'top_contributors_questions_{}'.format(cache_key)
21 cached = cache.get(cache_key, None)
22 if cached:
23 return cached
24
25 answers = (Answer.objects
26 .exclude(is_spam=True)
27 .exclude(question__is_spam=True)
28 # Adding answer to your own question, isn't a contribution.
29 .exclude(creator_id=F('question__creator_id')))
30
31 if start is None:
32 # By default we go back 90 days.
33 start = date.today() - timedelta(days=90)
34 answers = answers.filter(created__gte=start)
35 if end:
36 # If no end is specified, we don't need to filter by it.
37 answers = answers.filter(created__lt=end)
38 if locale:
39 answers = answers.filter(question__locale=locale)
40 if product:
41 if isinstance(product, Product):
42 product = product.slug
43 answers = answers.filter(question__product__slug=product)
44
45 users = (User.objects
46 .filter(answers__in=answers)
47 .annotate(query_count=Count('answers'))
48 .order_by('-query_count'))
49 counts = _get_creator_counts(users, count, page)
50
51 if use_cache:
52 cache.set(cache_key, counts, 60*15) # 15 minutes
53 return counts
54
55
56 def top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):
57 """Get the top KB editors (locale='en-US')."""
58 return top_contributors_l10n(
59 start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)
60
61
62 def top_contributors_l10n(start=None, end=None, locale=None, product=None,
63 count=10, page=1, use_cache=True):
64 """Get the top l10n contributors for the KB."""
65 if use_cache:
66 cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)
67 cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()
68 cache_key = u'top_contributors_l10n_{}'.format(cache_key)
69 cached = cache.get(cache_key, None)
70 if cached:
71 return cached
72
73 # Get the user ids and contribution count of the top contributors.
74 revisions = Revision.objects.all()
75 if locale is None:
76 # If there is no locale specified, exclude en-US only. The rest are
77 # l10n.
78 revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)
79 if start is None:
80 # By default we go back 90 days.
81 start = date.today() - timedelta(days=90)
82 revisions = revisions.filter(created__gte=start)
83 if end:
84 # If no end is specified, we don't need to filter by it.
85 revisions = revisions.filter(created__lt=end)
86 if locale:
87 revisions = revisions.filter(document__locale=locale)
88 if product:
89 if isinstance(product, Product):
90 product = product.slug
91 revisions = revisions.filter(document__products__slug=product)
92
93 users = (User.objects
94 .filter(created_revisions__in=revisions)
95 .annotate(query_count=Count('created_revisions'))
96 .order_by('-query_count'))
97 counts = _get_creator_counts(users, count, page)
98
99 if use_cache:
100 cache.set(cache_key, counts, 60*15) # 15 minutes
101 return counts
102
103
104 def top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):
105 """Get the top Army of Awesome contributors."""
106 # AoA is deprecated, return 0 until we remove all related code.
107 return ([], 0)
108
109
110 def _get_creator_counts(query, count, page):
111 total = query.count()
112
113 start = (page - 1) * count
114 end = page * count
115 query_data = query.values('id', 'query_count')[start:end]
116
117 query_data = {obj['id']: obj['query_count'] for obj in query_data}
118
119 users_data = (UserMappingType.search().filter(id__in=query_data.keys())
120 .values_dict('id', 'username', 'display_name',
121 'avatar', 'twitter_usernames',
122 'last_contribution_date')[:count])
123
124 users_data = UserMappingType.reshape(users_data)
125
126 results = []
127 now = datetime.now()
128
129 for u_data in users_data:
130 user_id = u_data.get('id')
131 last_contribution_date = u_data.get('last_contribution_date', None)
132
133 u_data['days_since_last_activity'] = ((now - last_contribution_date).days
134 if last_contribution_date else None)
135
136 data = {
137 'count': query_data.get(user_id),
138 'term': user_id,
139 'user': u_data
140 }
141
142 results.append(data)
143
144 return results, total
145
[end of kitsune/community/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kitsune/community/utils.py b/kitsune/community/utils.py
--- a/kitsune/community/utils.py
+++ b/kitsune/community/utils.py
@@ -1,6 +1,8 @@
import hashlib
from datetime import datetime, date, timedelta
+from operator import itemgetter
+
from django.conf import settings
from django.core.cache import cache
from django.db.models import Count, F
@@ -141,4 +143,8 @@
results.append(data)
+ # Descending Order the list according to count.
+ # As the top number of contributor should be at first
+ results = sorted(results, key=itemgetter('count'), reverse=True)
+
return results, total
| {"golden_diff": "diff --git a/kitsune/community/utils.py b/kitsune/community/utils.py\n--- a/kitsune/community/utils.py\n+++ b/kitsune/community/utils.py\n@@ -1,6 +1,8 @@\n import hashlib\n \n from datetime import datetime, date, timedelta\n+from operator import itemgetter\n+\n from django.conf import settings\n from django.core.cache import cache\n from django.db.models import Count, F\n@@ -141,4 +143,8 @@\n \n results.append(data)\n \n+ # Descending Order the list according to count.\n+ # As the top number of contributor should be at first\n+ results = sorted(results, key=itemgetter('count'), reverse=True)\n+\n return results, total\n", "issue": "Improve performance of _get_creator_counts util function\n`kitsune.community.utils._get_creator_counts` until function is DB heavy and takes a lot of time to execute. Evaluate its usefulness and provide a way to optimize the query and/or cache the results. \r\n\r\nThis issue is related to the degraded performance SUMO experienced on Fri March 30th ([NR Error](https://rpm.newrelic.com/accounts/1299394/applications/45097089/downtime/34422892))\n", "before_files": [{"content": "import hashlib\n\nfrom datetime import datetime, date, timedelta\nfrom django.conf import settings\nfrom django.core.cache import cache\nfrom django.db.models import Count, F\n\nfrom kitsune.products.models import Product\nfrom kitsune.questions.models import Answer\nfrom kitsune.users.models import User, UserMappingType\nfrom kitsune.wiki.models import Revision\n\n\ndef top_contributors_questions(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top Support Forum contributors.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 'top_contributors_questions_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n answers = (Answer.objects\n .exclude(is_spam=True)\n .exclude(question__is_spam=True)\n # Adding answer to your own question, isn't a contribution.\n .exclude(creator_id=F('question__creator_id')))\n\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n answers = answers.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n answers = answers.filter(created__lt=end)\n if locale:\n answers = answers.filter(question__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n answers = answers.filter(question__product__slug=product)\n\n users = (User.objects\n .filter(answers__in=answers)\n .annotate(query_count=Count('answers'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_kb(start=None, end=None, product=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top KB editors (locale='en-US').\"\"\"\n return top_contributors_l10n(\n start, end, settings.WIKI_DEFAULT_LANGUAGE, product, count, use_cache)\n\n\ndef top_contributors_l10n(start=None, end=None, locale=None, product=None,\n count=10, page=1, use_cache=True):\n \"\"\"Get the top l10n contributors for the KB.\"\"\"\n if use_cache:\n cache_key = u'{}_{}_{}_{}_{}_{}'.format(start, end, locale, product, count, page)\n cache_key = hashlib.sha1(cache_key.encode('utf-8')).hexdigest()\n cache_key = 
u'top_contributors_l10n_{}'.format(cache_key)\n cached = cache.get(cache_key, None)\n if cached:\n return cached\n\n # Get the user ids and contribution count of the top contributors.\n revisions = Revision.objects.all()\n if locale is None:\n # If there is no locale specified, exclude en-US only. The rest are\n # l10n.\n revisions = revisions.exclude(document__locale=settings.WIKI_DEFAULT_LANGUAGE)\n if start is None:\n # By default we go back 90 days.\n start = date.today() - timedelta(days=90)\n revisions = revisions.filter(created__gte=start)\n if end:\n # If no end is specified, we don't need to filter by it.\n revisions = revisions.filter(created__lt=end)\n if locale:\n revisions = revisions.filter(document__locale=locale)\n if product:\n if isinstance(product, Product):\n product = product.slug\n revisions = revisions.filter(document__products__slug=product)\n\n users = (User.objects\n .filter(created_revisions__in=revisions)\n .annotate(query_count=Count('created_revisions'))\n .order_by('-query_count'))\n counts = _get_creator_counts(users, count, page)\n\n if use_cache:\n cache.set(cache_key, counts, 60*15) # 15 minutes\n return counts\n\n\ndef top_contributors_aoa(start=None, end=None, locale=None, count=10, page=1, use_cache=True):\n \"\"\"Get the top Army of Awesome contributors.\"\"\"\n # AoA is deprecated, return 0 until we remove all related code.\n return ([], 0)\n\n\ndef _get_creator_counts(query, count, page):\n total = query.count()\n\n start = (page - 1) * count\n end = page * count\n query_data = query.values('id', 'query_count')[start:end]\n\n query_data = {obj['id']: obj['query_count'] for obj in query_data}\n\n users_data = (UserMappingType.search().filter(id__in=query_data.keys())\n .values_dict('id', 'username', 'display_name',\n 'avatar', 'twitter_usernames',\n 'last_contribution_date')[:count])\n\n users_data = UserMappingType.reshape(users_data)\n\n results = []\n now = datetime.now()\n\n for u_data in users_data:\n user_id = u_data.get('id')\n last_contribution_date = u_data.get('last_contribution_date', None)\n\n u_data['days_since_last_activity'] = ((now - last_contribution_date).days\n if last_contribution_date else None)\n\n data = {\n 'count': query_data.get(user_id),\n 'term': user_id,\n 'user': u_data\n }\n\n results.append(data)\n\n return results, total\n", "path": "kitsune/community/utils.py"}]} | 2,225 | 158 |
gh_patches_debug_25001 | rasdani/github-patches | git_diff | awslabs__gluonts-1652 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
_tsf_reader.convert_base doesn't handle "10 minutes" frequency correctly
## Description
For Monash datasets with the "10 minutes" frequency, the frequency converter will convert it to a frequency 10 MonthEnd (10M), instead of the expect 10 Minutes (10T) frequency.
There is already code to properly handle the "minutely" frequency, but it checks for that string explicitly, so it doesn't catch the "10 minutes" case.
## To Reproduce
One dataset which has this frequency is the 10 minutes observation Solar dataset: https://zenodo.org/record/4656144
filename: `"solar_10_minutes_dataset.zip"`
record: `"4656132"`
</issue>
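
As an illustrative sketch: a dictionary that maps full frequency names to pandas offset aliases avoids relying on the first letter of the string, so "minutes" resolves to "T" (minutes) rather than "M" (month end). This mirrors the fix shown later in this record, which raises a library-specific exception where the simplified version below raises `ValueError`.

```python
BASE_FREQ_TO_PANDAS_OFFSET = {
    "minutely": "T",
    "minutes": "T",
    "hourly": "H",
    "daily": "D",
    "weekly": "W",
    "monthly": "M",
    "quarterly": "Q",
    "yearly": "Y",
}


def convert_base(text: str) -> str:
    # Look the name up instead of taking its first letter, so that
    # "minutes" becomes "T" (minutes) and not "M" (month end).
    try:
        return BASE_FREQ_TO_PANDAS_OFFSET[text.lower()]
    except KeyError:
        raise ValueError(f'"{text}" is not recognized as a frequency string')
```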
<code>
[start of src/gluonts/dataset/repository/_tsf_reader.py]
1 # Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License").
4 # You may not use this file except in compliance with the License.
5 # A copy of the License is located at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # or in the "license" file accompanying this file. This file is distributed
10 # on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
11 # express or implied. See the License for the specific language governing
12 # permissions and limitations under the License.
13
14 from datetime import datetime
15 from distutils.util import strtobool
16 from multiprocessing import cpu_count
17 from types import SimpleNamespace
18
19 import numpy as np
20 from toolz import compose_left
21
22 from gluonts import json
23 from gluonts.nursery import glide
24
25 parse_bool = compose_left(strtobool, bool)
26
27
28 def parse_attribute(ty, value: str):
29 if ty == "numeric":
30 return int(value)
31
32 if ty == "string":
33 return value
34
35 if ty == "date":
36 return datetime.strptime(value, "%Y-%m-%d %H-%M-%S")
37
38 raise AttributeError(ty)
39
40
41 def frequency_converter(freq: str):
42 parts = freq.split("_")
43 if len(parts) == 1:
44 return convert_base(parts[0])
45 if len(parts) == 2:
46 return convert_multiple(parts[0]) + convert_base(parts[1])
47 raise ValueError(f"Invalid frequency string {freq}.")
48
49
50 def convert_base(text: str) -> str:
51 if text.lower() == "minutely":
52 return "T"
53 return text[0].upper()
54
55
56 def convert_multiple(text: str) -> str:
57 if text.isnumeric():
58 return text
59 if text == "half":
60 return "0.5"
61 raise ValueError(f"Unknown frequency multiple {text}.")
62
63
64 class TSFReader:
65 def __init__(
66 self,
67 path,
68 target_name="target",
69 ):
70 self.path = path
71 self.target_name = target_name
72
73 self.meta = SimpleNamespace(columns={})
74
75 def read(self):
76 with open(self.path, encoding="latin1") as in_file:
77 # strip whitespace
78 lines = map(str.strip, in_file)
79
80 # ignore all lines starting with #
81 lines = filter(lambda line: not line.startswith("#"), lines)
82
83 data_tag_found = self._read_header(lines)
84 assert data_tag_found, "Missing @data tag."
85 assert (
86 self.meta.columns
87 ), "Missing attribute section. Attribute section must come before data."
88
89 assert self.target_name not in self.meta.columns
90 self.meta.columns[self.target_name] = None
91
92 data = self._read_data_section(lines)
93
94 return self.meta, data
95
96 def _read_header(self, lines):
97 for line in lines:
98 assert line.startswith("@")
99 stop = self._tag(line[1:])
100
101 if stop:
102 return True
103
104 return False
105
106 def _read_data_section(self, lines):
107 lines = list(lines)
108
109 lines = glide.imap_unordered(
110 self._read_data, lines, num_workers=cpu_count(), batch_size=8092
111 )
112
113 return list(lines)
114
115 def _read_data(self, line):
116 parts = line.split(":")
117
118 assert len(parts) == len(
119 self.meta.columns
120 ), "Missing attributes/values in series."
121
122 *attributes, target = parts
123
124 record = {}
125
126 record[self.target_name] = self._data_target(target)
127
128 for (column, ty), attr in zip(self.meta.columns.items(), attributes):
129 record[column] = parse_attribute(ty, attr)
130
131 return record
132
133 def _data_target(self, s):
134 s = s.replace("?", '"nan"')
135
136 values = json.loads(f"[{s}]")
137 assert (
138 values
139 ), "A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? symbol"
140
141 return np.array(values, dtype=float)
142
143 def _tag(self, line):
144 fn_by_tag = {
145 "attribute": self._tag_attribute,
146 "frequency": self._tag_frequency,
147 "horizon": self._tag_horizon,
148 "missing": self._tag_missing,
149 "equallength": self._tag_equallength,
150 "data": self._tag_data,
151 }
152 tag, *args = line.split(" ")
153
154 if tag not in fn_by_tag:
155 return
156
157 return fn_by_tag[tag](*args)
158
159 def _tag_attribute(self, name, ty):
160 self.meta.columns[name] = ty
161
162 def _tag_frequency(self, frequency):
163 self.meta.frequency = frequency
164
165 def _tag_horizon(self, horizon):
166 self.meta.forecast_horizon = horizon
167
168 def _tag_missing(self, missing):
169 self.meta.has_missing_values = parse_bool(missing)
170
171 def _tag_equallength(self, equallength):
172 self.meta.has_equal_length = parse_bool(equallength)
173
174 def _tag_data(self):
175 return True
176
[end of src/gluonts/dataset/repository/_tsf_reader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/gluonts/dataset/repository/_tsf_reader.py b/src/gluonts/dataset/repository/_tsf_reader.py
--- a/src/gluonts/dataset/repository/_tsf_reader.py
+++ b/src/gluonts/dataset/repository/_tsf_reader.py
@@ -15,11 +15,13 @@
from distutils.util import strtobool
from multiprocessing import cpu_count
from types import SimpleNamespace
+from typing import Dict
import numpy as np
from toolz import compose_left
from gluonts import json
+from gluonts.exceptions import GluonTSDataError
from gluonts.nursery import glide
parse_bool = compose_left(strtobool, bool)
@@ -47,10 +49,32 @@
raise ValueError(f"Invalid frequency string {freq}.")
+BASE_FREQ_TO_PANDAS_OFFSET: Dict[str, str] = {
+ "seconds": "S",
+ "minutely": "T",
+ "minutes": "T",
+ "hourly": "H",
+ "hours": "H",
+ "daily": "D",
+ "days": "D",
+ "weekly": "W",
+ "weeks": "W",
+ "monthly": "M",
+ "months": "M",
+ "quarterly": "Q",
+ "quarters": "Q",
+ "yearly": "Y",
+ "years": "Y",
+}
+
+
def convert_base(text: str) -> str:
- if text.lower() == "minutely":
- return "T"
- return text[0].upper()
+ try:
+ return BASE_FREQ_TO_PANDAS_OFFSET[text]
+ except KeyError:
+ raise GluonTSDataError(
+ f'"{text}" is not recognized as a frequency string'
+ )
def convert_multiple(text: str) -> str:
| {"golden_diff": "diff --git a/src/gluonts/dataset/repository/_tsf_reader.py b/src/gluonts/dataset/repository/_tsf_reader.py\n--- a/src/gluonts/dataset/repository/_tsf_reader.py\n+++ b/src/gluonts/dataset/repository/_tsf_reader.py\n@@ -15,11 +15,13 @@\n from distutils.util import strtobool\n from multiprocessing import cpu_count\n from types import SimpleNamespace\n+from typing import Dict\n \n import numpy as np\n from toolz import compose_left\n \n from gluonts import json\n+from gluonts.exceptions import GluonTSDataError\n from gluonts.nursery import glide\n \n parse_bool = compose_left(strtobool, bool)\n@@ -47,10 +49,32 @@\n raise ValueError(f\"Invalid frequency string {freq}.\")\n \n \n+BASE_FREQ_TO_PANDAS_OFFSET: Dict[str, str] = {\n+ \"seconds\": \"S\",\n+ \"minutely\": \"T\",\n+ \"minutes\": \"T\",\n+ \"hourly\": \"H\",\n+ \"hours\": \"H\",\n+ \"daily\": \"D\",\n+ \"days\": \"D\",\n+ \"weekly\": \"W\",\n+ \"weeks\": \"W\",\n+ \"monthly\": \"M\",\n+ \"months\": \"M\",\n+ \"quarterly\": \"Q\",\n+ \"quarters\": \"Q\",\n+ \"yearly\": \"Y\",\n+ \"years\": \"Y\",\n+}\n+\n+\n def convert_base(text: str) -> str:\n- if text.lower() == \"minutely\":\n- return \"T\"\n- return text[0].upper()\n+ try:\n+ return BASE_FREQ_TO_PANDAS_OFFSET[text]\n+ except KeyError:\n+ raise GluonTSDataError(\n+ f'\"{text}\" is not recognized as a frequency string'\n+ )\n \n \n def convert_multiple(text: str) -> str:\n", "issue": "_tsf_reader.convert_base doesn't handle \"10 minutes\" frequency correctly\n## Description\r\nFor Monash datasets with the \"10 minutes\" frequency, the frequency converter will convert it to a frequency 10 MonthEnd (10M), instead of the expect 10 Minutes (10T) frequency.\r\n\r\nThere is already code to properly handle the \"minutely\" frequency, but it checks for that string explicitly, so it doesn't catch the \"10 minutes\" case.\r\n\r\n## To Reproduce\r\nOne dataset which has this frequency is the 10 minutes observation Solar dataset: https://zenodo.org/record/4656144\r\nfilename: `\"solar_10_minutes_dataset.zip\"`\r\nrecord: `\"4656132\"`\n", "before_files": [{"content": "# Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\").\n# You may not use this file except in compliance with the License.\n# A copy of the License is located at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# or in the \"license\" file accompanying this file. This file is distributed\n# on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either\n# express or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\n\nfrom datetime import datetime\nfrom distutils.util import strtobool\nfrom multiprocessing import cpu_count\nfrom types import SimpleNamespace\n\nimport numpy as np\nfrom toolz import compose_left\n\nfrom gluonts import json\nfrom gluonts.nursery import glide\n\nparse_bool = compose_left(strtobool, bool)\n\n\ndef parse_attribute(ty, value: str):\n if ty == \"numeric\":\n return int(value)\n\n if ty == \"string\":\n return value\n\n if ty == \"date\":\n return datetime.strptime(value, \"%Y-%m-%d %H-%M-%S\")\n\n raise AttributeError(ty)\n\n\ndef frequency_converter(freq: str):\n parts = freq.split(\"_\")\n if len(parts) == 1:\n return convert_base(parts[0])\n if len(parts) == 2:\n return convert_multiple(parts[0]) + convert_base(parts[1])\n raise ValueError(f\"Invalid frequency string {freq}.\")\n\n\ndef convert_base(text: str) -> str:\n if text.lower() == \"minutely\":\n return \"T\"\n return text[0].upper()\n\n\ndef convert_multiple(text: str) -> str:\n if text.isnumeric():\n return text\n if text == \"half\":\n return \"0.5\"\n raise ValueError(f\"Unknown frequency multiple {text}.\")\n\n\nclass TSFReader:\n def __init__(\n self,\n path,\n target_name=\"target\",\n ):\n self.path = path\n self.target_name = target_name\n\n self.meta = SimpleNamespace(columns={})\n\n def read(self):\n with open(self.path, encoding=\"latin1\") as in_file:\n # strip whitespace\n lines = map(str.strip, in_file)\n\n # ignore all lines starting with #\n lines = filter(lambda line: not line.startswith(\"#\"), lines)\n\n data_tag_found = self._read_header(lines)\n assert data_tag_found, \"Missing @data tag.\"\n assert (\n self.meta.columns\n ), \"Missing attribute section. Attribute section must come before data.\"\n\n assert self.target_name not in self.meta.columns\n self.meta.columns[self.target_name] = None\n\n data = self._read_data_section(lines)\n\n return self.meta, data\n\n def _read_header(self, lines):\n for line in lines:\n assert line.startswith(\"@\")\n stop = self._tag(line[1:])\n\n if stop:\n return True\n\n return False\n\n def _read_data_section(self, lines):\n lines = list(lines)\n\n lines = glide.imap_unordered(\n self._read_data, lines, num_workers=cpu_count(), batch_size=8092\n )\n\n return list(lines)\n\n def _read_data(self, line):\n parts = line.split(\":\")\n\n assert len(parts) == len(\n self.meta.columns\n ), \"Missing attributes/values in series.\"\n\n *attributes, target = parts\n\n record = {}\n\n record[self.target_name] = self._data_target(target)\n\n for (column, ty), attr in zip(self.meta.columns.items(), attributes):\n record[column] = parse_attribute(ty, attr)\n\n return record\n\n def _data_target(self, s):\n s = s.replace(\"?\", '\"nan\"')\n\n values = json.loads(f\"[{s}]\")\n assert (\n values\n ), \"A given series should contains a set of comma separated numeric values. At least one numeric value should be there in a series. Missing values should be indicated with ? 
symbol\"\n\n return np.array(values, dtype=float)\n\n def _tag(self, line):\n fn_by_tag = {\n \"attribute\": self._tag_attribute,\n \"frequency\": self._tag_frequency,\n \"horizon\": self._tag_horizon,\n \"missing\": self._tag_missing,\n \"equallength\": self._tag_equallength,\n \"data\": self._tag_data,\n }\n tag, *args = line.split(\" \")\n\n if tag not in fn_by_tag:\n return\n\n return fn_by_tag[tag](*args)\n\n def _tag_attribute(self, name, ty):\n self.meta.columns[name] = ty\n\n def _tag_frequency(self, frequency):\n self.meta.frequency = frequency\n\n def _tag_horizon(self, horizon):\n self.meta.forecast_horizon = horizon\n\n def _tag_missing(self, missing):\n self.meta.has_missing_values = parse_bool(missing)\n\n def _tag_equallength(self, equallength):\n self.meta.has_equal_length = parse_bool(equallength)\n\n def _tag_data(self):\n return True\n", "path": "src/gluonts/dataset/repository/_tsf_reader.py"}]} | 2,283 | 427 |
gh_patches_debug_15953 | rasdani/github-patches | git_diff | pytorch__audio-1465 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove unused module
[`torchaudio._internal.misc_ops`](https://github.com/pytorch/audio/blob/b059f08742e70700ce4c92296a1131118f67a588/torchaudio/_internal/misc_ops.py) is a leftover from the refactoring of I/O features in past releases. We can get rid of the whole module.
</issue>
<code>
[start of torchaudio/_internal/misc_ops.py]
1 from typing import Union, Callable
2
3 import torch
4 from torch import Tensor
5
6
7 def normalize_audio(signal: Tensor, normalization: Union[bool, float, Callable]) -> None:
8 """Audio normalization of a tensor in-place. The normalization can be a bool,
9 a number, or a callable that takes the audio tensor as an input. SoX uses
10 32-bit signed integers internally, thus bool normalizes based on that assumption.
11 """
12
13 if not normalization:
14 return
15
16 if isinstance(normalization, bool):
17 normalization = 1 << 31
18
19 if isinstance(normalization, (float, int)):
20 # normalize with custom value
21 signal /= normalization
22 elif callable(normalization):
23 signal /= normalization(signal)
24
25
26 def check_input(src: Tensor) -> None:
27 if not torch.is_tensor(src):
28 raise TypeError('Expected a tensor, got %s' % type(src))
29 if src.is_cuda:
30 raise TypeError('Expected a CPU based tensor, got %s' % type(src))
31
[end of torchaudio/_internal/misc_ops.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchaudio/_internal/misc_ops.py b/torchaudio/_internal/misc_ops.py
deleted file mode 100644
--- a/torchaudio/_internal/misc_ops.py
+++ /dev/null
@@ -1,30 +0,0 @@
-from typing import Union, Callable
-
-import torch
-from torch import Tensor
-
-
-def normalize_audio(signal: Tensor, normalization: Union[bool, float, Callable]) -> None:
- """Audio normalization of a tensor in-place. The normalization can be a bool,
- a number, or a callable that takes the audio tensor as an input. SoX uses
- 32-bit signed integers internally, thus bool normalizes based on that assumption.
- """
-
- if not normalization:
- return
-
- if isinstance(normalization, bool):
- normalization = 1 << 31
-
- if isinstance(normalization, (float, int)):
- # normalize with custom value
- signal /= normalization
- elif callable(normalization):
- signal /= normalization(signal)
-
-
-def check_input(src: Tensor) -> None:
- if not torch.is_tensor(src):
- raise TypeError('Expected a tensor, got %s' % type(src))
- if src.is_cuda:
- raise TypeError('Expected a CPU based tensor, got %s' % type(src))
| {"golden_diff": "diff --git a/torchaudio/_internal/misc_ops.py b/torchaudio/_internal/misc_ops.py\ndeleted file mode 100644\n--- a/torchaudio/_internal/misc_ops.py\n+++ /dev/null\n@@ -1,30 +0,0 @@\n-from typing import Union, Callable\n-\n-import torch\n-from torch import Tensor\n-\n-\n-def normalize_audio(signal: Tensor, normalization: Union[bool, float, Callable]) -> None:\n- \"\"\"Audio normalization of a tensor in-place. The normalization can be a bool,\n- a number, or a callable that takes the audio tensor as an input. SoX uses\n- 32-bit signed integers internally, thus bool normalizes based on that assumption.\n- \"\"\"\n-\n- if not normalization:\n- return\n-\n- if isinstance(normalization, bool):\n- normalization = 1 << 31\n-\n- if isinstance(normalization, (float, int)):\n- # normalize with custom value\n- signal /= normalization\n- elif callable(normalization):\n- signal /= normalization(signal)\n-\n-\n-def check_input(src: Tensor) -> None:\n- if not torch.is_tensor(src):\n- raise TypeError('Expected a tensor, got %s' % type(src))\n- if src.is_cuda:\n- raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n", "issue": "Remove unused module\n[`torchaudio._internal.misc_ops`](https://github.com/pytorch/audio/blob/b059f08742e70700ce4c92296a1131118f67a588/torchaudio/_internal/misc_ops.py) is a residue from refactoring of I/O features in the past releases. We can get rid of the whole module.\r\n\r\n\n", "before_files": [{"content": "from typing import Union, Callable\n\nimport torch\nfrom torch import Tensor\n\n\ndef normalize_audio(signal: Tensor, normalization: Union[bool, float, Callable]) -> None:\n \"\"\"Audio normalization of a tensor in-place. The normalization can be a bool,\n a number, or a callable that takes the audio tensor as an input. SoX uses\n 32-bit signed integers internally, thus bool normalizes based on that assumption.\n \"\"\"\n\n if not normalization:\n return\n\n if isinstance(normalization, bool):\n normalization = 1 << 31\n\n if isinstance(normalization, (float, int)):\n # normalize with custom value\n signal /= normalization\n elif callable(normalization):\n signal /= normalization(signal)\n\n\ndef check_input(src: Tensor) -> None:\n if not torch.is_tensor(src):\n raise TypeError('Expected a tensor, got %s' % type(src))\n if src.is_cuda:\n raise TypeError('Expected a CPU based tensor, got %s' % type(src))\n", "path": "torchaudio/_internal/misc_ops.py"}]} | 910 | 300 |
gh_patches_debug_691 | rasdani/github-patches | git_diff | ivy-llc__ivy-15263 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
eigh
</issue>
<code>
[start of ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.numpy.func_wrapper import (
4 to_ivy_arrays_and_back,
5 from_zero_dim_arrays_to_scalar,
6 )
7
8
9 @to_ivy_arrays_and_back
10 @from_zero_dim_arrays_to_scalar
11 def eigvalsh(a, /, UPLO="L"):
12 return ivy.eigvalsh(a, UPLO=UPLO)
13
14
15 @to_ivy_arrays_and_back
16 def eig(a):
17 return ivy.eig(a)
18
19
20 @from_zero_dim_arrays_to_scalar
21 def eigh(a, /, UPLO="L"):
22 return ivy.eigh(a, UPLO=UPLO)
23
[end of ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py b/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py
--- a/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py
+++ b/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py
@@ -17,6 +17,7 @@
return ivy.eig(a)
+@to_ivy_arrays_and_back
@from_zero_dim_arrays_to_scalar
def eigh(a, /, UPLO="L"):
return ivy.eigh(a, UPLO=UPLO)
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py b/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py\n--- a/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py\n+++ b/ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py\n@@ -17,6 +17,7 @@\n return ivy.eig(a)\n \n \n+@to_ivy_arrays_and_back\n @from_zero_dim_arrays_to_scalar\n def eigh(a, /, UPLO=\"L\"):\n return ivy.eigh(a, UPLO=UPLO)\n", "issue": "eigh\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n from_zero_dim_arrays_to_scalar,\n)\n\n\n@to_ivy_arrays_and_back\n@from_zero_dim_arrays_to_scalar\ndef eigvalsh(a, /, UPLO=\"L\"):\n return ivy.eigvalsh(a, UPLO=UPLO)\n\n\n@to_ivy_arrays_and_back\ndef eig(a):\n return ivy.eig(a)\n\n\n@from_zero_dim_arrays_to_scalar\ndef eigh(a, /, UPLO=\"L\"):\n return ivy.eigh(a, UPLO=UPLO)\n", "path": "ivy/functional/frontends/numpy/linalg/matrix_eigenvalues.py"}]} | 737 | 140 |
gh_patches_debug_53306 | rasdani/github-patches | git_diff | privacyidea__privacyidea-2418 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update requirements for upcoming version 3.5
Push requirements to newest versions according to https://github.com/privacyidea/privacyidea/wiki/Development-workflow#requirements
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import print_function
3 from setuptools import setup, find_packages
4 import os
5 import stat
6 import sys
7
8 #VERSION = "2.1dev4"
9 VERSION = "3.4"
10
11 # Taken from kennethreitz/requests/setup.py
12 package_directory = os.path.realpath(os.path.dirname(__file__))
13
14
15 def get_file_contents(file_path):
16 """Get the context of the file using full path name."""
17 content = ""
18 try:
19 full_path = os.path.join(package_directory, file_path)
20 content = open(full_path, 'r').read()
21 except:
22 print("### could not open file {0!r}".format(file_path), file=sys.stderr)
23 return content
24
25
26 def get_file_list(file_path):
27 full_path = os.path.join(package_directory, file_path)
28 file_list = os.listdir(full_path)
29 # now we need to add the path to the files
30 return [file_path + f for f in file_list]
31
32
33 install_requires = ["beautifulsoup4[lxml]>=4.3.2",
34 "cbor2>=5.0.1",
35 "configobj>=5.0.6",
36 "croniter>=0.3.8",
37 "cryptography>=2.4.2",
38 "defusedxml>=0.4.1",
39 "ecdsa>=0.13.3",
40 "Flask>=0.10.1",
41 "Flask-Babel>=0.9",
42 "Flask-Migrate>=1.2.0",
43 "Flask-Script>=2.0.5",
44 "Flask-SQLAlchemy>=2.0",
45 "Flask-Versioned>=0.9.4",
46 "future>=0.18.2;python_version<'3.0'",
47 "huey[redis]>=1.11.0",
48 "ldap3>=2.6",
49 "netaddr>=0.7.12",
50 "oauth2client>=2.0.1",
51 "passlib[bcrypt]>=1.7.0",
52 "Pillow>=6.2.1",
53 "PyJWT>=1.3.0",
54 "PyMySQL>=0.6.6",
55 "pyOpenSSL>=17.5",
56 "pyrad>=2.0",
57 "python-dateutil>=2.7.3",
58 "python-gnupg>=0.4.4",
59 "PyYAML>=5.1",
60 "qrcode>=6.1",
61 "requests>=2.7.0",
62 "smpplib>=2.0",
63 "SQLAlchemy>=1.3.0",
64 "sqlsoup>=0.9.0"]
65
66
67 def get_man_pages(dir):
68 """
69 Get man pages in a directory.
70 :param dir:
71 :return: list of file names
72 """
73 files = os.listdir(dir)
74 r_files = []
75 for file in files:
76 if file.endswith(".1"):
77 r_files.append(dir + "/" + file)
78 return r_files
79
80
81 def get_scripts(dir):
82 """
83 Get files that are executable
84 :param dir:
85 :return: list of file names
86 """
87 files = os.listdir(dir)
88 r_files = []
89 for file in files:
90 if os.stat(dir + "/" + file)[stat.ST_MODE] & stat.S_IEXEC:
91 r_files.append(dir + "/" + file)
92 return r_files
93
94
95 setup(
96 name='privacyIDEA',
97 version=VERSION,
98 description='privacyIDEA: identity, multifactor authentication (OTP), '
99 'authorization, audit',
100 author='privacyidea.org',
101 license='AGPLv3',
102 author_email='[email protected]',
103 url='http://www.privacyidea.org',
104 keywords='OTP, two factor authentication, management, security',
105 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.9.*',
106 packages=find_packages(),
107 scripts=["pi-manage"] + get_scripts("tools"),
108 extras_require={
109 'doc': ["Sphinx>=1.3.1",
110 "sphinxcontrib-httpdomain>=1.3.0",
111 "sphinxcontrib-plantuml>=0.18"],
112 'test': ["mock>=2.0.0",
113 "pytest>=3.6.0",
114 "pytest-cov>=2.5.1",
115 "responses>=0.9.0"],
116 'postgres': ['psycopg2>=2.8.3']
117 },
118 install_requires=install_requires,
119 include_package_data=True,
120 data_files=[('etc/privacyidea/',
121 ['deploy/apache/privacyideaapp.wsgi',
122 'deploy/privacyidea/dictionary']),
123 ('share/man/man1', get_man_pages("tools")),
124 ('lib/privacyidea/migrations',
125 ["migrations/alembic.ini",
126 "migrations/env.py",
127 "migrations/README",
128 "migrations/script.py.mako"]),
129 ('lib/privacyidea/migrations/versions',
130 get_file_list("migrations/versions/")),
131 ('lib/privacyidea/', ['requirements.txt'])
132 ],
133 classifiers=["Framework :: Flask",
134 "License :: OSI Approved :: "
135 "GNU Affero General Public License v3",
136 "Programming Language :: Python",
137 "Development Status :: 5 - Production/Stable",
138 "Topic :: Internet",
139 "Topic :: Security",
140 "Topic :: System ::"
141 " Systems Administration :: Authentication/Directory",
142 'Programming Language :: Python',
143 'Programming Language :: Python :: 2',
144 'Programming Language :: Python :: 2.7',
145 'Programming Language :: Python :: 3',
146 'Programming Language :: Python :: 3.5',
147 'Programming Language :: Python :: 3.6',
148 'Programming Language :: Python :: 3.7',
149 'Programming Language :: Python :: 3.8'
150 ],
151 zip_safe=False,
152 long_description=get_file_contents('README.rst')
153 )
154
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -50,6 +50,7 @@
"oauth2client>=2.0.1",
"passlib[bcrypt]>=1.7.0",
"Pillow>=6.2.1",
+ "pydash>=4.7.4",
"PyJWT>=1.3.0",
"PyMySQL>=0.6.6",
"pyOpenSSL>=17.5",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -50,6 +50,7 @@\n \"oauth2client>=2.0.1\",\n \"passlib[bcrypt]>=1.7.0\",\n \"Pillow>=6.2.1\",\n+ \"pydash>=4.7.4\",\n \"PyJWT>=1.3.0\",\n \"PyMySQL>=0.6.6\",\n \"pyOpenSSL>=17.5\",\n", "issue": "Update requirements for upcoming version 3.5\nPush requirements to newest versions according to https://github.com/privacyidea/privacyidea/wiki/Development-workflow#requirements\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom setuptools import setup, find_packages\nimport os\nimport stat\nimport sys\n\n#VERSION = \"2.1dev4\"\nVERSION = \"3.4\"\n\n# Taken from kennethreitz/requests/setup.py\npackage_directory = os.path.realpath(os.path.dirname(__file__))\n\n\ndef get_file_contents(file_path):\n \"\"\"Get the context of the file using full path name.\"\"\"\n content = \"\"\n try:\n full_path = os.path.join(package_directory, file_path)\n content = open(full_path, 'r').read()\n except:\n print(\"### could not open file {0!r}\".format(file_path), file=sys.stderr)\n return content\n\n\ndef get_file_list(file_path):\n full_path = os.path.join(package_directory, file_path)\n file_list = os.listdir(full_path)\n # now we need to add the path to the files\n return [file_path + f for f in file_list]\n\n\ninstall_requires = [\"beautifulsoup4[lxml]>=4.3.2\",\n \"cbor2>=5.0.1\",\n \"configobj>=5.0.6\",\n \"croniter>=0.3.8\",\n \"cryptography>=2.4.2\",\n \"defusedxml>=0.4.1\",\n \"ecdsa>=0.13.3\",\n \"Flask>=0.10.1\",\n \"Flask-Babel>=0.9\",\n \"Flask-Migrate>=1.2.0\",\n \"Flask-Script>=2.0.5\",\n \"Flask-SQLAlchemy>=2.0\",\n \"Flask-Versioned>=0.9.4\",\n \"future>=0.18.2;python_version<'3.0'\",\n \"huey[redis]>=1.11.0\",\n \"ldap3>=2.6\",\n \"netaddr>=0.7.12\",\n \"oauth2client>=2.0.1\",\n \"passlib[bcrypt]>=1.7.0\",\n \"Pillow>=6.2.1\",\n \"PyJWT>=1.3.0\",\n \"PyMySQL>=0.6.6\",\n \"pyOpenSSL>=17.5\",\n \"pyrad>=2.0\",\n \"python-dateutil>=2.7.3\",\n \"python-gnupg>=0.4.4\",\n \"PyYAML>=5.1\",\n \"qrcode>=6.1\",\n \"requests>=2.7.0\",\n \"smpplib>=2.0\",\n \"SQLAlchemy>=1.3.0\",\n \"sqlsoup>=0.9.0\"]\n\n\ndef get_man_pages(dir):\n \"\"\"\n Get man pages in a directory.\n :param dir:\n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if file.endswith(\".1\"):\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\ndef get_scripts(dir):\n \"\"\"\n Get files that are executable\n :param dir:\n :return: list of file names\n \"\"\"\n files = os.listdir(dir)\n r_files = []\n for file in files:\n if os.stat(dir + \"/\" + file)[stat.ST_MODE] & stat.S_IEXEC:\n r_files.append(dir + \"/\" + file)\n return r_files\n\n\nsetup(\n name='privacyIDEA',\n version=VERSION,\n description='privacyIDEA: identity, multifactor authentication (OTP), '\n 'authorization, audit',\n author='privacyidea.org',\n license='AGPLv3',\n author_email='[email protected]',\n url='http://www.privacyidea.org',\n keywords='OTP, two factor authentication, management, security',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*, !=3.9.*',\n packages=find_packages(),\n scripts=[\"pi-manage\"] + get_scripts(\"tools\"),\n extras_require={\n 'doc': [\"Sphinx>=1.3.1\",\n \"sphinxcontrib-httpdomain>=1.3.0\",\n \"sphinxcontrib-plantuml>=0.18\"],\n 'test': [\"mock>=2.0.0\",\n \"pytest>=3.6.0\",\n \"pytest-cov>=2.5.1\",\n \"responses>=0.9.0\"],\n 'postgres': ['psycopg2>=2.8.3']\n },\n install_requires=install_requires,\n 
include_package_data=True,\n data_files=[('etc/privacyidea/',\n ['deploy/apache/privacyideaapp.wsgi',\n 'deploy/privacyidea/dictionary']),\n ('share/man/man1', get_man_pages(\"tools\")),\n ('lib/privacyidea/migrations',\n [\"migrations/alembic.ini\",\n \"migrations/env.py\",\n \"migrations/README\",\n \"migrations/script.py.mako\"]),\n ('lib/privacyidea/migrations/versions',\n get_file_list(\"migrations/versions/\")),\n ('lib/privacyidea/', ['requirements.txt'])\n ],\n classifiers=[\"Framework :: Flask\",\n \"License :: OSI Approved :: \"\n \"GNU Affero General Public License v3\",\n \"Programming Language :: Python\",\n \"Development Status :: 5 - Production/Stable\",\n \"Topic :: Internet\",\n \"Topic :: Security\",\n \"Topic :: System ::\"\n \" Systems Administration :: Authentication/Directory\",\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8'\n ],\n zip_safe=False,\n long_description=get_file_contents('README.rst')\n)\n", "path": "setup.py"}]} | 2,238 | 113 |
gh_patches_debug_3635 | rasdani/github-patches | git_diff | ansible__ansible-lint-1625 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
False positive: async jobs
<!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and master branch are affected too -->
##### Summary
<!--- Explain the problem briefly below -->
A `command` module task that is run as an async job is incorrectly treated as a normal sync task.
For async tasks, options like `changed_when` (and `failed_when` and so on) are not given to the async `command` task itself; they are given to the `async_status` module task that is run after the async task.
Ansible-lint does not understand this and complains about rule `no-changed-when` for the `command` task.
Example:
```yaml
---
- name: Asynchronous long task
command: alongtask.sh
async: 1000
poll: 0
register: job_sleeper
- name: Wait for asynchronous job to end
async_status:
jid: '{{ job_sleeper.ansible_job_id }}'
register: job_result
until: job_result.finished
retries: 100
delay: 10
changed_when: [....]
```
Note how the `changed_when` is given in the `async_status` task and not in the `command` task.
##### Issue Type
- Bug Report
##### Ansible and Ansible Lint details
<!--- Paste verbatim output between triple backticks -->
```console (paste below)
ansible --version
2.9.21
ansible-lint --version
5.0.8
```
- ansible installation method: pip
- ansible-lint installation method: pip
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
EL7.9 all updated
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
```yaml
---
- name: Asynchronous yum task
command: alongtask.sh
async: 1000
poll: 0
register: job_sleeper
- name: Wait for asynchronous job to end
async_status:
jid: '{{ job_sleeper.ansible_job_id }}'
register: job_result
until: job_result.finished
retries: 100
delay: 10
changed_when: [....]
```
<!--- Paste example playbooks or commands between triple backticks below -->
```console (paste below)
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### Desired Behaviour
<!--- Describe what you expected to happen when running the steps above -->
Ansible-lint should not detect `no-changed-when` for a `command` module task run as an async job, since `changed_when` cannot be given to the `command` module task itself.
It should detect that there is a `changed_when` in the following `async_status` task.
##### Actual Behaviour
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
Ansible-lint detects a false positive `no-changed-when` for a `command` module task run as an async job, even though `changed_when` cannot be correctly given for an async task - the `changed_when` is given in the subsequent `async_status` module task.
<!--- Paste verbatim command output between triple backticks -->
```paste below
```
[minimum complete verifiable example]: http://stackoverflow.com/help/mcve
</issue>
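
As an illustrative sketch of the shape of a fix inside the rule's `matchtask` method: skip fire-and-forget async tasks, i.e. tasks that set `async` together with `poll: 0`, because their `changed_when` belongs to the `async_status` task that polls them. This matches the change shown at the end of this record.

```python
def matchtask(self, task, file=None):
    if task["__ansible_action_type__"] == 'task':
        if task["action"]["__ansible_module__"] in self._commands:
            return (
                'changed_when' not in task
                and 'when' not in task
                and 'creates' not in task['action']
                and 'removes' not in task['action']
                # Fire-and-forget async job: changed_when is expected on the
                # async_status task that polls it, not on this task.
                and not ('async' in task and task.get('poll') == 0)
            )
    return False
```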
<code>
[start of src/ansiblelint/rules/CommandHasChangesCheckRule.py]
1 # Copyright (c) 2016 Will Thames <[email protected]>
2 #
3 # Permission is hereby granted, free of charge, to any person obtaining a copy
4 # of this software and associated documentation files (the "Software"), to deal
5 # in the Software without restriction, including without limitation the rights
6 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 # copies of the Software, and to permit persons to whom the Software is
8 # furnished to do so, subject to the following conditions:
9 #
10 # The above copyright notice and this permission notice shall be included in
11 # all copies or substantial portions of the Software.
12 #
13 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 # THE SOFTWARE.
20
21 from typing import TYPE_CHECKING, Any, Dict, Union
22
23 from ansiblelint.rules import AnsibleLintRule
24
25 if TYPE_CHECKING:
26 from typing import Optional
27
28 from ansiblelint.file_utils import Lintable
29
30
31 class CommandHasChangesCheckRule(AnsibleLintRule):
32 id = 'no-changed-when'
33 shortdesc = 'Commands should not change things if nothing needs doing'
34 description = (
35 'Commands should either read information (and thus set '
36 '``changed_when``) or not do something if it has already been '
37 'done (using creates/removes) or only do it if another '
38 'check has a particular result (``when``)'
39 )
40 severity = 'HIGH'
41 tags = ['command-shell', 'idempotency']
42 version_added = 'historic'
43
44 _commands = ['command', 'shell', 'raw']
45
46 def matchtask(
47 self, task: Dict[str, Any], file: 'Optional[Lintable]' = None
48 ) -> Union[bool, str]:
49 if task["__ansible_action_type__"] == 'task':
50 if task["action"]["__ansible_module__"] in self._commands:
51 return (
52 'changed_when' not in task
53 and 'when' not in task
54 and 'creates' not in task['action']
55 and 'removes' not in task['action']
56 )
57 return False
58
[end of src/ansiblelint/rules/CommandHasChangesCheckRule.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/ansiblelint/rules/CommandHasChangesCheckRule.py b/src/ansiblelint/rules/CommandHasChangesCheckRule.py
--- a/src/ansiblelint/rules/CommandHasChangesCheckRule.py
+++ b/src/ansiblelint/rules/CommandHasChangesCheckRule.py
@@ -53,5 +53,6 @@
and 'when' not in task
and 'creates' not in task['action']
and 'removes' not in task['action']
+ and not ('async' in task and task.get('poll') == 0)
)
return False
| {"golden_diff": "diff --git a/src/ansiblelint/rules/CommandHasChangesCheckRule.py b/src/ansiblelint/rules/CommandHasChangesCheckRule.py\n--- a/src/ansiblelint/rules/CommandHasChangesCheckRule.py\n+++ b/src/ansiblelint/rules/CommandHasChangesCheckRule.py\n@@ -53,5 +53,6 @@\n and 'when' not in task\n and 'creates' not in task['action']\n and 'removes' not in task['action']\n+ and not ('async' in task and task.get('poll') == 0)\n )\n return False\n", "issue": "False positive: async jobs\n<!--- Verify first that your issue is not already reported on GitHub -->\r\n<!--- Also test if the latest release and master branch are affected too -->\r\n\r\n##### Summary\r\n<!--- Explain the problem briefly below -->\r\nA `command` module task that is run as an async job is incorrectly treated as a normal sync task.\r\n\r\nFor async tasks the options like `changed_when` (and `failed_when` and so on) are not given to the async `command` task itself, they are given to the `async_status` module task that is run after the async task.\r\n\r\nAnsible-lint does not understand this and complains for rule `no-changed-when` for the `command` task.\r\n\r\nExample:\r\n```yaml\r\n---\r\n- name: Asynchronous long task\r\n command: alongtask.sh\r\n async: 1000\r\n poll: 0\r\n register: job_sleeper\r\n\r\n- name: Wait for asynchronous job to end\r\n async_status:\r\n jid: '{{ job_sleeper.ansible_job_id }}'\r\n register: job_result\r\n until: job_result.finished\r\n retries: 100\r\n delay: 10\r\n changed_when: [....]\r\n```\r\n\r\nNote how the `changed_when` is given in the `async_status` task and not in the `command` task.\r\n\r\n##### Issue Type\r\n\r\n- Bug Report\r\n\r\n##### Ansible and Ansible Lint details\r\n<!--- Paste verbatim output between triple backticks -->\r\n```console (paste below)\r\nansible --version\r\n2.9.21\r\n\r\nansible-lint --version\r\n5.0.8\r\n\r\n```\r\n\r\n- ansible installation method: pip\r\n- ansible-lint installation method: pip\r\n\r\n##### OS / ENVIRONMENT\r\n<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->\r\nEL7.9 all updated\r\n\r\n##### STEPS TO REPRODUCE\r\n<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->\r\n```yaml\r\n---\r\n- name: Asynchronous yum task\r\n command: alongtask.sh\r\n async: 1000\r\n poll: 0\r\n register: job_sleeper\r\n\r\n- name: Wait for asynchronous job to end\r\n async_status:\r\n jid: '{{ job_sleeper.ansible_job_id }}'\r\n register: job_result\r\n until: job_result.finished\r\n retries: 100\r\n delay: 10\r\n changed_when: [....]\r\n```\r\n\r\n<!--- Paste example playbooks or commands between triple backticks below -->\r\n```console (paste below)\r\n\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### Desired Behaviour\r\n<!--- Describe what you expected to happen when running the steps above -->\r\nAnsible-lint should not detect `no-changed-when` for `command` module task run as async job since the `changed_when` cannot be given to the `command` module task itself.\r\n\r\nIt should detect that there is a `changed_when` in the following `async_status` task.\r\n\r\n##### Actual Behaviour\r\n<!--- Describe what actually happened. 
If possible run with extra verbosity (-vvvv) -->\r\nAnsible-lint detects false positive `no-changed-when` for `command` module task run as async job even though `changed_when` cannot be correctly given for an async task - the `changed_when` is given for the subsequent `async_status` module task.\r\n\r\n<!--- Paste verbatim command output between triple backticks -->\r\n```paste below\r\n\r\n```\r\n\r\n\r\n[minimum complete verifiable example]: http://stackoverflow.com/help/mcve\r\n\n", "before_files": [{"content": "# Copyright (c) 2016 Will Thames <[email protected]>\n#\n# Permission is hereby granted, free of charge, to any person obtaining a copy\n# of this software and associated documentation files (the \"Software\"), to deal\n# in the Software without restriction, including without limitation the rights\n# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n# copies of the Software, and to permit persons to whom the Software is\n# furnished to do so, subject to the following conditions:\n#\n# The above copyright notice and this permission notice shall be included in\n# all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN\n# THE SOFTWARE.\n\nfrom typing import TYPE_CHECKING, Any, Dict, Union\n\nfrom ansiblelint.rules import AnsibleLintRule\n\nif TYPE_CHECKING:\n from typing import Optional\n\n from ansiblelint.file_utils import Lintable\n\n\nclass CommandHasChangesCheckRule(AnsibleLintRule):\n id = 'no-changed-when'\n shortdesc = 'Commands should not change things if nothing needs doing'\n description = (\n 'Commands should either read information (and thus set '\n '``changed_when``) or not do something if it has already been '\n 'done (using creates/removes) or only do it if another '\n 'check has a particular result (``when``)'\n )\n severity = 'HIGH'\n tags = ['command-shell', 'idempotency']\n version_added = 'historic'\n\n _commands = ['command', 'shell', 'raw']\n\n def matchtask(\n self, task: Dict[str, Any], file: 'Optional[Lintable]' = None\n ) -> Union[bool, str]:\n if task[\"__ansible_action_type__\"] == 'task':\n if task[\"action\"][\"__ansible_module__\"] in self._commands:\n return (\n 'changed_when' not in task\n and 'when' not in task\n and 'creates' not in task['action']\n and 'removes' not in task['action']\n )\n return False\n", "path": "src/ansiblelint/rules/CommandHasChangesCheckRule.py"}]} | 1,958 | 129 |
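The patch above works because a command launched with `async` and `poll: 0` is fire-and-forget: `changed_when` can only be attached to the later `async_status` task, so the rule should not demand it on the command itself. A minimal sketch of the added guard, assuming the same task dictionary shape that ansible-lint passes to `matchtask` (the helper name is illustrative):

```python
def is_fire_and_forget(task: dict) -> bool:
    # Async tasks polled with poll: 0 report their result via a later
    # async_status task, which is where changed_when belongs.
    return 'async' in task and task.get('poll') == 0


# Inside matchtask, the rule should only fire when this returns False, e.g.:
# return changed_when_missing and not is_fire_and_forget(task)
```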
gh_patches_debug_42365 | rasdani/github-patches | git_diff | scrapy__scrapy-3660 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document LogFormatter
Currently, the `LogFormatter` class is only mentioned in the [Release notes](https://docs.scrapy.org/en/latest/news.html) page of the documentation. This class should be properly documented: its API members should be covered, and a small section introducing it should be added to the documentation page about [Logging](https://docs.scrapy.org/en/latest/topics/logging.html).
The responses to [Scrapy - Silently drop an item](https://stackoverflow.com/q/13527921/939364) on StackOverflow would be a good starting point.
</issue>
<code>
[start of scrapy/logformatter.py]
1 import os
2 import logging
3
4 from twisted.python.failure import Failure
5
6 from scrapy.utils.request import referer_str
7
8 SCRAPEDMSG = u"Scraped from %(src)s" + os.linesep + "%(item)s"
9 DROPPEDMSG = u"Dropped: %(exception)s" + os.linesep + "%(item)s"
10 CRAWLEDMSG = u"Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s"
11
12
13 class LogFormatter(object):
14 """Class for generating log messages for different actions.
15
16 All methods must return a dictionary listing the parameters ``level``,
17 ``msg`` and ``args`` which are going to be used for constructing the log
18 message when calling logging.log.
19
20 Dictionary keys for the method outputs:
21 * ``level`` should be the log level for that action, you can use those
22 from the python logging library: logging.DEBUG, logging.INFO,
23 logging.WARNING, logging.ERROR and logging.CRITICAL.
24
25 * ``msg`` should be a string that can contain different formatting
26 placeholders. This string, formatted with the provided ``args``, is
27 going to be the log message for that action.
28
29 * ``args`` should be a tuple or dict with the formatting placeholders
30 for ``msg``. The final log message is computed as output['msg'] %
31 output['args'].
32 """
33
34 def crawled(self, request, response, spider):
35 request_flags = ' %s' % str(request.flags) if request.flags else ''
36 response_flags = ' %s' % str(response.flags) if response.flags else ''
37 return {
38 'level': logging.DEBUG,
39 'msg': CRAWLEDMSG,
40 'args': {
41 'status': response.status,
42 'request': request,
43 'request_flags' : request_flags,
44 'referer': referer_str(request),
45 'response_flags': response_flags,
46 # backward compatibility with Scrapy logformatter below 1.4 version
47 'flags': response_flags
48 }
49 }
50
51 def scraped(self, item, response, spider):
52 if isinstance(response, Failure):
53 src = response.getErrorMessage()
54 else:
55 src = response
56 return {
57 'level': logging.DEBUG,
58 'msg': SCRAPEDMSG,
59 'args': {
60 'src': src,
61 'item': item,
62 }
63 }
64
65 def dropped(self, item, exception, response, spider):
66 return {
67 'level': logging.WARNING,
68 'msg': DROPPEDMSG,
69 'args': {
70 'exception': exception,
71 'item': item,
72 }
73 }
74
75 @classmethod
76 def from_crawler(cls, crawler):
77 return cls()
78
[end of scrapy/logformatter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/logformatter.py b/scrapy/logformatter.py
--- a/scrapy/logformatter.py
+++ b/scrapy/logformatter.py
@@ -12,26 +12,40 @@
class LogFormatter(object):
"""Class for generating log messages for different actions.
-
- All methods must return a dictionary listing the parameters ``level``,
- ``msg`` and ``args`` which are going to be used for constructing the log
- message when calling logging.log.
+
+ All methods must return a dictionary listing the parameters ``level``, ``msg``
+ and ``args`` which are going to be used for constructing the log message when
+ calling ``logging.log``.
Dictionary keys for the method outputs:
- * ``level`` should be the log level for that action, you can use those
- from the python logging library: logging.DEBUG, logging.INFO,
- logging.WARNING, logging.ERROR and logging.CRITICAL.
- * ``msg`` should be a string that can contain different formatting
- placeholders. This string, formatted with the provided ``args``, is
- going to be the log message for that action.
+ * ``level`` is the log level for that action, you can use those from the
+ `python logging library <https://docs.python.org/3/library/logging.html>`_ :
+ ``logging.DEBUG``, ``logging.INFO``, ``logging.WARNING``, ``logging.ERROR``
+ and ``logging.CRITICAL``.
+ * ``msg`` should be a string that can contain different formatting placeholders.
+ This string, formatted with the provided ``args``, is going to be the long message
+ for that action.
+ * ``args`` should be a tuple or dict with the formatting placeholders for ``msg``.
+ The final log message is computed as ``msg % args``.
- * ``args`` should be a tuple or dict with the formatting placeholders
- for ``msg``. The final log message is computed as output['msg'] %
- output['args'].
- """
+ Here is an example on how to create a custom log formatter to lower the severity level of
+ the log message when an item is dropped from the pipeline::
+ class PoliteLogFormatter(logformatter.LogFormatter):
+ def dropped(self, item, exception, response, spider):
+ return {
+ 'level': logging.INFO, # lowering the level from logging.WARNING
+ 'msg': u"Dropped: %(exception)s" + os.linesep + "%(item)s",
+ 'args': {
+ 'exception': exception,
+ 'item': item,
+ }
+ }
+ """
+
def crawled(self, request, response, spider):
+ """Logs a message when the crawler finds a webpage."""
request_flags = ' %s' % str(request.flags) if request.flags else ''
response_flags = ' %s' % str(response.flags) if response.flags else ''
return {
@@ -40,7 +54,7 @@
'args': {
'status': response.status,
'request': request,
- 'request_flags' : request_flags,
+ 'request_flags': request_flags,
'referer': referer_str(request),
'response_flags': response_flags,
# backward compatibility with Scrapy logformatter below 1.4 version
@@ -49,6 +63,7 @@
}
def scraped(self, item, response, spider):
+ """Logs a message when an item is scraped by a spider."""
if isinstance(response, Failure):
src = response.getErrorMessage()
else:
@@ -63,6 +78,7 @@
}
def dropped(self, item, exception, response, spider):
+ """Logs a message when an item is dropped while it is passing through the item pipeline."""
return {
'level': logging.WARNING,
'msg': DROPPEDMSG,
| {"golden_diff": "diff --git a/scrapy/logformatter.py b/scrapy/logformatter.py\n--- a/scrapy/logformatter.py\n+++ b/scrapy/logformatter.py\n@@ -12,26 +12,40 @@\n \n class LogFormatter(object):\n \"\"\"Class for generating log messages for different actions.\n-\n- All methods must return a dictionary listing the parameters ``level``,\n- ``msg`` and ``args`` which are going to be used for constructing the log\n- message when calling logging.log.\n+ \n+ All methods must return a dictionary listing the parameters ``level``, ``msg``\n+ and ``args`` which are going to be used for constructing the log message when\n+ calling ``logging.log``.\n \n Dictionary keys for the method outputs:\n- * ``level`` should be the log level for that action, you can use those\n- from the python logging library: logging.DEBUG, logging.INFO,\n- logging.WARNING, logging.ERROR and logging.CRITICAL.\n \n- * ``msg`` should be a string that can contain different formatting\n- placeholders. This string, formatted with the provided ``args``, is\n- going to be the log message for that action.\n+ * ``level`` is the log level for that action, you can use those from the\n+ `python logging library <https://docs.python.org/3/library/logging.html>`_ :\n+ ``logging.DEBUG``, ``logging.INFO``, ``logging.WARNING``, ``logging.ERROR``\n+ and ``logging.CRITICAL``.\n+ * ``msg`` should be a string that can contain different formatting placeholders.\n+ This string, formatted with the provided ``args``, is going to be the long message\n+ for that action.\n+ * ``args`` should be a tuple or dict with the formatting placeholders for ``msg``.\n+ The final log message is computed as ``msg % args``.\n \n- * ``args`` should be a tuple or dict with the formatting placeholders\n- for ``msg``. The final log message is computed as output['msg'] %\n- output['args'].\n- \"\"\"\n+ Here is an example on how to create a custom log formatter to lower the severity level of\n+ the log message when an item is dropped from the pipeline::\n \n+ class PoliteLogFormatter(logformatter.LogFormatter):\n+ def dropped(self, item, exception, response, spider):\n+ return {\n+ 'level': logging.INFO, # lowering the level from logging.WARNING\n+ 'msg': u\"Dropped: %(exception)s\" + os.linesep + \"%(item)s\",\n+ 'args': {\n+ 'exception': exception,\n+ 'item': item,\n+ }\n+ }\n+ \"\"\"\n+ \n def crawled(self, request, response, spider):\n+ \"\"\"Logs a message when the crawler finds a webpage.\"\"\"\n request_flags = ' %s' % str(request.flags) if request.flags else ''\n response_flags = ' %s' % str(response.flags) if response.flags else ''\n return {\n@@ -40,7 +54,7 @@\n 'args': {\n 'status': response.status,\n 'request': request,\n- 'request_flags' : request_flags,\n+ 'request_flags': request_flags,\n 'referer': referer_str(request),\n 'response_flags': response_flags,\n # backward compatibility with Scrapy logformatter below 1.4 version\n@@ -49,6 +63,7 @@\n }\n \n def scraped(self, item, response, spider):\n+ \"\"\"Logs a message when an item is scraped by a spider.\"\"\"\n if isinstance(response, Failure):\n src = response.getErrorMessage()\n else:\n@@ -63,6 +78,7 @@\n }\n \n def dropped(self, item, exception, response, spider):\n+ \"\"\"Logs a message when an item is dropped while it is passing through the item pipeline.\"\"\"\n return {\n 'level': logging.WARNING,\n 'msg': DROPPEDMSG,\n", "issue": "Document LogFormatter\nCurrently, the `LogFormatter` class is only mentioned in the [Release notes](https://docs.scrapy.org/en/latest/news.html) page of the documentation. 
This class should be properly documented, both its API members and a small section introducing it on the documentation page about [Logging](https://docs.scrapy.org/en/latest/topics/logging.html).\r\n\r\nThe responses to [Scrapy - Silently drop an item](https://stackoverflow.com/q/13527921/939364) in StackOverflow would be a good starting point.\n", "before_files": [{"content": "import os\nimport logging\n\nfrom twisted.python.failure import Failure\n\nfrom scrapy.utils.request import referer_str\n\nSCRAPEDMSG = u\"Scraped from %(src)s\" + os.linesep + \"%(item)s\"\nDROPPEDMSG = u\"Dropped: %(exception)s\" + os.linesep + \"%(item)s\"\nCRAWLEDMSG = u\"Crawled (%(status)s) %(request)s%(request_flags)s (referer: %(referer)s)%(response_flags)s\"\n\n\nclass LogFormatter(object):\n \"\"\"Class for generating log messages for different actions.\n\n All methods must return a dictionary listing the parameters ``level``,\n ``msg`` and ``args`` which are going to be used for constructing the log\n message when calling logging.log.\n\n Dictionary keys for the method outputs:\n * ``level`` should be the log level for that action, you can use those\n from the python logging library: logging.DEBUG, logging.INFO,\n logging.WARNING, logging.ERROR and logging.CRITICAL.\n\n * ``msg`` should be a string that can contain different formatting\n placeholders. This string, formatted with the provided ``args``, is\n going to be the log message for that action.\n\n * ``args`` should be a tuple or dict with the formatting placeholders\n for ``msg``. The final log message is computed as output['msg'] %\n output['args'].\n \"\"\"\n\n def crawled(self, request, response, spider):\n request_flags = ' %s' % str(request.flags) if request.flags else ''\n response_flags = ' %s' % str(response.flags) if response.flags else ''\n return {\n 'level': logging.DEBUG,\n 'msg': CRAWLEDMSG,\n 'args': {\n 'status': response.status,\n 'request': request,\n 'request_flags' : request_flags,\n 'referer': referer_str(request),\n 'response_flags': response_flags,\n # backward compatibility with Scrapy logformatter below 1.4 version\n 'flags': response_flags\n }\n }\n\n def scraped(self, item, response, spider):\n if isinstance(response, Failure):\n src = response.getErrorMessage()\n else:\n src = response\n return {\n 'level': logging.DEBUG,\n 'msg': SCRAPEDMSG,\n 'args': {\n 'src': src,\n 'item': item,\n }\n }\n\n def dropped(self, item, exception, response, spider):\n return {\n 'level': logging.WARNING,\n 'msg': DROPPEDMSG,\n 'args': {\n 'exception': exception,\n 'item': item,\n }\n }\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls()\n", "path": "scrapy/logformatter.py"}]} | 1,389 | 858 |
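The docstring added by the patch doubles as the usage recipe: subclass `LogFormatter`, override one of its methods, and return a dict with `level`, `msg` and `args`. A sketch of the docstring's example as a standalone module is shown below; the module path in the settings comment is an assumption for illustration:

```python
import logging
import os

from scrapy import logformatter


class PoliteLogFormatter(logformatter.LogFormatter):
    def dropped(self, item, exception, response, spider):
        # Lower the severity for dropped items from WARNING to INFO.
        return {
            'level': logging.INFO,
            'msg': u"Dropped: %(exception)s" + os.linesep + "%(item)s",
            'args': {
                'exception': exception,
                'item': item,
            },
        }


# settings.py (module path assumed):
# LOG_FORMATTER = 'myproject.logformatter.PoliteLogFormatter'
```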
gh_patches_debug_36614 | rasdani/github-patches | git_diff | catalyst-team__catalyst-685 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SupervisedWandbRunner logs wrong number of epochs to WandB
**Describe the bug**
Catalyst 20.02.4
`WandbRunner` is logging the wrong number of epochs to WandB.
**To Reproduce**
Steps to reproduce the behavior:
```
from catalyst import dl
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset
class Projector(nn.Module):
def __init__(self, input_size):
super().__init__()
self.linear = nn.Linear(input_size, 1)
def forward(self, X):
return self.linear(X).squeeze(-1)
X = torch.rand(16, 10)
y = torch.rand(X.shape[0])
model = Projector(X.shape[1])
dataset = TensorDataset(X, y)
loader = DataLoader(dataset, batch_size=8)
runner = dl.SupervisedWandbRunner()
runner.train(
model=model,
loaders={
"train": loader,
"valid": loader
},
criterion=nn.MSELoss(),
optimizer=optim.Adam(model.parameters()),
logdir="log_xxx_000",
monitoring_params={
"project": "wandb_wrong_epochs"
},
num_epochs=10
)
```
**Expected behavior**
In WandB I see two plots with `MSELoss` with exactly 10 epochs
**Actual behaviour**
In WandB I see two plots with `MSELoss` with 20 epochs (but each has exactly 10 dots)
**Screenshots**
Look at the number of steps: it shows 20 steps, but it should have 10.

</issue>
<code>
[start of catalyst/contrib/dl/runner/wandb.py]
1 from typing import Dict, List # isort:skip
2 from pathlib import Path
3 import shutil
4
5 import wandb
6
7 from catalyst.dl import utils
8 from catalyst.dl.core import Experiment, Runner
9 from catalyst.dl.experiment import ConfigExperiment
10 from catalyst.dl.runner import SupervisedRunner
11
12
13 class WandbRunner(Runner):
14 """
15 Runner wrapper with wandb integration hooks.
16 """
17 @staticmethod
18 def _log_metrics(metrics: Dict, mode: str, suffix: str = ""):
19 def key_locate(key: str):
20 """
21 Wandb uses first symbol _ for it service purposes
22 because of that fact, we can not send original metric names
23
24 Args:
25 key: metric name
26 Returns:
27 formatted metric name
28 """
29 if key.startswith("_"):
30 return key[1:]
31 return key
32
33 metrics = {
34 f"{key_locate(key)}/{mode}{suffix}": value
35 for key, value in metrics.items()
36 }
37 wandb.log(metrics)
38
39 def _init(
40 self,
41 log_on_batch_end: bool = False,
42 log_on_epoch_end: bool = True,
43 checkpoints_glob: List = None,
44 ):
45 super()._init()
46 self.log_on_batch_end = log_on_batch_end
47 self.log_on_epoch_end = log_on_epoch_end
48 self.checkpoints_glob = checkpoints_glob
49
50 if (self.log_on_batch_end and not self.log_on_epoch_end) \
51 or (not self.log_on_batch_end and self.log_on_epoch_end):
52 self.batch_log_suffix = ""
53 self.epoch_log_suffix = ""
54 else:
55 self.batch_log_suffix = "_batch"
56 self.epoch_log_suffix = "_epoch"
57
58 def _pre_experiment_hook(self, experiment: Experiment):
59 monitoring_params = experiment.monitoring_params
60 monitoring_params["dir"] = str(Path(experiment.logdir).absolute())
61
62 log_on_batch_end: bool = \
63 monitoring_params.pop("log_on_batch_end", False)
64 log_on_epoch_end: bool = \
65 monitoring_params.pop("log_on_epoch_end", True)
66 checkpoints_glob: List[str] = \
67 monitoring_params.pop("checkpoints_glob", [])
68 self._init(
69 log_on_batch_end=log_on_batch_end,
70 log_on_epoch_end=log_on_epoch_end,
71 checkpoints_glob=checkpoints_glob,
72 )
73 if isinstance(experiment, ConfigExperiment):
74 exp_config = utils.flatten_dict(experiment.stages_config)
75 wandb.init(**monitoring_params, config=exp_config)
76 else:
77 wandb.init(**monitoring_params)
78
79 def _post_experiment_hook(self, experiment: Experiment):
80 # @TODO: add params for artefacts logging
81 logdir_src = Path(experiment.logdir)
82 # logdir_dst = wandb.run.dir
83 #
84 # exclude = ["wandb", "checkpoints"]
85 # logdir_files = list(logdir_src.glob("*"))
86 # logdir_files = list(
87 # filter(
88 # lambda x: all(z not in str(x) for z in exclude), logdir_files
89 # )
90 # )
91 #
92 # for subdir in logdir_files:
93 # if subdir.is_dir():
94 # os.makedirs(f"{logdir_dst}/{subdir.name}", exist_ok=True)
95 # shutil.rmtree(f"{logdir_dst}/{subdir.name}")
96 # shutil.copytree(
97 # f"{str(subdir.absolute())}",
98 # f"{logdir_dst}/{subdir.name}"
99 # )
100 # else:
101 # shutil.copy2(
102 # f"{str(subdir.absolute())}",
103 # f"{logdir_dst}/{subdir.name}"
104 # )
105 #
106 checkpoints_src = logdir_src.joinpath("checkpoints")
107 checkpoints_dst = Path(wandb.run.dir).joinpath("checkpoints")
108 # os.makedirs(checkpoints_dst, exist_ok=True)
109
110 checkpoint_paths = []
111 for glob in self.checkpoints_glob:
112 checkpoint_paths.extend(list(checkpoints_src.glob(glob)))
113 checkpoint_paths = list(set(checkpoint_paths))
114 for checkpoint_path in checkpoint_paths:
115 shutil.copy2(
116 f"{str(checkpoint_path.absolute())}",
117 f"{checkpoints_dst}/{checkpoint_path.name}"
118 )
119
120 def _run_batch(self, batch):
121 super()._run_batch(batch=batch)
122 if self.log_on_batch_end:
123 mode = self.state.loader_name
124 metrics = self.state.batch_metrics
125 self._log_metrics(
126 metrics=metrics, mode=mode, suffix=self.batch_log_suffix
127 )
128
129 def _run_epoch(self, stage: str, epoch: int):
130 super()._run_epoch(stage=stage, epoch=epoch)
131 if self.log_on_epoch_end:
132 mode_metrics = utils.split_dict_to_subdicts(
133 dct=self.state.epoch_metrics,
134 prefixes=list(self.state.loaders.keys()),
135 extra_key="_base",
136 )
137 for mode, metrics in mode_metrics.items():
138 self._log_metrics(
139 metrics=metrics, mode=mode, suffix=self.epoch_log_suffix
140 )
141
142 def run_experiment(self, experiment: Experiment):
143 self._pre_experiment_hook(experiment=experiment)
144 super().run_experiment(experiment=experiment)
145 self._post_experiment_hook(experiment=experiment)
146
147
148 class SupervisedWandbRunner(WandbRunner, SupervisedRunner):
149 pass
150
151
152 __all__ = ["WandbRunner", "SupervisedWandbRunner"]
153
[end of catalyst/contrib/dl/runner/wandb.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/catalyst/contrib/dl/runner/wandb.py b/catalyst/contrib/dl/runner/wandb.py
--- a/catalyst/contrib/dl/runner/wandb.py
+++ b/catalyst/contrib/dl/runner/wandb.py
@@ -15,7 +15,9 @@
Runner wrapper with wandb integration hooks.
"""
@staticmethod
- def _log_metrics(metrics: Dict, mode: str, suffix: str = ""):
+ def _log_metrics(
+ metrics: Dict, mode: str, suffix: str = "", commit: bool = True
+ ):
def key_locate(key: str):
"""
Wandb uses first symbol _ for it service purposes
@@ -34,7 +36,7 @@
f"{key_locate(key)}/{mode}{suffix}": value
for key, value in metrics.items()
}
- wandb.log(metrics)
+ wandb.log(metrics, commit=commit)
def _init(
self,
@@ -123,7 +125,10 @@
mode = self.state.loader_name
metrics = self.state.batch_metrics
self._log_metrics(
- metrics=metrics, mode=mode, suffix=self.batch_log_suffix
+ metrics=metrics,
+ mode=mode,
+ suffix=self.batch_log_suffix,
+ commit=True
)
def _run_epoch(self, stage: str, epoch: int):
@@ -136,17 +141,26 @@
)
for mode, metrics in mode_metrics.items():
self._log_metrics(
- metrics=metrics, mode=mode, suffix=self.epoch_log_suffix
+ metrics=metrics,
+ mode=mode,
+ suffix=self.epoch_log_suffix,
+ commit=False
)
+ wandb.log(commit=True)
def run_experiment(self, experiment: Experiment):
+ """Starts experiment
+
+ Args:
+ experiment (Experiment): experiment class
+ """
self._pre_experiment_hook(experiment=experiment)
super().run_experiment(experiment=experiment)
self._post_experiment_hook(experiment=experiment)
class SupervisedWandbRunner(WandbRunner, SupervisedRunner):
- pass
+ """SupervisedRunner with WandB"""
__all__ = ["WandbRunner", "SupervisedWandbRunner"]
| {"golden_diff": "diff --git a/catalyst/contrib/dl/runner/wandb.py b/catalyst/contrib/dl/runner/wandb.py\n--- a/catalyst/contrib/dl/runner/wandb.py\n+++ b/catalyst/contrib/dl/runner/wandb.py\n@@ -15,7 +15,9 @@\n Runner wrapper with wandb integration hooks.\n \"\"\"\n @staticmethod\n- def _log_metrics(metrics: Dict, mode: str, suffix: str = \"\"):\n+ def _log_metrics(\n+ metrics: Dict, mode: str, suffix: str = \"\", commit: bool = True\n+ ):\n def key_locate(key: str):\n \"\"\"\n Wandb uses first symbol _ for it service purposes\n@@ -34,7 +36,7 @@\n f\"{key_locate(key)}/{mode}{suffix}\": value\n for key, value in metrics.items()\n }\n- wandb.log(metrics)\n+ wandb.log(metrics, commit=commit)\n \n def _init(\n self,\n@@ -123,7 +125,10 @@\n mode = self.state.loader_name\n metrics = self.state.batch_metrics\n self._log_metrics(\n- metrics=metrics, mode=mode, suffix=self.batch_log_suffix\n+ metrics=metrics,\n+ mode=mode,\n+ suffix=self.batch_log_suffix,\n+ commit=True\n )\n \n def _run_epoch(self, stage: str, epoch: int):\n@@ -136,17 +141,26 @@\n )\n for mode, metrics in mode_metrics.items():\n self._log_metrics(\n- metrics=metrics, mode=mode, suffix=self.epoch_log_suffix\n+ metrics=metrics,\n+ mode=mode,\n+ suffix=self.epoch_log_suffix,\n+ commit=False\n )\n+ wandb.log(commit=True)\n \n def run_experiment(self, experiment: Experiment):\n+ \"\"\"Starts experiment\n+\n+ Args:\n+ experiment (Experiment): experiment class\n+ \"\"\"\n self._pre_experiment_hook(experiment=experiment)\n super().run_experiment(experiment=experiment)\n self._post_experiment_hook(experiment=experiment)\n \n \n class SupervisedWandbRunner(WandbRunner, SupervisedRunner):\n- pass\n+ \"\"\"SupervisedRunner with WandB\"\"\"\n \n \n __all__ = [\"WandbRunner\", \"SupervisedWandbRunner\"]\n", "issue": "SupervisedWandbRunner logs wrong number of epochs to WandB\n**Describe the bug**\r\nCatalyst 20.02.4\r\n`WandbRunner` is logging wrong number of epochs to WandB\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n```\r\nfrom catalyst import dl\r\nimport torch\r\nimport torch.nn as nn\r\nimport torch.optim as optim\r\nfrom torch.utils.data import DataLoader, TensorDataset\r\n\r\nclass Projector(nn.Module):\r\n def __init__(self, input_size):\r\n super().__init__()\r\n self.linear = nn.Linear(input_size, 1)\r\n \r\n def forward(self, X):\r\n return self.linear(X).squeeze(-1)\r\n\r\nX = torch.rand(16, 10)\r\ny = torch.rand(X.shape[0])\r\nmodel = Projector(X.shape[1])\r\ndataset = TensorDataset(X, y)\r\nloader = DataLoader(dataset, batch_size=8)\r\nrunner = dl.SupervisedWandbRunner()\r\n\r\nrunner.train(\r\n model=model,\r\n loaders={\r\n \"train\": loader,\r\n \"valid\": loader\r\n },\r\n criterion=nn.MSELoss(),\r\n optimizer=optim.Adam(model.parameters()),\r\n logdir=\"log_xxx_000\",\r\n monitoring_params={\r\n \"project\": \"wandb_wrong_epochs\"\r\n },\r\n num_epochs=10\r\n)\r\n```\r\n\r\n**Expected behavior**\r\nIn WandB I see two plots with `MSELoss` with exactly 10 epochs\r\n\r\n**Actual behaviour**\r\nIn WandB I see two plots with `MSELoss` with 20 epochs (but each has exactly 10 dots)\r\n\r\n**Screenshots**\r\nLook on number of steps. It has 20 steps. 
But should have 10.\r\n\r\n\n", "before_files": [{"content": "from typing import Dict, List # isort:skip\nfrom pathlib import Path\nimport shutil\n\nimport wandb\n\nfrom catalyst.dl import utils\nfrom catalyst.dl.core import Experiment, Runner\nfrom catalyst.dl.experiment import ConfigExperiment\nfrom catalyst.dl.runner import SupervisedRunner\n\n\nclass WandbRunner(Runner):\n \"\"\"\n Runner wrapper with wandb integration hooks.\n \"\"\"\n @staticmethod\n def _log_metrics(metrics: Dict, mode: str, suffix: str = \"\"):\n def key_locate(key: str):\n \"\"\"\n Wandb uses first symbol _ for it service purposes\n because of that fact, we can not send original metric names\n\n Args:\n key: metric name\n Returns:\n formatted metric name\n \"\"\"\n if key.startswith(\"_\"):\n return key[1:]\n return key\n\n metrics = {\n f\"{key_locate(key)}/{mode}{suffix}\": value\n for key, value in metrics.items()\n }\n wandb.log(metrics)\n\n def _init(\n self,\n log_on_batch_end: bool = False,\n log_on_epoch_end: bool = True,\n checkpoints_glob: List = None,\n ):\n super()._init()\n self.log_on_batch_end = log_on_batch_end\n self.log_on_epoch_end = log_on_epoch_end\n self.checkpoints_glob = checkpoints_glob\n\n if (self.log_on_batch_end and not self.log_on_epoch_end) \\\n or (not self.log_on_batch_end and self.log_on_epoch_end):\n self.batch_log_suffix = \"\"\n self.epoch_log_suffix = \"\"\n else:\n self.batch_log_suffix = \"_batch\"\n self.epoch_log_suffix = \"_epoch\"\n\n def _pre_experiment_hook(self, experiment: Experiment):\n monitoring_params = experiment.monitoring_params\n monitoring_params[\"dir\"] = str(Path(experiment.logdir).absolute())\n\n log_on_batch_end: bool = \\\n monitoring_params.pop(\"log_on_batch_end\", False)\n log_on_epoch_end: bool = \\\n monitoring_params.pop(\"log_on_epoch_end\", True)\n checkpoints_glob: List[str] = \\\n monitoring_params.pop(\"checkpoints_glob\", [])\n self._init(\n log_on_batch_end=log_on_batch_end,\n log_on_epoch_end=log_on_epoch_end,\n checkpoints_glob=checkpoints_glob,\n )\n if isinstance(experiment, ConfigExperiment):\n exp_config = utils.flatten_dict(experiment.stages_config)\n wandb.init(**monitoring_params, config=exp_config)\n else:\n wandb.init(**monitoring_params)\n\n def _post_experiment_hook(self, experiment: Experiment):\n # @TODO: add params for artefacts logging\n logdir_src = Path(experiment.logdir)\n # logdir_dst = wandb.run.dir\n #\n # exclude = [\"wandb\", \"checkpoints\"]\n # logdir_files = list(logdir_src.glob(\"*\"))\n # logdir_files = list(\n # filter(\n # lambda x: all(z not in str(x) for z in exclude), logdir_files\n # )\n # )\n #\n # for subdir in logdir_files:\n # if subdir.is_dir():\n # os.makedirs(f\"{logdir_dst}/{subdir.name}\", exist_ok=True)\n # shutil.rmtree(f\"{logdir_dst}/{subdir.name}\")\n # shutil.copytree(\n # f\"{str(subdir.absolute())}\",\n # f\"{logdir_dst}/{subdir.name}\"\n # )\n # else:\n # shutil.copy2(\n # f\"{str(subdir.absolute())}\",\n # f\"{logdir_dst}/{subdir.name}\"\n # )\n #\n checkpoints_src = logdir_src.joinpath(\"checkpoints\")\n checkpoints_dst = Path(wandb.run.dir).joinpath(\"checkpoints\")\n # os.makedirs(checkpoints_dst, exist_ok=True)\n\n checkpoint_paths = []\n for glob in self.checkpoints_glob:\n checkpoint_paths.extend(list(checkpoints_src.glob(glob)))\n checkpoint_paths = list(set(checkpoint_paths))\n for checkpoint_path in checkpoint_paths:\n shutil.copy2(\n f\"{str(checkpoint_path.absolute())}\",\n f\"{checkpoints_dst}/{checkpoint_path.name}\"\n )\n\n def _run_batch(self, batch):\n 
super()._run_batch(batch=batch)\n if self.log_on_batch_end:\n mode = self.state.loader_name\n metrics = self.state.batch_metrics\n self._log_metrics(\n metrics=metrics, mode=mode, suffix=self.batch_log_suffix\n )\n\n def _run_epoch(self, stage: str, epoch: int):\n super()._run_epoch(stage=stage, epoch=epoch)\n if self.log_on_epoch_end:\n mode_metrics = utils.split_dict_to_subdicts(\n dct=self.state.epoch_metrics,\n prefixes=list(self.state.loaders.keys()),\n extra_key=\"_base\",\n )\n for mode, metrics in mode_metrics.items():\n self._log_metrics(\n metrics=metrics, mode=mode, suffix=self.epoch_log_suffix\n )\n\n def run_experiment(self, experiment: Experiment):\n self._pre_experiment_hook(experiment=experiment)\n super().run_experiment(experiment=experiment)\n self._post_experiment_hook(experiment=experiment)\n\n\nclass SupervisedWandbRunner(WandbRunner, SupervisedRunner):\n pass\n\n\n__all__ = [\"WandbRunner\", \"SupervisedWandbRunner\"]\n", "path": "catalyst/contrib/dl/runner/wandb.py"}]} | 2,490 | 529 |
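The underlying mechanism is wandb's step counter: every committed `wandb.log(...)` call advances the step, so logging train and valid metrics in two separate calls per epoch yields 20 steps for 10 epochs. The patch therefore passes `commit=False` for the per-loader calls and commits once at the end of the epoch. A minimal sketch of that pattern outside Catalyst (project name is taken from the repro above; metric keys and values are placeholders):

```python
import wandb

wandb.init(project="wandb_wrong_epochs")

for epoch in range(10):
    train_loss, valid_loss = 0.1, 0.2  # placeholder values
    # Accumulate without advancing the step...
    wandb.log({"loss/train": train_loss}, commit=False)
    wandb.log({"loss/valid": valid_loss}, commit=False)
    # ...then advance it exactly once per epoch.
    wandb.log({}, commit=True)
```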
gh_patches_debug_32933 | rasdani/github-patches | git_diff | kserve__kserve-524 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sample of image_transformer does not work
/kind bug
The sample under docs/samples/transformer/image_transformer is broken; there's a Python error in it.
**What steps did you take and what happened:**
[A clear and concise description of what the bug is.]
It's due to PR #492: kfmodel and kfserver were refactored, but the sample still inherits from transformer, which does not exist anymore. Also, some other symbols need to be renamed.
**What did you expect to happen:**
The sample still works.
</issue>
<code>
[start of docs/samples/transformer/image_transformer/image_transformer/__main__.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import kfserving
16 import argparse
17 from .image_transformer import ImageTransformer
18
19 DEFAULT_MODEL_NAME = "model"
20
21 parser = argparse.ArgumentParser(parents=[kfserving.server.parser])
22 parser.add_argument('--model_name', default=DEFAULT_MODEL_NAME,
23 help='The name that the model is served under.')
24 parser.add_argument('--predictor_host', help='The URL for the model predict function', required=True)
25
26 args, _ = parser.parse_known_args()
27
28 if __name__ == "__main__":
29 transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host,
30 protocol=args.protocol)
31 kfserver = kfserving.KFServer()
32 kfserver.start(models=[transformer])
33
[end of docs/samples/transformer/image_transformer/image_transformer/__main__.py]
[start of docs/samples/transformer/image_transformer/image_transformer/image_transformer.py]
1 # Copyright 2019 kubeflow.org.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import kfserving
16 from typing import List, Dict
17 from kfserving.transformer import Transformer
18 from PIL import Image
19 import torchvision.transforms as transforms
20 import logging
21 import io
22 import numpy as np
23 import base64
24
25 logging.basicConfig(level=kfserving.constants.KFSERVING_LOGLEVEL)
26
27 transform = transforms.Compose(
28 [transforms.ToTensor(),
29 transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
30
31
32 def image_transform(instance):
33 byte_array = base64.b64decode(instance['image_bytes']['b64'])
34 image = Image.open(io.BytesIO(byte_array))
35 a = np.asarray(image)
36 im = Image.fromarray(a)
37 res = transform(im)
38 logging.info(res)
39 return res.tolist()
40
41
42 class ImageTransformer(Transformer):
43
44 def preprocess(self, inputs: Dict) -> Dict:
45 return {'instances': [image_transform(instance) for instance in inputs['instances']]}
46
47 def postprocess(self, inputs: List) -> List:
48 return inputs
49
[end of docs/samples/transformer/image_transformer/image_transformer/image_transformer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/samples/transformer/image_transformer/image_transformer/__main__.py b/docs/samples/transformer/image_transformer/image_transformer/__main__.py
--- a/docs/samples/transformer/image_transformer/image_transformer/__main__.py
+++ b/docs/samples/transformer/image_transformer/image_transformer/__main__.py
@@ -18,7 +18,7 @@
DEFAULT_MODEL_NAME = "model"
-parser = argparse.ArgumentParser(parents=[kfserving.server.parser])
+parser = argparse.ArgumentParser(parents=[kfserving.kfserver.parser])
parser.add_argument('--model_name', default=DEFAULT_MODEL_NAME,
help='The name that the model is served under.')
parser.add_argument('--predictor_host', help='The URL for the model predict function', required=True)
@@ -26,7 +26,6 @@
args, _ = parser.parse_known_args()
if __name__ == "__main__":
- transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host,
- protocol=args.protocol)
+ transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host)
kfserver = kfserving.KFServer()
kfserver.start(models=[transformer])
diff --git a/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py b/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py
--- a/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py
+++ b/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py
@@ -14,7 +14,6 @@
import kfserving
from typing import List, Dict
-from kfserving.transformer import Transformer
from PIL import Image
import torchvision.transforms as transforms
import logging
@@ -39,7 +38,10 @@
return res.tolist()
-class ImageTransformer(Transformer):
+class ImageTransformer(kfserving.KFModel):
+ def __init__(self, name: str, predictor_host: str):
+ super().__init__(name)
+ self.predictor_host = predictor_host
def preprocess(self, inputs: Dict) -> Dict:
return {'instances': [image_transform(instance) for instance in inputs['instances']]}
| {"golden_diff": "diff --git a/docs/samples/transformer/image_transformer/image_transformer/__main__.py b/docs/samples/transformer/image_transformer/image_transformer/__main__.py\n--- a/docs/samples/transformer/image_transformer/image_transformer/__main__.py\n+++ b/docs/samples/transformer/image_transformer/image_transformer/__main__.py\n@@ -18,7 +18,7 @@\n \n DEFAULT_MODEL_NAME = \"model\"\n \n-parser = argparse.ArgumentParser(parents=[kfserving.server.parser])\n+parser = argparse.ArgumentParser(parents=[kfserving.kfserver.parser])\n parser.add_argument('--model_name', default=DEFAULT_MODEL_NAME,\n help='The name that the model is served under.')\n parser.add_argument('--predictor_host', help='The URL for the model predict function', required=True)\n@@ -26,7 +26,6 @@\n args, _ = parser.parse_known_args()\n \n if __name__ == \"__main__\":\n- transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host,\n- protocol=args.protocol)\n+ transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host)\n kfserver = kfserving.KFServer()\n kfserver.start(models=[transformer])\ndiff --git a/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py b/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py\n--- a/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py\n+++ b/docs/samples/transformer/image_transformer/image_transformer/image_transformer.py\n@@ -14,7 +14,6 @@\n \n import kfserving\n from typing import List, Dict\n-from kfserving.transformer import Transformer\n from PIL import Image\n import torchvision.transforms as transforms\n import logging\n@@ -39,7 +38,10 @@\n return res.tolist()\n \n \n-class ImageTransformer(Transformer):\n+class ImageTransformer(kfserving.KFModel):\n+ def __init__(self, name: str, predictor_host: str):\n+ super().__init__(name)\n+ self.predictor_host = predictor_host\n \n def preprocess(self, inputs: Dict) -> Dict:\n return {'instances': [image_transform(instance) for instance in inputs['instances']]}\n", "issue": "Sample of image_transformer does not work\n/kind bug\r\nSample under docs/samples/transformer/image_transformer is broken, there's python error in it.\r\n\r\n**What steps did you take and what happened:**\r\n[A clear and concise description of what the bug is.]\r\nIt's due to PR #492, kfmodel and kfserver is refactored now but the sample still inherit from transformer which does not exist now. 
Also some other symbols need be renamed.\r\n\r\n**What did you expect to happen:**\r\nSample still works\r\n\n", "before_files": [{"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport kfserving\nimport argparse\nfrom .image_transformer import ImageTransformer\n\nDEFAULT_MODEL_NAME = \"model\"\n\nparser = argparse.ArgumentParser(parents=[kfserving.server.parser])\nparser.add_argument('--model_name', default=DEFAULT_MODEL_NAME,\n help='The name that the model is served under.')\nparser.add_argument('--predictor_host', help='The URL for the model predict function', required=True)\n\nargs, _ = parser.parse_known_args()\n\nif __name__ == \"__main__\":\n transformer = ImageTransformer(args.model_name, predictor_host=args.predictor_host,\n protocol=args.protocol)\n kfserver = kfserving.KFServer()\n kfserver.start(models=[transformer])\n", "path": "docs/samples/transformer/image_transformer/image_transformer/__main__.py"}, {"content": "# Copyright 2019 kubeflow.org.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport kfserving\nfrom typing import List, Dict\nfrom kfserving.transformer import Transformer\nfrom PIL import Image\nimport torchvision.transforms as transforms\nimport logging\nimport io\nimport numpy as np\nimport base64\n\nlogging.basicConfig(level=kfserving.constants.KFSERVING_LOGLEVEL)\n\ntransform = transforms.Compose(\n [transforms.ToTensor(),\n transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])\n\n\ndef image_transform(instance):\n byte_array = base64.b64decode(instance['image_bytes']['b64'])\n image = Image.open(io.BytesIO(byte_array))\n a = np.asarray(image)\n im = Image.fromarray(a)\n res = transform(im)\n logging.info(res)\n return res.tolist()\n\n\nclass ImageTransformer(Transformer):\n\n def preprocess(self, inputs: Dict) -> Dict:\n return {'instances': [image_transform(instance) for instance in inputs['instances']]}\n\n def postprocess(self, inputs: List) -> List:\n return inputs\n", "path": "docs/samples/transformer/image_transformer/image_transformer/image_transformer.py"}]} | 1,502 | 487 |
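Concretely, the patch drops the vanished `Transformer` base class and has the sample subclass `kfserving.KFModel` directly, storing `predictor_host` itself. A condensed sketch of the resulting wiring (the predictor host value is an illustrative placeholder):

```python
import kfserving


class ImageTransformer(kfserving.KFModel):
    def __init__(self, name: str, predictor_host: str):
        super().__init__(name)
        self.predictor_host = predictor_host


if __name__ == "__main__":
    # Host value assumed for illustration only.
    transformer = ImageTransformer("model", predictor_host="localhost:8080")
    kfserving.KFServer().start(models=[transformer])
```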
gh_patches_debug_22638 | rasdani/github-patches | git_diff | hedyorg__hedy-1308 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] Class names can be duplicate
**Describe the bug**
Currently, we are able to create multiple classes with an identical name. I think this is undesirable and should be prevented. Might be best to tackle this issue at the same time as #1152.
**Paste the Hedy code & level**
It is super helpful if we can copy-paste the Hedy code to test, so please paste the code here, and don't forget to tell us what level you were in.
**Add a screenshot (optional)**
Make a picture or screenshot to show the issue. Tip! You can make a screenshot and simply paste the image into GitHub with ctrl-v.
**Expected behavior**
A clear and concise description of what you expected to happen.
**What machine and browser you were using (optional)**
If the issue concerns things in the website, let us know:
- What computer you are using (Windows, Mac, Linux?)
- What browser you were using (Chrome, Edge, Safari)
</issue>
<code>
[start of website/teacher.py]
1 from website.auth import requires_login, is_teacher, current_user
2 import utils
3 import uuid
4 from flask import g, request, jsonify, redirect
5 from flask_helpers import render_template
6 import os
7 import hedyweb
8 TRANSLATIONS = hedyweb.Translations ()
9 from config import config
10 cookie_name = config ['session'] ['cookie_name']
11
12 def routes (app, database):
13 global DATABASE
14 DATABASE = database
15
16 from app import render_main_menu
17
18 @app.route('/classes', methods=['GET'])
19 @requires_login
20 def get_classes (user):
21 if not is_teacher(user):
22 return 'Only teachers can retrieve classes', 403
23 return jsonify (DATABASE.get_teacher_classes (user ['username'], True))
24
25 @app.route('/class/<class_id>', methods=['GET'])
26 @requires_login
27 def get_class (user, class_id):
28 if not is_teacher(user):
29 return 'Only teachers can retrieve classes', 403
30 Class = DATABASE.get_class (class_id)
31 if not Class or Class ['teacher'] != user ['username']:
32 return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('no_such_class'))
33 students = []
34 for student_username in Class.get ('students', []):
35 student = DATABASE.user_by_username (student_username)
36 programs = DATABASE.programs_for_user(student_username)
37 highest_level = max(program['level'] for program in programs) if len(programs) else 0
38 sorted_public_programs = list(sorted([program for program in programs if program.get ('public')], key=lambda p: p['date']))
39 if sorted_public_programs:
40 latest_shared = sorted_public_programs[-1]
41 latest_shared['link'] = os.getenv ('BASE_URL') + f"/hedy/{latest_shared['id']}/view"
42 else:
43 latest_shared = None
44 students.append ({'username': student_username, 'last_login': utils.mstoisostring (student ['last_login']), 'programs': len (programs), 'highest_level': highest_level, 'latest_shared': latest_shared})
45
46 if utils.is_testing_request (request):
47 return jsonify ({'students': students, 'link': Class ['link'], 'name': Class ['name'], 'id': Class ['id']})
48 return render_template ('class-overview.html', auth=TRANSLATIONS.get_translations (g.lang, 'Auth'), menu=render_main_menu('my-profile'), current_page='my-profile', class_info={'students': students, 'link': os.getenv ('BASE_URL') + '/hedy/l/' + Class ['link'], 'name': Class ['name'], 'id': Class ['id']})
49
50 @app.route('/class', methods=['POST'])
51 @requires_login
52 def create_class (user):
53 if not is_teacher(user):
54 return 'Only teachers can create classes', 403
55
56 body = request.json
57 # Validations
58 if not isinstance(body, dict):
59 return 'body must be an object', 400
60 if not isinstance(body.get('name'), str):
61 return 'name must be a string', 400
62
63 Class = {
64 'id': uuid.uuid4().hex,
65 'date': utils.timems (),
66 'teacher': user ['username'],
67 'link': utils.random_id_generator (7),
68 'name': body ['name']
69 }
70
71 DATABASE.store_class (Class)
72
73 return {}, 200
74
75 @app.route('/class/<class_id>', methods=['PUT'])
76 @requires_login
77 def update_class (user, class_id):
78 if not is_teacher(user):
79 return 'Only teachers can update classes', 403
80
81 body = request.json
82 # Validations
83 if not isinstance(body, dict):
84 return 'body must be an object', 400
85 if not isinstance(body.get('name'), str):
86 return 'name must be a string', 400
87
88 Class = DATABASE.get_class (class_id)
89 if not Class or Class ['teacher'] != user ['username']:
90 return 'No such class', 404
91
92 Class = DATABASE.update_class (class_id, body ['name'])
93
94 return {}, 200
95
96 @app.route('/class/<class_id>', methods=['DELETE'])
97 @requires_login
98 def delete_class (user, class_id):
99 Class = DATABASE.get_class (class_id)
100 if not Class or Class ['teacher'] != user ['username']:
101 return 'No such class', 404
102
103 DATABASE.delete_class (Class)
104
105 return {}, 200
106
107 @app.route('/class/<class_id>/prejoin/<link>', methods=['GET'])
108 def prejoin_class (class_id, link):
109 Class = DATABASE.get_class (class_id)
110 if not Class or Class ['link'] != link:
111 return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('invalid_class_link'))
112 user = {}
113 if request.cookies.get (cookie_name):
114 token = DATABASE.get_token(request.cookies.get (cookie_name))
115 if token:
116 if token ['username'] in Class.get ('students', []):
117 return render_template ('class-already-joined.html', auth=TRANSLATIONS.get_translations (g.lang, 'Auth'), menu=render_main_menu('my-profile'), current_page='my-profile', class_info={'name': Class ['name']})
118 user = DATABASE.user_by_username(token ['username'])
119
120 return render_template ('class-prejoin.html',
121 auth=TRANSLATIONS.get_translations (g.lang, 'Auth'),
122 menu=render_main_menu('my-profile'),
123 current_page='my-profile',
124 class_info={
125 'link': os.getenv ('BASE_URL') + '/class/' + Class ['id'] + '/join/' + Class ['link'] + '?lang=' + g.lang,
126 'name': Class ['name'],
127 })
128
129 @app.route('/class/<class_id>/join/<link>', methods=['GET'])
130 @requires_login
131 def join_class (user, class_id, link):
132 Class = DATABASE.get_class (class_id)
133 if not Class or Class ['link'] != link:
134 return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('invalid_class_link'))
135
136 DATABASE.add_student_to_class (Class ['id'], user ['username'])
137
138 return redirect(request.url.replace('/class/' + class_id + '/join/' + link, '/my-profile'), code=302)
139
140 @app.route('/class/<class_id>/student/<student_id>', methods=['DELETE'])
141 @requires_login
142 def leave_class (user, class_id, student_id):
143
144 Class = DATABASE.get_class (class_id)
145 if not Class or Class ['teacher'] != user ['username']:
146 return 'No such class', 404
147
148 DATABASE.remove_student_from_class (Class ['id'], student_id)
149
150 return {}, 200
151
152 @app.route('/hedy/l/<link_id>', methods=['GET'])
153 def resolve_class_link (link_id):
154 Class = DATABASE.resolve_class_link (link_id)
155 if not Class:
156 return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('invalid_class_link'))
157 return redirect(request.url.replace('/hedy/l/' + link_id, '/class/' + Class ['id'] + '/prejoin/' + link_id), code=302)
158
[end of website/teacher.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/website/teacher.py b/website/teacher.py
--- a/website/teacher.py
+++ b/website/teacher.py
@@ -60,6 +60,12 @@
if not isinstance(body.get('name'), str):
return 'name must be a string', 400
+ # We use this extra call to verify if the class name doesn't already exist, if so it's a duplicate
+ Classes = DATABASE.get_teacher_classes(user['username'], True)
+ for Class in Classes:
+ if Class['name'] == body['name']:
+ return "duplicate", 200
+
Class = {
'id': uuid.uuid4().hex,
'date': utils.timems (),
@@ -89,6 +95,12 @@
if not Class or Class ['teacher'] != user ['username']:
return 'No such class', 404
+ # We use this extra call to verify if the class name doesn't already exist, if so it's a duplicate
+ Classes = DATABASE.get_teacher_classes(user ['username'], True)
+ for Class in Classes:
+ if Class['name'] == body['name']:
+ return "duplicate", 200
+
Class = DATABASE.update_class (class_id, body ['name'])
return {}, 200
| {"golden_diff": "diff --git a/website/teacher.py b/website/teacher.py\n--- a/website/teacher.py\n+++ b/website/teacher.py\n@@ -60,6 +60,12 @@\n if not isinstance(body.get('name'), str):\n return 'name must be a string', 400\n \n+ # We use this extra call to verify if the class name doesn't already exist, if so it's a duplicate\n+ Classes = DATABASE.get_teacher_classes(user['username'], True)\n+ for Class in Classes:\n+ if Class['name'] == body['name']:\n+ return \"duplicate\", 200\n+\n Class = {\n 'id': uuid.uuid4().hex,\n 'date': utils.timems (),\n@@ -89,6 +95,12 @@\n if not Class or Class ['teacher'] != user ['username']:\n return 'No such class', 404\n \n+ # We use this extra call to verify if the class name doesn't already exist, if so it's a duplicate\n+ Classes = DATABASE.get_teacher_classes(user ['username'], True)\n+ for Class in Classes:\n+ if Class['name'] == body['name']:\n+ return \"duplicate\", 200\n+\n Class = DATABASE.update_class (class_id, body ['name'])\n \n return {}, 200\n", "issue": "[BUG] Class names can be duplicate\n**Describe the bug**\r\nCurrently, we are able to create multiple classes with an identical name. I think this is undesirable and should be prevented. Might be best to tackle this issue at the same time as #1152.\r\n\r\n**Paste the Hedy code & level**\r\nIt is super helpful if we can copy-paste the Hedy code to test, so please paste the code here, and don't forget to tell us what level you were in.\r\n\r\n**Add a screenshot (optional)**\r\nMake a picture or screenshot to show the issue. Tip! You can make a screenshot and simply paste the image into GitHub with ctrl-v.\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**What machine and browser you were using (optional)**\r\nIf the issue concerns things in the website, let us know:\r\n- What computer you are using (Windows, Mac, Linux?)\r\n- What browser you were using (Chrome, Edge, Safari)\r\n\n", "before_files": [{"content": "from website.auth import requires_login, is_teacher, current_user\nimport utils\nimport uuid\nfrom flask import g, request, jsonify, redirect\nfrom flask_helpers import render_template\nimport os\nimport hedyweb\nTRANSLATIONS = hedyweb.Translations ()\nfrom config import config\ncookie_name = config ['session'] ['cookie_name']\n\ndef routes (app, database):\n global DATABASE\n DATABASE = database\n\n from app import render_main_menu\n\n @app.route('/classes', methods=['GET'])\n @requires_login\n def get_classes (user):\n if not is_teacher(user):\n return 'Only teachers can retrieve classes', 403\n return jsonify (DATABASE.get_teacher_classes (user ['username'], True))\n\n @app.route('/class/<class_id>', methods=['GET'])\n @requires_login\n def get_class (user, class_id):\n if not is_teacher(user):\n return 'Only teachers can retrieve classes', 403\n Class = DATABASE.get_class (class_id)\n if not Class or Class ['teacher'] != user ['username']:\n return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('no_such_class'))\n students = []\n for student_username in Class.get ('students', []):\n student = DATABASE.user_by_username (student_username)\n programs = DATABASE.programs_for_user(student_username)\n highest_level = max(program['level'] for program in programs) if len(programs) else 0\n sorted_public_programs = list(sorted([program for program in programs if program.get ('public')], key=lambda p: p['date']))\n if 
sorted_public_programs:\n latest_shared = sorted_public_programs[-1]\n latest_shared['link'] = os.getenv ('BASE_URL') + f\"/hedy/{latest_shared['id']}/view\"\n else:\n latest_shared = None\n students.append ({'username': student_username, 'last_login': utils.mstoisostring (student ['last_login']), 'programs': len (programs), 'highest_level': highest_level, 'latest_shared': latest_shared})\n\n if utils.is_testing_request (request):\n return jsonify ({'students': students, 'link': Class ['link'], 'name': Class ['name'], 'id': Class ['id']})\n return render_template ('class-overview.html', auth=TRANSLATIONS.get_translations (g.lang, 'Auth'), menu=render_main_menu('my-profile'), current_page='my-profile', class_info={'students': students, 'link': os.getenv ('BASE_URL') + '/hedy/l/' + Class ['link'], 'name': Class ['name'], 'id': Class ['id']})\n\n @app.route('/class', methods=['POST'])\n @requires_login\n def create_class (user):\n if not is_teacher(user):\n return 'Only teachers can create classes', 403\n\n body = request.json\n # Validations\n if not isinstance(body, dict):\n return 'body must be an object', 400\n if not isinstance(body.get('name'), str):\n return 'name must be a string', 400\n\n Class = {\n 'id': uuid.uuid4().hex,\n 'date': utils.timems (),\n 'teacher': user ['username'],\n 'link': utils.random_id_generator (7),\n 'name': body ['name']\n }\n\n DATABASE.store_class (Class)\n\n return {}, 200\n\n @app.route('/class/<class_id>', methods=['PUT'])\n @requires_login\n def update_class (user, class_id):\n if not is_teacher(user):\n return 'Only teachers can update classes', 403\n\n body = request.json\n # Validations\n if not isinstance(body, dict):\n return 'body must be an object', 400\n if not isinstance(body.get('name'), str):\n return 'name must be a string', 400\n\n Class = DATABASE.get_class (class_id)\n if not Class or Class ['teacher'] != user ['username']:\n return 'No such class', 404\n\n Class = DATABASE.update_class (class_id, body ['name'])\n\n return {}, 200\n\n @app.route('/class/<class_id>', methods=['DELETE'])\n @requires_login\n def delete_class (user, class_id):\n Class = DATABASE.get_class (class_id)\n if not Class or Class ['teacher'] != user ['username']:\n return 'No such class', 404\n\n DATABASE.delete_class (Class)\n\n return {}, 200\n\n @app.route('/class/<class_id>/prejoin/<link>', methods=['GET'])\n def prejoin_class (class_id, link):\n Class = DATABASE.get_class (class_id)\n if not Class or Class ['link'] != link:\n return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('invalid_class_link'))\n user = {}\n if request.cookies.get (cookie_name):\n token = DATABASE.get_token(request.cookies.get (cookie_name))\n if token:\n if token ['username'] in Class.get ('students', []):\n return render_template ('class-already-joined.html', auth=TRANSLATIONS.get_translations (g.lang, 'Auth'), menu=render_main_menu('my-profile'), current_page='my-profile', class_info={'name': Class ['name']})\n user = DATABASE.user_by_username(token ['username'])\n\n return render_template ('class-prejoin.html',\n auth=TRANSLATIONS.get_translations (g.lang, 'Auth'),\n menu=render_main_menu('my-profile'),\n current_page='my-profile',\n class_info={\n 'link': os.getenv ('BASE_URL') + '/class/' + Class ['id'] + '/join/' + Class ['link'] + '?lang=' + g.lang,\n 'name': Class ['name'],\n })\n\n @app.route('/class/<class_id>/join/<link>', methods=['GET'])\n @requires_login\n def join_class 
(user, class_id, link):\n Class = DATABASE.get_class (class_id)\n if not Class or Class ['link'] != link:\n return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('invalid_class_link'))\n\n DATABASE.add_student_to_class (Class ['id'], user ['username'])\n\n return redirect(request.url.replace('/class/' + class_id + '/join/' + link, '/my-profile'), code=302)\n\n @app.route('/class/<class_id>/student/<student_id>', methods=['DELETE'])\n @requires_login\n def leave_class (user, class_id, student_id):\n\n Class = DATABASE.get_class (class_id)\n if not Class or Class ['teacher'] != user ['username']:\n return 'No such class', 404\n\n DATABASE.remove_student_from_class (Class ['id'], student_id)\n\n return {}, 200\n\n @app.route('/hedy/l/<link_id>', methods=['GET'])\n def resolve_class_link (link_id):\n Class = DATABASE.resolve_class_link (link_id)\n if not Class:\n return utils.page_404 (TRANSLATIONS, render_main_menu('my-profile'), current_user()['username'], g.lang, TRANSLATIONS.get_translations(g.lang, 'ui').get('invalid_class_link'))\n return redirect(request.url.replace('/hedy/l/' + link_id, '/class/' + Class ['id'] + '/prejoin/' + link_id), code=302)\n", "path": "website/teacher.py"}]} | 2,802 | 296 |
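The duplicate-name guard added in the diff above is easy to isolate as a small helper; a minimal sketch, assuming a `get_teacher_classes(username, flag)` call that returns dicts with a `name` key as in the patched `website/teacher.py` (the helper name itself is made up for illustration):

```python
def has_duplicate_class_name(database, username, new_name):
    # Mirrors the check in the patch: scan the teacher's existing classes
    # and flag any whose name matches the requested one.
    existing_classes = database.get_teacher_classes(username, True)
    return any(c['name'] == new_name for c in existing_classes)

# Usage sketch inside create_class / update_class:
# if has_duplicate_class_name(DATABASE, user['username'], body['name']):
#     return "duplicate", 200
```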
gh_patches_debug_53989 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1329 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide length of TableOfContents
Currently, you can only iterate over `TableOfContents`. I would like to know the length of it.
</issue>
<code>
[start of mkdocs/toc.py]
1 # coding: utf-8
2
3 """
4 Deals with generating the per-page table of contents.
5
6 For the sake of simplicity we use an existing markdown extension to generate
7 an HTML table of contents, and then parse that into the underlying data.
8
9 The steps we take to generate a table of contents are:
10
11 * Pre-process the markdown, injecting a [TOC] marker.
12 * Generate HTML from markdown.
13 * Post-process the HTML, spliting the content and the table of contents.
14 * Parse table of contents HTML into the underlying data structure.
15 """
16
17 from __future__ import unicode_literals
18
19 try: # pragma: no cover
20 from html.parser import HTMLParser # noqa
21 except ImportError: # pragma: no cover
22 from HTMLParser import HTMLParser # noqa
23
24
25 class TableOfContents(object):
26 """
27 Represents the table of contents for a given page.
28 """
29 def __init__(self, html):
30 self.items = _parse_html_table_of_contents(html)
31
32 def __iter__(self):
33 return iter(self.items)
34
35 def __str__(self):
36 return ''.join([str(item) for item in self])
37
38
39 class AnchorLink(object):
40 """
41 A single entry in the table of contents.
42 """
43 def __init__(self, title, url):
44 self.title, self.url = title, url
45 self.children = []
46
47 def __str__(self):
48 return self.indent_print()
49
50 def indent_print(self, depth=0):
51 indent = ' ' * depth
52 ret = '%s%s - %s\n' % (indent, self.title, self.url)
53 for item in self.children:
54 ret += item.indent_print(depth + 1)
55 return ret
56
57
58 class TOCParser(HTMLParser):
59
60 def __init__(self):
61 HTMLParser.__init__(self)
62 self.links = []
63
64 self.in_anchor = False
65 self.attrs = None
66 self.title = ''
67
68 # Prior to Python3.4 no convert_charrefs keyword existed.
69 # However, in Python3.5 the default was changed to True.
70 # We need the False behavior in all versions but can only
71 # set it if it exists.
72 if hasattr(self, 'convert_charrefs'):
73 self.convert_charrefs = False
74
75 def handle_starttag(self, tag, attrs):
76
77 if not self.in_anchor:
78 if tag == 'a':
79 self.in_anchor = True
80 self.attrs = dict(attrs)
81
82 def handle_endtag(self, tag):
83 if tag == 'a':
84 self.in_anchor = False
85
86 def handle_data(self, data):
87
88 if self.in_anchor:
89 self.title += data
90
91 def handle_charref(self, ref):
92 self.handle_entityref("#" + ref)
93
94 def handle_entityref(self, ref):
95 self.handle_data("&%s;" % ref)
96
97
98 def _parse_html_table_of_contents(html):
99 """
100 Given a table of contents string that has been automatically generated by
101 the markdown library, parse it into a tree of AnchorLink instances.
102
103 Returns a list of all the parent AnchorLink instances.
104 """
105 lines = html.splitlines()[2:-2]
106 parents = []
107 ret = []
108 for line in lines:
109 parser = TOCParser()
110 parser.feed(line)
111 if parser.title:
112 try:
113 href = parser.attrs['href']
114 except KeyError:
115 continue
116 title = parser.title
117 nav = AnchorLink(title, href)
118 # Add the item to its parent if required. If it is a topmost
119 # item then instead append it to our return value.
120 if parents:
121 parents[-1].children.append(nav)
122 else:
123 ret.append(nav)
124 # If this item has children, store it as the current parent
125 if line.endswith('<ul>'):
126 parents.append(nav)
127 elif line.startswith('</ul>'):
128 if parents:
129 parents.pop()
130
131 # For the table of contents, always mark the first element as active
132 if ret:
133 ret[0].active = True
134
135 return ret
136
[end of mkdocs/toc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/toc.py b/mkdocs/toc.py
--- a/mkdocs/toc.py
+++ b/mkdocs/toc.py
@@ -32,6 +32,9 @@
def __iter__(self):
return iter(self.items)
+ def __len__(self):
+ return len(self.items)
+
def __str__(self):
return ''.join([str(item) for item in self])
| {"golden_diff": "diff --git a/mkdocs/toc.py b/mkdocs/toc.py\n--- a/mkdocs/toc.py\n+++ b/mkdocs/toc.py\n@@ -32,6 +32,9 @@\n def __iter__(self):\n return iter(self.items)\n \n+ def __len__(self):\n+ return len(self.items)\n+\n def __str__(self):\n return ''.join([str(item) for item in self])\n", "issue": "Provide length of TableOfContents\nCurrently, you can only iter over `TableOfContents`. I would like to know the length of it.\n", "before_files": [{"content": "# coding: utf-8\n\n\"\"\"\nDeals with generating the per-page table of contents.\n\nFor the sake of simplicity we use an existing markdown extension to generate\nan HTML table of contents, and then parse that into the underlying data.\n\nThe steps we take to generate a table of contents are:\n\n* Pre-process the markdown, injecting a [TOC] marker.\n* Generate HTML from markdown.\n* Post-process the HTML, spliting the content and the table of contents.\n* Parse table of contents HTML into the underlying data structure.\n\"\"\"\n\nfrom __future__ import unicode_literals\n\ntry: # pragma: no cover\n from html.parser import HTMLParser # noqa\nexcept ImportError: # pragma: no cover\n from HTMLParser import HTMLParser # noqa\n\n\nclass TableOfContents(object):\n \"\"\"\n Represents the table of contents for a given page.\n \"\"\"\n def __init__(self, html):\n self.items = _parse_html_table_of_contents(html)\n\n def __iter__(self):\n return iter(self.items)\n\n def __str__(self):\n return ''.join([str(item) for item in self])\n\n\nclass AnchorLink(object):\n \"\"\"\n A single entry in the table of contents.\n \"\"\"\n def __init__(self, title, url):\n self.title, self.url = title, url\n self.children = []\n\n def __str__(self):\n return self.indent_print()\n\n def indent_print(self, depth=0):\n indent = ' ' * depth\n ret = '%s%s - %s\\n' % (indent, self.title, self.url)\n for item in self.children:\n ret += item.indent_print(depth + 1)\n return ret\n\n\nclass TOCParser(HTMLParser):\n\n def __init__(self):\n HTMLParser.__init__(self)\n self.links = []\n\n self.in_anchor = False\n self.attrs = None\n self.title = ''\n\n # Prior to Python3.4 no convert_charrefs keyword existed.\n # However, in Python3.5 the default was changed to True.\n # We need the False behavior in all versions but can only\n # set it if it exists.\n if hasattr(self, 'convert_charrefs'):\n self.convert_charrefs = False\n\n def handle_starttag(self, tag, attrs):\n\n if not self.in_anchor:\n if tag == 'a':\n self.in_anchor = True\n self.attrs = dict(attrs)\n\n def handle_endtag(self, tag):\n if tag == 'a':\n self.in_anchor = False\n\n def handle_data(self, data):\n\n if self.in_anchor:\n self.title += data\n\n def handle_charref(self, ref):\n self.handle_entityref(\"#\" + ref)\n\n def handle_entityref(self, ref):\n self.handle_data(\"&%s;\" % ref)\n\n\ndef _parse_html_table_of_contents(html):\n \"\"\"\n Given a table of contents string that has been automatically generated by\n the markdown library, parse it into a tree of AnchorLink instances.\n\n Returns a list of all the parent AnchorLink instances.\n \"\"\"\n lines = html.splitlines()[2:-2]\n parents = []\n ret = []\n for line in lines:\n parser = TOCParser()\n parser.feed(line)\n if parser.title:\n try:\n href = parser.attrs['href']\n except KeyError:\n continue\n title = parser.title\n nav = AnchorLink(title, href)\n # Add the item to its parent if required. 
If it is a topmost\n # item then instead append it to our return value.\n if parents:\n parents[-1].children.append(nav)\n else:\n ret.append(nav)\n # If this item has children, store it as the current parent\n if line.endswith('<ul>'):\n parents.append(nav)\n elif line.startswith('</ul>'):\n if parents:\n parents.pop()\n\n # For the table of contents, always mark the first element as active\n if ret:\n ret[0].active = True\n\n return ret\n", "path": "mkdocs/toc.py"}]} | 1,744 | 98 |
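A quick usage sketch for the `__len__` addition shown in the diff above, assuming the `TableOfContents` class from `mkdocs/toc.py` as listed (the small HTML fragment is made up to match what the markdown toc extension emits):

```python
from mkdocs.toc import TableOfContents  # assumes the patched mkdocs is importable

html = (
    '<div class="toc">\n'
    '<ul>\n'
    '<li><a href="#first">First</a></li>\n'
    '<li><a href="#second">Second</a></li>\n'
    '</ul>\n'
    '</div>'
)

toc = TableOfContents(html)
print(len(toc))        # 2, via the new __len__
print(len(list(toc)))  # 2 as well, matching iteration
```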
gh_patches_debug_1235 | rasdani/github-patches | git_diff | chainer__chainer-5586 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Docstring of `functions.forget` is incorrect as `+` doesn't retain inputs anymore
The docstring says that `(x + y) + x` retains the intermediate variable holding `x + y`. 
```
Let ``f`` be a function defined as:
>>> def f(a, b):
... return a + b + a
and, ``x`` and ``y`` be :class:`~chainer.Variable`\\ s:
>>> x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
>>> y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
When ``z`` is calculated as ``z = f(x, y)``, its intermediate result
``x + y`` is stored in memory. Instead, if you call ``f`` with
``F.forget``:
>>> z = F.forget(f, x, y)
intermediate ``x + y`` is forgotten.
```
But this isn't true for new-style function of `+`, because addition don't requires book-kept inputs for backpropagation.
I checked the behavior by the following script, which traverses retained variables.
```python
import chainer
import chainer.functions as F
import numpy as np
def f(a, b):
return (a + b) + a
def recur_check_vars(v, x, y):
creator = v.creator_node
if creator is None:
return
for pnode in creator.inputs:
p = pnode.get_variable()
assert p.data is None or p is x or p is y
print(p)
recur_check_vars(p, x, y)
def main():
x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
print(x)
print(y)
print()
z = f(x, y)
recur_check_vars(z, x, y)
if __name__ == '__main__':
main()
```
The script doesn't fail, and the output is as follows. We can see that `x + y` is discarded. The live variables `x` and `y` are still retrieved, as each `VariableNode` instance holds a weakref to its corresponding variable.
```
variable([-0.7699733 -0.50523347 -0.20869003 -0.7912116 0.92058474])
variable([ 0.58832335 -0.06183117 0.1939743 0.9021316 -0.19973369])
variable(None)
variable([-0.7699733 -0.50523347 -0.20869003 -0.7912116 0.92058474])
variable([ 0.58832335 -0.06183117 0.1939743 0.9021316 -0.19973369])
variable([-0.7699733 -0.50523347 -0.20869003 -0.7912116 0.92058474])
```
</issue>
<code>
[start of chainer/functions/util/forget.py]
1 import chainer
2 from chainer import function
3 from chainer import function_node
4 from chainer import variable
5
6
7 def _call_func(func, xs):
8 outs = func(*xs)
9
10 if isinstance(outs, tuple):
11 for i, out in enumerate(outs):
12 if isinstance(out, variable.Variable):
13 continue
14 n = i + 1
15 suffix = {1: 'st', 2: 'nd', 3: 'rd'}.get(
16 n if n < 20 else n % 10, 'th')
17 msg = ('{}{} element of a returned tuple is not Variable, '
18 'but is {}').format(n, suffix, type(out))
19 raise RuntimeError(msg)
20 elif isinstance(outs, variable.Variable):
21 outs = (outs,)
22 else:
23 msg = ('A tuple of Variables or a Variable are expected, but {} '
24 'is returned.'.format(type(outs)))
25 raise RuntimeError(msg)
26
27 return outs
28
29
30 class Forget(function_node.FunctionNode):
31
32 def __init__(self, func):
33 if not callable(func):
34 raise TypeError('func must be callable')
35 self.func = func
36
37 def forward(self, inputs):
38 self.retain_inputs(tuple(range(len(inputs))))
39 with function.no_backprop_mode():
40 xs = [variable.Variable(x) for x in inputs]
41 outs = _call_func(self.func, xs)
42 return tuple(out.data for out in outs)
43
44 def backward(self, indexes, grad_outputs):
45 # Double backprop is not allowed
46 if chainer.config.enable_backprop:
47 raise RuntimeError('double backpropagation in functions.forget is '
48 'not allowed.')
49
50 inputs = self.get_retained_inputs()
51 # Create new variables that have no creators
52 dummy_inputs = tuple([variable.Variable(inp.array) for inp in inputs])
53
54 with function.force_backprop_mode():
55 outs = _call_func(self.func, dummy_inputs)
56 assert len(outs) == len(grad_outputs)
57 if len(outs) > 1:
58 # Avoid doing backward multiple times when `outs` is a tuple
59 outs = chainer.functions.identity(*outs)
60
61 for out, grad_output in zip(outs, grad_outputs):
62 out.grad_var = grad_output
63 outs[0].backward()
64
65 return tuple([inp.grad_var for inp in dummy_inputs])
66
67
68 def forget(func, *xs):
69 """Calls a function without storing intermediate results.
70
71 On a forward propagation, Chainer normally stores all intermediate results
72 of :class:`~chainer.variable.VariableNode`\\ s on a computational graph as
73 they are required on backward propagation.
74 Sometimes these results consume too much memory.
75 ``F.forget`` *forgets* such intermediate results on forward propagation,
76 and still supports backpropagation with recalculation.
77
78 On a forward propagation, ``F.forget`` calls a given function with given
79 variables without creating a computational graph. That means, no
80 intermediate results are stored.
81 On a backward propagation, ``F.forget`` calls the given function again to
82 create a computational graph for backpropagation.
83
84 ``F.forget`` reduces internal memory usage, whereas it requires more
85 calculation time as it calls the function twice.
86
87 .. admonition:: Example
88
89 Let ``f`` be a function defined as:
90
91 >>> def f(a, b):
92 ... return a + b + a
93
94 and, ``x`` and ``y`` be :class:`~chainer.Variable`\\ s:
95
96 >>> x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
97 >>> y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
98
99 When ``z`` is calculated as ``z = f(x, y)``, its intermediate result
100 ``x + y`` is stored in memory. Instead, if you call ``f`` with
101 ``F.forget``:
102
103 >>> z = F.forget(f, x, y)
104
105 intermediate ``x + y`` is forgotten.
106
107 .. note::
108
109 ``F.forget`` does not support functions which behave differently in
110 multiple calls with the same inputs, such as
111 :meth:`F.dropout() <chainer.functions.dropout>` and
112 :meth:`F.negative_sampling() <chainer.functions.negative_sampling>`.
113
114 .. note::
115
116 In case input argument variables are of class :class:`numpy.ndarray` or
117 :class:`cupy.ndarray` objects, arguments will automatically be
118 converted to :class:`~chainer.Variable`\\ s.
119 This conversion takes place to ensure that this function is included
120 in the computational graph to enable backward computations.
121
122 .. note::
123
124 ``F.forget`` does not support double backpropagation.
125
126 Args:
127 func (callable): A function to call. It needs to be called with
128 :class:`~chainer.Variable` object(s) and to return a
129 :class:`~chainer.Variable` object or a tuple of
130 :class:`~chainer.Variable` objects.
131 xs (~chainer.Variable): Argument variables of the function.
132
133 Returns:
134 ~chainer.Variable: A variable ``func`` returns. If it returns a tuple,
135 the method returns a tuple too.
136
137 """
138 xs = tuple(x if isinstance(x, variable.Variable) else
139 variable.Variable(x, requires_grad=True) for x in xs)
140 y = Forget(func).apply(xs)
141 if len(y) == 1:
142 y, = y
143 return y
144
[end of chainer/functions/util/forget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/functions/util/forget.py b/chainer/functions/util/forget.py
--- a/chainer/functions/util/forget.py
+++ b/chainer/functions/util/forget.py
@@ -89,7 +89,7 @@
Let ``f`` be a function defined as:
>>> def f(a, b):
- ... return a + b + a
+ ... return (a + b) * a
and, ``x`` and ``y`` be :class:`~chainer.Variable`\\ s:
| {"golden_diff": "diff --git a/chainer/functions/util/forget.py b/chainer/functions/util/forget.py\n--- a/chainer/functions/util/forget.py\n+++ b/chainer/functions/util/forget.py\n@@ -89,7 +89,7 @@\n Let ``f`` be a function defined as:\n \n >>> def f(a, b):\n- ... return a + b + a\n+ ... return (a + b) * a\n \n and, ``x`` and ``y`` be :class:`~chainer.Variable`\\\\ s:\n", "issue": "Docstring of `functions.forget` is incorrect as `+` doesn't retain inputs anymore\nThe docstring says that `(x + y) + x` retains the immediate variable holding `x + y`. \r\n\r\n```\r\n Let ``f`` be a function defined as:\r\n >>> def f(a, b):\r\n ... return a + b + a\r\n and, ``x`` and ``y`` be :class:`~chainer.Variable`\\\\ s:\r\n >>> x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))\r\n >>> y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))\r\n When ``z`` is calculated as ``z = f(x, y)``, its intermediate result\r\n ``x + y`` is stored in memory. Instead, if you call ``f`` with\r\n ``F.forget``:\r\n >>> z = F.forget(f, x, y)\r\n intermediate ``x + y`` is forgotten.\r\n```\r\n\r\nBut this isn't true for new-style function of `+`, because addition don't requires book-kept inputs for backpropagation.\r\n\r\nI checked the behavior by the following script, which traverses retained variables.\r\n\r\n```python\r\nimport chainer\r\nimport chainer.functions as F\r\nimport numpy as np \r\n\r\n\r\ndef f(a, b):\r\n return (a + b) + a\r\n\r\n\r\ndef recur_check_vars(v, x, y):\r\n creator = v.creator_node\r\n if creator is None:\r\n return\r\n for pnode in creator.inputs:\r\n p = pnode.get_variable()\r\n assert p.data is None or p is x or p is y\r\n print(p)\r\n recur_check_vars(p, x, y) \r\n\r\n\r\ndef main():\r\n x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))\r\n y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))\r\n print(x)\r\n print(y)\r\n print()\r\n z = f(x, y) \r\n recur_check_vars(z, x, y) \r\n\r\n\r\nif __name__ == '__main__':\r\n main()\r\n```\r\n\r\nThe script doesn't fail, and the output is as follows. We can see that`x + y` is discarded. 
Living variables `x` and `y` are retrieved, as each `VariableNode` instance has a weakref to the corresponding variable.\r\n\r\n```\r\nvariable([-0.7699733 -0.50523347 -0.20869003 -0.7912116 0.92058474])\r\nvariable([ 0.58832335 -0.06183117 0.1939743 0.9021316 -0.19973369])\r\n\r\nvariable(None)\r\nvariable([-0.7699733 -0.50523347 -0.20869003 -0.7912116 0.92058474])\r\nvariable([ 0.58832335 -0.06183117 0.1939743 0.9021316 -0.19973369])\r\nvariable([-0.7699733 -0.50523347 -0.20869003 -0.7912116 0.92058474])\r\n```\n", "before_files": [{"content": "import chainer\nfrom chainer import function\nfrom chainer import function_node\nfrom chainer import variable\n\n\ndef _call_func(func, xs):\n outs = func(*xs)\n\n if isinstance(outs, tuple):\n for i, out in enumerate(outs):\n if isinstance(out, variable.Variable):\n continue\n n = i + 1\n suffix = {1: 'st', 2: 'nd', 3: 'rd'}.get(\n n if n < 20 else n % 10, 'th')\n msg = ('{}{} element of a returned tuple is not Variable, '\n 'but is {}').format(n, suffix, type(out))\n raise RuntimeError(msg)\n elif isinstance(outs, variable.Variable):\n outs = (outs,)\n else:\n msg = ('A tuple of Variables or a Variable are expected, but {} '\n 'is returned.'.format(type(outs)))\n raise RuntimeError(msg)\n\n return outs\n\n\nclass Forget(function_node.FunctionNode):\n\n def __init__(self, func):\n if not callable(func):\n raise TypeError('func must be callable')\n self.func = func\n\n def forward(self, inputs):\n self.retain_inputs(tuple(range(len(inputs))))\n with function.no_backprop_mode():\n xs = [variable.Variable(x) for x in inputs]\n outs = _call_func(self.func, xs)\n return tuple(out.data for out in outs)\n\n def backward(self, indexes, grad_outputs):\n # Double backprop is not allowed\n if chainer.config.enable_backprop:\n raise RuntimeError('double backpropagation in functions.forget is '\n 'not allowed.')\n\n inputs = self.get_retained_inputs()\n # Create new variables that have no creators\n dummy_inputs = tuple([variable.Variable(inp.array) for inp in inputs])\n\n with function.force_backprop_mode():\n outs = _call_func(self.func, dummy_inputs)\n assert len(outs) == len(grad_outputs)\n if len(outs) > 1:\n # Avoid doing backward multiple times when `outs` is a tuple\n outs = chainer.functions.identity(*outs)\n\n for out, grad_output in zip(outs, grad_outputs):\n out.grad_var = grad_output\n outs[0].backward()\n\n return tuple([inp.grad_var for inp in dummy_inputs])\n\n\ndef forget(func, *xs):\n \"\"\"Calls a function without storing intermediate results.\n\n On a forward propagation, Chainer normally stores all intermediate results\n of :class:`~chainer.variable.VariableNode`\\\\ s on a computational graph as\n they are required on backward propagation.\n Sometimes these results consume too much memory.\n ``F.forget`` *forgets* such intermediate results on forward propagation,\n and still supports backpropagation with recalculation.\n\n On a forward propagation, ``F.forget`` calls a given function with given\n variables without creating a computational graph. That means, no\n intermediate results are stored.\n On a backward propagation, ``F.forget`` calls the given function again to\n create a computational graph for backpropagation.\n\n ``F.forget`` reduces internal memory usage, whereas it requires more\n calculation time as it calls the function twice.\n\n .. admonition:: Example\n\n Let ``f`` be a function defined as:\n\n >>> def f(a, b):\n ... 
return a + b + a\n\n and, ``x`` and ``y`` be :class:`~chainer.Variable`\\\\ s:\n\n >>> x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))\n >>> y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))\n\n When ``z`` is calculated as ``z = f(x, y)``, its intermediate result\n ``x + y`` is stored in memory. Instead, if you call ``f`` with\n ``F.forget``:\n\n >>> z = F.forget(f, x, y)\n\n intermediate ``x + y`` is forgotten.\n\n .. note::\n\n ``F.forget`` does not support functions which behave differently in\n multiple calls with the same inputs, such as\n :meth:`F.dropout() <chainer.functions.dropout>` and\n :meth:`F.negative_sampling() <chainer.functions.negative_sampling>`.\n\n .. note::\n\n In case input argument variables are of class :class:`numpy.ndarray` or\n :class:`cupy.ndarray` objects, arguments will automatically be\n converted to :class:`~chainer.Variable`\\\\ s.\n This conversion takes place to ensure that this function is included\n in the computational graph to enable backward computations.\n\n .. note::\n\n ``F.forget`` does not support double backpropagation.\n\n Args:\n func (callable): A function to call. It needs to be called with\n :class:`~chainer.Variable` object(s) and to return a\n :class:`~chainer.Variable` object or a tuple of\n :class:`~chainer.Variable` objects.\n xs (~chainer.Variable): Argument variables of the function.\n\n Returns:\n ~chainer.Variable: A variable ``func`` returns. If it returns a tuple,\n the method returns a tuple too.\n\n \"\"\"\n xs = tuple(x if isinstance(x, variable.Variable) else\n variable.Variable(x, requires_grad=True) for x in xs)\n y = Forget(func).apply(xs)\n if len(y) == 1:\n y, = y\n return y\n", "path": "chainer/functions/util/forget.py"}]} | 2,886 | 117 |
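The replacement example in the diff above (`(a + b) * a` instead of `a + b + a`) works because multiplication, unlike addition, retains its inputs for backward, so the intermediate `a + b` really is kept in memory; a quick check in the spirit of the traversal script from the issue (assuming chainer and numpy are installed):

```python
import chainer
import numpy as np

x = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))
y = chainer.Variable(np.random.uniform(-1, 1, 5).astype(np.float32))

def keeps_intermediate(z):
    # Look at the direct inputs of z's creator: anything that is not x or y
    # is the intermediate node, and its data survives only if it was retained.
    for pnode in z.creator_node.inputs:
        p = pnode.get_variable()
        if p is not x and p is not y and p.data is not None:
            return True
    return False

print(keeps_intermediate((x + y) + x))  # False: addition forgets x + y
print(keeps_intermediate((x + y) * x))  # True: multiplication keeps x + y
```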
gh_patches_debug_24326 | rasdani/github-patches | git_diff | ansible-collections__community.general-2731 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
xfconf fails to set double value when LC_NUMERIC is set to nb_NO.UTF-8
### Summary
In https://github.com/ansible-collections/community.general/pull/744 `LANGUAGE` is used to force `xfconf-query` to return doubles using the expected format. This fails when `LC_NUMERIC` is set. From the article linked to in https://github.com/ansible-collections/community.general/pull/744, it seems like setting `LANGUAGE` should override `LC_NUMERIC`, but that isn't actually the case.
The correct variable to use in this case is probably `LC_ALL`.
I've attached a terminal recording showing the results. You'll notice that in the first run the `previous_value` is `0,200000`, while after setting `LC_ALL=C` the `previous_value` becomes `0.200000`, which matches the input, so no change is needed.
I've also attached the test-play I used (with a `.txt` extension because github doesn't like `.yml`).
[recording.txt](https://github.com/ansible-collections/community.general/files/6597487/recording.txt)
[test_play.yml](https://github.com/ansible-collections/community.general/files/6597488/test_play.yml.txt)
### Issue Type
Bug Report
### Component Name
xfconf
### Ansible Version
```console (paste below)
$ ansible --version
ansible 2.10.5
config file = /home/mortenlj/.ansible.cfg
configured module search path = ['/home/mortenlj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
DEFAULT_GATHERING(/home/mortenlj/.ansible.cfg) = smart
DEFAULT_HOST_LIST(/home/mortenlj/.ansible.cfg) = ['/home/mortenlj/code/personal/ansible/hosts']
DEFAULT_VAULT_PASSWORD_FILE(/home/mortenlj/.ansible.cfg) = /home/mortenlj/.ansible/vault-pass
```
### OS / Environment
```
$ cat /etc/os-release
NAME="Ubuntu"
VERSION="21.04 (Hirsute Hippo)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 21.04"
VERSION_ID="21.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=hirsute
UBUNTU_CODENAME=hirsute
$ apt list xfce4-settings
Listing... Done
xfce4-settings/hirsute,now 4.16.0-1ubuntu1 amd64 [installed,automatic]
```
### Steps to Reproduce
See summary for links.
### Expected Results
I expect the play to not "change" the xfconf property on every run, because the new value should match the already set value.
### Actual Results
See summary. I realise I only used `-vvv`, but I don't think the extra `v` would make it any clearer.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/module_utils/mh/mixins/cmd.py]
1 # -*- coding: utf-8 -*-
2 # (c) 2020, Alexei Znamensky <[email protected]>
3 # Copyright: (c) 2020, Ansible Project
4 # Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)
5
6 from __future__ import absolute_import, division, print_function
7 __metaclass__ = type
8
9 from functools import partial
10
11
12 class ArgFormat(object):
13 """
14 Argument formatter for use as a command line parameter. Used in CmdMixin.
15 """
16 BOOLEAN = 0
17 PRINTF = 1
18 FORMAT = 2
19
20 @staticmethod
21 def stars_deco(num):
22 if num == 1:
23 def deco(f):
24 return lambda v: f(*v)
25 return deco
26 elif num == 2:
27 def deco(f):
28 return lambda v: f(**v)
29 return deco
30
31 return lambda f: f
32
33 def __init__(self, name, fmt=None, style=FORMAT, stars=0):
34 """
35 Creates a CLI-formatter for one specific argument. The argument may be a module parameter or just a named parameter for
36 the CLI command execution.
37 :param name: Name of the argument to be formatted
38 :param fmt: Either a str to be formatted (using or not printf-style) or a callable that does that
39 :param style: Whether arg_format (as str) should use printf-style formatting.
40 Ignored if arg_format is None or not a str (should be callable).
41 :param stars: A int with 0, 1 or 2 value, indicating to formatting the value as: value, *value or **value
42 """
43 def printf_fmt(_fmt, v):
44 try:
45 return [_fmt % v]
46 except TypeError as e:
47 if e.args[0] != 'not all arguments converted during string formatting':
48 raise
49 return [_fmt]
50
51 _fmts = {
52 ArgFormat.BOOLEAN: lambda _fmt, v: ([_fmt] if bool(v) else []),
53 ArgFormat.PRINTF: printf_fmt,
54 ArgFormat.FORMAT: lambda _fmt, v: [_fmt.format(v)],
55 }
56
57 self.name = name
58 self.stars = stars
59
60 if fmt is None:
61 fmt = "{0}"
62 style = ArgFormat.FORMAT
63
64 if isinstance(fmt, str):
65 func = _fmts[style]
66 self.arg_format = partial(func, fmt)
67 elif isinstance(fmt, list) or isinstance(fmt, tuple):
68 self.arg_format = lambda v: [_fmts[style](f, v)[0] for f in fmt]
69 elif hasattr(fmt, '__call__'):
70 self.arg_format = fmt
71 else:
72 raise TypeError('Parameter fmt must be either: a string, a list/tuple of '
73 'strings or a function: type={0}, value={1}'.format(type(fmt), fmt))
74
75 if stars:
76 self.arg_format = (self.stars_deco(stars))(self.arg_format)
77
78 def to_text(self, value):
79 if value is None:
80 return []
81 func = self.arg_format
82 return [str(p) for p in func(value)]
83
84
85 class CmdMixin(object):
86 """
87 Mixin for mapping module options to running a CLI command with its arguments.
88 """
89 command = None
90 command_args_formats = {}
91 run_command_fixed_options = {}
92 check_rc = False
93 force_lang = "C"
94
95 @property
96 def module_formats(self):
97 result = {}
98 for param in self.module.params.keys():
99 result[param] = ArgFormat(param)
100 return result
101
102 @property
103 def custom_formats(self):
104 result = {}
105 for param, fmt_spec in self.command_args_formats.items():
106 result[param] = ArgFormat(param, **fmt_spec)
107 return result
108
109 def _calculate_args(self, extra_params=None, params=None):
110 def add_arg_formatted_param(_cmd_args, arg_format, _value):
111 args = list(arg_format.to_text(_value))
112 return _cmd_args + args
113
114 def find_format(_param):
115 return self.custom_formats.get(_param, self.module_formats.get(_param))
116
117 extra_params = extra_params or dict()
118 cmd_args = list([self.command]) if isinstance(self.command, str) else list(self.command)
119 try:
120 cmd_args[0] = self.module.get_bin_path(cmd_args[0], required=True)
121 except ValueError:
122 pass
123 param_list = params if params else self.vars.keys()
124
125 for param in param_list:
126 if isinstance(param, dict):
127 if len(param) != 1:
128 raise self.ModuleHelperException("run_command parameter as a dict must "
129 "contain only one key: {0}".format(param))
130 _param = list(param.keys())[0]
131 fmt = find_format(_param)
132 value = param[_param]
133 elif isinstance(param, str):
134 if param in self.vars.keys():
135 fmt = find_format(param)
136 value = self.vars[param]
137 elif param in extra_params:
138 fmt = find_format(param)
139 value = extra_params[param]
140 else:
141 self.module.deprecate("Cannot determine value for parameter: {0}. "
142 "From version 4.0.0 onwards this will generate an exception".format(param),
143 version="4.0.0", collection_name="community.general")
144 continue
145
146 else:
147 raise self.ModuleHelperException("run_command parameter must be either a str or a dict: {0}".format(param))
148 cmd_args = add_arg_formatted_param(cmd_args, fmt, value)
149
150 return cmd_args
151
152 def process_command_output(self, rc, out, err):
153 return rc, out, err
154
155 def run_command(self, extra_params=None, params=None, process_output=None, *args, **kwargs):
156 self.vars.cmd_args = self._calculate_args(extra_params, params)
157 options = dict(self.run_command_fixed_options)
158 env_update = dict(options.get('environ_update', {}))
159 options['check_rc'] = options.get('check_rc', self.check_rc)
160 if self.force_lang:
161 env_update.update({'LANGUAGE': self.force_lang})
162 self.update_output(force_lang=self.force_lang)
163 options['environ_update'] = env_update
164 options.update(kwargs)
165 rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)
166 self.update_output(rc=rc, stdout=out, stderr=err)
167 if process_output is None:
168 _process = self.process_command_output
169 else:
170 _process = process_output
171
172 return _process(rc, out, err)
173
[end of plugins/module_utils/mh/mixins/cmd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/module_utils/mh/mixins/cmd.py b/plugins/module_utils/mh/mixins/cmd.py
--- a/plugins/module_utils/mh/mixins/cmd.py
+++ b/plugins/module_utils/mh/mixins/cmd.py
@@ -155,13 +155,16 @@
def run_command(self, extra_params=None, params=None, process_output=None, *args, **kwargs):
self.vars.cmd_args = self._calculate_args(extra_params, params)
options = dict(self.run_command_fixed_options)
- env_update = dict(options.get('environ_update', {}))
options['check_rc'] = options.get('check_rc', self.check_rc)
+ options.update(kwargs)
+ env_update = dict(options.get('environ_update', {}))
if self.force_lang:
- env_update.update({'LANGUAGE': self.force_lang})
+ env_update.update({
+ 'LANGUAGE': self.force_lang,
+ 'LC_ALL': self.force_lang,
+ })
self.update_output(force_lang=self.force_lang)
options['environ_update'] = env_update
- options.update(kwargs)
rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)
self.update_output(rc=rc, stdout=out, stderr=err)
if process_output is None:
| {"golden_diff": "diff --git a/plugins/module_utils/mh/mixins/cmd.py b/plugins/module_utils/mh/mixins/cmd.py\n--- a/plugins/module_utils/mh/mixins/cmd.py\n+++ b/plugins/module_utils/mh/mixins/cmd.py\n@@ -155,13 +155,16 @@\n def run_command(self, extra_params=None, params=None, process_output=None, *args, **kwargs):\n self.vars.cmd_args = self._calculate_args(extra_params, params)\n options = dict(self.run_command_fixed_options)\n- env_update = dict(options.get('environ_update', {}))\n options['check_rc'] = options.get('check_rc', self.check_rc)\n+ options.update(kwargs)\n+ env_update = dict(options.get('environ_update', {}))\n if self.force_lang:\n- env_update.update({'LANGUAGE': self.force_lang})\n+ env_update.update({\n+ 'LANGUAGE': self.force_lang,\n+ 'LC_ALL': self.force_lang,\n+ })\n self.update_output(force_lang=self.force_lang)\n options['environ_update'] = env_update\n- options.update(kwargs)\n rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)\n self.update_output(rc=rc, stdout=out, stderr=err)\n if process_output is None:\n", "issue": "xfconf fails to set double value when LC_NUMERIC is set to nb_NO.UTF-8\n### Summary\r\n\r\nIn https://github.com/ansible-collections/community.general/pull/744 `LANGUAGE` is used to force `xfconf-query` to return doubles using the expected format. This fails when `LC_NUMERIC` is set. From the article linked to in https://github.com/ansible-collections/community.general/pull/744, it seems like setting `LANGUAGE` should override `LC_NUMERIC`, but that isn't actually the case.\r\n\r\nThe correct variable to use in this case is probably `LC_ALL`.\r\n\r\nI've attached a terminal recording showing the results. You'll notice that in the first run, the `previous_value` is `0,200000`, while after setting `LC_ALL=C`, the `previous_value` becomes `0.200000` which matches the input and no change is needed.\r\n\r\nI've also attached the test-play I used (with a `.txt` extension because github doesn't like `.yml`).\r\n\r\n[recording.txt](https://github.com/ansible-collections/community.general/files/6597487/recording.txt)\r\n\r\n[test_play.yml](https://github.com/ansible-collections/community.general/files/6597488/test_play.yml.txt)\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\nxfconf\r\n\r\n### Ansible Version\r\n\r\n```console (paste below)\r\n$ ansible --version\r\nansible 2.10.5\r\n config file = /home/mortenlj/.ansible.cfg\r\n configured module search path = ['/home/mortenlj/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.9.5 (default, May 11 2021, 08:20:37) [GCC 10.3.0]\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\nDEFAULT_GATHERING(/home/mortenlj/.ansible.cfg) = smart\r\nDEFAULT_HOST_LIST(/home/mortenlj/.ansible.cfg) = ['/home/mortenlj/code/personal/ansible/hosts']\r\nDEFAULT_VAULT_PASSWORD_FILE(/home/mortenlj/.ansible.cfg) = /home/mortenlj/.ansible/vault-pass\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\n```\r\n$ cat /etc/os-release \r\nNAME=\"Ubuntu\"\r\nVERSION=\"21.04 (Hirsute Hippo)\"\r\nID=ubuntu\r\nID_LIKE=debian\r\nPRETTY_NAME=\"Ubuntu 
21.04\"\r\nVERSION_ID=\"21.04\"\r\nHOME_URL=\"https://www.ubuntu.com/\"\r\nSUPPORT_URL=\"https://help.ubuntu.com/\"\r\nBUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\"\r\nPRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\"\r\nVERSION_CODENAME=hirsute\r\nUBUNTU_CODENAME=hirsute\r\n\r\n$ apt list xfce4-settings\r\nListing... Done\r\nxfce4-settings/hirsute,now 4.16.0-1ubuntu1 amd64 [installed,automatic]\r\n```\r\n\r\n\r\n### Steps to Reproduce\r\n\r\nSee summary for links.\r\n\r\n### Expected Results\r\n\r\nI expect the play to not \"change\" the xfconf property on every run, because the new value should match the already set value.\r\n\r\n### Actual Results\r\n\r\nSee summary. I realise I only used `-vvv`, but I don't think the extra `v` would make it any clearer.\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# (c) 2020, Alexei Znamensky <[email protected]>\n# Copyright: (c) 2020, Ansible Project\n# Simplified BSD License (see licenses/simplified_bsd.txt or https://opensource.org/licenses/BSD-2-Clause)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\nfrom functools import partial\n\n\nclass ArgFormat(object):\n \"\"\"\n Argument formatter for use as a command line parameter. Used in CmdMixin.\n \"\"\"\n BOOLEAN = 0\n PRINTF = 1\n FORMAT = 2\n\n @staticmethod\n def stars_deco(num):\n if num == 1:\n def deco(f):\n return lambda v: f(*v)\n return deco\n elif num == 2:\n def deco(f):\n return lambda v: f(**v)\n return deco\n\n return lambda f: f\n\n def __init__(self, name, fmt=None, style=FORMAT, stars=0):\n \"\"\"\n Creates a CLI-formatter for one specific argument. The argument may be a module parameter or just a named parameter for\n the CLI command execution.\n :param name: Name of the argument to be formatted\n :param fmt: Either a str to be formatted (using or not printf-style) or a callable that does that\n :param style: Whether arg_format (as str) should use printf-style formatting.\n Ignored if arg_format is None or not a str (should be callable).\n :param stars: A int with 0, 1 or 2 value, indicating to formatting the value as: value, *value or **value\n \"\"\"\n def printf_fmt(_fmt, v):\n try:\n return [_fmt % v]\n except TypeError as e:\n if e.args[0] != 'not all arguments converted during string formatting':\n raise\n return [_fmt]\n\n _fmts = {\n ArgFormat.BOOLEAN: lambda _fmt, v: ([_fmt] if bool(v) else []),\n ArgFormat.PRINTF: printf_fmt,\n ArgFormat.FORMAT: lambda _fmt, v: [_fmt.format(v)],\n }\n\n self.name = name\n self.stars = stars\n\n if fmt is None:\n fmt = \"{0}\"\n style = ArgFormat.FORMAT\n\n if isinstance(fmt, str):\n func = _fmts[style]\n self.arg_format = partial(func, fmt)\n elif isinstance(fmt, list) or isinstance(fmt, tuple):\n self.arg_format = lambda v: [_fmts[style](f, v)[0] for f in fmt]\n elif hasattr(fmt, '__call__'):\n self.arg_format = fmt\n else:\n raise TypeError('Parameter fmt must be either: a string, a list/tuple of '\n 'strings or a function: type={0}, value={1}'.format(type(fmt), fmt))\n\n if stars:\n self.arg_format = (self.stars_deco(stars))(self.arg_format)\n\n def to_text(self, value):\n if value is None:\n return []\n func = self.arg_format\n return [str(p) for p in func(value)]\n\n\nclass CmdMixin(object):\n \"\"\"\n Mixin for mapping module options to running a CLI command with its arguments.\n \"\"\"\n command = None\n command_args_formats = {}\n 
run_command_fixed_options = {}\n check_rc = False\n force_lang = \"C\"\n\n @property\n def module_formats(self):\n result = {}\n for param in self.module.params.keys():\n result[param] = ArgFormat(param)\n return result\n\n @property\n def custom_formats(self):\n result = {}\n for param, fmt_spec in self.command_args_formats.items():\n result[param] = ArgFormat(param, **fmt_spec)\n return result\n\n def _calculate_args(self, extra_params=None, params=None):\n def add_arg_formatted_param(_cmd_args, arg_format, _value):\n args = list(arg_format.to_text(_value))\n return _cmd_args + args\n\n def find_format(_param):\n return self.custom_formats.get(_param, self.module_formats.get(_param))\n\n extra_params = extra_params or dict()\n cmd_args = list([self.command]) if isinstance(self.command, str) else list(self.command)\n try:\n cmd_args[0] = self.module.get_bin_path(cmd_args[0], required=True)\n except ValueError:\n pass\n param_list = params if params else self.vars.keys()\n\n for param in param_list:\n if isinstance(param, dict):\n if len(param) != 1:\n raise self.ModuleHelperException(\"run_command parameter as a dict must \"\n \"contain only one key: {0}\".format(param))\n _param = list(param.keys())[0]\n fmt = find_format(_param)\n value = param[_param]\n elif isinstance(param, str):\n if param in self.vars.keys():\n fmt = find_format(param)\n value = self.vars[param]\n elif param in extra_params:\n fmt = find_format(param)\n value = extra_params[param]\n else:\n self.module.deprecate(\"Cannot determine value for parameter: {0}. \"\n \"From version 4.0.0 onwards this will generate an exception\".format(param),\n version=\"4.0.0\", collection_name=\"community.general\")\n continue\n\n else:\n raise self.ModuleHelperException(\"run_command parameter must be either a str or a dict: {0}\".format(param))\n cmd_args = add_arg_formatted_param(cmd_args, fmt, value)\n\n return cmd_args\n\n def process_command_output(self, rc, out, err):\n return rc, out, err\n\n def run_command(self, extra_params=None, params=None, process_output=None, *args, **kwargs):\n self.vars.cmd_args = self._calculate_args(extra_params, params)\n options = dict(self.run_command_fixed_options)\n env_update = dict(options.get('environ_update', {}))\n options['check_rc'] = options.get('check_rc', self.check_rc)\n if self.force_lang:\n env_update.update({'LANGUAGE': self.force_lang})\n self.update_output(force_lang=self.force_lang)\n options['environ_update'] = env_update\n options.update(kwargs)\n rc, out, err = self.module.run_command(self.vars.cmd_args, *args, **options)\n self.update_output(rc=rc, stdout=out, stderr=err)\n if process_output is None:\n _process = self.process_command_output\n else:\n _process = process_output\n\n return _process(rc, out, err)\n", "path": "plugins/module_utils/mh/mixins/cmd.py"}]} | 3,197 | 286 |
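The patch above exports `LC_ALL` alongside `LANGUAGE` because `LC_ALL` is the variable that actually overrides `LC_NUMERIC` for number formatting; the effect is easy to see without `xfconf-query` at all, e.g. with Python's locale module (this standalone snippet assumes the `nb_NO.UTF-8` locale is generated on the machine):

```python
import locale

value = 0.2

# With a Norwegian numeric locale, localized printf-style formatting uses a
# decimal comma -- the same effect that made xfconf-query report "0,200000".
locale.setlocale(locale.LC_ALL, "nb_NO.UTF-8")
print(locale.format_string("%f", value))   # 0,200000

# Forcing the C locale (what passing LC_ALL=C to the child process does)
# brings back the dot-separated form the module compares against.
locale.setlocale(locale.LC_ALL, "C")
print(locale.format_string("%f", value))   # 0.200000
```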
gh_patches_debug_20847 | rasdani/github-patches | git_diff | pytorch__vision-2754 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Has anyone successfully compiled the C++ API on Windows, including the cuda code?
I have tried many versions of torchvision on Windows, and none of them works for me.
1. The old version 0.5.0 simply does not include the CUDA code in the build.
2. Then I tried 0.6.0, which fails with an error that the "-openmp" option is not supported. After a lot of searching and trying, I solved it by deleting one line in torchvision.vcxproj.
3. 0.6.0 then compiles, but it gives an "unresolved external symbol" error when I try to use nms_cuda; according to issue #2139, the CUDA implementations of the operators are not included in torchvision.lib.
4. Then I tried the recent 0.7.0 tag and it gives the "A single input file is required for a non-link phase when an outputfile is specified" error; this one (#2677) simply no one here knows how to solve. Other resources suggest it might be due to misplaced white space...
Could you point me to a version that works on Windows? Or is Windows simply not supported?
cc @peterjc123 @nbcsm @guyang3532 @maxluk @gunandrose4u @smartcat2010 @mszhanyi
</issue>
<code>
[start of .circleci/regenerate.py]
1 #!/usr/bin/env python3
2
3 """
4 This script should use a very simple, functional programming style.
5 Avoid Jinja macros in favor of native Python functions.
6
7 Don't go overboard on code generation; use Python only to generate
8 content that can't be easily declared statically using CircleCI's YAML API.
9
10 Data declarations (e.g. the nested loops for defining the configuration matrix)
11 should be at the top of the file for easy updating.
12
13 See this comment for design rationale:
14 https://github.com/pytorch/vision/pull/1321#issuecomment-531033978
15 """
16
17 import jinja2
18 import yaml
19 import os.path
20
21
22 PYTHON_VERSIONS = ["3.6", "3.7", "3.8"]
23
24
25 def build_workflows(prefix='', filter_branch=None, upload=False, indentation=6, windows_latest_only=False):
26 w = []
27 for btype in ["wheel", "conda"]:
28 for os_type in ["linux", "macos", "win"]:
29 python_versions = PYTHON_VERSIONS
30 cu_versions_dict = {"linux": ["cpu", "cu92", "cu101", "cu102", "cu110"],
31 "win": ["cpu", "cu101", "cu102", "cu110"],
32 "macos": ["cpu"]}
33 cu_versions = cu_versions_dict[os_type]
34 for python_version in python_versions:
35 for cu_version in cu_versions:
36 for unicode in ([False, True] if btype == "wheel" and python_version == "2.7" else [False]):
37 fb = filter_branch
38 if windows_latest_only and os_type == "win" and filter_branch is None and \
39 (python_version != python_versions[-1] or
40 (cu_version not in [cu_versions[0], cu_versions[-1]])):
41 fb = "master"
42 w += workflow_pair(
43 btype, os_type, python_version, cu_version,
44 unicode, prefix, upload, filter_branch=fb)
45
46 return indent(indentation, w)
47
48
49 def workflow_pair(btype, os_type, python_version, cu_version, unicode, prefix='', upload=False, *, filter_branch=None):
50
51 w = []
52 unicode_suffix = "u" if unicode else ""
53 base_workflow_name = f"{prefix}binary_{os_type}_{btype}_py{python_version}{unicode_suffix}_{cu_version}"
54
55 w.append(generate_base_workflow(
56 base_workflow_name, python_version, cu_version,
57 unicode, os_type, btype, filter_branch=filter_branch))
58
59 if upload:
60 w.append(generate_upload_workflow(base_workflow_name, os_type, btype, cu_version, filter_branch=filter_branch))
61 if filter_branch == 'nightly' and os_type in ['linux', 'win']:
62 pydistro = 'pip' if btype == 'wheel' else 'conda'
63 w.append(generate_smoketest_workflow(pydistro, base_workflow_name, filter_branch, python_version, os_type))
64
65 return w
66
67
68 manylinux_images = {
69 "cu92": "pytorch/manylinux-cuda92",
70 "cu101": "pytorch/manylinux-cuda101",
71 "cu102": "pytorch/manylinux-cuda102",
72 "cu110": "pytorch/manylinux-cuda110",
73 }
74
75
76 def get_manylinux_image(cu_version):
77 cu_suffix = "102"
78 if cu_version.startswith('cu'):
79 cu_suffix = cu_version[len('cu'):]
80 return f"pytorch/manylinux-cuda{cu_suffix}"
81
82
83 def generate_base_workflow(base_workflow_name, python_version, cu_version,
84 unicode, os_type, btype, *, filter_branch=None):
85
86 d = {
87 "name": base_workflow_name,
88 "python_version": python_version,
89 "cu_version": cu_version,
90 }
91
92 if os_type != "win" and unicode:
93 d["unicode_abi"] = '1'
94
95 if os_type != "win":
96 d["wheel_docker_image"] = get_manylinux_image(cu_version)
97
98 if filter_branch is not None:
99 d["filters"] = {
100 "branches": {
101 "only": filter_branch
102 },
103 "tags": {
104 # Using a raw string here to avoid having to escape
105 # anything
106 "only": r"/v[0-9]+(\.[0-9]+)*-rc[0-9]+/"
107 }
108 }
109
110 w = f"binary_{os_type}_{btype}"
111 return {w: d}
112
113
114 def gen_filter_branch_tree(*branches):
115 return {"branches": {"only": [b for b in branches]}}
116
117
118 def generate_upload_workflow(base_workflow_name, os_type, btype, cu_version, *, filter_branch=None):
119 d = {
120 "name": f"{base_workflow_name}_upload",
121 "context": "org-member",
122 "requires": [base_workflow_name],
123 }
124
125 if btype == 'wheel':
126 d["subfolder"] = "" if os_type == 'macos' else cu_version + "/"
127
128 if filter_branch is not None:
129 d["filters"] = {
130 "branches": {
131 "only": filter_branch
132 },
133 "tags": {
134 # Using a raw string here to avoid having to escape
135 # anything
136 "only": r"/v[0-9]+(\.[0-9]+)*-rc[0-9]+/"
137 }
138 }
139
140 return {f"binary_{btype}_upload": d}
141
142
143 def generate_smoketest_workflow(pydistro, base_workflow_name, filter_branch, python_version, os_type):
144
145 required_build_suffix = "_upload"
146 required_build_name = base_workflow_name + required_build_suffix
147
148 smoke_suffix = f"smoke_test_{pydistro}"
149 d = {
150 "name": f"{base_workflow_name}_{smoke_suffix}",
151 "requires": [required_build_name],
152 "python_version": python_version,
153 }
154
155 if filter_branch:
156 d["filters"] = gen_filter_branch_tree(filter_branch)
157
158 return {"smoke_test_{os_type}_{pydistro}".format(os_type=os_type, pydistro=pydistro): d}
159
160
161 def indent(indentation, data_list):
162 return ("\n" + " " * indentation).join(
163 yaml.dump(data_list, default_flow_style=False).splitlines())
164
165
166 def unittest_workflows(indentation=6):
167 jobs = []
168 for os_type in ["linux", "windows", "macos"]:
169 for device_type in ["cpu", "gpu"]:
170 if os_type == "macos" and device_type == "gpu":
171 continue
172 for i, python_version in enumerate(PYTHON_VERSIONS):
173 job = {
174 "name": f"unittest_{os_type}_{device_type}_py{python_version}",
175 "python_version": python_version,
176 }
177
178 if device_type == 'gpu':
179 if python_version != "3.8":
180 job['filters'] = gen_filter_branch_tree('master', 'nightly')
181 job['cu_version'] = 'cu101'
182 else:
183 job['cu_version'] = 'cpu'
184
185 jobs.append({f"unittest_{os_type}_{device_type}": job})
186
187 return indent(indentation, jobs)
188
189
190 def cmake_workflows(indentation=6):
191 jobs = []
192 python_version = '3.8'
193 for os_type in ['linux', 'windows', 'macos']:
194 # Right now CMake builds are failling on Windows (GPU)
195 device_types = ['cpu', 'gpu'] if os_type == 'linux' else ['cpu']
196 for device in device_types:
197 job = {
198 'name': f'cmake_{os_type}_{device}',
199 'python_version': python_version
200 }
201
202 job['cu_version'] = 'cu101' if device == 'gpu' else 'cpu'
203 if device == 'gpu':
204 job['wheel_docker_image'] = 'pytorch/manylinux-cuda101'
205 jobs.append({f'cmake_{os_type}_{device}': job})
206 return indent(indentation, jobs)
207
208
209 if __name__ == "__main__":
210 d = os.path.dirname(__file__)
211 env = jinja2.Environment(
212 loader=jinja2.FileSystemLoader(d),
213 lstrip_blocks=True,
214 autoescape=False,
215 )
216
217 with open(os.path.join(d, 'config.yml'), 'w') as f:
218 f.write(env.get_template('config.yml.in').render(
219 build_workflows=build_workflows,
220 unittest_workflows=unittest_workflows,
221 cmake_workflows=cmake_workflows,
222 ))
223
[end of .circleci/regenerate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.circleci/regenerate.py b/.circleci/regenerate.py
--- a/.circleci/regenerate.py
+++ b/.circleci/regenerate.py
@@ -191,8 +191,8 @@
jobs = []
python_version = '3.8'
for os_type in ['linux', 'windows', 'macos']:
- # Right now CMake builds are failling on Windows (GPU)
- device_types = ['cpu', 'gpu'] if os_type == 'linux' else ['cpu']
+ # Skip OSX CUDA
+ device_types = ['cpu', 'gpu'] if os_type != 'macos' else ['cpu']
for device in device_types:
job = {
'name': f'cmake_{os_type}_{device}',
@@ -200,7 +200,7 @@
}
job['cu_version'] = 'cu101' if device == 'gpu' else 'cpu'
- if device == 'gpu':
+ if device == 'gpu' and os_type == 'linux':
job['wheel_docker_image'] = 'pytorch/manylinux-cuda101'
jobs.append({f'cmake_{os_type}_{device}': job})
return indent(indentation, jobs)
| {"golden_diff": "diff --git a/.circleci/regenerate.py b/.circleci/regenerate.py\n--- a/.circleci/regenerate.py\n+++ b/.circleci/regenerate.py\n@@ -191,8 +191,8 @@\n jobs = []\n python_version = '3.8'\n for os_type in ['linux', 'windows', 'macos']:\n- # Right now CMake builds are failling on Windows (GPU)\n- device_types = ['cpu', 'gpu'] if os_type == 'linux' else ['cpu']\n+ # Skip OSX CUDA\n+ device_types = ['cpu', 'gpu'] if os_type != 'macos' else ['cpu']\n for device in device_types:\n job = {\n 'name': f'cmake_{os_type}_{device}',\n@@ -200,7 +200,7 @@\n }\n \n job['cu_version'] = 'cu101' if device == 'gpu' else 'cpu'\n- if device == 'gpu':\n+ if device == 'gpu' and os_type == 'linux':\n job['wheel_docker_image'] = 'pytorch/manylinux-cuda101'\n jobs.append({f'cmake_{os_type}_{device}': job})\n return indent(indentation, jobs)\n", "issue": "Has anyone successfully compiled the C++ API on Windows, including the cuda code?\nI have tried many versions of torchvision, none seems to work for me. (on Windows)\r\n1, The old version 0.5.0 simply does not include cuda code into the build. \r\n2, Then I tried 0.6.0, it has the \"-openmp\" not support error. After a lot of searching and trying, I solved it by deleting one line in the torchvision.vcxproj.\r\n3, 0.6.0 finally gets to compile, but it gives \"unresolved external symbol\" error if i want to use nms_cuda, according to this issue #2139, the cuda impl of operators are not included in torchvision.lib.\r\n4, Then I tried the recent 0.7.0 tag and it gives \"A single input file is required for a non-link phase when an outputfile is specified\" error, this one (#2677) simply no one here knows how to solve it. Other resources suggests it might due to misplaced white space...\r\n\r\nCould you point me to a version that works on Windows? Or simply no support for Windows?\r\n\n\ncc @peterjc123 @nbcsm @guyang3532 @maxluk @gunandrose4u @smartcat2010 @mszhanyi\n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"\nThis script should use a very simple, functional programming style.\nAvoid Jinja macros in favor of native Python functions.\n\nDon't go overboard on code generation; use Python only to generate\ncontent that can't be easily declared statically using CircleCI's YAML API.\n\nData declarations (e.g. 
the nested loops for defining the configuration matrix)\nshould be at the top of the file for easy updating.\n\nSee this comment for design rationale:\nhttps://github.com/pytorch/vision/pull/1321#issuecomment-531033978\n\"\"\"\n\nimport jinja2\nimport yaml\nimport os.path\n\n\nPYTHON_VERSIONS = [\"3.6\", \"3.7\", \"3.8\"]\n\n\ndef build_workflows(prefix='', filter_branch=None, upload=False, indentation=6, windows_latest_only=False):\n w = []\n for btype in [\"wheel\", \"conda\"]:\n for os_type in [\"linux\", \"macos\", \"win\"]:\n python_versions = PYTHON_VERSIONS\n cu_versions_dict = {\"linux\": [\"cpu\", \"cu92\", \"cu101\", \"cu102\", \"cu110\"],\n \"win\": [\"cpu\", \"cu101\", \"cu102\", \"cu110\"],\n \"macos\": [\"cpu\"]}\n cu_versions = cu_versions_dict[os_type]\n for python_version in python_versions:\n for cu_version in cu_versions:\n for unicode in ([False, True] if btype == \"wheel\" and python_version == \"2.7\" else [False]):\n fb = filter_branch\n if windows_latest_only and os_type == \"win\" and filter_branch is None and \\\n (python_version != python_versions[-1] or\n (cu_version not in [cu_versions[0], cu_versions[-1]])):\n fb = \"master\"\n w += workflow_pair(\n btype, os_type, python_version, cu_version,\n unicode, prefix, upload, filter_branch=fb)\n\n return indent(indentation, w)\n\n\ndef workflow_pair(btype, os_type, python_version, cu_version, unicode, prefix='', upload=False, *, filter_branch=None):\n\n w = []\n unicode_suffix = \"u\" if unicode else \"\"\n base_workflow_name = f\"{prefix}binary_{os_type}_{btype}_py{python_version}{unicode_suffix}_{cu_version}\"\n\n w.append(generate_base_workflow(\n base_workflow_name, python_version, cu_version,\n unicode, os_type, btype, filter_branch=filter_branch))\n\n if upload:\n w.append(generate_upload_workflow(base_workflow_name, os_type, btype, cu_version, filter_branch=filter_branch))\n if filter_branch == 'nightly' and os_type in ['linux', 'win']:\n pydistro = 'pip' if btype == 'wheel' else 'conda'\n w.append(generate_smoketest_workflow(pydistro, base_workflow_name, filter_branch, python_version, os_type))\n\n return w\n\n\nmanylinux_images = {\n \"cu92\": \"pytorch/manylinux-cuda92\",\n \"cu101\": \"pytorch/manylinux-cuda101\",\n \"cu102\": \"pytorch/manylinux-cuda102\",\n \"cu110\": \"pytorch/manylinux-cuda110\",\n}\n\n\ndef get_manylinux_image(cu_version):\n cu_suffix = \"102\"\n if cu_version.startswith('cu'):\n cu_suffix = cu_version[len('cu'):]\n return f\"pytorch/manylinux-cuda{cu_suffix}\"\n\n\ndef generate_base_workflow(base_workflow_name, python_version, cu_version,\n unicode, os_type, btype, *, filter_branch=None):\n\n d = {\n \"name\": base_workflow_name,\n \"python_version\": python_version,\n \"cu_version\": cu_version,\n }\n\n if os_type != \"win\" and unicode:\n d[\"unicode_abi\"] = '1'\n\n if os_type != \"win\":\n d[\"wheel_docker_image\"] = get_manylinux_image(cu_version)\n\n if filter_branch is not None:\n d[\"filters\"] = {\n \"branches\": {\n \"only\": filter_branch\n },\n \"tags\": {\n # Using a raw string here to avoid having to escape\n # anything\n \"only\": r\"/v[0-9]+(\\.[0-9]+)*-rc[0-9]+/\"\n }\n }\n\n w = f\"binary_{os_type}_{btype}\"\n return {w: d}\n\n\ndef gen_filter_branch_tree(*branches):\n return {\"branches\": {\"only\": [b for b in branches]}}\n\n\ndef generate_upload_workflow(base_workflow_name, os_type, btype, cu_version, *, filter_branch=None):\n d = {\n \"name\": f\"{base_workflow_name}_upload\",\n \"context\": \"org-member\",\n \"requires\": [base_workflow_name],\n }\n\n if btype 
== 'wheel':\n d[\"subfolder\"] = \"\" if os_type == 'macos' else cu_version + \"/\"\n\n if filter_branch is not None:\n d[\"filters\"] = {\n \"branches\": {\n \"only\": filter_branch\n },\n \"tags\": {\n # Using a raw string here to avoid having to escape\n # anything\n \"only\": r\"/v[0-9]+(\\.[0-9]+)*-rc[0-9]+/\"\n }\n }\n\n return {f\"binary_{btype}_upload\": d}\n\n\ndef generate_smoketest_workflow(pydistro, base_workflow_name, filter_branch, python_version, os_type):\n\n required_build_suffix = \"_upload\"\n required_build_name = base_workflow_name + required_build_suffix\n\n smoke_suffix = f\"smoke_test_{pydistro}\"\n d = {\n \"name\": f\"{base_workflow_name}_{smoke_suffix}\",\n \"requires\": [required_build_name],\n \"python_version\": python_version,\n }\n\n if filter_branch:\n d[\"filters\"] = gen_filter_branch_tree(filter_branch)\n\n return {\"smoke_test_{os_type}_{pydistro}\".format(os_type=os_type, pydistro=pydistro): d}\n\n\ndef indent(indentation, data_list):\n return (\"\\n\" + \" \" * indentation).join(\n yaml.dump(data_list, default_flow_style=False).splitlines())\n\n\ndef unittest_workflows(indentation=6):\n jobs = []\n for os_type in [\"linux\", \"windows\", \"macos\"]:\n for device_type in [\"cpu\", \"gpu\"]:\n if os_type == \"macos\" and device_type == \"gpu\":\n continue\n for i, python_version in enumerate(PYTHON_VERSIONS):\n job = {\n \"name\": f\"unittest_{os_type}_{device_type}_py{python_version}\",\n \"python_version\": python_version,\n }\n\n if device_type == 'gpu':\n if python_version != \"3.8\":\n job['filters'] = gen_filter_branch_tree('master', 'nightly')\n job['cu_version'] = 'cu101'\n else:\n job['cu_version'] = 'cpu'\n\n jobs.append({f\"unittest_{os_type}_{device_type}\": job})\n\n return indent(indentation, jobs)\n\n\ndef cmake_workflows(indentation=6):\n jobs = []\n python_version = '3.8'\n for os_type in ['linux', 'windows', 'macos']:\n # Right now CMake builds are failling on Windows (GPU)\n device_types = ['cpu', 'gpu'] if os_type == 'linux' else ['cpu']\n for device in device_types:\n job = {\n 'name': f'cmake_{os_type}_{device}',\n 'python_version': python_version\n }\n\n job['cu_version'] = 'cu101' if device == 'gpu' else 'cpu'\n if device == 'gpu':\n job['wheel_docker_image'] = 'pytorch/manylinux-cuda101'\n jobs.append({f'cmake_{os_type}_{device}': job})\n return indent(indentation, jobs)\n\n\nif __name__ == \"__main__\":\n d = os.path.dirname(__file__)\n env = jinja2.Environment(\n loader=jinja2.FileSystemLoader(d),\n lstrip_blocks=True,\n autoescape=False,\n )\n\n with open(os.path.join(d, 'config.yml'), 'w') as f:\n f.write(env.get_template('config.yml.in').render(\n build_workflows=build_workflows,\n unittest_workflows=unittest_workflows,\n cmake_workflows=cmake_workflows,\n ))\n", "path": ".circleci/regenerate.py"}]} | 3,287 | 280 |
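The patch in the row above replaces the Linux-only GPU matrix with "everything except macOS" and restricts the manylinux CUDA image to Linux GPU jobs. A standalone sketch of that selection logic, with the Jinja/YAML indentation machinery from `regenerate.py` stripped out — the `cu101` default, the image tag and the field names are taken from the listing above, the rest is illustrative:

```python
def cmake_matrix(os_types=("linux", "windows", "macos")):
    """Mirror of the patched per-OS device selection, outside the CircleCI template."""
    jobs = []
    for os_type in os_types:
        # Skip OSX CUDA: every OS except macOS gets a GPU variant.
        device_types = ["cpu", "gpu"] if os_type != "macos" else ["cpu"]
        for device in device_types:
            job = {
                "name": f"cmake_{os_type}_{device}",
                "python_version": "3.8",
                "cu_version": "cu101" if device == "gpu" else "cpu",
            }
            # Only Linux GPU jobs build inside the manylinux CUDA image.
            if device == "gpu" and os_type == "linux":
                job["wheel_docker_image"] = "pytorch/manylinux-cuda101"
            jobs.append(job)
    return jobs

if __name__ == "__main__":
    for job in cmake_matrix():
        print(job)  # cmake_windows_gpu now appears, without a wheel_docker_image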
gh_patches_debug_47845 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-550 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
About page requires login
**Describe the bug**
Accessing the "About this server" link (https://bookwyrm.social/about) redirects to login
**To Reproduce**
Steps to reproduce the behavior:
1. Go to https://bookwyrm.social/about
2. redirected to login instead of seeing an about page (the URL is login/?next=/about)
**Expected behavior**
Access to information about this site / server
**Desktop (please complete the following information):**
- OS: linux
- Browser firefox
- Version 85 (developer edition)
</issue>
<code>
[start of bookwyrm/views/landing.py]
1 ''' non-interactive pages '''
2 from django.contrib.auth.decorators import login_required
3 from django.core.paginator import Paginator
4 from django.db.models import Avg, Max
5 from django.template.response import TemplateResponse
6 from django.utils import timezone
7 from django.utils.decorators import method_decorator
8 from django.views import View
9
10 from bookwyrm import forms, models
11 from bookwyrm.settings import PAGE_LENGTH
12 from .helpers import get_activity_feed
13
14
15 # pylint: disable= no-self-use
16 @method_decorator(login_required, name='dispatch')
17 class About(View):
18 ''' create invites '''
19 def get(self, request):
20 ''' more information about the instance '''
21 data = {
22 'title': 'About',
23 }
24 return TemplateResponse(request, 'about.html', data)
25
26 class Home(View):
27 ''' discover page or home feed depending on auth '''
28 def get(self, request):
29 ''' this is the same as the feed on the home tab '''
30 if request.user.is_authenticated:
31 feed_view = Feed.as_view()
32 return feed_view(request, 'home')
33 discover_view = Discover.as_view()
34 return discover_view(request)
35
36 class Discover(View):
37 ''' preview of recently reviewed books '''
38 def get(self, request):
39 ''' tiled book activity page '''
40 books = models.Edition.objects.filter(
41 review__published_date__isnull=False,
42 review__user__local=True,
43 review__privacy__in=['public', 'unlisted'],
44 ).exclude(
45 cover__exact=''
46 ).annotate(
47 Max('review__published_date')
48 ).order_by('-review__published_date__max')[:6]
49
50 ratings = {}
51 for book in books:
52 reviews = models.Review.objects.filter(
53 book__in=book.parent_work.editions.all()
54 )
55 reviews = get_activity_feed(
56 request.user, ['public', 'unlisted'], queryset=reviews)
57 ratings[book.id] = reviews.aggregate(Avg('rating'))['rating__avg']
58 data = {
59 'title': 'Discover',
60 'register_form': forms.RegisterForm(),
61 'books': list(set(books)),
62 'ratings': ratings
63 }
64 return TemplateResponse(request, 'discover.html', data)
65
66
67 @method_decorator(login_required, name='dispatch')
68 class Feed(View):
69 ''' activity stream '''
70 def get(self, request, tab):
71 ''' user's homepage with activity feed '''
72 try:
73 page = int(request.GET.get('page', 1))
74 except ValueError:
75 page = 1
76
77 suggested_books = get_suggested_books(request.user)
78
79 if tab == 'home':
80 activities = get_activity_feed(
81 request.user, ['public', 'unlisted', 'followers'],
82 following_only=True)
83 elif tab == 'local':
84 activities = get_activity_feed(
85 request.user, ['public', 'followers'], local_only=True)
86 else:
87 activities = get_activity_feed(
88 request.user, ['public', 'followers'])
89 paginated = Paginator(activities, PAGE_LENGTH)
90
91 goal = models.AnnualGoal.objects.filter(
92 user=request.user, year=timezone.now().year
93 ).first()
94 data = {
95 'title': 'Updates Feed',
96 'user': request.user,
97 'suggested_books': suggested_books,
98 'activities': paginated.page(page),
99 'tab': tab,
100 'goal': goal,
101 'goal_form': forms.GoalForm(),
102 }
103 return TemplateResponse(request, 'feed.html', data)
104
105
106 def get_suggested_books(user, max_books=5):
107 ''' helper to get a user's recent books '''
108 book_count = 0
109 preset_shelves = [
110 ('reading', max_books), ('read', 2), ('to-read', max_books)
111 ]
112 suggested_books = []
113 for (preset, shelf_max) in preset_shelves:
114 limit = shelf_max if shelf_max < (max_books - book_count) \
115 else max_books - book_count
116 shelf = user.shelf_set.get(identifier=preset)
117
118 shelf_books = shelf.shelfbook_set.order_by(
119 '-updated_date'
120 ).all()[:limit]
121 if not shelf_books:
122 continue
123 shelf_preview = {
124 'name': shelf.name,
125 'books': [s.book for s in shelf_books]
126 }
127 suggested_books.append(shelf_preview)
128 book_count += len(shelf_preview['books'])
129 return suggested_books
130
[end of bookwyrm/views/landing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/landing.py b/bookwyrm/views/landing.py
--- a/bookwyrm/views/landing.py
+++ b/bookwyrm/views/landing.py
@@ -13,7 +13,6 @@
# pylint: disable= no-self-use
-@method_decorator(login_required, name='dispatch')
class About(View):
''' create invites '''
def get(self, request):
| {"golden_diff": "diff --git a/bookwyrm/views/landing.py b/bookwyrm/views/landing.py\n--- a/bookwyrm/views/landing.py\n+++ b/bookwyrm/views/landing.py\n@@ -13,7 +13,6 @@\n \n \n # pylint: disable= no-self-use\n-@method_decorator(login_required, name='dispatch')\n class About(View):\n ''' create invites '''\n def get(self, request):\n", "issue": "About page requires login\n**Describe the bug**\r\nAccessing the \"About this server\" link (https://bookwyrm.social/about) redirects to login\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to https://bookwyrm.social/about\r\n2. redirected to login instead of seeing an about page (the URL is login/?next=/about)\r\n\r\n**Expected behavior**\r\nAccess to information about this site / server\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: linux\r\n - Browser firefox\r\n - Version 85 (developer edition)\r\n\n", "before_files": [{"content": "''' non-interactive pages '''\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.paginator import Paginator\nfrom django.db.models import Avg, Max\nfrom django.template.response import TemplateResponse\nfrom django.utils import timezone\nfrom django.utils.decorators import method_decorator\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.settings import PAGE_LENGTH\nfrom .helpers import get_activity_feed\n\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name='dispatch')\nclass About(View):\n ''' create invites '''\n def get(self, request):\n ''' more information about the instance '''\n data = {\n 'title': 'About',\n }\n return TemplateResponse(request, 'about.html', data)\n\nclass Home(View):\n ''' discover page or home feed depending on auth '''\n def get(self, request):\n ''' this is the same as the feed on the home tab '''\n if request.user.is_authenticated:\n feed_view = Feed.as_view()\n return feed_view(request, 'home')\n discover_view = Discover.as_view()\n return discover_view(request)\n\nclass Discover(View):\n ''' preview of recently reviewed books '''\n def get(self, request):\n ''' tiled book activity page '''\n books = models.Edition.objects.filter(\n review__published_date__isnull=False,\n review__user__local=True,\n review__privacy__in=['public', 'unlisted'],\n ).exclude(\n cover__exact=''\n ).annotate(\n Max('review__published_date')\n ).order_by('-review__published_date__max')[:6]\n\n ratings = {}\n for book in books:\n reviews = models.Review.objects.filter(\n book__in=book.parent_work.editions.all()\n )\n reviews = get_activity_feed(\n request.user, ['public', 'unlisted'], queryset=reviews)\n ratings[book.id] = reviews.aggregate(Avg('rating'))['rating__avg']\n data = {\n 'title': 'Discover',\n 'register_form': forms.RegisterForm(),\n 'books': list(set(books)),\n 'ratings': ratings\n }\n return TemplateResponse(request, 'discover.html', data)\n\n\n@method_decorator(login_required, name='dispatch')\nclass Feed(View):\n ''' activity stream '''\n def get(self, request, tab):\n ''' user's homepage with activity feed '''\n try:\n page = int(request.GET.get('page', 1))\n except ValueError:\n page = 1\n\n suggested_books = get_suggested_books(request.user)\n\n if tab == 'home':\n activities = get_activity_feed(\n request.user, ['public', 'unlisted', 'followers'],\n following_only=True)\n elif tab == 'local':\n activities = get_activity_feed(\n request.user, ['public', 'followers'], local_only=True)\n else:\n activities = get_activity_feed(\n request.user, ['public', 
'followers'])\n paginated = Paginator(activities, PAGE_LENGTH)\n\n goal = models.AnnualGoal.objects.filter(\n user=request.user, year=timezone.now().year\n ).first()\n data = {\n 'title': 'Updates Feed',\n 'user': request.user,\n 'suggested_books': suggested_books,\n 'activities': paginated.page(page),\n 'tab': tab,\n 'goal': goal,\n 'goal_form': forms.GoalForm(),\n }\n return TemplateResponse(request, 'feed.html', data)\n\n\ndef get_suggested_books(user, max_books=5):\n ''' helper to get a user's recent books '''\n book_count = 0\n preset_shelves = [\n ('reading', max_books), ('read', 2), ('to-read', max_books)\n ]\n suggested_books = []\n for (preset, shelf_max) in preset_shelves:\n limit = shelf_max if shelf_max < (max_books - book_count) \\\n else max_books - book_count\n shelf = user.shelf_set.get(identifier=preset)\n\n shelf_books = shelf.shelfbook_set.order_by(\n '-updated_date'\n ).all()[:limit]\n if not shelf_books:\n continue\n shelf_preview = {\n 'name': shelf.name,\n 'books': [s.book for s in shelf_books]\n }\n suggested_books.append(shelf_preview)\n book_count += len(shelf_preview['books'])\n return suggested_books\n", "path": "bookwyrm/views/landing.py"}]} | 1,869 | 89 |
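The fix for the row above is simply the removal of the `login_required` decorator from the `About` view. A regression test in the spirit of Django's test client could guard against the decorator coming back; the `/about` path is taken from the issue report, while the test class name and its placement in bookwyrm's test suite are assumptions:

```python
from django.test import TestCase


class AboutViewAnonymousAccessTest(TestCase):
    """Hypothetical regression test, meant to run inside the project's test suite."""

    def test_about_page_is_public(self):
        # Anonymous client, no login performed.
        response = self.client.get("/about")
        # A login-protected view would answer 302 to /login/?next=/about instead.
        self.assertEqual(response.status_code, 200)
```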
gh_patches_debug_15620 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-451 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Length limit check on Route53 TXT records is two characters short
*cfn-lint version: (`cfn-lint --version`)* master
*Description of issue.*
The length limit check on TXT records takes into account the starting and ending double quote characters, but these aren't counted on the API, so cfn-lint is really restricting to 253 characters rather than 255.
```
$ cat test.yml
Resources:
Example:
Type: AWS::Route53::RecordSet
Properties:
HostedZoneId: abc123
Name: example.com.
Type: TXT
TTL: '14400'
ResourceRecords:
# 255 "a" characters within appropriate quotes
- '"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"'
$ cfn-lint test.yml
E3020 The length of the TXT record (257) exceeds the limit (255)
test.yml:9:7
```
</issue>
<code>
[start of src/cfnlint/rules/resources/route53/RecordSet.py]
1 """
2 Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
3
4 Permission is hereby granted, free of charge, to any person obtaining a copy of this
5 software and associated documentation files (the "Software"), to deal in the Software
6 without restriction, including without limitation the rights to use, copy, modify,
7 merge, publish, distribute, sublicense, and/or sell copies of the Software, and to
8 permit persons to whom the Software is furnished to do so.
9
10 THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
11 INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A
12 PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
13 HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
14 OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
15 SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
16 """
17 import re
18 from cfnlint import CloudFormationLintRule
19 from cfnlint import RuleMatch
20
21 from cfnlint.helpers import REGEX_IPV4, REGEX_IPV6, REGEX_ALPHANUMERIC
22
23 class RecordSet(CloudFormationLintRule):
24 """Check Route53 Recordset Configuration"""
25 id = 'E3020'
26 shortdesc = 'Validate Route53 RecordSets'
27 description = 'Check if all RecordSets are correctly configured'
28 source_url = 'https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html'
29 tags = ['resources', 'route53', 'record_set']
30
31 # https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html
32 VALID_RECORD_TYPES = [
33 'A',
34 'AAAA',
35 'CAA',
36 'CNAME',
37 'MX',
38 'NAPTR',
39 'NS',
40 'PTR',
41 'SOA'
42 'SPF',
43 'SRV',
44 'TXT'
45 ]
46
47 REGEX_CNAME = re.compile(r'^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])(.)$')
48
49 def check_a_record(self, path, recordset):
50 """Check A record Configuration"""
51 matches = []
52
53 resource_records = recordset.get('ResourceRecords')
54 for index, record in enumerate(resource_records):
55
56 if not isinstance(record, dict):
57 tree = path[:] + ['ResourceRecords', index]
58
59 # Check if a valid IPv4 address is specified
60 if not re.match(REGEX_IPV4, record):
61 message = 'A record ({}) is not a valid IPv4 address'
62 matches.append(RuleMatch(tree, message.format(record)))
63
64 return matches
65
66 def check_aaaa_record(self, path, recordset):
67 """Check AAAA record Configuration"""
68 matches = []
69
70 resource_records = recordset.get('ResourceRecords')
71 for index, record in enumerate(resource_records):
72
73 if not isinstance(record, dict):
74 tree = path[:] + ['ResourceRecords', index]
75
76 # Check if a valid IPv4 address is specified
77 if not re.match(REGEX_IPV6, record):
78 message = 'AAAA record ({}) is not a valid IPv6 address'
79 matches.append(RuleMatch(tree, message.format(record)))
80
81 return matches
82
83 def check_caa_record(self, path, recordset):
84 """Check CAA record Configuration"""
85 matches = []
86
87 resource_records = recordset.get('ResourceRecords')
88
89 for index, record in enumerate(resource_records):
90 tree = path[:] + ['ResourceRecords', index]
91
92 if not isinstance(record, dict):
93 # Split the record up to the mandatory settings (flags tag "value")
94 items = record.split(' ', 2)
95
96 # Check if the 3 settings are given.
97 if len(items) != 3:
98 message = 'CAA record must contain 3 settings (flags tag "value"), record contains {} settings.'
99 matches.append(RuleMatch(tree, message.format(len(items))))
100 else:
101 # Check the flag value
102 if not items[0].isdigit():
103 message = 'CAA record flag setting ({}) should be of type Integer.'
104 matches.append(RuleMatch(tree, message.format(items[0])))
105 else:
106 if int(items[0]) not in [0, 128]:
107 message = 'Invalid CAA record flag setting ({}) given, must be 0 or 128.'
108 matches.append(RuleMatch(tree, message.format(items[0])))
109
110 # Check the tag value
111 if not re.match(REGEX_ALPHANUMERIC, items[1]):
112 message = 'Invalid CAA record tag setting {}. Value has to be alphanumeric.'
113 matches.append(RuleMatch(tree, message.format(items[0])))
114
115 # Check the value
116 if not items[2].startswith('"') or not items[2].endswith('"'):
117 message = 'CAA record value setting has to be enclosed in double quotation marks (").'
118 matches.append(RuleMatch(tree, message))
119
120 return matches
121
122 def check_cname_record(self, path, recordset):
123 """Check CNAME record Configuration"""
124 matches = []
125
126 resource_records = recordset.get('ResourceRecords')
127 if len(resource_records) > 1:
128 message = 'A CNAME recordset can only contain 1 value'
129 matches.append(RuleMatch(path + ['ResourceRecords'], message))
130 else:
131 for index, record in enumerate(resource_records):
132 if not isinstance(record, dict):
133 tree = path[:] + ['ResourceRecords', index]
134 if (not re.match(self.REGEX_CNAME, record)
135 # ACM Route 53 validation uses invalid CNAMEs starting with `_`,
136 # special-case them rather than complicate the regex.
137 and not record.endswith('.acm-validations.aws.')):
138 message = 'CNAME record ({}) does not contain a valid domain name'
139 matches.append(RuleMatch(tree, message.format(record)))
140
141 return matches
142
143 def check_txt_record(self, path, recordset):
144 """Check TXT record Configuration"""
145 matches = []
146
147 # Check quotation of the records
148 resource_records = recordset.get('ResourceRecords')
149
150 for index, record in enumerate(resource_records):
151 tree = path[:] + ['ResourceRecords', index]
152
153 if not isinstance(record, dict):
154 if not record.startswith('"') or not record.endswith('"'):
155 message = 'TXT record ({}) has to be enclosed in double quotation marks (")'
156 matches.append(RuleMatch(tree, message.format(record)))
157 elif len(record) > 255:
158 message = 'The length of the TXT record ({}) exceeds the limit (255)'
159 matches.append(RuleMatch(tree, message.format(len(record))))
160
161 return matches
162
163 def check_recordset(self, path, recordset):
164 """Check record configuration"""
165
166 matches = []
167 recordset_type = recordset.get('Type')
168
169 # Skip Intrinsic functions
170 if not isinstance(recordset_type, dict):
171 if recordset_type not in self.VALID_RECORD_TYPES:
172 message = 'Invalid record type "{0}" specified'
173 matches.append(RuleMatch(path + ['Type'], message.format(recordset_type)))
174 elif not recordset.get('AliasTarget'):
175 # Record type specific checks
176 if recordset_type == 'A':
177 matches.extend(self.check_a_record(path, recordset))
178 elif recordset_type == 'AAAA':
179 matches.extend(self.check_aaaa_record(path, recordset))
180 elif recordset_type == 'CAA':
181 matches.extend(self.check_caa_record(path, recordset))
182 elif recordset_type == 'CNAME':
183 matches.extend(self.check_cname_record(path, recordset))
184 elif recordset_type == 'TXT':
185 matches.extend(self.check_txt_record(path, recordset))
186
187 return matches
188
189 def match(self, cfn):
190 """Check RecordSets and RecordSetGroups Properties"""
191
192 matches = []
193
194 recordsets = cfn.get_resources(['AWS::Route53::RecordSet'])
195
196 for name, recordset in recordsets.items():
197 path = ['Resources', name, 'Properties']
198
199 if isinstance(recordset, dict):
200 props = recordset.get('Properties')
201 if props:
202 matches.extend(self.check_recordset(path, props))
203
204 recordsetgroups = cfn.get_resource_properties(['AWS::Route53::RecordSetGroup', 'RecordSets'])
205
206 for recordsetgroup in recordsetgroups:
207 path = recordsetgroup['Path']
208 value = recordsetgroup['Value']
209 if isinstance(value, list):
210 for index, recordset in enumerate(value):
211 tree = path[:] + [index]
212 matches.extend(self.check_recordset(tree, recordset))
213
214 return matches
215
[end of src/cfnlint/rules/resources/route53/RecordSet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/resources/route53/RecordSet.py b/src/cfnlint/rules/resources/route53/RecordSet.py
--- a/src/cfnlint/rules/resources/route53/RecordSet.py
+++ b/src/cfnlint/rules/resources/route53/RecordSet.py
@@ -154,7 +154,7 @@
if not record.startswith('"') or not record.endswith('"'):
message = 'TXT record ({}) has to be enclosed in double quotation marks (")'
matches.append(RuleMatch(tree, message.format(record)))
- elif len(record) > 255:
+ elif len(record) > 257: # 2 extra characters for start and end double quotation marks
message = 'The length of the TXT record ({}) exceeds the limit (255)'
matches.append(RuleMatch(tree, message.format(len(record))))
| {"golden_diff": "diff --git a/src/cfnlint/rules/resources/route53/RecordSet.py b/src/cfnlint/rules/resources/route53/RecordSet.py\n--- a/src/cfnlint/rules/resources/route53/RecordSet.py\n+++ b/src/cfnlint/rules/resources/route53/RecordSet.py\n@@ -154,7 +154,7 @@\n if not record.startswith('\"') or not record.endswith('\"'):\n message = 'TXT record ({}) has to be enclosed in double quotation marks (\")'\n matches.append(RuleMatch(tree, message.format(record)))\n- elif len(record) > 255:\n+ elif len(record) > 257: # 2 extra characters for start and end double quotation marks\n message = 'The length of the TXT record ({}) exceeds the limit (255)'\n matches.append(RuleMatch(tree, message.format(len(record))))\n", "issue": "Length limit check on Route53 TXT records is two characters short\n*cfn-lint version: (`cfn-lint --version`)* master\r\n\r\n*Description of issue.*\r\n\r\nThe length limit check on TXT records takes into account the starting and ending double quote characters, but these aren't counted on the API, so cfn-lint is really restricting to 253 characters rather than 255.\r\n\r\n```\r\n$ cat test.yml\r\nResources:\r\n Example:\r\n Type: AWS::Route53::RecordSet\r\n Properties:\r\n HostedZoneId: abc123\r\n Name: example.com.\r\n Type: TXT\r\n TTL: '14400'\r\n ResourceRecords:\r\n # 255 \"a\" characters within appropriate quotes\r\n - '\"aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\"'\r\n$ cfn-lint test.yml\r\nE3020 The length of the TXT record (257) exceeds the limit (255)\r\ntest.yml:9:7\r\n```\n", "before_files": [{"content": "\"\"\"\n Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.\n\n Permission is hereby granted, free of charge, to any person obtaining a copy of this\n software and associated documentation files (the \"Software\"), to deal in the Software\n without restriction, including without limitation the rights to use, copy, modify,\n merge, publish, distribute, sublicense, and/or sell copies of the Software, and to\n permit persons to whom the Software is furnished to do so.\n\n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,\n INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A\n PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT\n HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION\n OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE\n SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n\"\"\"\nimport re\nfrom cfnlint import CloudFormationLintRule\nfrom cfnlint import RuleMatch\n\nfrom cfnlint.helpers import REGEX_IPV4, REGEX_IPV6, REGEX_ALPHANUMERIC\n\nclass RecordSet(CloudFormationLintRule):\n \"\"\"Check Route53 Recordset Configuration\"\"\"\n id = 'E3020'\n shortdesc = 'Validate Route53 RecordSets'\n description = 'Check if all RecordSets are correctly configured'\n source_url = 'https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html'\n tags = ['resources', 'route53', 'record_set']\n\n # https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/ResourceRecordTypes.html\n VALID_RECORD_TYPES = [\n 'A',\n 'AAAA',\n 'CAA',\n 'CNAME',\n 'MX',\n 'NAPTR',\n 'NS',\n 'PTR',\n 'SOA'\n 'SPF',\n 'SRV',\n 'TXT'\n ]\n\n REGEX_CNAME = re.compile(r'^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9])\\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\\-]*[A-Za-z0-9])(.)$')\n\n def check_a_record(self, path, recordset):\n \"\"\"Check A record Configuration\"\"\"\n matches = []\n\n resource_records = recordset.get('ResourceRecords')\n for index, record in enumerate(resource_records):\n\n if not isinstance(record, dict):\n tree = path[:] + ['ResourceRecords', index]\n\n # Check if a valid IPv4 address is specified\n if not re.match(REGEX_IPV4, record):\n message = 'A record ({}) is not a valid IPv4 address'\n matches.append(RuleMatch(tree, message.format(record)))\n\n return matches\n\n def check_aaaa_record(self, path, recordset):\n \"\"\"Check AAAA record Configuration\"\"\"\n matches = []\n\n resource_records = recordset.get('ResourceRecords')\n for index, record in enumerate(resource_records):\n\n if not isinstance(record, dict):\n tree = path[:] + ['ResourceRecords', index]\n\n # Check if a valid IPv4 address is specified\n if not re.match(REGEX_IPV6, record):\n message = 'AAAA record ({}) is not a valid IPv6 address'\n matches.append(RuleMatch(tree, message.format(record)))\n\n return matches\n\n def check_caa_record(self, path, recordset):\n \"\"\"Check CAA record Configuration\"\"\"\n matches = []\n\n resource_records = recordset.get('ResourceRecords')\n\n for index, record in enumerate(resource_records):\n tree = path[:] + ['ResourceRecords', index]\n\n if not isinstance(record, dict):\n # Split the record up to the mandatory settings (flags tag \"value\")\n items = record.split(' ', 2)\n\n # Check if the 3 settings are given.\n if len(items) != 3:\n message = 'CAA record must contain 3 settings (flags tag \"value\"), record contains {} settings.'\n matches.append(RuleMatch(tree, message.format(len(items))))\n else:\n # Check the flag value\n if not items[0].isdigit():\n message = 'CAA record flag setting ({}) should be of type Integer.'\n matches.append(RuleMatch(tree, message.format(items[0])))\n else:\n if int(items[0]) not in [0, 128]:\n message = 'Invalid CAA record flag setting ({}) given, must be 0 or 128.'\n matches.append(RuleMatch(tree, message.format(items[0])))\n\n # Check the tag value\n if not re.match(REGEX_ALPHANUMERIC, items[1]):\n message = 'Invalid CAA record tag setting {}. 
Value has to be alphanumeric.'\n matches.append(RuleMatch(tree, message.format(items[0])))\n\n # Check the value\n if not items[2].startswith('\"') or not items[2].endswith('\"'):\n message = 'CAA record value setting has to be enclosed in double quotation marks (\").'\n matches.append(RuleMatch(tree, message))\n\n return matches\n\n def check_cname_record(self, path, recordset):\n \"\"\"Check CNAME record Configuration\"\"\"\n matches = []\n\n resource_records = recordset.get('ResourceRecords')\n if len(resource_records) > 1:\n message = 'A CNAME recordset can only contain 1 value'\n matches.append(RuleMatch(path + ['ResourceRecords'], message))\n else:\n for index, record in enumerate(resource_records):\n if not isinstance(record, dict):\n tree = path[:] + ['ResourceRecords', index]\n if (not re.match(self.REGEX_CNAME, record)\n # ACM Route 53 validation uses invalid CNAMEs starting with `_`,\n # special-case them rather than complicate the regex.\n and not record.endswith('.acm-validations.aws.')):\n message = 'CNAME record ({}) does not contain a valid domain name'\n matches.append(RuleMatch(tree, message.format(record)))\n\n return matches\n\n def check_txt_record(self, path, recordset):\n \"\"\"Check TXT record Configuration\"\"\"\n matches = []\n\n # Check quotation of the records\n resource_records = recordset.get('ResourceRecords')\n\n for index, record in enumerate(resource_records):\n tree = path[:] + ['ResourceRecords', index]\n\n if not isinstance(record, dict):\n if not record.startswith('\"') or not record.endswith('\"'):\n message = 'TXT record ({}) has to be enclosed in double quotation marks (\")'\n matches.append(RuleMatch(tree, message.format(record)))\n elif len(record) > 255:\n message = 'The length of the TXT record ({}) exceeds the limit (255)'\n matches.append(RuleMatch(tree, message.format(len(record))))\n\n return matches\n\n def check_recordset(self, path, recordset):\n \"\"\"Check record configuration\"\"\"\n\n matches = []\n recordset_type = recordset.get('Type')\n\n # Skip Intrinsic functions\n if not isinstance(recordset_type, dict):\n if recordset_type not in self.VALID_RECORD_TYPES:\n message = 'Invalid record type \"{0}\" specified'\n matches.append(RuleMatch(path + ['Type'], message.format(recordset_type)))\n elif not recordset.get('AliasTarget'):\n # Record type specific checks\n if recordset_type == 'A':\n matches.extend(self.check_a_record(path, recordset))\n elif recordset_type == 'AAAA':\n matches.extend(self.check_aaaa_record(path, recordset))\n elif recordset_type == 'CAA':\n matches.extend(self.check_caa_record(path, recordset))\n elif recordset_type == 'CNAME':\n matches.extend(self.check_cname_record(path, recordset))\n elif recordset_type == 'TXT':\n matches.extend(self.check_txt_record(path, recordset))\n\n return matches\n\n def match(self, cfn):\n \"\"\"Check RecordSets and RecordSetGroups Properties\"\"\"\n\n matches = []\n\n recordsets = cfn.get_resources(['AWS::Route53::RecordSet'])\n\n for name, recordset in recordsets.items():\n path = ['Resources', name, 'Properties']\n\n if isinstance(recordset, dict):\n props = recordset.get('Properties')\n if props:\n matches.extend(self.check_recordset(path, props))\n\n recordsetgroups = cfn.get_resource_properties(['AWS::Route53::RecordSetGroup', 'RecordSets'])\n\n for recordsetgroup in recordsetgroups:\n path = recordsetgroup['Path']\n value = recordsetgroup['Value']\n if isinstance(value, list):\n for index, recordset in enumerate(value):\n tree = path[:] + [index]\n 
matches.extend(self.check_recordset(tree, recordset))\n\n return matches\n", "path": "src/cfnlint/rules/resources/route53/RecordSet.py"}]} | 3,264 | 197 |
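The arithmetic behind the one-line fix above: the rule requires each TXT value to be wrapped in literal double quotes, so a 255-character record arrives as a 257-character string, and comparing the raw length against 255 rejects valid records. The patch keeps the quotes and compares against 257; an equivalent way to express the same check is to discount the two quote characters before comparing, as in this standalone sketch (not cfn-lint rule code):

```python
TXT_CHUNK_LIMIT = 255  # per-string limit enforced by the Route 53 API


def txt_record_errors(record: str):
    """Return a list of problems for one quoted TXT resource record."""
    errors = []
    if not (record.startswith('"') and record.endswith('"')):
        errors.append('TXT record has to be enclosed in double quotation marks (")')
    elif len(record) - 2 > TXT_CHUNK_LIMIT:  # ignore the enclosing quotes
        errors.append(
            f'TXT record is {len(record) - 2} characters, limit is {TXT_CHUNK_LIMIT}'
        )
    return errors


# The exact case from the issue: 255 payload characters inside quotes must pass.
assert txt_record_errors('"' + "a" * 255 + '"') == []
# One character over the limit must still be flagged.
assert txt_record_errors('"' + "a" * 256 + '"') != []
```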
gh_patches_debug_16896 | rasdani/github-patches | git_diff | webkom__lego-1069 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong penalty count in email
The counter in the penalty email is still wrong:

</issue>
<code>
[start of lego/apps/feed/feed_handlers/penalty_handler.py]
1 from lego.apps.feed.activities import Activity
2 from lego.apps.feed.feed_handlers.base_handler import BaseHandler
3 from lego.apps.feed.feed_manager import feed_manager
4 from lego.apps.feed.feeds.notification_feed import NotificationFeed
5 from lego.apps.feed.registry import register_handler
6 from lego.apps.feed.verbs import PenaltyVerb
7 from lego.apps.users.models import Penalty
8 from lego.apps.users.notifications import PenaltyNotification
9
10
11 class PenaltyHandler(BaseHandler):
12 model = Penalty
13 manager = feed_manager
14
15 def get_activity(self, penalty):
16 return Activity(
17 actor=penalty.source_event, verb=PenaltyVerb, object=penalty, target=penalty.user,
18 time=penalty.created_at, extra_context={
19 'reason': penalty.reason,
20 'weight': penalty.weight,
21 'total': penalty.user.number_of_penalties()
22 }
23 )
24
25 def handle_create(self, penalty):
26 activity = self.get_activity(penalty)
27 self.manager.add_activity(activity, [penalty.user.pk], [NotificationFeed])
28
29 # Send Notification
30 notification = PenaltyNotification(penalty.user, penalty=penalty)
31 notification.notify()
32
33 def handle_update(self, penalty):
34 activity = self.get_activity(penalty)
35 self.manager.add_activity(activity, [penalty.user.pk], [NotificationFeed])
36
37 def handle_delete(self, penalty):
38 activity = self.get_activity(penalty)
39 self.manager.remove_activity(activity, [penalty.user.pk], [NotificationFeed])
40
41
42 register_handler(PenaltyHandler)
43
[end of lego/apps/feed/feed_handlers/penalty_handler.py]
[start of lego/apps/users/notifications.py]
1 from lego.apps.notifications.constants import PENALTY_CREATION
2 from lego.apps.notifications.notification import Notification
3
4
5 class PenaltyNotification(Notification):
6
7 name = PENALTY_CREATION
8
9 def generate_mail(self):
10 penalty = self.kwargs['penalty']
11
12 return self._delay_mail(
13 to_email=self.user.email,
14 context={
15 'name': self.user.full_name,
16 'weight': penalty.weight,
17 'event': penalty.source_event.title,
18 'reason': penalty.reason,
19 'total': self.user.number_of_penalties()
20 },
21 subject=f'Du har fått en ny prikk',
22 plain_template='users/email/penalty.txt',
23 html_template='users/email/penalty.html',
24 )
25
26 def generate_push(self):
27 penalty = self.kwargs['penalty']
28
29 return self._delay_push(
30 template='users/push/penalty.txt', context={
31 'weight': penalty.weight,
32 'event': penalty.source_event.title,
33 }, instance=penalty
34 )
35
[end of lego/apps/users/notifications.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lego/apps/feed/feed_handlers/penalty_handler.py b/lego/apps/feed/feed_handlers/penalty_handler.py
--- a/lego/apps/feed/feed_handlers/penalty_handler.py
+++ b/lego/apps/feed/feed_handlers/penalty_handler.py
@@ -18,7 +18,6 @@
time=penalty.created_at, extra_context={
'reason': penalty.reason,
'weight': penalty.weight,
- 'total': penalty.user.number_of_penalties()
}
)
diff --git a/lego/apps/users/notifications.py b/lego/apps/users/notifications.py
--- a/lego/apps/users/notifications.py
+++ b/lego/apps/users/notifications.py
@@ -16,7 +16,6 @@
'weight': penalty.weight,
'event': penalty.source_event.title,
'reason': penalty.reason,
- 'total': self.user.number_of_penalties()
},
subject=f'Du har fått en ny prikk',
plain_template='users/email/penalty.txt',
| {"golden_diff": "diff --git a/lego/apps/feed/feed_handlers/penalty_handler.py b/lego/apps/feed/feed_handlers/penalty_handler.py\n--- a/lego/apps/feed/feed_handlers/penalty_handler.py\n+++ b/lego/apps/feed/feed_handlers/penalty_handler.py\n@@ -18,7 +18,6 @@\n time=penalty.created_at, extra_context={\n 'reason': penalty.reason,\n 'weight': penalty.weight,\n- 'total': penalty.user.number_of_penalties()\n }\n )\n \ndiff --git a/lego/apps/users/notifications.py b/lego/apps/users/notifications.py\n--- a/lego/apps/users/notifications.py\n+++ b/lego/apps/users/notifications.py\n@@ -16,7 +16,6 @@\n 'weight': penalty.weight,\n 'event': penalty.source_event.title,\n 'reason': penalty.reason,\n- 'total': self.user.number_of_penalties()\n },\n subject=f'Du har f\u00e5tt en ny prikk',\n plain_template='users/email/penalty.txt',\n", "issue": "Wrong penalty count in email\nThe counter in the penalty email is still wrong:\r\n\r\n\r\n\n", "before_files": [{"content": "from lego.apps.feed.activities import Activity\nfrom lego.apps.feed.feed_handlers.base_handler import BaseHandler\nfrom lego.apps.feed.feed_manager import feed_manager\nfrom lego.apps.feed.feeds.notification_feed import NotificationFeed\nfrom lego.apps.feed.registry import register_handler\nfrom lego.apps.feed.verbs import PenaltyVerb\nfrom lego.apps.users.models import Penalty\nfrom lego.apps.users.notifications import PenaltyNotification\n\n\nclass PenaltyHandler(BaseHandler):\n model = Penalty\n manager = feed_manager\n\n def get_activity(self, penalty):\n return Activity(\n actor=penalty.source_event, verb=PenaltyVerb, object=penalty, target=penalty.user,\n time=penalty.created_at, extra_context={\n 'reason': penalty.reason,\n 'weight': penalty.weight,\n 'total': penalty.user.number_of_penalties()\n }\n )\n\n def handle_create(self, penalty):\n activity = self.get_activity(penalty)\n self.manager.add_activity(activity, [penalty.user.pk], [NotificationFeed])\n\n # Send Notification\n notification = PenaltyNotification(penalty.user, penalty=penalty)\n notification.notify()\n\n def handle_update(self, penalty):\n activity = self.get_activity(penalty)\n self.manager.add_activity(activity, [penalty.user.pk], [NotificationFeed])\n\n def handle_delete(self, penalty):\n activity = self.get_activity(penalty)\n self.manager.remove_activity(activity, [penalty.user.pk], [NotificationFeed])\n\n\nregister_handler(PenaltyHandler)\n", "path": "lego/apps/feed/feed_handlers/penalty_handler.py"}, {"content": "from lego.apps.notifications.constants import PENALTY_CREATION\nfrom lego.apps.notifications.notification import Notification\n\n\nclass PenaltyNotification(Notification):\n\n name = PENALTY_CREATION\n\n def generate_mail(self):\n penalty = self.kwargs['penalty']\n\n return self._delay_mail(\n to_email=self.user.email,\n context={\n 'name': self.user.full_name,\n 'weight': penalty.weight,\n 'event': penalty.source_event.title,\n 'reason': penalty.reason,\n 'total': self.user.number_of_penalties()\n },\n subject=f'Du har f\u00e5tt en ny prikk',\n plain_template='users/email/penalty.txt',\n html_template='users/email/penalty.html',\n )\n\n def generate_push(self):\n penalty = self.kwargs['penalty']\n\n return self._delay_push(\n template='users/push/penalty.txt', context={\n 'weight': penalty.weight,\n 'event': penalty.source_event.title,\n }, instance=penalty\n )\n", "path": "lego/apps/users/notifications.py"}]} | 1,349 | 231 |
gh_patches_debug_5569 | rasdani/github-patches | git_diff | pre-commit__pre-commit-803 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`stages: [commit]` hooks will run with `pre-commit run otherhookid`
minor logic bug, good new-contributor ticket
Easy to reproduce on pre-commit itself:
```diff
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml
index a146bd2..7bb382d 100644
--- a/.pre-commit-config.yaml
+++ b/.pre-commit-config.yaml
@@ -3,6 +3,7 @@ repos:
rev: v1.2.3
hooks:
- id: trailing-whitespace
+ stages: [commit]
- id: end-of-file-fixer
- id: autopep8-wrapper
- id: check-docstring-first
```
```console
$ pre-commit run end-of-file-fixer --all-files
Trim Trailing Whitespace.................................................Passed
Fix End of Files.........................................................Passed
```
(it should have only run `end-of-file-fixer` but also run `trailing-whitespace` due to a logic error).
</issue>
<code>
[start of pre_commit/commands/run.py]
1 from __future__ import unicode_literals
2
3 import logging
4 import os
5 import re
6 import subprocess
7 import sys
8
9 from identify.identify import tags_from_path
10
11 from pre_commit import color
12 from pre_commit import git
13 from pre_commit import output
14 from pre_commit.output import get_hook_message
15 from pre_commit.repository import repositories
16 from pre_commit.staged_files_only import staged_files_only
17 from pre_commit.util import cmd_output
18 from pre_commit.util import memoize_by_cwd
19 from pre_commit.util import noop_context
20
21
22 logger = logging.getLogger('pre_commit')
23
24
25 tags_from_path = memoize_by_cwd(tags_from_path)
26
27
28 def _get_skips(environ):
29 skips = environ.get('SKIP', '')
30 return {skip.strip() for skip in skips.split(',') if skip.strip()}
31
32
33 def _hook_msg_start(hook, verbose):
34 return '{}{}'.format(
35 '[{}] '.format(hook['id']) if verbose else '', hook['name'],
36 )
37
38
39 def _filter_by_include_exclude(filenames, include, exclude):
40 include_re, exclude_re = re.compile(include), re.compile(exclude)
41 return [
42 filename for filename in filenames
43 if (
44 include_re.search(filename) and
45 not exclude_re.search(filename) and
46 os.path.lexists(filename)
47 )
48 ]
49
50
51 def _filter_by_types(filenames, types, exclude_types):
52 types, exclude_types = frozenset(types), frozenset(exclude_types)
53 ret = []
54 for filename in filenames:
55 tags = tags_from_path(filename)
56 if tags >= types and not tags & exclude_types:
57 ret.append(filename)
58 return tuple(ret)
59
60
61 SKIPPED = 'Skipped'
62 NO_FILES = '(no files to check)'
63
64
65 def _run_single_hook(filenames, hook, repo, args, skips, cols):
66 include, exclude = hook['files'], hook['exclude']
67 filenames = _filter_by_include_exclude(filenames, include, exclude)
68 types, exclude_types = hook['types'], hook['exclude_types']
69 filenames = _filter_by_types(filenames, types, exclude_types)
70
71 if hook['language'] == 'pcre':
72 logger.warning(
73 '`{}` (from {}) uses the deprecated pcre language.\n'
74 'The pcre language is scheduled for removal in pre-commit 2.x.\n'
75 'The pygrep language is a more portable (and usually drop-in) '
76 'replacement.'.format(hook['id'], repo.repo_config['repo']),
77 )
78
79 if hook['id'] in skips:
80 output.write(get_hook_message(
81 _hook_msg_start(hook, args.verbose),
82 end_msg=SKIPPED,
83 end_color=color.YELLOW,
84 use_color=args.color,
85 cols=cols,
86 ))
87 return 0
88 elif not filenames and not hook['always_run']:
89 output.write(get_hook_message(
90 _hook_msg_start(hook, args.verbose),
91 postfix=NO_FILES,
92 end_msg=SKIPPED,
93 end_color=color.TURQUOISE,
94 use_color=args.color,
95 cols=cols,
96 ))
97 return 0
98
99 # Print the hook and the dots first in case the hook takes hella long to
100 # run.
101 output.write(get_hook_message(
102 _hook_msg_start(hook, args.verbose), end_len=6, cols=cols,
103 ))
104 sys.stdout.flush()
105
106 diff_before = cmd_output(
107 'git', 'diff', '--no-ext-diff', retcode=None, encoding=None,
108 )
109 retcode, stdout, stderr = repo.run_hook(
110 hook, tuple(filenames) if hook['pass_filenames'] else (),
111 )
112 diff_after = cmd_output(
113 'git', 'diff', '--no-ext-diff', retcode=None, encoding=None,
114 )
115
116 file_modifications = diff_before != diff_after
117
118 # If the hook makes changes, fail the commit
119 if file_modifications:
120 retcode = 1
121
122 if retcode:
123 retcode = 1
124 print_color = color.RED
125 pass_fail = 'Failed'
126 else:
127 retcode = 0
128 print_color = color.GREEN
129 pass_fail = 'Passed'
130
131 output.write_line(color.format_color(pass_fail, print_color, args.color))
132
133 if (
134 (stdout or stderr or file_modifications) and
135 (retcode or args.verbose or hook['verbose'])
136 ):
137 output.write_line('hookid: {}\n'.format(hook['id']))
138
139 # Print a message if failing due to file modifications
140 if file_modifications:
141 output.write('Files were modified by this hook.')
142
143 if stdout or stderr:
144 output.write_line(' Additional output:')
145
146 output.write_line()
147
148 for out in (stdout, stderr):
149 assert type(out) is bytes, type(out)
150 if out.strip():
151 output.write_line(out.strip(), logfile_name=hook['log_file'])
152 output.write_line()
153
154 return retcode
155
156
157 def _compute_cols(hooks, verbose):
158 """Compute the number of columns to display hook messages. The widest
159 that will be displayed is in the no files skipped case:
160
161 Hook name...(no files to check) Skipped
162
163 or in the verbose case
164
165 Hook name [hookid]...(no files to check) Skipped
166 """
167 if hooks:
168 name_len = max(len(_hook_msg_start(hook, verbose)) for hook in hooks)
169 else:
170 name_len = 0
171
172 cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)
173 return max(cols, 80)
174
175
176 def _all_filenames(args):
177 if args.origin and args.source:
178 return git.get_changed_files(args.origin, args.source)
179 elif args.hook_stage == 'commit-msg':
180 return (args.commit_msg_filename,)
181 elif args.files:
182 return args.files
183 elif args.all_files:
184 return git.get_all_files()
185 elif git.is_in_merge_conflict():
186 return git.get_conflicted_files()
187 else:
188 return git.get_staged_files()
189
190
191 def _run_hooks(config, repo_hooks, args, environ):
192 """Actually run the hooks."""
193 skips = _get_skips(environ)
194 cols = _compute_cols([hook for _, hook in repo_hooks], args.verbose)
195 filenames = _all_filenames(args)
196 filenames = _filter_by_include_exclude(filenames, '', config['exclude'])
197 retval = 0
198 for repo, hook in repo_hooks:
199 retval |= _run_single_hook(filenames, hook, repo, args, skips, cols)
200 if retval and config['fail_fast']:
201 break
202 if (
203 retval and
204 args.show_diff_on_failure and
205 subprocess.call(('git', 'diff', '--quiet', '--no-ext-diff')) != 0
206 ):
207 output.write_line('All changes made by hooks:')
208 subprocess.call(('git', '--no-pager', 'diff', '--no-ext-diff'))
209 return retval
210
211
212 def _has_unmerged_paths():
213 _, stdout, _ = cmd_output('git', 'ls-files', '--unmerged')
214 return bool(stdout.strip())
215
216
217 def _has_unstaged_config(runner):
218 retcode, _, _ = cmd_output(
219 'git', 'diff', '--no-ext-diff', '--exit-code', runner.config_file_path,
220 retcode=None,
221 )
222 # be explicit, other git errors don't mean it has an unstaged config.
223 return retcode == 1
224
225
226 def run(runner, store, args, environ=os.environ):
227 no_stash = args.all_files or bool(args.files)
228
229 # Check if we have unresolved merge conflict files and fail fast.
230 if _has_unmerged_paths():
231 logger.error('Unmerged files. Resolve before committing.')
232 return 1
233 if bool(args.source) != bool(args.origin):
234 logger.error('Specify both --origin and --source.')
235 return 1
236 if _has_unstaged_config(runner) and not no_stash:
237 logger.error(
238 'Your pre-commit configuration is unstaged.\n'
239 '`git add {}` to fix this.'.format(runner.config_file),
240 )
241 return 1
242
243 # Expose origin / source as environment variables for hooks to consume
244 if args.origin and args.source:
245 environ['PRE_COMMIT_ORIGIN'] = args.origin
246 environ['PRE_COMMIT_SOURCE'] = args.source
247
248 if no_stash:
249 ctx = noop_context()
250 else:
251 ctx = staged_files_only(store.directory)
252
253 with ctx:
254 repo_hooks = []
255 for repo in repositories(runner.config, store):
256 for _, hook in repo.hooks:
257 if (
258 (not args.hook or hook['id'] == args.hook) and
259 not hook['stages'] or args.hook_stage in hook['stages']
260 ):
261 repo_hooks.append((repo, hook))
262
263 if args.hook and not repo_hooks:
264 output.write_line('No hook with id `{}`'.format(args.hook))
265 return 1
266
267 for repo in {repo for repo, _ in repo_hooks}:
268 repo.require_installed()
269
270 return _run_hooks(runner.config, repo_hooks, args, environ)
271
[end of pre_commit/commands/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py
--- a/pre_commit/commands/run.py
+++ b/pre_commit/commands/run.py
@@ -256,7 +256,7 @@
for _, hook in repo.hooks:
if (
(not args.hook or hook['id'] == args.hook) and
- not hook['stages'] or args.hook_stage in hook['stages']
+ (not hook['stages'] or args.hook_stage in hook['stages'])
):
repo_hooks.append((repo, hook))
| {"golden_diff": "diff --git a/pre_commit/commands/run.py b/pre_commit/commands/run.py\n--- a/pre_commit/commands/run.py\n+++ b/pre_commit/commands/run.py\n@@ -256,7 +256,7 @@\n for _, hook in repo.hooks:\n if (\n (not args.hook or hook['id'] == args.hook) and\n- not hook['stages'] or args.hook_stage in hook['stages']\n+ (not hook['stages'] or args.hook_stage in hook['stages'])\n ):\n repo_hooks.append((repo, hook))\n", "issue": "`stages: [commit]` hooks will run with `pre-commit run otherhookid`\nminor logic bug, good new-contributor ticket\r\n\r\nEasy to reproduce on pre-commit itself:\r\n\r\n```diff\r\ndiff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml\r\nindex a146bd2..7bb382d 100644\r\n--- a/.pre-commit-config.yaml\r\n+++ b/.pre-commit-config.yaml\r\n@@ -3,6 +3,7 @@ repos:\r\n rev: v1.2.3\r\n hooks:\r\n - id: trailing-whitespace\r\n+ stages: [commit]\r\n - id: end-of-file-fixer\r\n - id: autopep8-wrapper\r\n - id: check-docstring-first\r\n```\r\n\r\n```console\r\n$ pre-commit run end-of-file-fixer --all-files\r\nTrim Trailing Whitespace.................................................Passed\r\nFix End of Files.........................................................Passed\r\n```\r\n\r\n(it should have only run `end-of-file-fixer` but also run `trailing-whitespace` due to a logic error).\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport os\nimport re\nimport subprocess\nimport sys\n\nfrom identify.identify import tags_from_path\n\nfrom pre_commit import color\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.output import get_hook_message\nfrom pre_commit.repository import repositories\nfrom pre_commit.staged_files_only import staged_files_only\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import memoize_by_cwd\nfrom pre_commit.util import noop_context\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ntags_from_path = memoize_by_cwd(tags_from_path)\n\n\ndef _get_skips(environ):\n skips = environ.get('SKIP', '')\n return {skip.strip() for skip in skips.split(',') if skip.strip()}\n\n\ndef _hook_msg_start(hook, verbose):\n return '{}{}'.format(\n '[{}] '.format(hook['id']) if verbose else '', hook['name'],\n )\n\n\ndef _filter_by_include_exclude(filenames, include, exclude):\n include_re, exclude_re = re.compile(include), re.compile(exclude)\n return [\n filename for filename in filenames\n if (\n include_re.search(filename) and\n not exclude_re.search(filename) and\n os.path.lexists(filename)\n )\n ]\n\n\ndef _filter_by_types(filenames, types, exclude_types):\n types, exclude_types = frozenset(types), frozenset(exclude_types)\n ret = []\n for filename in filenames:\n tags = tags_from_path(filename)\n if tags >= types and not tags & exclude_types:\n ret.append(filename)\n return tuple(ret)\n\n\nSKIPPED = 'Skipped'\nNO_FILES = '(no files to check)'\n\n\ndef _run_single_hook(filenames, hook, repo, args, skips, cols):\n include, exclude = hook['files'], hook['exclude']\n filenames = _filter_by_include_exclude(filenames, include, exclude)\n types, exclude_types = hook['types'], hook['exclude_types']\n filenames = _filter_by_types(filenames, types, exclude_types)\n\n if hook['language'] == 'pcre':\n logger.warning(\n '`{}` (from {}) uses the deprecated pcre language.\\n'\n 'The pcre language is scheduled for removal in pre-commit 2.x.\\n'\n 'The pygrep language is a more portable (and usually drop-in) '\n 'replacement.'.format(hook['id'], repo.repo_config['repo']),\n )\n\n if hook['id'] 
in skips:\n output.write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n end_msg=SKIPPED,\n end_color=color.YELLOW,\n use_color=args.color,\n cols=cols,\n ))\n return 0\n elif not filenames and not hook['always_run']:\n output.write(get_hook_message(\n _hook_msg_start(hook, args.verbose),\n postfix=NO_FILES,\n end_msg=SKIPPED,\n end_color=color.TURQUOISE,\n use_color=args.color,\n cols=cols,\n ))\n return 0\n\n # Print the hook and the dots first in case the hook takes hella long to\n # run.\n output.write(get_hook_message(\n _hook_msg_start(hook, args.verbose), end_len=6, cols=cols,\n ))\n sys.stdout.flush()\n\n diff_before = cmd_output(\n 'git', 'diff', '--no-ext-diff', retcode=None, encoding=None,\n )\n retcode, stdout, stderr = repo.run_hook(\n hook, tuple(filenames) if hook['pass_filenames'] else (),\n )\n diff_after = cmd_output(\n 'git', 'diff', '--no-ext-diff', retcode=None, encoding=None,\n )\n\n file_modifications = diff_before != diff_after\n\n # If the hook makes changes, fail the commit\n if file_modifications:\n retcode = 1\n\n if retcode:\n retcode = 1\n print_color = color.RED\n pass_fail = 'Failed'\n else:\n retcode = 0\n print_color = color.GREEN\n pass_fail = 'Passed'\n\n output.write_line(color.format_color(pass_fail, print_color, args.color))\n\n if (\n (stdout or stderr or file_modifications) and\n (retcode or args.verbose or hook['verbose'])\n ):\n output.write_line('hookid: {}\\n'.format(hook['id']))\n\n # Print a message if failing due to file modifications\n if file_modifications:\n output.write('Files were modified by this hook.')\n\n if stdout or stderr:\n output.write_line(' Additional output:')\n\n output.write_line()\n\n for out in (stdout, stderr):\n assert type(out) is bytes, type(out)\n if out.strip():\n output.write_line(out.strip(), logfile_name=hook['log_file'])\n output.write_line()\n\n return retcode\n\n\ndef _compute_cols(hooks, verbose):\n \"\"\"Compute the number of columns to display hook messages. 
The widest\n that will be displayed is in the no files skipped case:\n\n Hook name...(no files to check) Skipped\n\n or in the verbose case\n\n Hook name [hookid]...(no files to check) Skipped\n \"\"\"\n if hooks:\n name_len = max(len(_hook_msg_start(hook, verbose)) for hook in hooks)\n else:\n name_len = 0\n\n cols = name_len + 3 + len(NO_FILES) + 1 + len(SKIPPED)\n return max(cols, 80)\n\n\ndef _all_filenames(args):\n if args.origin and args.source:\n return git.get_changed_files(args.origin, args.source)\n elif args.hook_stage == 'commit-msg':\n return (args.commit_msg_filename,)\n elif args.files:\n return args.files\n elif args.all_files:\n return git.get_all_files()\n elif git.is_in_merge_conflict():\n return git.get_conflicted_files()\n else:\n return git.get_staged_files()\n\n\ndef _run_hooks(config, repo_hooks, args, environ):\n \"\"\"Actually run the hooks.\"\"\"\n skips = _get_skips(environ)\n cols = _compute_cols([hook for _, hook in repo_hooks], args.verbose)\n filenames = _all_filenames(args)\n filenames = _filter_by_include_exclude(filenames, '', config['exclude'])\n retval = 0\n for repo, hook in repo_hooks:\n retval |= _run_single_hook(filenames, hook, repo, args, skips, cols)\n if retval and config['fail_fast']:\n break\n if (\n retval and\n args.show_diff_on_failure and\n subprocess.call(('git', 'diff', '--quiet', '--no-ext-diff')) != 0\n ):\n output.write_line('All changes made by hooks:')\n subprocess.call(('git', '--no-pager', 'diff', '--no-ext-diff'))\n return retval\n\n\ndef _has_unmerged_paths():\n _, stdout, _ = cmd_output('git', 'ls-files', '--unmerged')\n return bool(stdout.strip())\n\n\ndef _has_unstaged_config(runner):\n retcode, _, _ = cmd_output(\n 'git', 'diff', '--no-ext-diff', '--exit-code', runner.config_file_path,\n retcode=None,\n )\n # be explicit, other git errors don't mean it has an unstaged config.\n return retcode == 1\n\n\ndef run(runner, store, args, environ=os.environ):\n no_stash = args.all_files or bool(args.files)\n\n # Check if we have unresolved merge conflict files and fail fast.\n if _has_unmerged_paths():\n logger.error('Unmerged files. Resolve before committing.')\n return 1\n if bool(args.source) != bool(args.origin):\n logger.error('Specify both --origin and --source.')\n return 1\n if _has_unstaged_config(runner) and not no_stash:\n logger.error(\n 'Your pre-commit configuration is unstaged.\\n'\n '`git add {}` to fix this.'.format(runner.config_file),\n )\n return 1\n\n # Expose origin / source as environment variables for hooks to consume\n if args.origin and args.source:\n environ['PRE_COMMIT_ORIGIN'] = args.origin\n environ['PRE_COMMIT_SOURCE'] = args.source\n\n if no_stash:\n ctx = noop_context()\n else:\n ctx = staged_files_only(store.directory)\n\n with ctx:\n repo_hooks = []\n for repo in repositories(runner.config, store):\n for _, hook in repo.hooks:\n if (\n (not args.hook or hook['id'] == args.hook) and\n not hook['stages'] or args.hook_stage in hook['stages']\n ):\n repo_hooks.append((repo, hook))\n\n if args.hook and not repo_hooks:\n output.write_line('No hook with id `{}`'.format(args.hook))\n return 1\n\n for repo in {repo for repo, _ in repo_hooks}:\n repo.require_installed()\n\n return _run_hooks(runner.config, repo_hooks, args, environ)\n", "path": "pre_commit/commands/run.py"}]} | 3,468 | 130 |
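The one-line patch above is a Python operator-precedence fix: `and` binds tighter than `or`, so without the added parentheses the stage filter escapes the hook-id filter. A minimal, self-contained sketch of the difference, using simplified dictionaries as stand-ins for real pre-commit hook objects:

```python
# Simplified stand-ins for pre-commit hook configs; not the real objects.
def selected_buggy(hook, requested_id, stage):
    # `A and B or C` parses as `(A and B) or C`, so the stage test wins on its own.
    return (
        (not requested_id or hook["id"] == requested_id) and
        not hook["stages"] or stage in hook["stages"]
    )

def selected_fixed(hook, requested_id, stage):
    return (
        (not requested_id or hook["id"] == requested_id) and
        (not hook["stages"] or stage in hook["stages"])
    )

hook = {"id": "trailing-whitespace", "stages": ["commit"]}
# `pre-commit run end-of-file-fixer` should not select this hook:
print(selected_buggy(hook, "end-of-file-fixer", "commit"))  # True  (the reported bug)
print(selected_fixed(hook, "end-of-file-fixer", "commit"))  # False (intended behaviour)
```

This is why a hook restricted to `stages: [commit]` still ran when an unrelated hook id was requested.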
gh_patches_debug_24988 | rasdani/github-patches | git_diff | dotkom__onlineweb4-1681 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Deleting a Careeropportunity in the dashboard does not actually delete
When trying to delete a career opportunity in the dashboard, it does not actually delete it.
</issue>
<code>
[start of apps/careeropportunity/dashboard/views.py]
1 # -*- encoding: utf-8 -*-
2 from django.contrib import messages
3 from django.contrib.auth.decorators import login_required
4 from django.core.exceptions import PermissionDenied
5 from django.shortcuts import get_object_or_404, redirect, render
6 from django.utils import timezone
7 from guardian.decorators import permission_required
8
9 from apps.careeropportunity.forms import AddCareerOpportunityForm
10 from apps.careeropportunity.models import CareerOpportunity
11 from apps.dashboard.tools import get_base_context, has_access
12
13
14 @login_required
15 @permission_required('careeropportunity.view_careeropportunity', return_403=True)
16 def index(request):
17
18 if not has_access(request):
19 raise PermissionDenied
20
21 context = get_base_context(request)
22
23 # "cops" is short for "careeropportunities" which is a fucking long word
24 # "cop" is short for "careeropportunity" which also is a fucking long word
25 cops = CareerOpportunity.objects.all()
26 context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')
27 context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')
28
29 return render(request, 'careeropportunity/dashboard/index.html', context)
30
31
32 @login_required
33 @permission_required('careeropportunity.change_careeropportunity', return_403=True)
34 def detail(request, opportunity_id=None):
35
36 if not has_access(request):
37 raise PermissionDenied
38
39 context = get_base_context(request)
40 cop = None
41 if opportunity_id:
42 cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)
43 context['cop'] = cop
44 context['form'] = AddCareerOpportunityForm(instance=cop)
45 else:
46 context['form'] = AddCareerOpportunityForm()
47
48 if request.method == 'POST':
49 if cop:
50 form = AddCareerOpportunityForm(data=request.POST, instance=cop)
51 else:
52 form = AddCareerOpportunityForm(data=request.POST)
53
54 if form.is_valid():
55 form.save()
56 messages.success(request, 'La til ny karrieremulighet')
57 return redirect(index)
58 else:
59 context['form'] = form
60 messages.error(request,
61 'Skjemaet ble ikke korrekt utfylt. Se etter markerte felter for å se hva som gikk galt.')
62
63 return render(request, 'careeropportunity/dashboard/detail.html', context)
64
65
66 @login_required
67 @permission_required('careeropportunity.change_careeropportunity', return_403=True)
68 def delete(request, opportunity_id=None):
69 if not has_access(request):
70 raise PermissionDenied
71
72 cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)
73 cop.delete()
74 messages.success(request, 'Slettet karrieremuligheten')
75 return redirect(index)
76
[end of apps/careeropportunity/dashboard/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/careeropportunity/dashboard/views.py b/apps/careeropportunity/dashboard/views.py
--- a/apps/careeropportunity/dashboard/views.py
+++ b/apps/careeropportunity/dashboard/views.py
@@ -1,4 +1,6 @@
# -*- encoding: utf-8 -*-
+import logging
+
from django.contrib import messages
from django.contrib.auth.decorators import login_required
from django.core.exceptions import PermissionDenied
@@ -32,6 +34,8 @@
@login_required
@permission_required('careeropportunity.change_careeropportunity', return_403=True)
def detail(request, opportunity_id=None):
+ logger = logging.getLogger(__name__)
+ logger.debug('Editing careeropportunity with id: %s' % (opportunity_id))
if not has_access(request):
raise PermissionDenied
@@ -66,6 +70,8 @@
@login_required
@permission_required('careeropportunity.change_careeropportunity', return_403=True)
def delete(request, opportunity_id=None):
+ logger = logging.getLogger(__name__)
+ logger.debug('Deleting careeropportunitywith id: %s' % (opportunity_id))
if not has_access(request):
raise PermissionDenied
| {"golden_diff": "diff --git a/apps/careeropportunity/dashboard/views.py b/apps/careeropportunity/dashboard/views.py\n--- a/apps/careeropportunity/dashboard/views.py\n+++ b/apps/careeropportunity/dashboard/views.py\n@@ -1,4 +1,6 @@\n # -*- encoding: utf-8 -*-\n+import logging\n+\n from django.contrib import messages\n from django.contrib.auth.decorators import login_required\n from django.core.exceptions import PermissionDenied\n@@ -32,6 +34,8 @@\n @login_required\n @permission_required('careeropportunity.change_careeropportunity', return_403=True)\n def detail(request, opportunity_id=None):\n+ logger = logging.getLogger(__name__)\n+ logger.debug('Editing careeropportunity with id: %s' % (opportunity_id))\n \n if not has_access(request):\n raise PermissionDenied\n@@ -66,6 +70,8 @@\n @login_required\n @permission_required('careeropportunity.change_careeropportunity', return_403=True)\n def delete(request, opportunity_id=None):\n+ logger = logging.getLogger(__name__)\n+ logger.debug('Deleting careeropportunitywith id: %s' % (opportunity_id))\n if not has_access(request):\n raise PermissionDenied\n", "issue": "Deleting a Careeropportunity in the dashboard does not actually delete\nWhen trying to delete a career opportunity in the dashboard, it does not actually delete it.\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nfrom django.contrib import messages\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.shortcuts import get_object_or_404, redirect, render\nfrom django.utils import timezone\nfrom guardian.decorators import permission_required\n\nfrom apps.careeropportunity.forms import AddCareerOpportunityForm\nfrom apps.careeropportunity.models import CareerOpportunity\nfrom apps.dashboard.tools import get_base_context, has_access\n\n\n@login_required\n@permission_required('careeropportunity.view_careeropportunity', return_403=True)\ndef index(request):\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n\n # \"cops\" is short for \"careeropportunities\" which is a fucking long word\n # \"cop\" is short for \"careeropportunity\" which also is a fucking long word\n cops = CareerOpportunity.objects.all()\n context['cops'] = cops.filter(end__gte=timezone.now()).order_by('end')\n context['archive'] = cops.filter(end__lte=timezone.now()).order_by('-id')\n\n return render(request, 'careeropportunity/dashboard/index.html', context)\n\n\n@login_required\n@permission_required('careeropportunity.change_careeropportunity', return_403=True)\ndef detail(request, opportunity_id=None):\n\n if not has_access(request):\n raise PermissionDenied\n\n context = get_base_context(request)\n cop = None\n if opportunity_id:\n cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n context['cop'] = cop\n context['form'] = AddCareerOpportunityForm(instance=cop)\n else:\n context['form'] = AddCareerOpportunityForm()\n\n if request.method == 'POST':\n if cop:\n form = AddCareerOpportunityForm(data=request.POST, instance=cop)\n else:\n form = AddCareerOpportunityForm(data=request.POST)\n\n if form.is_valid():\n form.save()\n messages.success(request, 'La til ny karrieremulighet')\n return redirect(index)\n else:\n context['form'] = form\n messages.error(request,\n 'Skjemaet ble ikke korrekt utfylt. 
Se etter markerte felter for \u00e5 se hva som gikk galt.')\n\n return render(request, 'careeropportunity/dashboard/detail.html', context)\n\n\n@login_required\n@permission_required('careeropportunity.change_careeropportunity', return_403=True)\ndef delete(request, opportunity_id=None):\n if not has_access(request):\n raise PermissionDenied\n\n cop = get_object_or_404(CareerOpportunity, pk=opportunity_id)\n cop.delete()\n messages.success(request, 'Slettet karrieremuligheten')\n return redirect(index)\n", "path": "apps/careeropportunity/dashboard/views.py"}]} | 1,340 | 269 |
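Note that the accepted patch above does not change the deletion logic itself; it instruments both views with module-level debug logging so a failing delete can be traced. A standalone sketch of that logging pattern, outside Django and with purely illustrative names, assuming only the standard library:

```python
import logging


def delete(opportunity_id=None):
    # Mirrors the pattern added in the patch: fetch a logger at the call site.
    logger = logging.getLogger(__name__)
    logger.debug("Deleting career opportunity with id: %s", opportunity_id)
    # ... the actual delete would happen here ...


if __name__ == "__main__":
    # Debug records are discarded unless logging is configured to show them.
    logging.basicConfig(level=logging.DEBUG)
    delete(opportunity_id=42)
```

The sketch uses logging's lazy `%s` arguments rather than the `%` string interpolation in the patch; both produce the same message here.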
gh_patches_debug_13184 | rasdani/github-patches | git_diff | huggingface__huggingface_hub-1218 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AttributeError: 'NoneType' object has no attribute 'split'
### Describe the bug
When I try to log into my account with the token via `huggingface-cli login`, I get this error:
```
Exception in thread Thread-1 (_readerthread):
Traceback (most recent call last):
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
self.run()
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1494, in _readerthread
buffer.append(fh.read())
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xee in position 387: invalid continuation byte
Traceback (most recent call last):
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\Scripts\huggingface-cli.exe\__main__.py", line 7, in <module>
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\commands\huggingface_cli.py", line 47, in main
service.run()
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\commands\user.py", line 117, in run
login()
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\_login.py", line 91, in login
interpreter_login()
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\_login.py", line 137, in interpreter_login
_login(token=token, add_to_git_credential=add_to_git_credential)
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\_login.py", line 231, in _login
if _is_git_credential_helper_configured():
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\_login.py", line 251, in _is_git_credential_helper_configured
helpers = list_credential_helpers()
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\site-packages\huggingface_hub\utils\_git_credential.py", line 44, in list_credential_helpers
for line in output.split("\n")
AttributeError: 'NoneType' object has no attribute 'split'
```
Also, when using `huggingface-cli env`, I got this error. I dunno if it's related, but here it is anyway:
```
Exception in thread Thread-1 (_readerthread):
Traceback (most recent call last):
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1009, in _bootstrap_inner
self.run()
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\threading.py", line 946, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1494, in _readerthread
buffer.append(fh.read())
File "C:\Users\mikwee\AppData\Local\Programs\Python\Python310\lib\codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xee in position 387: invalid continuation byte
```
### Reproduction
The commands I've used:
```
git clone https://github.com/harishanand95/diffusers.git
cd diffusers && git checkout dml && pip install -e .
pip install transformers ftfy scipy
pip install ort_nightly_directml-1.13.0.dev20220901005-cp310-cp310-win_amd64.whl
cd ./diffusers/examples/inference
huggingface-cli login
```
### Logs
_No response_
### System info
```shell
- huggingface_hub version: 0.11.0
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.10.4
- Running in iPython ?: No
- Running in notebook ?: No
- Running in Google Colab ?: No
- Token path ?: C:\Users\mikwee\.huggingface\token
- Has saved token ?: False
- FastAI: N/A
- Tensorflow: N/A
- Torch: 1.13.0
- Jinja2: N/A
- Graphviz: N/A
- Pydot: N/A
```
</issue>
<code>
[start of src/huggingface_hub/utils/_subprocess.py]
1 #!/usr/bin/env python
2 # coding=utf-8
3 # Copyright 2021 The HuggingFace Inc. team. All rights reserved.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License
16 """Contains utilities to easily handle subprocesses in `huggingface_hub`."""
17 import os
18 import subprocess
19 from contextlib import contextmanager
20 from pathlib import Path
21 from typing import IO, Generator, List, Optional, Tuple, Union
22
23 from .logging import get_logger
24
25
26 logger = get_logger(__name__)
27
28
29 def run_subprocess(
30 command: Union[str, List[str]],
31 folder: Optional[Union[str, Path]] = None,
32 check=True,
33 **kwargs,
34 ) -> subprocess.CompletedProcess:
35 """
36 Method to run subprocesses. Calling this will capture the `stderr` and `stdout`,
37 please call `subprocess.run` manually in case you would like for them not to
38 be captured.
39
40 Args:
41 command (`str` or `List[str]`):
42 The command to execute as a string or list of strings.
43 folder (`str`, *optional*):
44 The folder in which to run the command. Defaults to current working
45 directory (from `os.getcwd()`).
46 check (`bool`, *optional*, defaults to `True`):
47 Setting `check` to `True` will raise a `subprocess.CalledProcessError`
48 when the subprocess has a non-zero exit code.
49 kwargs (`Dict[str]`):
50 Keyword arguments to be passed to the `subprocess.run` underlying command.
51
52 Returns:
53 `subprocess.CompletedProcess`: The completed process.
54 """
55 if isinstance(command, str):
56 command = command.split()
57
58 if isinstance(folder, Path):
59 folder = str(folder)
60
61 return subprocess.run(
62 command,
63 stderr=subprocess.PIPE,
64 stdout=subprocess.PIPE,
65 check=check,
66 encoding="utf-8",
67 cwd=folder or os.getcwd(),
68 **kwargs,
69 )
70
71
72 @contextmanager
73 def run_interactive_subprocess(
74 command: Union[str, List[str]],
75 folder: Optional[Union[str, Path]] = None,
76 **kwargs,
77 ) -> Generator[Tuple[IO[str], IO[str]], None, None]:
78 """Run a subprocess in an interactive mode in a context manager.
79
80 Args:
81 command (`str` or `List[str]`):
82 The command to execute as a string or list of strings.
83 folder (`str`, *optional*):
84 The folder in which to run the command. Defaults to current working
85 directory (from `os.getcwd()`).
86 kwargs (`Dict[str]`):
87 Keyword arguments to be passed to the `subprocess.run` underlying command.
88
89 Returns:
90 `Tuple[IO[str], IO[str]]`: A tuple with `stdin` and `stdout` to interact
91 with the process (input and output are utf-8 encoded).
92
93 Example:
94 ```python
95 with _interactive_subprocess("git credential-store get") as (stdin, stdout):
96 # Write to stdin
97 stdin.write("url=hf.co\nusername=obama\n".encode("utf-8"))
98 stdin.flush()
99
100 # Read from stdout
101 output = stdout.read().decode("utf-8")
102 ```
103 """
104 if isinstance(command, str):
105 command = command.split()
106
107 with subprocess.Popen(
108 command,
109 stdin=subprocess.PIPE,
110 stdout=subprocess.PIPE,
111 stderr=subprocess.STDOUT,
112 encoding="utf-8",
113 cwd=folder or os.getcwd(),
114 **kwargs,
115 ) as process:
116 assert process.stdin is not None, "subprocess is opened as subprocess.PIPE"
117 assert process.stdout is not None, "subprocess is opened as subprocess.PIPE"
118 yield process.stdin, process.stdout
119
[end of src/huggingface_hub/utils/_subprocess.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/huggingface_hub/utils/_subprocess.py b/src/huggingface_hub/utils/_subprocess.py
--- a/src/huggingface_hub/utils/_subprocess.py
+++ b/src/huggingface_hub/utils/_subprocess.py
@@ -64,6 +64,7 @@
stdout=subprocess.PIPE,
check=check,
encoding="utf-8",
+ errors="replace", # if not utf-8, replace char by �
cwd=folder or os.getcwd(),
**kwargs,
)
@@ -110,6 +111,7 @@
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
encoding="utf-8",
+ errors="replace", # if not utf-8, replace char by �
cwd=folder or os.getcwd(),
**kwargs,
) as process:
| {"golden_diff": "diff --git a/src/huggingface_hub/utils/_subprocess.py b/src/huggingface_hub/utils/_subprocess.py\n--- a/src/huggingface_hub/utils/_subprocess.py\n+++ b/src/huggingface_hub/utils/_subprocess.py\n@@ -64,6 +64,7 @@\n stdout=subprocess.PIPE,\n check=check,\n encoding=\"utf-8\",\n+ errors=\"replace\", # if not utf-8, replace char by \ufffd\n cwd=folder or os.getcwd(),\n **kwargs,\n )\n@@ -110,6 +111,7 @@\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n encoding=\"utf-8\",\n+ errors=\"replace\", # if not utf-8, replace char by \ufffd\n cwd=folder or os.getcwd(),\n **kwargs,\n ) as process:\n", "issue": "AttributeError: 'NoneType' object has no attribute 'split'\n### Describe the bug\r\n\r\nWhen I try to log into my account with the token via `huggingface-cli login`, I get this error:\r\n```\r\nException in thread Thread-1 (_readerthread):\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\threading.py\", line 1009, in _bootstrap_inner\r\n self.run()\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\threading.py\", line 946, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\subprocess.py\", line 1494, in _readerthread\r\n buffer.append(fh.read())\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xee in position 387: invalid continuation byte\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\Scripts\\huggingface-cli.exe\\__main__.py\", line 7, in <module>\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\commands\\huggingface_cli.py\", line 47, in main\r\n service.run()\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\commands\\user.py\", line 117, in run\r\n login()\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\_login.py\", line 91, in login\r\n interpreter_login()\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\_login.py\", line 137, in interpreter_login\r\n _login(token=token, add_to_git_credential=add_to_git_credential)\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\_login.py\", line 231, in _login\r\n if _is_git_credential_helper_configured():\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\_login.py\", line 251, in _is_git_credential_helper_configured\r\n helpers = list_credential_helpers()\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\huggingface_hub\\utils\\_git_credential.py\", line 44, in list_credential_helpers\r\n for line in output.split(\"\\n\")\r\nAttributeError: 'NoneType' object has no attribute 
'split'\r\n```\r\nAlso, when using `huggingface-cli env`, I got this error. I dunno if it's related, but here it is anyway:\r\n```\r\nException in thread Thread-1 (_readerthread):\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\threading.py\", line 1009, in _bootstrap_inner\r\n self.run()\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\threading.py\", line 946, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\subprocess.py\", line 1494, in _readerthread\r\n buffer.append(fh.read())\r\n File \"C:\\Users\\mikwee\\AppData\\Local\\Programs\\Python\\Python310\\lib\\codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xee in position 387: invalid continuation byte\r\n```\r\n\r\n### Reproduction\r\n\r\nThe commands I've used:\r\n```\r\ngit clone https://github.com/harishanand95/diffusers.git\r\ncd diffusers && git checkout dml && pip install -e .\r\npip install transformers ftfy scipy\r\npip install ort_nightly_directml-1.13.0.dev20220901005-cp310-cp310-win_amd64.whl\r\ncd ./diffusers/examples/inference\r\nhuggingface-cli login\r\n```\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### System info\r\n\r\n```shell\r\n- huggingface_hub version: 0.11.0\r\n- Platform: Windows-10-10.0.19044-SP0\r\n- Python version: 3.10.4\r\n- Running in iPython ?: No\r\n- Running in notebook ?: No\r\n- Running in Google Colab ?: No\r\n- Token path ?: C:\\Users\\mikwee\\.huggingface\\token\r\n- Has saved token ?: False\r\n- FastAI: N/A\r\n- Tensorflow: N/A\r\n- Torch: 1.13.0\r\n- Jinja2: N/A\r\n- Graphviz: N/A\r\n- Pydot: N/A\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# coding=utf-8\n# Copyright 2021 The HuggingFace Inc. team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License\n\"\"\"Contains utilities to easily handle subprocesses in `huggingface_hub`.\"\"\"\nimport os\nimport subprocess\nfrom contextlib import contextmanager\nfrom pathlib import Path\nfrom typing import IO, Generator, List, Optional, Tuple, Union\n\nfrom .logging import get_logger\n\n\nlogger = get_logger(__name__)\n\n\ndef run_subprocess(\n command: Union[str, List[str]],\n folder: Optional[Union[str, Path]] = None,\n check=True,\n **kwargs,\n) -> subprocess.CompletedProcess:\n \"\"\"\n Method to run subprocesses. Calling this will capture the `stderr` and `stdout`,\n please call `subprocess.run` manually in case you would like for them not to\n be captured.\n\n Args:\n command (`str` or `List[str]`):\n The command to execute as a string or list of strings.\n folder (`str`, *optional*):\n The folder in which to run the command. 
Defaults to current working\n directory (from `os.getcwd()`).\n check (`bool`, *optional*, defaults to `True`):\n Setting `check` to `True` will raise a `subprocess.CalledProcessError`\n when the subprocess has a non-zero exit code.\n kwargs (`Dict[str]`):\n Keyword arguments to be passed to the `subprocess.run` underlying command.\n\n Returns:\n `subprocess.CompletedProcess`: The completed process.\n \"\"\"\n if isinstance(command, str):\n command = command.split()\n\n if isinstance(folder, Path):\n folder = str(folder)\n\n return subprocess.run(\n command,\n stderr=subprocess.PIPE,\n stdout=subprocess.PIPE,\n check=check,\n encoding=\"utf-8\",\n cwd=folder or os.getcwd(),\n **kwargs,\n )\n\n\n@contextmanager\ndef run_interactive_subprocess(\n command: Union[str, List[str]],\n folder: Optional[Union[str, Path]] = None,\n **kwargs,\n) -> Generator[Tuple[IO[str], IO[str]], None, None]:\n \"\"\"Run a subprocess in an interactive mode in a context manager.\n\n Args:\n command (`str` or `List[str]`):\n The command to execute as a string or list of strings.\n folder (`str`, *optional*):\n The folder in which to run the command. Defaults to current working\n directory (from `os.getcwd()`).\n kwargs (`Dict[str]`):\n Keyword arguments to be passed to the `subprocess.run` underlying command.\n\n Returns:\n `Tuple[IO[str], IO[str]]`: A tuple with `stdin` and `stdout` to interact\n with the process (input and output are utf-8 encoded).\n\n Example:\n ```python\n with _interactive_subprocess(\"git credential-store get\") as (stdin, stdout):\n # Write to stdin\n stdin.write(\"url=hf.co\\nusername=obama\\n\".encode(\"utf-8\"))\n stdin.flush()\n\n # Read from stdout\n output = stdout.read().decode(\"utf-8\")\n ```\n \"\"\"\n if isinstance(command, str):\n command = command.split()\n\n with subprocess.Popen(\n command,\n stdin=subprocess.PIPE,\n stdout=subprocess.PIPE,\n stderr=subprocess.STDOUT,\n encoding=\"utf-8\",\n cwd=folder or os.getcwd(),\n **kwargs,\n ) as process:\n assert process.stdin is not None, \"subprocess is opened as subprocess.PIPE\"\n assert process.stdout is not None, \"subprocess is opened as subprocess.PIPE\"\n yield process.stdin, process.stdout\n", "path": "src/huggingface_hub/utils/_subprocess.py"}]} | 3,142 | 183 |
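The fix above adds `errors="replace"` next to `encoding="utf-8"`, so captured git output containing bytes that are not valid UTF-8 (such as the `0xee` byte in the traceback) decodes to U+FFFD instead of raising `UnicodeDecodeError` in the reader thread. A minimal reproduction sketch, independent of `huggingface_hub` and using only the standard library:

```python
import subprocess
import sys

# Child process that emits a byte sequence which is not valid UTF-8 (0xee, as in the report).
child = [
    sys.executable,
    "-c",
    "import sys; sys.stdout.buffer.write(b'credential helper \\xee output\\n')",
]

completed = subprocess.run(
    child,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    check=True,
    encoding="utf-8",
    errors="replace",  # undecodable bytes become U+FFFD instead of crashing the reader thread
)
print(completed.stdout)  # -> "credential helper � output"
```

Dropping the `errors` argument makes the same call raise the `UnicodeDecodeError` from the issue, because text-mode pipes default to strict decoding.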
gh_patches_debug_5191 | rasdani/github-patches | git_diff | nf-core__tools-1333 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Space missing in tip message for --fix files_unchanged
<!--
# nf-core/tools bug report
Hi there!
Thanks for telling us about a problem with the nf-core/tools package.
Please delete this text and anything that's not relevant from the template below:
-->
## Description of the bug
a space is missing before `--fix files_unchanged`
```
Tip: Some of these linting errors can automatically be resolved with the
following command:
nf-core lint --dir /home/runner/work/rnavar/rnavar--fix files_unchanged
```
## Steps to reproduce
https://github.com/nf-core/rnavar/runs/4317868056?check_suite_focus=true#step:6:100
## Expected behaviour
<!-- A clear and concise description of what you expected to happen. -->
## System
- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->
- Executor: <!-- [e.g. slurm, local, awsbatch...] -->
- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->
- Version of nf-core/tools: <!-- [e.g. 1.1, 1.5, 1.8.2...] -->
- Python version: <!-- [e.g. 3.7, 3.8...] -->
## Nextflow Installation
- Version: <!-- [e.g. 19.10.0] -->
## Additional context
<!-- Add any other context about the problem here. -->
</issue>
<code>
[start of nf_core/lint_utils.py]
1 import rich
2 from rich.console import Console
3 from rich.table import Table
4 import logging
5
6 import nf_core.utils
7
8 log = logging.getLogger(__name__)
9
10 # Create a console used by all lint tests
11 console = Console(force_terminal=nf_core.utils.rich_force_colors())
12
13
14 def print_joint_summary(lint_obj, module_lint_obj):
15 """Print a joint summary of the general pipe lint tests and the module lint tests"""
16 nbr_passed = len(lint_obj.passed) + len(module_lint_obj.passed)
17 nbr_ignored = len(lint_obj.ignored)
18 nbr_fixed = len(lint_obj.fixed)
19 nbr_warned = len(lint_obj.warned) + len(module_lint_obj.warned)
20 nbr_failed = len(lint_obj.failed) + len(module_lint_obj.failed)
21
22 def _s(some_length):
23 return "" if some_length == 1 else "s"
24
25 summary_colour = "red" if nbr_failed > 0 else "green"
26 table = Table(box=rich.box.ROUNDED, style=summary_colour)
27 table.add_column(f"LINT RESULTS SUMMARY".format(nbr_passed), no_wrap=True)
28 table.add_row(r"[green][✔] {:>3} Test{} Passed".format(nbr_passed, _s(nbr_passed)))
29 if nbr_fixed:
30 table.add_row(r"[bright blue][?] {:>3} Test{} Fixed".format(nbr_fixed, _s(nbr_fixed)))
31 table.add_row(r"[grey58][?] {:>3} Test{} Ignored".format(nbr_ignored, _s(nbr_ignored)))
32 table.add_row(r"[yellow][!] {:>3} Test Warning{}".format(nbr_warned, _s(nbr_warned)))
33 table.add_row(r"[red][✗] {:>3} Test{} Failed".format(nbr_failed, _s(nbr_failed)))
34 console.print(table)
35
36
37 def print_fixes(lint_obj, module_lint_obj):
38 """Prints available and applied fixes"""
39
40 if len(lint_obj.could_fix):
41 fix_cmd = "nf-core lint {}--fix {}".format(
42 "" if lint_obj.wf_path == "." else f"--dir {lint_obj.wf_path}", " --fix ".join(lint_obj.could_fix)
43 )
44 console.print(
45 f"\nTip: Some of these linting errors can automatically be resolved with the following command:\n\n[blue] {fix_cmd}\n"
46 )
47 if len(lint_obj.fix):
48 console.print(
49 "Automatic fixes applied. Please check with 'git diff' and revert any changes you do not want with 'git checkout <file>'."
50 )
51
[end of nf_core/lint_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nf_core/lint_utils.py b/nf_core/lint_utils.py
--- a/nf_core/lint_utils.py
+++ b/nf_core/lint_utils.py
@@ -38,7 +38,7 @@
"""Prints available and applied fixes"""
if len(lint_obj.could_fix):
- fix_cmd = "nf-core lint {}--fix {}".format(
+ fix_cmd = "nf-core lint {} --fix {}".format(
"" if lint_obj.wf_path == "." else f"--dir {lint_obj.wf_path}", " --fix ".join(lint_obj.could_fix)
)
console.print(
| {"golden_diff": "diff --git a/nf_core/lint_utils.py b/nf_core/lint_utils.py\n--- a/nf_core/lint_utils.py\n+++ b/nf_core/lint_utils.py\n@@ -38,7 +38,7 @@\n \"\"\"Prints available and applied fixes\"\"\"\n \n if len(lint_obj.could_fix):\n- fix_cmd = \"nf-core lint {}--fix {}\".format(\n+ fix_cmd = \"nf-core lint {} --fix {}\".format(\n \"\" if lint_obj.wf_path == \".\" else f\"--dir {lint_obj.wf_path}\", \" --fix \".join(lint_obj.could_fix)\n )\n console.print(\n", "issue": "Space missing in tip message for --fix files_unchanged\n<!--\r\n# nf-core/tools bug report\r\n\r\nHi there!\r\n\r\nThanks for telling us about a problem with the nf-core/tools package.\r\nPlease delete this text and anything that's not relevant from the template below:\r\n-->\r\n\r\n## Description of the bug\r\n\r\na space is missing before `--fix files_unchanged`\r\n\r\n```\r\nTip: Some of these linting errors can automatically be resolved with the \r\nfollowing command:\r\n\r\n nf-core lint --dir /home/runner/work/rnavar/rnavar--fix files_unchanged\r\n```\r\n\r\n## Steps to reproduce\r\n\r\nhttps://github.com/nf-core/rnavar/runs/4317868056?check_suite_focus=true#step:6:100\r\n\r\n## Expected behaviour\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n## System\r\n\r\n- Hardware: <!-- [e.g. HPC, Desktop, Cloud...] -->\r\n- Executor: <!-- [e.g. slurm, local, awsbatch...] -->\r\n- OS: <!-- [e.g. CentOS Linux, macOS, Linux Mint...] -->\r\n- Version of nf-core/tools: <!-- [e.g. 1.1, 1.5, 1.8.2...] -->\r\n- Python version: <!-- [e.g. 3.7, 3.8...] -->\r\n\r\n## Nextflow Installation\r\n\r\n- Version: <!-- [e.g. 19.10.0] -->\r\n\r\n## Additional context\r\n\r\n<!-- Add any other context about the problem here. -->\r\n\n", "before_files": [{"content": "import rich\nfrom rich.console import Console\nfrom rich.table import Table\nimport logging\n\nimport nf_core.utils\n\nlog = logging.getLogger(__name__)\n\n# Create a console used by all lint tests\nconsole = Console(force_terminal=nf_core.utils.rich_force_colors())\n\n\ndef print_joint_summary(lint_obj, module_lint_obj):\n \"\"\"Print a joint summary of the general pipe lint tests and the module lint tests\"\"\"\n nbr_passed = len(lint_obj.passed) + len(module_lint_obj.passed)\n nbr_ignored = len(lint_obj.ignored)\n nbr_fixed = len(lint_obj.fixed)\n nbr_warned = len(lint_obj.warned) + len(module_lint_obj.warned)\n nbr_failed = len(lint_obj.failed) + len(module_lint_obj.failed)\n\n def _s(some_length):\n return \"\" if some_length == 1 else \"s\"\n\n summary_colour = \"red\" if nbr_failed > 0 else \"green\"\n table = Table(box=rich.box.ROUNDED, style=summary_colour)\n table.add_column(f\"LINT RESULTS SUMMARY\".format(nbr_passed), no_wrap=True)\n table.add_row(r\"[green][\u2714] {:>3} Test{} Passed\".format(nbr_passed, _s(nbr_passed)))\n if nbr_fixed:\n table.add_row(r\"[bright blue][?] {:>3} Test{} Fixed\".format(nbr_fixed, _s(nbr_fixed)))\n table.add_row(r\"[grey58][?] {:>3} Test{} Ignored\".format(nbr_ignored, _s(nbr_ignored)))\n table.add_row(r\"[yellow][!] 
{:>3} Test Warning{}\".format(nbr_warned, _s(nbr_warned)))\n table.add_row(r\"[red][\u2717] {:>3} Test{} Failed\".format(nbr_failed, _s(nbr_failed)))\n console.print(table)\n\n\ndef print_fixes(lint_obj, module_lint_obj):\n \"\"\"Prints available and applied fixes\"\"\"\n\n if len(lint_obj.could_fix):\n fix_cmd = \"nf-core lint {}--fix {}\".format(\n \"\" if lint_obj.wf_path == \".\" else f\"--dir {lint_obj.wf_path}\", \" --fix \".join(lint_obj.could_fix)\n )\n console.print(\n f\"\\nTip: Some of these linting errors can automatically be resolved with the following command:\\n\\n[blue] {fix_cmd}\\n\"\n )\n if len(lint_obj.fix):\n console.print(\n \"Automatic fixes applied. Please check with 'git diff' and revert any changes you do not want with 'git checkout <file>'.\"\n )\n", "path": "nf_core/lint_utils.py"}]} | 1,529 | 141 |
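The one-character change above is easiest to see with the values from the failing run. A small sketch with plain `str.format`, using the repository path from the report:

```python
wf_path = "/home/runner/work/rnavar/rnavar"
dir_part = "" if wf_path == "." else "--dir {}".format(wf_path)

before = "nf-core lint {}--fix {}".format(dir_part, "files_unchanged")
after = "nf-core lint {} --fix {}".format(dir_part, "files_unchanged")

print(before)  # nf-core lint --dir /home/runner/work/rnavar/rnavar--fix files_unchanged
print(after)   # nf-core lint --dir /home/runner/work/rnavar/rnavar --fix files_unchanged
```

When linting from inside the pipeline directory the first placeholder is empty and the extra space is harmless, which is why the missing space only shows up together with `--dir`.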
gh_patches_debug_15169 | rasdani/github-patches | git_diff | bookwyrm-social__bookwyrm-1171 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pending import csv lines displayed under "Successful" title until tried
Importing a CSV into Bookwyrm shows titles being "successfully imported" but they do not show up in the library.
Here's screenshots of the import results, neither the successful nor the failed imports seem to show up:


[Attached is the file which I attempted to import.](https://github.com/bookwyrm-social/bookwyrm/files/6523421/Tomat0.s.Library.csv)
</issue>
<code>
[start of bookwyrm/views/import_data.py]
1 """ import books from another app """
2 from io import TextIOWrapper
3
4 from django.contrib.auth.decorators import login_required
5 from django.core.exceptions import PermissionDenied
6 from django.http import HttpResponseBadRequest
7 from django.shortcuts import get_object_or_404, redirect
8 from django.template.response import TemplateResponse
9 from django.utils.decorators import method_decorator
10 from django.utils.translation import gettext_lazy as _
11 from django.views import View
12
13 from bookwyrm import forms, models
14 from bookwyrm.importers import (
15 Importer,
16 LibrarythingImporter,
17 GoodreadsImporter,
18 StorygraphImporter,
19 )
20 from bookwyrm.tasks import app
21
22 # pylint: disable= no-self-use
23 @method_decorator(login_required, name="dispatch")
24 class Import(View):
25 """import view"""
26
27 def get(self, request):
28 """load import page"""
29 return TemplateResponse(
30 request,
31 "import.html",
32 {
33 "import_form": forms.ImportForm(),
34 "jobs": models.ImportJob.objects.filter(user=request.user).order_by(
35 "-created_date"
36 ),
37 },
38 )
39
40 def post(self, request):
41 """ingest a goodreads csv"""
42 form = forms.ImportForm(request.POST, request.FILES)
43 if form.is_valid():
44 include_reviews = request.POST.get("include_reviews") == "on"
45 privacy = request.POST.get("privacy")
46 source = request.POST.get("source")
47
48 importer = None
49 if source == "LibraryThing":
50 importer = LibrarythingImporter()
51 elif source == "Storygraph":
52 importer = StorygraphImporter()
53 else:
54 # Default : GoodReads
55 importer = GoodreadsImporter()
56
57 try:
58 job = importer.create_job(
59 request.user,
60 TextIOWrapper(
61 request.FILES["csv_file"], encoding=importer.encoding
62 ),
63 include_reviews,
64 privacy,
65 )
66 except (UnicodeDecodeError, ValueError, KeyError):
67 return HttpResponseBadRequest(_("Not a valid csv file"))
68
69 importer.start_import(job)
70
71 return redirect("/import/%d" % job.id)
72 return HttpResponseBadRequest()
73
74
75 @method_decorator(login_required, name="dispatch")
76 class ImportStatus(View):
77 """status of an existing import"""
78
79 def get(self, request, job_id):
80 """status of an import job"""
81 job = models.ImportJob.objects.get(id=job_id)
82 if job.user != request.user:
83 raise PermissionDenied
84 try:
85 task = app.AsyncResult(job.task_id)
86 except ValueError:
87 task = None
88 items = job.items.order_by("index").all()
89 failed_items = [i for i in items if i.fail_reason]
90 items = [i for i in items if not i.fail_reason]
91 return TemplateResponse(
92 request,
93 "import_status.html",
94 {"job": job, "items": items, "failed_items": failed_items, "task": task},
95 )
96
97 def post(self, request, job_id):
98 """retry lines from an import"""
99 job = get_object_or_404(models.ImportJob, id=job_id)
100 items = []
101 for item in request.POST.getlist("import_item"):
102 items.append(get_object_or_404(models.ImportItem, id=item))
103
104 importer = Importer()
105 job = importer.create_retry_job(
106 request.user,
107 job,
108 items,
109 )
110 importer.start_import(job)
111 return redirect("/import/%d" % job.id)
112
[end of bookwyrm/views/import_data.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bookwyrm/views/import_data.py b/bookwyrm/views/import_data.py
--- a/bookwyrm/views/import_data.py
+++ b/bookwyrm/views/import_data.py
@@ -78,13 +78,15 @@
def get(self, request, job_id):
"""status of an import job"""
- job = models.ImportJob.objects.get(id=job_id)
+ job = get_object_or_404(models.ImportJob, id=job_id)
if job.user != request.user:
raise PermissionDenied
+
try:
task = app.AsyncResult(job.task_id)
except ValueError:
task = None
+
items = job.items.order_by("index").all()
failed_items = [i for i in items if i.fail_reason]
items = [i for i in items if not i.fail_reason]
| {"golden_diff": "diff --git a/bookwyrm/views/import_data.py b/bookwyrm/views/import_data.py\n--- a/bookwyrm/views/import_data.py\n+++ b/bookwyrm/views/import_data.py\n@@ -78,13 +78,15 @@\n \n def get(self, request, job_id):\n \"\"\"status of an import job\"\"\"\n- job = models.ImportJob.objects.get(id=job_id)\n+ job = get_object_or_404(models.ImportJob, id=job_id)\n if job.user != request.user:\n raise PermissionDenied\n+\n try:\n task = app.AsyncResult(job.task_id)\n except ValueError:\n task = None\n+\n items = job.items.order_by(\"index\").all()\n failed_items = [i for i in items if i.fail_reason]\n items = [i for i in items if not i.fail_reason]\n", "issue": "Pending import csv lines displayed under \"Successful\" title until tried\nImporting a CSV into Bookwyrm shows titles being \"successfully imported\" but they do not show up in the library.\r\n\r\nHere's screenshots of the import results, neither the successful nor the failed imports seem to show up:\r\n\r\n\r\n\r\n\r\n[Attached is the file which I attempted to import.](https://github.com/bookwyrm-social/bookwyrm/files/6523421/Tomat0.s.Library.csv)\r\n\r\n\n", "before_files": [{"content": "\"\"\" import books from another app \"\"\"\nfrom io import TextIOWrapper\n\nfrom django.contrib.auth.decorators import login_required\nfrom django.core.exceptions import PermissionDenied\nfrom django.http import HttpResponseBadRequest\nfrom django.shortcuts import get_object_or_404, redirect\nfrom django.template.response import TemplateResponse\nfrom django.utils.decorators import method_decorator\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import View\n\nfrom bookwyrm import forms, models\nfrom bookwyrm.importers import (\n Importer,\n LibrarythingImporter,\n GoodreadsImporter,\n StorygraphImporter,\n)\nfrom bookwyrm.tasks import app\n\n# pylint: disable= no-self-use\n@method_decorator(login_required, name=\"dispatch\")\nclass Import(View):\n \"\"\"import view\"\"\"\n\n def get(self, request):\n \"\"\"load import page\"\"\"\n return TemplateResponse(\n request,\n \"import.html\",\n {\n \"import_form\": forms.ImportForm(),\n \"jobs\": models.ImportJob.objects.filter(user=request.user).order_by(\n \"-created_date\"\n ),\n },\n )\n\n def post(self, request):\n \"\"\"ingest a goodreads csv\"\"\"\n form = forms.ImportForm(request.POST, request.FILES)\n if form.is_valid():\n include_reviews = request.POST.get(\"include_reviews\") == \"on\"\n privacy = request.POST.get(\"privacy\")\n source = request.POST.get(\"source\")\n\n importer = None\n if source == \"LibraryThing\":\n importer = LibrarythingImporter()\n elif source == \"Storygraph\":\n importer = StorygraphImporter()\n else:\n # Default : GoodReads\n importer = GoodreadsImporter()\n\n try:\n job = importer.create_job(\n request.user,\n TextIOWrapper(\n request.FILES[\"csv_file\"], encoding=importer.encoding\n ),\n include_reviews,\n privacy,\n )\n except (UnicodeDecodeError, ValueError, KeyError):\n return HttpResponseBadRequest(_(\"Not a valid csv file\"))\n\n importer.start_import(job)\n\n return redirect(\"/import/%d\" % job.id)\n return HttpResponseBadRequest()\n\n\n@method_decorator(login_required, name=\"dispatch\")\nclass ImportStatus(View):\n \"\"\"status of an existing import\"\"\"\n\n def get(self, request, job_id):\n \"\"\"status of an import job\"\"\"\n job = models.ImportJob.objects.get(id=job_id)\n if job.user != request.user:\n raise PermissionDenied\n try:\n task = app.AsyncResult(job.task_id)\n except ValueError:\n task = None\n items = 
job.items.order_by(\"index\").all()\n failed_items = [i for i in items if i.fail_reason]\n items = [i for i in items if not i.fail_reason]\n return TemplateResponse(\n request,\n \"import_status.html\",\n {\"job\": job, \"items\": items, \"failed_items\": failed_items, \"task\": task},\n )\n\n def post(self, request, job_id):\n \"\"\"retry lines from an import\"\"\"\n job = get_object_or_404(models.ImportJob, id=job_id)\n items = []\n for item in request.POST.getlist(\"import_item\"):\n items.append(get_object_or_404(models.ImportItem, id=item))\n\n importer = Importer()\n job = importer.create_retry_job(\n request.user,\n job,\n items,\n )\n importer.start_import(job)\n return redirect(\"/import/%d\" % job.id)\n", "path": "bookwyrm/views/import_data.py"}]} | 1,625 | 186 |
gh_patches_debug_16337 | rasdani/github-patches | git_diff | ansible__ansible-31514 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[rpm_key] When no key is installed module fail to install any new key
<!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
<!--- Name of the module/plugin/task/feature -->
rpm_key
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.4.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/lbednar/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/lbednar/work/kubevirt-org/kubevirt-ansible/E/lib/python2.7/site-packages/ansible
executable location = /home/lbednar/work/kubevirt-org/kubevirt-ansible/E/bin/ansible
python version = 2.7.13 (default, May 10 2017, 20:04:28) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]
```
##### CONFIGURATION
<!---
If using Ansible 2.4 or above, paste the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).
-->
```
DEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/home/lbednar/work/kubevirt-org/kubevirt-ansible/galaxy-roles']
```
##### OS / ENVIRONMENT
<!---
Mention the OS you are running Ansible from, and the OS you are
managing, or say "N/A" for anything that is not platform-specific.
Also mention the specific version of what you are trying to control,
e.g. if this is a network bug the version of firmware on the network device.
-->
Target system CentOS 7.3
##### SUMMARY
<!--- Explain the problem briefly -->
When running `rpm_key` to add a new key on a system that doesn't have any keys installed yet (I want to install the first key), the module fails to add the new key while executing the following line:
https://github.com/ansible/ansible/blob/e609618274db6a7e3c273abde457f53de8c9976c/lib/ansible/modules/packaging/os/rpm_key.py#L173
the command fails with:
```
$ /usr/bin/rpm -q gpg-pubkey --qf "%{description}"
package gpg-pubkey is not installed
```
and then the following command in the shell pipe fails with
```
$ /usr/bin/gpg --no-tty --batch --with-colons --fixed-list-mode -
gpg: no valid OpenPGP data found.
gpg: processing message failed: Unknown system error
```
At this point the module stops execution and fails.
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
**You need to make sure that you don't have any RPM KEY installed yet.**
This issue is reproducible only when you are adding the first rpm key.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: import rpm keys
rpm_key:
state: present
key: "{{ item }}"
with_items:
- "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg"
```
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
I would like to get rpm keys imported.
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
No key is added and I get the following output instead.
<!--- Paste verbatim command output between quotes below -->
```
failed: [vm-69-15.qa.lab.tlv.redhat.com] (item=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg) => {"changed": false, "failed": true, "item": "https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg", "msg": "gpg: no valid OpenPGP data found.\ngpg: processing message failed: Unknown system error\n"}
```
</issue>
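A standalone sketch of the failure mode described above (illustrative only, not the module's code): when no `gpg-pubkey` package exists, `rpm -q gpg-pubkey` prints "package gpg-pubkey is not installed" and exits non-zero, so piping its output into `gpg` can only fail. Checking the return code before building the pipe avoids that:

```python
# Illustrative sketch only; the binary path is an assumption.
import subprocess

def any_rpm_key_installed(rpm_bin="/usr/bin/rpm"):
    """Return True only if at least one gpg-pubkey package is in the rpm db."""
    proc = subprocess.run(
        [rpm_bin, "-q", "gpg-pubkey"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    # A key-less system makes rpm exit non-zero, so there is nothing
    # meaningful to feed into gpg.
    return proc.returncode == 0
```

The accepted fix shown later in this entry applies the same guard inside `is_key_imported()` before appending the `--qf ... | gpg ...` part of the command.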
<code>
[start of lib/ansible/modules/packaging/os/rpm_key.py]
1 #!/usr/bin/python
2 # -*- coding: utf-8 -*-
3
4 # Ansible module to import third party repo keys to your rpm db
5 # (c) 2013, Héctor Acosta <[email protected]>
6 #
7 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
8
9 from __future__ import absolute_import, division, print_function
10 __metaclass__ = type
11
12
13 ANSIBLE_METADATA = {'metadata_version': '1.1',
14 'status': ['preview'],
15 'supported_by': 'core'}
16
17
18 DOCUMENTATION = '''
19 ---
20 module: rpm_key
21 author: "Hector Acosta (@hacosta) <[email protected]>"
22 short_description: Adds or removes a gpg key from the rpm db
23 description:
24 - Adds or removes (rpm --import) a gpg key to your rpm database.
25 version_added: "1.3"
26 options:
27 key:
28 required: true
29 default: null
30 aliases: []
31 description:
32 - Key that will be modified. Can be a url, a file, or a keyid if the key already exists in the database.
33 state:
34 required: false
35 default: "present"
36 choices: [present, absent]
37 description:
38 - If the key will be imported or removed from the rpm db.
39 validate_certs:
40 description:
41 - If C(no) and the C(key) is a url starting with https, SSL certificates will not be validated. This should only be used
42 on personally controlled sites using self-signed certificates.
43 required: false
44 default: 'yes'
45 choices: ['yes', 'no']
46
47 '''
48
49 EXAMPLES = '''
50 # Example action to import a key from a url
51 - rpm_key:
52 state: present
53 key: http://apt.sw.be/RPM-GPG-KEY.dag.txt
54
55 # Example action to import a key from a file
56 - rpm_key:
57 state: present
58 key: /path/to/key.gpg
59
60 # Example action to ensure a key is not present in the db
61 - rpm_key:
62 state: absent
63 key: DEADB33F
64 '''
65 import re
66 import os.path
67 import tempfile
68
69 # import module snippets
70 from ansible.module_utils.basic import AnsibleModule
71 from ansible.module_utils.urls import fetch_url
72 from ansible.module_utils._text import to_native
73
74
75 def is_pubkey(string):
76 """Verifies if string is a pubkey"""
77 pgp_regex = ".*?(-----BEGIN PGP PUBLIC KEY BLOCK-----.*?-----END PGP PUBLIC KEY BLOCK-----).*"
78 return bool(re.match(pgp_regex, to_native(string, errors='surrogate_or_strict'), re.DOTALL))
79
80
81 class RpmKey(object):
82
83 def __init__(self, module):
84 # If the key is a url, we need to check if it's present to be idempotent,
85 # to do that, we need to check the keyid, which we can get from the armor.
86 keyfile = None
87 should_cleanup_keyfile = False
88 self.module = module
89 self.rpm = self.module.get_bin_path('rpm', True)
90 state = module.params['state']
91 key = module.params['key']
92
93 self.gpg = self.module.get_bin_path('gpg')
94 if not self.gpg:
95 self.gpg = self.module.get_bin_path('gpg2',required=True)
96
97 if '://' in key:
98 keyfile = self.fetch_key(key)
99 keyid = self.getkeyid(keyfile)
100 should_cleanup_keyfile = True
101 elif self.is_keyid(key):
102 keyid = key
103 elif os.path.isfile(key):
104 keyfile = key
105 keyid = self.getkeyid(keyfile)
106 else:
107 self.module.fail_json(msg="Not a valid key %s" % key)
108 keyid = self.normalize_keyid(keyid)
109
110 if state == 'present':
111 if self.is_key_imported(keyid):
112 module.exit_json(changed=False)
113 else:
114 if not keyfile:
115 self.module.fail_json(msg="When importing a key, a valid file must be given")
116 self.import_key(keyfile)
117 if should_cleanup_keyfile:
118 self.module.cleanup(keyfile)
119 module.exit_json(changed=True)
120 else:
121 if self.is_key_imported(keyid):
122 self.drop_key(keyid)
123 module.exit_json(changed=True)
124 else:
125 module.exit_json(changed=False)
126
127 def fetch_key(self, url):
128 """Downloads a key from url, returns a valid path to a gpg key"""
129 rsp, info = fetch_url(self.module, url)
130 if info['status'] != 200:
131 self.module.fail_json(msg="failed to fetch key at %s , error was: %s" % (url, info['msg']))
132
133 key = rsp.read()
134 if not is_pubkey(key):
135 self.module.fail_json(msg="Not a public key: %s" % url)
136 tmpfd, tmpname = tempfile.mkstemp()
137 self.module.add_cleanup_file(tmpname)
138 tmpfile = os.fdopen(tmpfd, "w+b")
139 tmpfile.write(key)
140 tmpfile.close()
141 return tmpname
142
143 def normalize_keyid(self, keyid):
144 """Ensure a keyid doesn't have a leading 0x, has leading or trailing whitespace, and make sure is uppercase"""
145 ret = keyid.strip().upper()
146 if ret.startswith('0x'):
147 return ret[2:]
148 elif ret.startswith('0X'):
149 return ret[2:]
150 else:
151 return ret
152
153 def getkeyid(self, keyfile):
154 stdout, stderr = self.execute_command([self.gpg, '--no-tty', '--batch', '--with-colons', '--fixed-list-mode', keyfile])
155 for line in stdout.splitlines():
156 line = line.strip()
157 if line.startswith('pub:'):
158 return line.split(':')[4]
159
160 self.module.fail_json(msg="Unexpected gpg output")
161
162 def is_keyid(self, keystr):
163 """Verifies if a key, as provided by the user is a keyid"""
164 return re.match('(0x)?[0-9a-f]{8}', keystr, flags=re.IGNORECASE)
165
166 def execute_command(self, cmd):
167 rc, stdout, stderr = self.module.run_command(cmd, use_unsafe_shell=True)
168 if rc != 0:
169 self.module.fail_json(msg=stderr)
170 return stdout, stderr
171
172 def is_key_imported(self, keyid):
173 cmd=self.rpm + ' -q gpg-pubkey --qf "%{description}" | ' + self.gpg + ' --no-tty --batch --with-colons --fixed-list-mode -'
174 stdout, stderr = self.execute_command(cmd)
175 for line in stdout.splitlines():
176 if keyid in line.split(':')[4]:
177 return True
178 return False
179
180 def import_key(self, keyfile):
181 if not self.module.check_mode:
182 self.execute_command([self.rpm, '--import', keyfile])
183
184 def drop_key(self, keyid):
185 if not self.module.check_mode:
186 self.execute_command([self.rpm, '--erase', '--allmatches', "gpg-pubkey-%s" % keyid[-8:].lower()])
187
188
189 def main():
190 module = AnsibleModule(
191 argument_spec = dict(
192 state=dict(default='present', choices=['present', 'absent'], type='str'),
193 key=dict(required=True, type='str'),
194 validate_certs=dict(default='yes', type='bool'),
195 ),
196 supports_check_mode=True
197 )
198
199 RpmKey(module)
200
201
202 if __name__ == '__main__':
203 main()
204
[end of lib/ansible/modules/packaging/os/rpm_key.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/ansible/modules/packaging/os/rpm_key.py b/lib/ansible/modules/packaging/os/rpm_key.py
--- a/lib/ansible/modules/packaging/os/rpm_key.py
+++ b/lib/ansible/modules/packaging/os/rpm_key.py
@@ -170,11 +170,15 @@
return stdout, stderr
def is_key_imported(self, keyid):
- cmd=self.rpm + ' -q gpg-pubkey --qf "%{description}" | ' + self.gpg + ' --no-tty --batch --with-colons --fixed-list-mode -'
+ cmd = self.rpm + ' -q gpg-pubkey'
+ rc, stdout, stderr = self.module.run_command(cmd)
+ if rc != 0: # No key is installed on system
+ return False
+ cmd += ' --qf "%{description}" | ' + self.gpg + ' --no-tty --batch --with-colons --fixed-list-mode -'
stdout, stderr = self.execute_command(cmd)
for line in stdout.splitlines():
if keyid in line.split(':')[4]:
- return True
+ return True
return False
def import_key(self, keyfile):
| {"golden_diff": "diff --git a/lib/ansible/modules/packaging/os/rpm_key.py b/lib/ansible/modules/packaging/os/rpm_key.py\n--- a/lib/ansible/modules/packaging/os/rpm_key.py\n+++ b/lib/ansible/modules/packaging/os/rpm_key.py\n@@ -170,11 +170,15 @@\n return stdout, stderr\n \n def is_key_imported(self, keyid):\n- cmd=self.rpm + ' -q gpg-pubkey --qf \"%{description}\" | ' + self.gpg + ' --no-tty --batch --with-colons --fixed-list-mode -'\n+ cmd = self.rpm + ' -q gpg-pubkey'\n+ rc, stdout, stderr = self.module.run_command(cmd)\n+ if rc != 0: # No key is installed on system\n+ return False\n+ cmd += ' --qf \"%{description}\" | ' + self.gpg + ' --no-tty --batch --with-colons --fixed-list-mode -'\n stdout, stderr = self.execute_command(cmd)\n for line in stdout.splitlines():\n if keyid in line.split(':')[4]:\n- return True\n+ return True\n return False\n \n def import_key(self, keyfile):\n", "issue": "[rpm_key] When no key is installed module fail to install any new key\n<!---\r\nVerify first that your issue/request is not already reported on GitHub.\r\nAlso test if the latest release, and master branch are affected too.\r\n-->\r\n\r\n##### ISSUE TYPE\r\n<!--- Pick one below and delete the rest: -->\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\n<!--- Name of the module/plugin/task/feature -->\r\nrpm_key\r\n\r\n##### ANSIBLE VERSION\r\n<!--- Paste verbatim output from \"ansible --version\" between quotes below -->\r\n```\r\nansible 2.4.0.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/lbednar/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/lbednar/work/kubevirt-org/kubevirt-ansible/E/lib/python2.7/site-packages/ansible\r\n executable location = /home/lbednar/work/kubevirt-org/kubevirt-ansible/E/bin/ansible\r\n python version = 2.7.13 (default, May 10 2017, 20:04:28) [GCC 6.3.1 20161221 (Red Hat 6.3.1-1)]\r\n```\r\n\r\n##### CONFIGURATION\r\n<!---\r\nIf using Ansible 2.4 or above, paste the results of \"ansible-config dump --only-changed\"\r\n\r\nOtherwise, mention any settings you have changed/added/removed in ansible.cfg\r\n(or using the ANSIBLE_* environment variables).\r\n\r\n-->\r\n```\r\nDEFAULT_ROLES_PATH(env: ANSIBLE_ROLES_PATH) = [u'/home/lbednar/work/kubevirt-org/kubevirt-ansible/galaxy-roles']\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n<!---\r\nMention the OS you are running Ansible from, and the OS you are\r\nmanaging, or say \"N/A\" for anything that is not platform-specific.\r\nAlso mention the specific version of what you are trying to control,\r\ne.g. 
if this is a network bug the version of firmware on the network device.\r\n-->\r\nTarget system CentOS 7.3\r\n\r\n##### SUMMARY\r\n<!--- Explain the problem briefly -->\r\n\r\nWhen running `rpm_key` to add new key on system which doesn't have any keys installed yet (I want to install first key), then module fails to add new key during execution following line:\r\n\r\nhttps://github.com/ansible/ansible/blob/e609618274db6a7e3c273abde457f53de8c9976c/lib/ansible/modules/packaging/os/rpm_key.py#L173\r\n\r\nthe command fails with:\r\n```\r\n$ /usr/bin/rpm -q gpg-pubkey --qf \"%{description}\"\r\npackage gpg-pubkey is not installed\r\n```\r\nand then following command in shell pipe fails on\r\n```\r\n$ /usr/bin/gpg --no-tty --batch --with-colons --fixed-list-mode -\r\ngpg: no valid OpenPGP data found.\r\ngpg: processing message failed: Unknown system error\r\n```\r\nAt this point module stops execution and fail.\r\n\r\n##### STEPS TO REPRODUCE\r\n<!---\r\nFor bugs, show exactly how to reproduce the problem, using a minimal test-case.\r\nFor new features, show how the feature would be used.\r\n-->\r\n\r\n**You need to make sure that you don't have any RPM KEY installed yet.**\r\nThis issue is reproducible only in case of when you are adding first rpm key.\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: import rpm keys\r\n rpm_key: \r\n state: present\r\n key: \"{{ item }}\"\r\n with_items:\r\n - \"https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\"\r\n```\r\n\r\n<!--- You can also paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\n<!--- What did you expect to happen when running the steps above? -->\r\n\r\nI would like to get rpm keys imported.\r\n\r\n##### ACTUAL RESULTS\r\n<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->\r\n\r\nNo key is added and getting following output instead.\r\n\r\n<!--- Paste verbatim command output between quotes below -->\r\n```\r\nfailed: [vm-69-15.qa.lab.tlv.redhat.com] (item=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg) => {\"changed\": false, \"failed\": true, \"item\": \"https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\", \"msg\": \"gpg: no valid OpenPGP data found.\\ngpg: processing message failed: Unknown system error\\n\"}\r\n```\r\n\n", "before_files": [{"content": "#!/usr/bin/python\n# -*- coding: utf-8 -*-\n\n# Ansible module to import third party repo keys to your rpm db\n# (c) 2013, H\u00e9ctor Acosta <[email protected]>\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nANSIBLE_METADATA = {'metadata_version': '1.1',\n 'status': ['preview'],\n 'supported_by': 'core'}\n\n\nDOCUMENTATION = '''\n---\nmodule: rpm_key\nauthor: \"Hector Acosta (@hacosta) <[email protected]>\"\nshort_description: Adds or removes a gpg key from the rpm db\ndescription:\n - Adds or removes (rpm --import) a gpg key to your rpm database.\nversion_added: \"1.3\"\noptions:\n key:\n required: true\n default: null\n aliases: []\n description:\n - Key that will be modified. 
Can be a url, a file, or a keyid if the key already exists in the database.\n state:\n required: false\n default: \"present\"\n choices: [present, absent]\n description:\n - If the key will be imported or removed from the rpm db.\n validate_certs:\n description:\n - If C(no) and the C(key) is a url starting with https, SSL certificates will not be validated. This should only be used\n on personally controlled sites using self-signed certificates.\n required: false\n default: 'yes'\n choices: ['yes', 'no']\n\n'''\n\nEXAMPLES = '''\n# Example action to import a key from a url\n- rpm_key:\n state: present\n key: http://apt.sw.be/RPM-GPG-KEY.dag.txt\n\n# Example action to import a key from a file\n- rpm_key:\n state: present\n key: /path/to/key.gpg\n\n# Example action to ensure a key is not present in the db\n- rpm_key:\n state: absent\n key: DEADB33F\n'''\nimport re\nimport os.path\nimport tempfile\n\n# import module snippets\nfrom ansible.module_utils.basic import AnsibleModule\nfrom ansible.module_utils.urls import fetch_url\nfrom ansible.module_utils._text import to_native\n\n\ndef is_pubkey(string):\n \"\"\"Verifies if string is a pubkey\"\"\"\n pgp_regex = \".*?(-----BEGIN PGP PUBLIC KEY BLOCK-----.*?-----END PGP PUBLIC KEY BLOCK-----).*\"\n return bool(re.match(pgp_regex, to_native(string, errors='surrogate_or_strict'), re.DOTALL))\n\n\nclass RpmKey(object):\n\n def __init__(self, module):\n # If the key is a url, we need to check if it's present to be idempotent,\n # to do that, we need to check the keyid, which we can get from the armor.\n keyfile = None\n should_cleanup_keyfile = False\n self.module = module\n self.rpm = self.module.get_bin_path('rpm', True)\n state = module.params['state']\n key = module.params['key']\n\n self.gpg = self.module.get_bin_path('gpg')\n if not self.gpg:\n self.gpg = self.module.get_bin_path('gpg2',required=True)\n\n if '://' in key:\n keyfile = self.fetch_key(key)\n keyid = self.getkeyid(keyfile)\n should_cleanup_keyfile = True\n elif self.is_keyid(key):\n keyid = key\n elif os.path.isfile(key):\n keyfile = key\n keyid = self.getkeyid(keyfile)\n else:\n self.module.fail_json(msg=\"Not a valid key %s\" % key)\n keyid = self.normalize_keyid(keyid)\n\n if state == 'present':\n if self.is_key_imported(keyid):\n module.exit_json(changed=False)\n else:\n if not keyfile:\n self.module.fail_json(msg=\"When importing a key, a valid file must be given\")\n self.import_key(keyfile)\n if should_cleanup_keyfile:\n self.module.cleanup(keyfile)\n module.exit_json(changed=True)\n else:\n if self.is_key_imported(keyid):\n self.drop_key(keyid)\n module.exit_json(changed=True)\n else:\n module.exit_json(changed=False)\n\n def fetch_key(self, url):\n \"\"\"Downloads a key from url, returns a valid path to a gpg key\"\"\"\n rsp, info = fetch_url(self.module, url)\n if info['status'] != 200:\n self.module.fail_json(msg=\"failed to fetch key at %s , error was: %s\" % (url, info['msg']))\n\n key = rsp.read()\n if not is_pubkey(key):\n self.module.fail_json(msg=\"Not a public key: %s\" % url)\n tmpfd, tmpname = tempfile.mkstemp()\n self.module.add_cleanup_file(tmpname)\n tmpfile = os.fdopen(tmpfd, \"w+b\")\n tmpfile.write(key)\n tmpfile.close()\n return tmpname\n\n def normalize_keyid(self, keyid):\n \"\"\"Ensure a keyid doesn't have a leading 0x, has leading or trailing whitespace, and make sure is uppercase\"\"\"\n ret = keyid.strip().upper()\n if ret.startswith('0x'):\n return ret[2:]\n elif ret.startswith('0X'):\n return ret[2:]\n else:\n return ret\n\n def getkeyid(self, 
keyfile):\n stdout, stderr = self.execute_command([self.gpg, '--no-tty', '--batch', '--with-colons', '--fixed-list-mode', keyfile])\n for line in stdout.splitlines():\n line = line.strip()\n if line.startswith('pub:'):\n return line.split(':')[4]\n\n self.module.fail_json(msg=\"Unexpected gpg output\")\n\n def is_keyid(self, keystr):\n \"\"\"Verifies if a key, as provided by the user is a keyid\"\"\"\n return re.match('(0x)?[0-9a-f]{8}', keystr, flags=re.IGNORECASE)\n\n def execute_command(self, cmd):\n rc, stdout, stderr = self.module.run_command(cmd, use_unsafe_shell=True)\n if rc != 0:\n self.module.fail_json(msg=stderr)\n return stdout, stderr\n\n def is_key_imported(self, keyid):\n cmd=self.rpm + ' -q gpg-pubkey --qf \"%{description}\" | ' + self.gpg + ' --no-tty --batch --with-colons --fixed-list-mode -'\n stdout, stderr = self.execute_command(cmd)\n for line in stdout.splitlines():\n if keyid in line.split(':')[4]:\n return True\n return False\n\n def import_key(self, keyfile):\n if not self.module.check_mode:\n self.execute_command([self.rpm, '--import', keyfile])\n\n def drop_key(self, keyid):\n if not self.module.check_mode:\n self.execute_command([self.rpm, '--erase', '--allmatches', \"gpg-pubkey-%s\" % keyid[-8:].lower()])\n\n\ndef main():\n module = AnsibleModule(\n argument_spec = dict(\n state=dict(default='present', choices=['present', 'absent'], type='str'),\n key=dict(required=True, type='str'),\n validate_certs=dict(default='yes', type='bool'),\n ),\n supports_check_mode=True\n )\n\n RpmKey(module)\n\n\nif __name__ == '__main__':\n main()\n", "path": "lib/ansible/modules/packaging/os/rpm_key.py"}]} | 3,744 | 279 |
gh_patches_debug_30804 | rasdani/github-patches | git_diff | easybuilders__easybuild-easyblocks-2153 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MESA on arm/aarch64 lacks gallium drivers and fails to build with -Dlibunwind=true
`easybuild/easyblocks/m/mesa.py` specifies the following:
```
if not gallium_drivers:
# Add appropriate Gallium drivers for current architecture
arch = get_cpu_architecture()
arch_gallium_drivers = {
'x86_64': ['swrast', 'swr'],
'POWER': ['swrast'],
}
```
this leads to:
```
== processing EasyBuild easyconfig /home/terjekv/easybuild/software/EasyBuild/4.2.2/easybuild/easyconfigs/m/Mesa/Mesa-20.0.2-GCCcore-9.3.0.eb
ERROR: Traceback (most recent call last):
File "/home/terjekv/easybuild/software/EasyBuild/4.2.2/lib/python3.6/site-packages/easybuild/main.py", line 115, in build_and_install_software
(ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)
File "/home/terjekv/easybuild/software/EasyBuild/4.2.2/lib/python3.6/site-packages/easybuild/framework/easyblock.py", line 3264, in build_and_install_one
app = app_class(ecdict['ec'])
File "/home/terjekv/easybuild/software/EasyBuild/4.2.2/lib/python3.6/site-packages/easybuild/easyblocks/m/mesa.py", line 66, in __init__
self.log.debug('Gallium driver(s) included in the installation: %s' % ', '.join(gallium_drivers))
TypeError: can only join an iterable
```
Adding `'aarch64': ['swrast']` should be enough. Will patch and test.
</issue>
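A self-contained sketch of the reported `TypeError` (values are illustrative): when the running architecture has no entry in the mapping, `gallium_drivers` stays `None`, and joining `None` is exactly what raises "can only join an iterable".

```python
# Illustrative sketch only, mirroring the mapping quoted above.
arch_gallium_drivers = {
    'x86_64': ['swrast', 'swr'],
    'POWER': ['swrast'],
}

arch = 'aarch64'                                  # missing from the mapping
gallium_drivers = arch_gallium_drivers.get(arch)  # -> None
# ', '.join(gallium_drivers) would raise:
#   TypeError: can only join an iterable
# Adding an entry such as 'aarch64': ['swrast'] (or defaulting to a list)
# avoids the crash:
gallium_drivers = arch_gallium_drivers.get(arch, ['swrast'])
print(', '.join(gallium_drivers))                 # swrast
```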
<code>
[start of easybuild/easyblocks/m/mesa.py]
1 ##
2 # Copyright 2009-2020 Ghent University
3 #
4 # This file is part of EasyBuild,
5 # originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),
6 # with support of Ghent University (http://ugent.be/hpc),
7 # the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),
8 # Flemish Research Foundation (FWO) (http://www.fwo.be/en)
9 # and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).
10 #
11 # https://github.com/easybuilders/easybuild
12 #
13 # EasyBuild is free software: you can redistribute it and/or modify
14 # it under the terms of the GNU General Public License as published by
15 # the Free Software Foundation v2.
16 #
17 # EasyBuild is distributed in the hope that it will be useful,
18 # but WITHOUT ANY WARRANTY; without even the implied warranty of
19 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
20 # GNU General Public License for more details.
21 #
22 # You should have received a copy of the GNU General Public License
23 # along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.
24 ##
25 """
26 EasyBuild support for installing Mesa, implemented as an easyblock
27
28 @author: Andrew Edmondson (University of Birmingham)
29 @author: Kenneth Hoste (HPC-UGent)
30 @author: Alex Domingo (Vrije Universiteit Brussel)
31 @author: Alexander Grund (TU Dresden)
32 """
33 import os
34 from distutils.version import LooseVersion
35
36 from easybuild.easyblocks.generic.mesonninja import MesonNinja
37 from easybuild.tools.filetools import copy_dir
38 from easybuild.tools.systemtools import POWER, X86_64, get_cpu_architecture, get_cpu_features, get_shared_lib_ext
39
40
41 class EB_Mesa(MesonNinja):
42 """Custom easyblock for building and installing Mesa."""
43
44 def __init__(self, *args, **kwargs):
45 """Constructor for custom Mesa easyblock: figure out which values to pass to swr-arches configuration option."""
46
47 super(EB_Mesa, self).__init__(*args, **kwargs)
48
49 self.gallium_configopts = []
50
51 # Check user-defined Gallium drivers
52 gallium_drivers = self.get_configopt_value('gallium-drivers')
53
54 if not gallium_drivers:
55 # Add appropriate Gallium drivers for current architecture
56 arch = get_cpu_architecture()
57 arch_gallium_drivers = {
58 X86_64: ['swrast', 'swr'],
59 POWER: ['swrast'],
60 }
61 if arch in arch_gallium_drivers:
62 gallium_drivers = arch_gallium_drivers[arch]
63 # Add configopt for additional Gallium drivers
64 self.gallium_configopts.append('-Dgallium-drivers=' + ','.join(gallium_drivers))
65
66 self.log.debug('Gallium driver(s) included in the installation: %s' % ', '.join(gallium_drivers))
67
68 self.swr_arches = []
69
70 if 'swr' in gallium_drivers:
71 # Check user-defined SWR arches
72 self.swr_arches = self.get_configopt_value('swr-arches')
73
74 if not self.swr_arches:
75 # Set cpu features of SWR for current micro-architecture
76 feat_to_swrarch = {
77 'avx': 'avx',
78 'avx1.0': 'avx', # on macOS, AVX is indicated with 'avx1.0' rather than 'avx'
79 'avx2': 'avx2',
80 'avx512f': 'skx', # AVX-512 Foundation - introduced in Skylake
81 'avx512er': 'knl', # AVX-512 Exponential and Reciprocal Instructions implemented in Knights Landing
82 }
83 # Determine list of values to pass to swr-arches configuration option
84 cpu_features = get_cpu_features()
85 self.swr_arches = sorted([swrarch for feat, swrarch in feat_to_swrarch.items() if feat in cpu_features])
86 # Add configopt for additional SWR arches
87 self.gallium_configopts.append('-Dswr-arches=' + ','.join(self.swr_arches))
88
89 self.log.debug('SWR Gallium driver will support: %s' % ', '.join(self.swr_arches))
90
91 def get_configopt_value(self, configopt_name):
92 """
93 Return list of values for the given configuration option in configopts
94 """
95 configopt_args = [opt for opt in self.cfg['configopts'].split() if opt.startswith('-D%s=' % configopt_name)]
96
97 if configopt_args:
98 if len(configopt_args) > 1:
99 self.log.warning("Found multiple instances of %s in configopts, using last one: %s",
100 configopt_name, configopt_args[-1])
101 # Get value of last option added
102 configopt_value = configopt_args[-1].split('=')[-1]
103 # Remove quotes and extract individual values
104 configopt_value = configopt_value.strip('"\'').split(',')
105 else:
106 configopt_value = None
107
108 return configopt_value
109
110 def configure_step(self):
111 """
112 Customise the configure options based on the processor architecture of the host
113 (Gallium drivers installed, SWR CPU features, ...)
114 """
115
116 if self.gallium_configopts:
117 self.cfg.update('configopts', self.gallium_configopts)
118
119 return super(EB_Mesa, self).configure_step()
120
121 def install_step(self):
122 """Also copy additional header files after installing Mesa."""
123
124 super(EB_Mesa, self).install_step()
125
126 # also install header files located in include/GL/internal, unless they're available already;
127 # we can't enable both DRI and Gallium drivers,
128 # but we can provide the DRI header file (GL/internal/dri_interface.h)
129 target_inc_GL_internal = os.path.join(self.installdir, 'include', 'GL', 'internal')
130 if not os.path.exists(target_inc_GL_internal):
131 src_inc_GL_internal = os.path.join(self.start_dir, 'include', 'GL', 'internal')
132 copy_dir(src_inc_GL_internal, target_inc_GL_internal)
133 self.log.info("Copied %s to %s" % (src_inc_GL_internal, target_inc_GL_internal))
134
135 def sanity_check_step(self):
136 """Custom sanity check for Mesa."""
137
138 shlib_ext = get_shared_lib_ext()
139
140 if LooseVersion(self.version) >= LooseVersion('20.0'):
141 header_files = [os.path.join('include', 'EGL', x) for x in ['eglmesaext.h', 'eglextchromium.h']]
142 header_files.extend([
143 os.path.join('include', 'GL', 'osmesa.h'),
144 os.path.join('include', 'GL', 'internal', 'dri_interface.h'),
145 ])
146 else:
147 gl_inc_files = ['glext.h', 'gl_mangle.h', 'glx.h', 'osmesa.h', 'gl.h', 'glxext.h', 'glx_mangle.h']
148 gles_inc_files = [('GLES', 'gl.h'), ('GLES2', 'gl2.h'), ('GLES3', 'gl3.h')]
149 header_files = [os.path.join('include', 'GL', x) for x in gl_inc_files]
150 header_files.extend([os.path.join('include', x, y) for (x, y) in gles_inc_files])
151
152 custom_paths = {
153 'files': [os.path.join('lib', 'libOSMesa.%s' % shlib_ext)] + header_files,
154 'dirs': [os.path.join('include', 'GL', 'internal')],
155 }
156
157 if self.swr_arches:
158 swr_arch_libs = [os.path.join('lib', 'libswr%s.%s' % (a.upper(), shlib_ext)) for a in self.swr_arches]
159 custom_paths['files'].extend(swr_arch_libs)
160
161 super(EB_Mesa, self).sanity_check_step(custom_paths=custom_paths)
162
[end of easybuild/easyblocks/m/mesa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/easybuild/easyblocks/m/mesa.py b/easybuild/easyblocks/m/mesa.py
--- a/easybuild/easyblocks/m/mesa.py
+++ b/easybuild/easyblocks/m/mesa.py
@@ -35,7 +35,8 @@
from easybuild.easyblocks.generic.mesonninja import MesonNinja
from easybuild.tools.filetools import copy_dir
-from easybuild.tools.systemtools import POWER, X86_64, get_cpu_architecture, get_cpu_features, get_shared_lib_ext
+from easybuild.tools.systemtools import POWER, X86_64, AARCH64
+from easybuild.tools.systemtools import get_cpu_architecture, get_cpu_features, get_shared_lib_ext
class EB_Mesa(MesonNinja):
@@ -48,6 +49,16 @@
self.gallium_configopts = []
+ # Mesa fails to build with libunwind on aarch64
+ # See https://github.com/easybuilders/easybuild-easyblocks/issues/2150
+ if get_cpu_architecture() == AARCH64:
+ given_config_opts = self.cfg.get('configopts')
+ if "-Dlibunwind=true" in given_config_opts:
+ self.log.warning('libunwind not supported on aarch64, stripping from configopts!')
+ configopts_libunwind_stripped = given_config_opts.replace('-Dlibunwind=true', '-Dlibunwind=false')
+ self.cfg.set_keys({'configopts': configopts_libunwind_stripped})
+ self.log.warning('New configopts after stripping: ' + self.cfg.get('configopts'))
+
# Check user-defined Gallium drivers
gallium_drivers = self.get_configopt_value('gallium-drivers')
@@ -57,6 +68,7 @@
arch_gallium_drivers = {
X86_64: ['swrast', 'swr'],
POWER: ['swrast'],
+ AARCH64: ['swrast'],
}
if arch in arch_gallium_drivers:
gallium_drivers = arch_gallium_drivers[arch]
| {"golden_diff": "diff --git a/easybuild/easyblocks/m/mesa.py b/easybuild/easyblocks/m/mesa.py\n--- a/easybuild/easyblocks/m/mesa.py\n+++ b/easybuild/easyblocks/m/mesa.py\n@@ -35,7 +35,8 @@\n \n from easybuild.easyblocks.generic.mesonninja import MesonNinja\n from easybuild.tools.filetools import copy_dir\n-from easybuild.tools.systemtools import POWER, X86_64, get_cpu_architecture, get_cpu_features, get_shared_lib_ext\n+from easybuild.tools.systemtools import POWER, X86_64, AARCH64\n+from easybuild.tools.systemtools import get_cpu_architecture, get_cpu_features, get_shared_lib_ext\n \n \n class EB_Mesa(MesonNinja):\n@@ -48,6 +49,16 @@\n \n self.gallium_configopts = []\n \n+ # Mesa fails to build with libunwind on aarch64\n+ # See https://github.com/easybuilders/easybuild-easyblocks/issues/2150\n+ if get_cpu_architecture() == AARCH64:\n+ given_config_opts = self.cfg.get('configopts')\n+ if \"-Dlibunwind=true\" in given_config_opts:\n+ self.log.warning('libunwind not supported on aarch64, stripping from configopts!')\n+ configopts_libunwind_stripped = given_config_opts.replace('-Dlibunwind=true', '-Dlibunwind=false')\n+ self.cfg.set_keys({'configopts': configopts_libunwind_stripped})\n+ self.log.warning('New configopts after stripping: ' + self.cfg.get('configopts'))\n+\n # Check user-defined Gallium drivers\n gallium_drivers = self.get_configopt_value('gallium-drivers')\n \n@@ -57,6 +68,7 @@\n arch_gallium_drivers = {\n X86_64: ['swrast', 'swr'],\n POWER: ['swrast'],\n+ AARCH64: ['swrast'],\n }\n if arch in arch_gallium_drivers:\n gallium_drivers = arch_gallium_drivers[arch]\n", "issue": "MESA on arm/aarch64 lacks gallium drivers and fails to build with -Dlibunwind=true\n`easybuild/easyblocks/m/mesa.py` specifies the following:\r\n```\r\n if not gallium_drivers:\r\n # Add appropriate Gallium drivers for current architecture\r\n arch = get_cpu_architecture()\r\n arch_gallium_drivers = {\r\n 'x86_64': ['swrast', 'swr'],\r\n 'POWER': ['swrast'],\r\n }\r\n```\r\nthis leads to:\r\n```\r\n== processing EasyBuild easyconfig /home/terjekv/easybuild/software/EasyBuild/4.2.2/easybuild/easyconfigs/m/Mesa/Mesa-20.0.2-GCCcore-9.3.0.eb\r\nERROR: Traceback (most recent call last):\r\n File \"/home/terjekv/easybuild/software/EasyBuild/4.2.2/lib/python3.6/site-packages/easybuild/main.py\", line 115, in build_and_install_software\r\n (ec_res['success'], app_log, err) = build_and_install_one(ec, init_env)\r\n File \"/home/terjekv/easybuild/software/EasyBuild/4.2.2/lib/python3.6/site-packages/easybuild/framework/easyblock.py\", line 3264, in build_and_install_one\r\n app = app_class(ecdict['ec'])\r\n File \"/home/terjekv/easybuild/software/EasyBuild/4.2.2/lib/python3.6/site-packages/easybuild/easyblocks/m/mesa.py\", line 66, in __init__\r\n self.log.debug('Gallium driver(s) included in the installation: %s' % ', '.join(gallium_drivers))\r\nTypeError: can only join an iterable\r\n```\r\nAdding `'aarch64': ['swrast']` should be enough. 
Will patch and test.\n", "before_files": [{"content": "##\n# Copyright 2009-2020 Ghent University\n#\n# This file is part of EasyBuild,\n# originally created by the HPC team of Ghent University (http://ugent.be/hpc/en),\n# with support of Ghent University (http://ugent.be/hpc),\n# the Flemish Supercomputer Centre (VSC) (https://www.vscentrum.be),\n# Flemish Research Foundation (FWO) (http://www.fwo.be/en)\n# and the Department of Economy, Science and Innovation (EWI) (http://www.ewi-vlaanderen.be/en).\n#\n# https://github.com/easybuilders/easybuild\n#\n# EasyBuild is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation v2.\n#\n# EasyBuild is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with EasyBuild. If not, see <http://www.gnu.org/licenses/>.\n##\n\"\"\"\nEasyBuild support for installing Mesa, implemented as an easyblock\n\n@author: Andrew Edmondson (University of Birmingham)\n@author: Kenneth Hoste (HPC-UGent)\n@author: Alex Domingo (Vrije Universiteit Brussel)\n@author: Alexander Grund (TU Dresden)\n\"\"\"\nimport os\nfrom distutils.version import LooseVersion\n\nfrom easybuild.easyblocks.generic.mesonninja import MesonNinja\nfrom easybuild.tools.filetools import copy_dir\nfrom easybuild.tools.systemtools import POWER, X86_64, get_cpu_architecture, get_cpu_features, get_shared_lib_ext\n\n\nclass EB_Mesa(MesonNinja):\n \"\"\"Custom easyblock for building and installing Mesa.\"\"\"\n\n def __init__(self, *args, **kwargs):\n \"\"\"Constructor for custom Mesa easyblock: figure out which values to pass to swr-arches configuration option.\"\"\"\n\n super(EB_Mesa, self).__init__(*args, **kwargs)\n\n self.gallium_configopts = []\n\n # Check user-defined Gallium drivers\n gallium_drivers = self.get_configopt_value('gallium-drivers')\n\n if not gallium_drivers:\n # Add appropriate Gallium drivers for current architecture\n arch = get_cpu_architecture()\n arch_gallium_drivers = {\n X86_64: ['swrast', 'swr'],\n POWER: ['swrast'],\n }\n if arch in arch_gallium_drivers:\n gallium_drivers = arch_gallium_drivers[arch]\n # Add configopt for additional Gallium drivers\n self.gallium_configopts.append('-Dgallium-drivers=' + ','.join(gallium_drivers))\n\n self.log.debug('Gallium driver(s) included in the installation: %s' % ', '.join(gallium_drivers))\n\n self.swr_arches = []\n\n if 'swr' in gallium_drivers:\n # Check user-defined SWR arches\n self.swr_arches = self.get_configopt_value('swr-arches')\n\n if not self.swr_arches:\n # Set cpu features of SWR for current micro-architecture\n feat_to_swrarch = {\n 'avx': 'avx',\n 'avx1.0': 'avx', # on macOS, AVX is indicated with 'avx1.0' rather than 'avx'\n 'avx2': 'avx2',\n 'avx512f': 'skx', # AVX-512 Foundation - introduced in Skylake\n 'avx512er': 'knl', # AVX-512 Exponential and Reciprocal Instructions implemented in Knights Landing\n }\n # Determine list of values to pass to swr-arches configuration option\n cpu_features = get_cpu_features()\n self.swr_arches = sorted([swrarch for feat, swrarch in feat_to_swrarch.items() if feat in cpu_features])\n # Add configopt for additional SWR arches\n self.gallium_configopts.append('-Dswr-arches=' + ','.join(self.swr_arches))\n\n self.log.debug('SWR 
Gallium driver will support: %s' % ', '.join(self.swr_arches))\n\n def get_configopt_value(self, configopt_name):\n \"\"\"\n Return list of values for the given configuration option in configopts\n \"\"\"\n configopt_args = [opt for opt in self.cfg['configopts'].split() if opt.startswith('-D%s=' % configopt_name)]\n\n if configopt_args:\n if len(configopt_args) > 1:\n self.log.warning(\"Found multiple instances of %s in configopts, using last one: %s\",\n configopt_name, configopt_args[-1])\n # Get value of last option added\n configopt_value = configopt_args[-1].split('=')[-1]\n # Remove quotes and extract individual values\n configopt_value = configopt_value.strip('\"\\'').split(',')\n else:\n configopt_value = None\n\n return configopt_value\n\n def configure_step(self):\n \"\"\"\n Customise the configure options based on the processor architecture of the host\n (Gallium drivers installed, SWR CPU features, ...)\n \"\"\"\n\n if self.gallium_configopts:\n self.cfg.update('configopts', self.gallium_configopts)\n\n return super(EB_Mesa, self).configure_step()\n\n def install_step(self):\n \"\"\"Also copy additional header files after installing Mesa.\"\"\"\n\n super(EB_Mesa, self).install_step()\n\n # also install header files located in include/GL/internal, unless they're available already;\n # we can't enable both DRI and Gallium drivers,\n # but we can provide the DRI header file (GL/internal/dri_interface.h)\n target_inc_GL_internal = os.path.join(self.installdir, 'include', 'GL', 'internal')\n if not os.path.exists(target_inc_GL_internal):\n src_inc_GL_internal = os.path.join(self.start_dir, 'include', 'GL', 'internal')\n copy_dir(src_inc_GL_internal, target_inc_GL_internal)\n self.log.info(\"Copied %s to %s\" % (src_inc_GL_internal, target_inc_GL_internal))\n\n def sanity_check_step(self):\n \"\"\"Custom sanity check for Mesa.\"\"\"\n\n shlib_ext = get_shared_lib_ext()\n\n if LooseVersion(self.version) >= LooseVersion('20.0'):\n header_files = [os.path.join('include', 'EGL', x) for x in ['eglmesaext.h', 'eglextchromium.h']]\n header_files.extend([\n os.path.join('include', 'GL', 'osmesa.h'),\n os.path.join('include', 'GL', 'internal', 'dri_interface.h'),\n ])\n else:\n gl_inc_files = ['glext.h', 'gl_mangle.h', 'glx.h', 'osmesa.h', 'gl.h', 'glxext.h', 'glx_mangle.h']\n gles_inc_files = [('GLES', 'gl.h'), ('GLES2', 'gl2.h'), ('GLES3', 'gl3.h')]\n header_files = [os.path.join('include', 'GL', x) for x in gl_inc_files]\n header_files.extend([os.path.join('include', x, y) for (x, y) in gles_inc_files])\n\n custom_paths = {\n 'files': [os.path.join('lib', 'libOSMesa.%s' % shlib_ext)] + header_files,\n 'dirs': [os.path.join('include', 'GL', 'internal')],\n }\n\n if self.swr_arches:\n swr_arch_libs = [os.path.join('lib', 'libswr%s.%s' % (a.upper(), shlib_ext)) for a in self.swr_arches]\n custom_paths['files'].extend(swr_arch_libs)\n\n super(EB_Mesa, self).sanity_check_step(custom_paths=custom_paths)\n", "path": "easybuild/easyblocks/m/mesa.py"}]} | 3,159 | 488 |
gh_patches_debug_15661 | rasdani/github-patches | git_diff | PaddlePaddle__PaddleOCR-66 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Converting the detection model to an inference model fails even though 'use_gpu' is set to False
Hello, when converting the detection model to an inference model, I have already set 'use_gpu': False in det_mv3_db.yml, but it still reports an error, as shown below:
python tools/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./ch_lite/det_mv3_db/best_accuracy Global.save_inference_dir=./inference_model/det_db/
2020-05-19 10:29:59,237-INFO: {'Global': {'algorithm': 'DB', 'use_gpu': False, 'epoch_num': 1200, 'log_smooth_window': 20, 'print_batch_step': 2, 'save_model_dir': './output/det_db/', 'save_epoch_step': 200, 'eval_batch_step': 5000, 'train_batch_size_per_card': 16, 'test_batch_size_per_card': 16, 'image_shape': [3, 640, 640], 'reader_yml': './configs/det/det_db_icdar15_reader.yml', 'pretrain_weights': './pretrain_models/MobileNetV3_large_x0_5_pretrained/', 'checkpoints': './ch_lite/det_mv3_db/best_accuracy', 'save_res_path': './output/det_db/predicts_db.txt', 'save_inference_dir': './inference_model/det_db/'}, 'Architecture': {'function': 'ppocr.modeling.architectures.det_model,DetModel'}, 'Backbone': {'function': 'ppocr.modeling.backbones.det_mobilenet_v3,MobileNetV3', 'scale': 0.5, 'model_name': 'large'}, 'Head': {'function': 'ppocr.modeling.heads.det_db_head,DBHead', 'model_name': 'large', 'k': 50, 'inner_channels': 96, 'out_channels': 2}, 'Loss': {'function': 'ppocr.modeling.losses.det_db_loss,DBLoss', 'balance_loss': True, 'main_loss_type': 'DiceLoss', 'alpha': 5, 'beta': 10, 'ohem_ratio': 3}, 'Optimizer': {'function': 'ppocr.optimizer,AdamDecay', 'base_lr': 0.001, 'beta1': 0.9, 'beta2': 0.999}, 'PostProcess': {'function': 'ppocr.postprocess.db_postprocess,DBPostProcess', 'thresh': 0.3, 'box_thresh': 0.7, 'max_candidates': 1000, 'unclip_ratio': 1.5}, 'TrainReader': {'reader_function': 'ppocr.data.det.dataset_traversal,TrainReader', 'process_function': 'ppocr.data.det.db_process,DBProcessTrain', 'num_workers': 8, 'img_set_dir': './train_data/icdar2015/text_localization/', 'label_file_path': './train_data/icdar2015/text_localization/train_icdar2015_label.txt'}, 'EvalReader': {'reader_function': 'ppocr.data.det.dataset_traversal,EvalTestReader', 'process_function': 'ppocr.data.det.db_process,DBProcessTest', 'img_set_dir': './train_data/icdar2015/text_localization/', 'label_file_path': './train_data/icdar2015/text_localization/test_icdar2015_label.txt', 'test_image_shape': [736, 1280]}, 'TestReader': {'reader_function': 'ppocr.data.det.dataset_traversal,EvalTestReader', 'process_function': 'ppocr.data.det.db_process,DBProcessTest', 'single_img_path': None, 'img_set_dir': './train_data/icdar2015/text_localization/', 'label_file_path': './train_data/icdar2015/text_localization/test_icdar2015_label.txt', 'test_image_shape': [736, 1280], 'do_eval': True}}
2020-05-19 10:29:59,238-ERROR: Config use_gpu cannot be set as true while you are using paddlepaddle cpu version !
Please try:
1. Install paddlepaddle-gpu to run model on GPU
2. Set use_gpu as false in config file to run model on CPU
</issue>
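A standalone sketch of why the export aborts (the CUDA probe here is a stand-in, not PaddleOCR's actual helper): the loaded config sets `use_gpu: False`, but the script validates a hard-coded `True`, so a CPU-only paddlepaddle build refuses to continue.

```python
# Illustrative sketch only; has_cuda() stands in for a real
# "was paddlepaddle compiled with CUDA?" probe.
def has_cuda():
    return False  # pretend this is a CPU-only paddlepaddle install

def check_gpu(use_gpu):
    """Abort only when GPU execution is requested on a CPU-only build."""
    if use_gpu and not has_cuda():
        raise SystemExit(
            "Config use_gpu cannot be set as true while you are using "
            "paddlepaddle cpu version !")

check_gpu(False)      # what the config actually asks for: passes quietly

try:
    check_gpu(True)   # what tools/export_model.py hard-codes: aborts
except SystemExit as err:
    print(err)
```

The reference patch shown later in this entry simply forwards the configured flag, `program.check_gpu(use_gpu)`, instead of the literal `True`.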
<code>
[start of tools/export_model.py]
1 # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from __future__ import absolute_import
16 from __future__ import division
17 from __future__ import print_function
18
19 import os
20 import sys
21 import time
22 import multiprocessing
23 import numpy as np
24
25
26 def set_paddle_flags(**kwargs):
27 for key, value in kwargs.items():
28 if os.environ.get(key, None) is None:
29 os.environ[key] = str(value)
30
31
32 # NOTE(paddle-dev): All of these flags should be
33 # set before `import paddle`. Otherwise, it would
34 # not take any effect.
35 set_paddle_flags(
36 FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory
37 )
38
39 import program
40 from paddle import fluid
41 from ppocr.utils.utility import initial_logger
42 logger = initial_logger()
43 from ppocr.utils.save_load import init_model
44 from ppocr.utils.character import CharacterOps
45 from ppocr.utils.utility import create_module
46
47
48 def main():
49 config = program.load_config(FLAGS.config)
50 program.merge_config(FLAGS.opt)
51 logger.info(config)
52
53 # check if set use_gpu=True in paddlepaddle cpu version
54 use_gpu = config['Global']['use_gpu']
55 program.check_gpu(True)
56
57 alg = config['Global']['algorithm']
58 assert alg in ['EAST', 'DB', 'Rosetta', 'CRNN', 'STARNet', 'RARE']
59 if alg in ['Rosetta', 'CRNN', 'STARNet', 'RARE']:
60 config['Global']['char_ops'] = CharacterOps(config['Global'])
61
62 place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
63 startup_prog = fluid.Program()
64 eval_program = fluid.Program()
65
66 feeded_var_names, target_vars, fetches_var_name = program.build_export(
67 config, eval_program, startup_prog)
68 eval_program = eval_program.clone(for_test=True)
69 exe = fluid.Executor(place)
70 exe.run(startup_prog)
71
72 init_model(config, eval_program, exe)
73
74 save_inference_dir = config['Global']['save_inference_dir']
75 if not os.path.exists(save_inference_dir):
76 os.makedirs(save_inference_dir)
77 fluid.io.save_inference_model(
78 dirname=save_inference_dir,
79 feeded_var_names=feeded_var_names,
80 main_program=eval_program,
81 target_vars=target_vars,
82 executor=exe,
83 model_filename='model',
84 params_filename='params')
85 print("inference model saved in {}/model and {}/params".format(
86 save_inference_dir, save_inference_dir))
87 print("save success, output_name_list:", fetches_var_name)
88
89
90 if __name__ == '__main__':
91 parser = program.ArgsParser()
92 FLAGS = parser.parse_args()
93 main()
94
[end of tools/export_model.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/export_model.py b/tools/export_model.py
--- a/tools/export_model.py
+++ b/tools/export_model.py
@@ -31,7 +31,7 @@
# NOTE(paddle-dev): All of these flags should be
# set before `import paddle`. Otherwise, it would
-# not take any effect.
+# not take any effect.
set_paddle_flags(
FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory
)
@@ -52,7 +52,7 @@
# check if set use_gpu=True in paddlepaddle cpu version
use_gpu = config['Global']['use_gpu']
- program.check_gpu(True)
+ program.check_gpu(use_gpu)
alg = config['Global']['algorithm']
assert alg in ['EAST', 'DB', 'Rosetta', 'CRNN', 'STARNet', 'RARE']
| {"golden_diff": "diff --git a/tools/export_model.py b/tools/export_model.py\n--- a/tools/export_model.py\n+++ b/tools/export_model.py\n@@ -31,7 +31,7 @@\n \n # NOTE(paddle-dev): All of these flags should be\n # set before `import paddle`. Otherwise, it would\n-# not take any effect. \n+# not take any effect.\n set_paddle_flags(\n FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory\n )\n@@ -52,7 +52,7 @@\n \n # check if set use_gpu=True in paddlepaddle cpu version\n use_gpu = config['Global']['use_gpu']\n- program.check_gpu(True)\n+ program.check_gpu(use_gpu)\n \n alg = config['Global']['algorithm']\n assert alg in ['EAST', 'DB', 'Rosetta', 'CRNN', 'STARNet', 'RARE']\n", "issue": "\u68c0\u6d4b\u6a21\u578b\u8f6cinference\u6a21\u578b\uff0c'use_gpu': False\uff0c\u4f46\u4ecd\u7136\u8f6c\u6362\u5931\u8d25\n\u4f60\u597d\uff0c\u5728\u68c0\u6d4b\u6a21\u578b\u8f6cinference\u6a21\u578b\u65f6\uff0c\u5df2\u4fee\u6539det_mv3_db.yml \u4e2d'use_gpu': False\uff0c\u4f46\u4ecd\u7136\u62a5\u9519\uff0c\u5982\u4e0b\u6240\u793a\uff1a\r\npython tools/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./ch_lite/det_mv3_db/best_accuracy Global.save_inference_dir=./inference_model/det_db/\r\n2020-05-19 10:29:59,237-INFO: {'Global': {'algorithm': 'DB', 'use_gpu': False, 'epoch_num': 1200, 'log_smooth_window': 20, 'print_batch_step': 2, 'save_model_dir': './output/det_db/', 'save_epoch_step': 200, 'eval_batch_step': 5000, 'train_batch_size_per_card': 16, 'test_batch_size_per_card': 16, 'image_shape': [3, 640, 640], 'reader_yml': './configs/det/det_db_icdar15_reader.yml', 'pretrain_weights': './pretrain_models/MobileNetV3_large_x0_5_pretrained/', 'checkpoints': './ch_lite/det_mv3_db/best_accuracy', 'save_res_path': './output/det_db/predicts_db.txt', 'save_inference_dir': './inference_model/det_db/'}, 'Architecture': {'function': 'ppocr.modeling.architectures.det_model,DetModel'}, 'Backbone': {'function': 'ppocr.modeling.backbones.det_mobilenet_v3,MobileNetV3', 'scale': 0.5, 'model_name': 'large'}, 'Head': {'function': 'ppocr.modeling.heads.det_db_head,DBHead', 'model_name': 'large', 'k': 50, 'inner_channels': 96, 'out_channels': 2}, 'Loss': {'function': 'ppocr.modeling.losses.det_db_loss,DBLoss', 'balance_loss': True, 'main_loss_type': 'DiceLoss', 'alpha': 5, 'beta': 10, 'ohem_ratio': 3}, 'Optimizer': {'function': 'ppocr.optimizer,AdamDecay', 'base_lr': 0.001, 'beta1': 0.9, 'beta2': 0.999}, 'PostProcess': {'function': 'ppocr.postprocess.db_postprocess,DBPostProcess', 'thresh': 0.3, 'box_thresh': 0.7, 'max_candidates': 1000, 'unclip_ratio': 1.5}, 'TrainReader': {'reader_function': 'ppocr.data.det.dataset_traversal,TrainReader', 'process_function': 'ppocr.data.det.db_process,DBProcessTrain', 'num_workers': 8, 'img_set_dir': './train_data/icdar2015/text_localization/', 'label_file_path': './train_data/icdar2015/text_localization/train_icdar2015_label.txt'}, 'EvalReader': {'reader_function': 'ppocr.data.det.dataset_traversal,EvalTestReader', 'process_function': 'ppocr.data.det.db_process,DBProcessTest', 'img_set_dir': './train_data/icdar2015/text_localization/', 'label_file_path': './train_data/icdar2015/text_localization/test_icdar2015_label.txt', 'test_image_shape': [736, 1280]}, 'TestReader': {'reader_function': 'ppocr.data.det.dataset_traversal,EvalTestReader', 'process_function': 'ppocr.data.det.db_process,DBProcessTest', 'single_img_path': None, 'img_set_dir': './train_data/icdar2015/text_localization/', 'label_file_path': './train_data/icdar2015/text_localization/test_icdar2015_label.txt', 
'test_image_shape': [736, 1280], 'do_eval': True}}\r\n2020-05-19 10:29:59,238-ERROR: Config use_gpu cannot be set as true while you are using paddlepaddle cpu version !\r\nPlease try:\r\n 1. Install paddlepaddle-gpu to run model on GPU\r\n 2. Set use_gpu as false in config file to run model on CPU\n", "before_files": [{"content": "# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import\nfrom __future__ import division\nfrom __future__ import print_function\n\nimport os\nimport sys\nimport time\nimport multiprocessing\nimport numpy as np\n\n\ndef set_paddle_flags(**kwargs):\n for key, value in kwargs.items():\n if os.environ.get(key, None) is None:\n os.environ[key] = str(value)\n\n\n# NOTE(paddle-dev): All of these flags should be\n# set before `import paddle`. Otherwise, it would\n# not take any effect. \nset_paddle_flags(\n FLAGS_eager_delete_tensor_gb=0, # enable GC to save memory\n)\n\nimport program\nfrom paddle import fluid\nfrom ppocr.utils.utility import initial_logger\nlogger = initial_logger()\nfrom ppocr.utils.save_load import init_model\nfrom ppocr.utils.character import CharacterOps\nfrom ppocr.utils.utility import create_module\n\n\ndef main():\n config = program.load_config(FLAGS.config)\n program.merge_config(FLAGS.opt)\n logger.info(config)\n\n # check if set use_gpu=True in paddlepaddle cpu version\n use_gpu = config['Global']['use_gpu']\n program.check_gpu(True)\n\n alg = config['Global']['algorithm']\n assert alg in ['EAST', 'DB', 'Rosetta', 'CRNN', 'STARNet', 'RARE']\n if alg in ['Rosetta', 'CRNN', 'STARNet', 'RARE']:\n config['Global']['char_ops'] = CharacterOps(config['Global'])\n\n place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()\n startup_prog = fluid.Program()\n eval_program = fluid.Program()\n\n feeded_var_names, target_vars, fetches_var_name = program.build_export(\n config, eval_program, startup_prog)\n eval_program = eval_program.clone(for_test=True)\n exe = fluid.Executor(place)\n exe.run(startup_prog)\n\n init_model(config, eval_program, exe)\n\n save_inference_dir = config['Global']['save_inference_dir']\n if not os.path.exists(save_inference_dir):\n os.makedirs(save_inference_dir)\n fluid.io.save_inference_model(\n dirname=save_inference_dir,\n feeded_var_names=feeded_var_names,\n main_program=eval_program,\n target_vars=target_vars,\n executor=exe,\n model_filename='model',\n params_filename='params')\n print(\"inference model saved in {}/model and {}/params\".format(\n save_inference_dir, save_inference_dir))\n print(\"save success, output_name_list:\", fetches_var_name)\n\n\nif __name__ == '__main__':\n parser = program.ArgsParser()\n FLAGS = parser.parse_args()\n main()\n", "path": "tools/export_model.py"}]} | 2,400 | 195 |
gh_patches_debug_33751 | rasdani/github-patches | git_diff | Parsl__parsl-1083 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
remote_side_bash_executor global logger conflates logs from multiple tasks/sources
`remote_side_bash_executor` configures and uses the global `logging` logger, rather than one scoped to that function. Once it has configured a log file for output, all subsequent global logs from any further `remote_side_bash_executor` invocation in that process, as well as other uses of logging, such as htex `process_worker_pool.py`, end up in earlier configured log files.
This results in `/tmp/bashexec` logs containing a confused assortment of logs from different sources, rather than being focused on a single bash execution.
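A minimal sketch of the kind of scoping this issue asks for: a logger and file handler created per invocation instead of calling `logging.basicConfig` on the root logger. This is an illustration only, not the project's actual fix, and the helper name and format string are made up:

```python
import logging
import time


def per_invocation_logger(logbase="/tmp"):
    # Hypothetical helper: each call builds its own logger and file handler,
    # so later invocations (and other libraries that use logging) do not
    # write into a log file configured by an earlier bash execution.
    t = time.time()
    logger = logging.getLogger("{0}.{1}".format(__name__, t))
    handler = logging.FileHandler("{0}/bashexec.{1}.log".format(logbase, t))
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s:%(lineno)d [%(levelname)s] %(message)s"))
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
    return logger
```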
</issue>
<code>
[start of parsl/app/bash.py]
1 import logging
2 from functools import update_wrapper
3 from inspect import signature, Parameter
4
5 from parsl.app.errors import wrap_error
6 from parsl.app.futures import DataFuture
7 from parsl.app.app import AppBase
8 from parsl.dataflow.dflow import DataFlowKernelLoader
9
10 logger = logging.getLogger(__name__)
11
12
13 def remote_side_bash_executor(func, *args, **kwargs):
14 """Execute the bash app type function and return the command line string.
15
16 This string is reformatted with the *args, and **kwargs
17 from call time.
18 """
19 import os
20 import time
21 import subprocess
22 import logging
23 import parsl.app.errors as pe
24
25 logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)
26
27 func_name = func.__name__
28
29 partial_cmdline = None
30
31 # Try to run the func to compose the commandline
32 try:
33 # Execute the func to get the commandline
34 partial_cmdline = func(*args, **kwargs)
35 # Reformat the commandline with current args and kwargs
36 executable = partial_cmdline.format(*args, **kwargs)
37
38 except AttributeError as e:
39 if partial_cmdline is not None:
40 raise pe.AppBadFormatting("App formatting failed for app '{}' with AttributeError: {}".format(func_name, e))
41 else:
42 raise pe.BashAppNoReturn("Bash app '{}' did not return a value, or returned none - with this exception: {}".format(func_name, e), None)
43
44 except IndexError as e:
45 raise pe.AppBadFormatting("App formatting failed for app '{}' with IndexError: {}".format(func_name, e))
46 except Exception as e:
47 logging.error("Caught exception during formatting of app '{}': {}".format(func_name, e))
48 raise e
49
50 logging.debug("Executable: %s", executable)
51
52 # Updating stdout, stderr if values passed at call time.
53
54 def open_std_fd(fdname):
55 # fdname is 'stdout' or 'stderr'
56 stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)
57 if stdfspec is None:
58 return None
59 elif isinstance(stdfspec, str):
60 fname = stdfspec
61 mode = 'a+'
62 elif isinstance(stdfspec, tuple):
63 if len(stdfspec) != 2:
64 raise pe.BadStdStreamFile("std descriptor %s has incorrect tuple length %s" % (fdname, len(stdfspec)), TypeError('Bad Tuple Length'))
65 fname, mode = stdfspec
66 else:
67 raise pe.BadStdStreamFile("std descriptor %s has unexpected type %s" % (fdname, str(type(stdfspec))), TypeError('Bad Tuple Type'))
68
69 try:
70 if os.path.dirname(fname):
71 os.makedirs(os.path.dirname(fname), exist_ok=True)
72 fd = open(fname, mode)
73 except Exception as e:
74 raise pe.BadStdStreamFile(fname, e)
75 return fd
76
77 std_out = open_std_fd('stdout')
78 std_err = open_std_fd('stderr')
79 timeout = kwargs.get('walltime')
80
81 if std_err is not None:
82 print('--> executable follows <--\n{}\n--> end executable <--'.format(executable), file=std_err, flush=True)
83
84 returncode = None
85 try:
86 proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')
87 proc.wait(timeout=timeout)
88 returncode = proc.returncode
89
90 except subprocess.TimeoutExpired:
91 raise pe.AppTimeout("[{}] App exceeded walltime: {}".format(func_name, timeout))
92
93 except Exception as e:
94 raise pe.AppException("[{}] App caught exception: {}".format(func_name, proc.returncode), e)
95
96 if returncode != 0:
97 raise pe.AppFailure("[{}] App failed with exit code: {}".format(func_name, proc.returncode), proc.returncode)
98
99 # TODO : Add support for globs here
100
101 missing = []
102 for outputfile in kwargs.get('outputs', []):
103 fpath = outputfile
104 if type(outputfile) != str:
105 fpath = outputfile.filepath
106
107 if not os.path.exists(fpath):
108 missing.extend([outputfile])
109
110 if missing:
111 raise pe.MissingOutputs("[{}] Missing outputs".format(func_name), missing)
112
113 return returncode
114
115
116 class BashApp(AppBase):
117
118 def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):
119 super().__init__(func, data_flow_kernel=data_flow_kernel, walltime=60, executors=executors, cache=cache)
120 self.kwargs = {}
121
122 # We duplicate the extraction of parameter defaults
123 # to self.kwargs to ensure availability at point of
124 # command string format. Refer: #349
125 sig = signature(func)
126
127 for s in sig.parameters:
128 if sig.parameters[s].default != Parameter.empty:
129 self.kwargs[s] = sig.parameters[s].default
130
131 def __call__(self, *args, **kwargs):
132 """Handle the call to a Bash app.
133
134 Args:
135 - Arbitrary
136
137 Kwargs:
138 - Arbitrary
139
140 Returns:
141 If outputs=[...] was a kwarg then:
142 App_fut, [Data_Futures...]
143 else:
144 App_fut
145
146 """
147 # Update kwargs in the app definition with ones passed in at calltime
148 self.kwargs.update(kwargs)
149
150 if self.data_flow_kernel is None:
151 dfk = DataFlowKernelLoader.dfk()
152 else:
153 dfk = self.data_flow_kernel
154
155 app_fut = dfk.submit(wrap_error(update_wrapper(remote_side_bash_executor, self.func)),
156 self.func, *args,
157 executors=self.executors,
158 fn_hash=self.func_hash,
159 cache=self.cache,
160 **self.kwargs)
161
162 out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)
163 for o in kwargs.get('outputs', [])]
164 app_fut._outputs = out_futs
165
166 return app_fut
167
[end of parsl/app/bash.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/parsl/app/bash.py b/parsl/app/bash.py
--- a/parsl/app/bash.py
+++ b/parsl/app/bash.py
@@ -1,4 +1,3 @@
-import logging
from functools import update_wrapper
from inspect import signature, Parameter
@@ -7,8 +6,6 @@
from parsl.app.app import AppBase
from parsl.dataflow.dflow import DataFlowKernelLoader
-logger = logging.getLogger(__name__)
-
def remote_side_bash_executor(func, *args, **kwargs):
"""Execute the bash app type function and return the command line string.
@@ -21,8 +18,20 @@
import subprocess
import logging
import parsl.app.errors as pe
+ from parsl import set_file_logger
+
+ logbase = "/tmp"
+ format_string = "%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s"
+
+ # make this name unique per invocation so that each invocation can
+ # log to its own file. It would be better to include the task_id here
+ # but that is awkward to wire through at the moment as apps do not
+ # have access to that execution context.
+ t = time.time()
- logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)
+ logname = __name__ + "." + str(t)
+ logger = logging.getLogger(logname)
+ set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)
func_name = func.__name__
@@ -44,10 +53,10 @@
except IndexError as e:
raise pe.AppBadFormatting("App formatting failed for app '{}' with IndexError: {}".format(func_name, e))
except Exception as e:
- logging.error("Caught exception during formatting of app '{}': {}".format(func_name, e))
+ logger.error("Caught exception during formatting of app '{}': {}".format(func_name, e))
raise e
- logging.debug("Executable: %s", executable)
+ logger.debug("Executable: %s", executable)
# Updating stdout, stderr if values passed at call time.
| {"golden_diff": "diff --git a/parsl/app/bash.py b/parsl/app/bash.py\n--- a/parsl/app/bash.py\n+++ b/parsl/app/bash.py\n@@ -1,4 +1,3 @@\n-import logging\n from functools import update_wrapper\n from inspect import signature, Parameter\n \n@@ -7,8 +6,6 @@\n from parsl.app.app import AppBase\n from parsl.dataflow.dflow import DataFlowKernelLoader\n \n-logger = logging.getLogger(__name__)\n-\n \n def remote_side_bash_executor(func, *args, **kwargs):\n \"\"\"Execute the bash app type function and return the command line string.\n@@ -21,8 +18,20 @@\n import subprocess\n import logging\n import parsl.app.errors as pe\n+ from parsl import set_file_logger\n+\n+ logbase = \"/tmp\"\n+ format_string = \"%(asctime)s.%(msecs)03d %(name)s:%(lineno)d [%(levelname)s] %(message)s\"\n+\n+ # make this name unique per invocation so that each invocation can\n+ # log to its own file. It would be better to include the task_id here\n+ # but that is awkward to wire through at the moment as apps do not\n+ # have access to that execution context.\n+ t = time.time()\n \n- logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)\n+ logname = __name__ + \".\" + str(t)\n+ logger = logging.getLogger(logname)\n+ set_file_logger(filename='{0}/bashexec.{1}.log'.format(logbase, t), name=logname, level=logging.DEBUG, format_string=format_string)\n \n func_name = func.__name__\n \n@@ -44,10 +53,10 @@\n except IndexError as e:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with IndexError: {}\".format(func_name, e))\n except Exception as e:\n- logging.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n+ logger.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n raise e\n \n- logging.debug(\"Executable: %s\", executable)\n+ logger.debug(\"Executable: %s\", executable)\n \n # Updating stdout, stderr if values passed at call time.\n", "issue": "remote_side_bash_executor global logger conflates logs from multiple tasks/sources\n`remote_side_bash_executor` configures and uses the global `logging` logger, rather than one scoped to that function. 
Once it has configured a log file for output, all subsequent global logs from any further `remote_side_bash_executor` invocation in that process, as well as other uses of logging, such as htex `process_worker_pool.py`, end up in earlier configured log files.\r\n\r\nThis results in `/tmp/bashexec` logs containing a confused assortment of logs from different sources, rather than being focuses on a single bash execution.\n", "before_files": [{"content": "import logging\nfrom functools import update_wrapper\nfrom inspect import signature, Parameter\n\nfrom parsl.app.errors import wrap_error\nfrom parsl.app.futures import DataFuture\nfrom parsl.app.app import AppBase\nfrom parsl.dataflow.dflow import DataFlowKernelLoader\n\nlogger = logging.getLogger(__name__)\n\n\ndef remote_side_bash_executor(func, *args, **kwargs):\n \"\"\"Execute the bash app type function and return the command line string.\n\n This string is reformatted with the *args, and **kwargs\n from call time.\n \"\"\"\n import os\n import time\n import subprocess\n import logging\n import parsl.app.errors as pe\n\n logging.basicConfig(filename='/tmp/bashexec.{0}.log'.format(time.time()), level=logging.DEBUG)\n\n func_name = func.__name__\n\n partial_cmdline = None\n\n # Try to run the func to compose the commandline\n try:\n # Execute the func to get the commandline\n partial_cmdline = func(*args, **kwargs)\n # Reformat the commandline with current args and kwargs\n executable = partial_cmdline.format(*args, **kwargs)\n\n except AttributeError as e:\n if partial_cmdline is not None:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with AttributeError: {}\".format(func_name, e))\n else:\n raise pe.BashAppNoReturn(\"Bash app '{}' did not return a value, or returned none - with this exception: {}\".format(func_name, e), None)\n\n except IndexError as e:\n raise pe.AppBadFormatting(\"App formatting failed for app '{}' with IndexError: {}\".format(func_name, e))\n except Exception as e:\n logging.error(\"Caught exception during formatting of app '{}': {}\".format(func_name, e))\n raise e\n\n logging.debug(\"Executable: %s\", executable)\n\n # Updating stdout, stderr if values passed at call time.\n\n def open_std_fd(fdname):\n # fdname is 'stdout' or 'stderr'\n stdfspec = kwargs.get(fdname) # spec is str name or tuple (name, mode)\n if stdfspec is None:\n return None\n elif isinstance(stdfspec, str):\n fname = stdfspec\n mode = 'a+'\n elif isinstance(stdfspec, tuple):\n if len(stdfspec) != 2:\n raise pe.BadStdStreamFile(\"std descriptor %s has incorrect tuple length %s\" % (fdname, len(stdfspec)), TypeError('Bad Tuple Length'))\n fname, mode = stdfspec\n else:\n raise pe.BadStdStreamFile(\"std descriptor %s has unexpected type %s\" % (fdname, str(type(stdfspec))), TypeError('Bad Tuple Type'))\n\n try:\n if os.path.dirname(fname):\n os.makedirs(os.path.dirname(fname), exist_ok=True)\n fd = open(fname, mode)\n except Exception as e:\n raise pe.BadStdStreamFile(fname, e)\n return fd\n\n std_out = open_std_fd('stdout')\n std_err = open_std_fd('stderr')\n timeout = kwargs.get('walltime')\n\n if std_err is not None:\n print('--> executable follows <--\\n{}\\n--> end executable <--'.format(executable), file=std_err, flush=True)\n\n returncode = None\n try:\n proc = subprocess.Popen(executable, stdout=std_out, stderr=std_err, shell=True, executable='/bin/bash')\n proc.wait(timeout=timeout)\n returncode = proc.returncode\n\n except subprocess.TimeoutExpired:\n raise pe.AppTimeout(\"[{}] App exceeded walltime: 
{}\".format(func_name, timeout))\n\n except Exception as e:\n raise pe.AppException(\"[{}] App caught exception: {}\".format(func_name, proc.returncode), e)\n\n if returncode != 0:\n raise pe.AppFailure(\"[{}] App failed with exit code: {}\".format(func_name, proc.returncode), proc.returncode)\n\n # TODO : Add support for globs here\n\n missing = []\n for outputfile in kwargs.get('outputs', []):\n fpath = outputfile\n if type(outputfile) != str:\n fpath = outputfile.filepath\n\n if not os.path.exists(fpath):\n missing.extend([outputfile])\n\n if missing:\n raise pe.MissingOutputs(\"[{}] Missing outputs\".format(func_name), missing)\n\n return returncode\n\n\nclass BashApp(AppBase):\n\n def __init__(self, func, data_flow_kernel=None, walltime=60, cache=False, executors='all'):\n super().__init__(func, data_flow_kernel=data_flow_kernel, walltime=60, executors=executors, cache=cache)\n self.kwargs = {}\n\n # We duplicate the extraction of parameter defaults\n # to self.kwargs to ensure availability at point of\n # command string format. Refer: #349\n sig = signature(func)\n\n for s in sig.parameters:\n if sig.parameters[s].default != Parameter.empty:\n self.kwargs[s] = sig.parameters[s].default\n\n def __call__(self, *args, **kwargs):\n \"\"\"Handle the call to a Bash app.\n\n Args:\n - Arbitrary\n\n Kwargs:\n - Arbitrary\n\n Returns:\n If outputs=[...] was a kwarg then:\n App_fut, [Data_Futures...]\n else:\n App_fut\n\n \"\"\"\n # Update kwargs in the app definition with ones passed in at calltime\n self.kwargs.update(kwargs)\n\n if self.data_flow_kernel is None:\n dfk = DataFlowKernelLoader.dfk()\n else:\n dfk = self.data_flow_kernel\n\n app_fut = dfk.submit(wrap_error(update_wrapper(remote_side_bash_executor, self.func)),\n self.func, *args,\n executors=self.executors,\n fn_hash=self.func_hash,\n cache=self.cache,\n **self.kwargs)\n\n out_futs = [DataFuture(app_fut, o, tid=app_fut.tid)\n for o in kwargs.get('outputs', [])]\n app_fut._outputs = out_futs\n\n return app_fut\n", "path": "parsl/app/bash.py"}]} | 2,390 | 509 |
gh_patches_debug_18875 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1749 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Useless request of a street list from muellmax.de
### I Have A Problem With:
A specific source
### What's Your Problem
We are the developers of müllmax. We can accept automated daily iCal requests, currently more than 500 a day. But it is annoying when useless data load is produced. To select a street name, an empty form field mm_frm_str_name is sent, which is the request for a complete list of street names. This can cause a data load of 100,000 kB or more and is completely useless. Instead of an empty field, the requested street name should be submitted. The second call with the requested street name in form field mm_frm_str_sel is unnecessary and should be omitted.
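A hedged sketch of the reduced flow described above, submitting the street name directly in a single request. The field names come from the log excerpt below; `mm_ses` is assumed to be the session parser used by the integration's source code:

```python
import requests

url = "https://www.muellmax.de/abfallkalender/awm/res/AwmStart.php"  # example service

# Hypothetical single request: submit the chosen street name right away
# instead of first requesting the full street list with an empty
# mm_frm_str_name field and then selecting it in a second call.
args = {
    "mm_ses": mm_ses.value,           # session token parsed from the previous response
    "xxx": 1,
    "mm_frm_str_sel": "Achatiusweg",  # the street actually requested
    "mm_aus_str_sel_submit": "suchen",
}
r = requests.post(url, data=args)
mm_ses.feed(r.text)
```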
### Source (if relevant)
muellmax_de.py
### Logs
```Shell
if self._mm_frm_str_sel is not None:
    # show street selection page
    args = {
        "mm_ses": mm_ses.value,
        "xxx": 1,
        "mm_frm_str_name": "",
        "mm_aus_str_txt_submit": "suchen",
    }
    r = requests.post(url, data=args)
    mm_ses.feed(r.text)

    # select street
    args = {
        "mm_ses": mm_ses.value,
        "xxx": 1,
        "mm_frm_str_sel": self._mm_frm_str_sel,
        "mm_aus_str_sel_submit": "weiter",
    }
    r = requests.post(url, data=args)
    mm_ses.feed(r.text)
```
### Relevant Configuration
```YAML
We do not have hacs_waste_collection_schedule installed.
```
### Checklist Source Error
- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)
- [X] Checked that the website of your service provider is still working
- [X] Tested my attributes on the service provider website (if possible)
- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on "Redownload" and choose master as version)
### Checklist Sensor Error
- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)
### Required
- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.
- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py]
1 from html.parser import HTMLParser
2
3 import requests
4 from waste_collection_schedule import Collection # type: ignore[attr-defined]
5 from waste_collection_schedule.service.ICS import ICS
6 from waste_collection_schedule.service.MuellmaxDe import SERVICE_MAP
7
8 TITLE = "Müllmax"
9 DESCRIPTION = "Source for Müllmax waste collection."
10 URL = "https://www.muellmax.de"
11
12
13 def EXTRA_INFO():
14 return [{"title": s["title"], "url": s["url"]} for s in SERVICE_MAP]
15
16
17 TEST_CASES = {
18 "Rhein-Sieg-Kreis, Alfter": {
19 "service": "Rsa",
20 "mm_frm_ort_sel": "Alfter",
21 "mm_frm_str_sel": "Ahrweg (105-Ende/94-Ende)",
22 },
23 "Münster, Achatiusweg": {"service": "Awm", "mm_frm_str_sel": "Achatiusweg"},
24 }
25
26
27 # Parser for HTML checkbox
28 class InputCheckboxParser(HTMLParser):
29 def __init__(self, startswith):
30 super().__init__()
31 self._startswith = startswith
32 self._value = {}
33
34 @property
35 def value(self):
36 return self._value
37
38 def handle_starttag(self, tag, attrs):
39 if tag == "input":
40 d = dict(attrs)
41 if d.get("name", "").startswith(self._startswith):
42 self._value[d["name"]] = d.get("value")
43
44
45 # Parser for HTML input (hidden) text
46 class InputTextParser(HTMLParser):
47 def __init__(self, **identifiers):
48 super().__init__()
49 self._identifiers = identifiers
50 self._value = None
51
52 @property
53 def value(self):
54 return self._value
55
56 def handle_starttag(self, tag, attrs):
57 if tag == "input":
58 d = dict(attrs)
59 for key, value in self._identifiers.items():
60 if key not in d or d[key] != value:
61 return
62 self._value = d.get("value")
63
64
65 class Source:
66 def __init__(
67 self, service, mm_frm_ort_sel=None, mm_frm_str_sel=None, mm_frm_hnr_sel=None
68 ):
69 self._service = service
70 self._mm_frm_ort_sel = mm_frm_ort_sel
71 self._mm_frm_str_sel = mm_frm_str_sel
72 self._mm_frm_hnr_sel = mm_frm_hnr_sel
73 self._ics = ICS()
74
75 def fetch(self):
76 mm_ses = InputTextParser(name="mm_ses")
77
78 url = f"https://www.muellmax.de/abfallkalender/{self._service.lower()}/res/{self._service}Start.php"
79 r = requests.get(url)
80 mm_ses.feed(r.text)
81
82 # select "Abfuhrtermine", returns ort or an empty street search field
83 args = {"mm_ses": mm_ses.value, "mm_aus_ort.x": 0, "mm_aus_ort.x": 0}
84 r = requests.post(url, data=args)
85 mm_ses.feed(r.text)
86
87 if self._mm_frm_ort_sel is not None:
88 # select city
89 args = {
90 "mm_ses": mm_ses.value,
91 "xxx": 1,
92 "mm_frm_ort_sel": self._mm_frm_ort_sel,
93 "mm_aus_ort_submit": "weiter",
94 }
95 r = requests.post(url, data=args)
96 mm_ses.feed(r.text)
97
98 if self._mm_frm_str_sel is not None:
99 # show street selection page
100 args = {
101 "mm_ses": mm_ses.value,
102 "xxx": 1,
103 "mm_frm_str_name": "",
104 "mm_aus_str_txt_submit": "suchen",
105 }
106 r = requests.post(url, data=args)
107 mm_ses.feed(r.text)
108
109 # select street
110 args = {
111 "mm_ses": mm_ses.value,
112 "xxx": 1,
113 "mm_frm_str_sel": self._mm_frm_str_sel,
114 "mm_aus_str_sel_submit": "weiter",
115 }
116 r = requests.post(url, data=args)
117 mm_ses.feed(r.text)
118
119 if self._mm_frm_hnr_sel is not None:
120 # select house number
121 args = {
122 "mm_ses": mm_ses.value,
123 "xxx": 1,
124 "mm_frm_hnr_sel": self._mm_frm_hnr_sel,
125 "mm_aus_hnr_sel_submit": "weiter",
126 }
127 r = requests.post(url, data=args)
128 mm_ses.feed(r.text)
129
130 # select to get ical
131 args = {"mm_ses": mm_ses.value, "xxx": 1, "mm_ica_auswahl": "iCalendar-Datei"}
132 r = requests.post(url, data=args)
133 mm_ses.feed(r.text)
134
135 mm_frm_fra = InputCheckboxParser(startswith="mm_frm_fra")
136 mm_frm_fra.feed(r.text)
137
138 # get ics file
139 args = {"mm_ses": mm_ses.value, "xxx": 1, "mm_frm_type": "termine"}
140 args.update(mm_frm_fra.value)
141 args.update({"mm_ica_gen": "iCalendar-Datei laden"})
142 r = requests.post(url, data=args)
143 mm_ses.feed(r.text)
144
145 entries = []
146
147 # parse ics file
148 dates = self._ics.convert(r.text)
149
150 entries = []
151 for d in dates:
152 entries.append(Collection(d[0], d[1]))
153 return entries
154
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py
@@ -96,22 +96,12 @@
mm_ses.feed(r.text)
if self._mm_frm_str_sel is not None:
- # show street selection page
- args = {
- "mm_ses": mm_ses.value,
- "xxx": 1,
- "mm_frm_str_name": "",
- "mm_aus_str_txt_submit": "suchen",
- }
- r = requests.post(url, data=args)
- mm_ses.feed(r.text)
-
# select street
args = {
"mm_ses": mm_ses.value,
"xxx": 1,
"mm_frm_str_sel": self._mm_frm_str_sel,
- "mm_aus_str_sel_submit": "weiter",
+ "mm_aus_str_sel_submit": "suchen",
}
r = requests.post(url, data=args)
mm_ses.feed(r.text)
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py\n@@ -96,22 +96,12 @@\n mm_ses.feed(r.text)\n \n if self._mm_frm_str_sel is not None:\n- # show street selection page\n- args = {\n- \"mm_ses\": mm_ses.value,\n- \"xxx\": 1,\n- \"mm_frm_str_name\": \"\",\n- \"mm_aus_str_txt_submit\": \"suchen\",\n- }\n- r = requests.post(url, data=args)\n- mm_ses.feed(r.text)\n-\n # select street\n args = {\n \"mm_ses\": mm_ses.value,\n \"xxx\": 1,\n \"mm_frm_str_sel\": self._mm_frm_str_sel,\n- \"mm_aus_str_sel_submit\": \"weiter\",\n+ \"mm_aus_str_sel_submit\": \"suchen\",\n }\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n", "issue": "Useless Request of a streetlist from muellmax.de\n### I Have A Problem With:\n\nA specific source\n\n### What's Your Problem\n\nWe are the developers of m\u00fcllmax. We can accept automated daily ical-requests, currently more than 500 a day. But it is annoying, when useless dataload is produced. To select a street name, an empty formfield mm_frm_str_name is sent, which is the request for a complete list of streetnames. This can cause a dataload of 100.000 kb or more and is comletely useless. Instead of an empty field the requested streetname should be submitted. The second call with the requested streetname in formfield mm_frm_str_sel is unnecessary and should be omitted.\n\n### Source (if relevant)\n\nmuellmax_de.py \n\n### Logs\n\n```Shell\nif self._mm_frm_str_sel is not None:\r\n # show street selection page\r\n args = {\r\n \"mm_ses\": mm_ses.value,\r\n \"xxx\": 1,\r\n \"mm_frm_str_name\": \"\",\r\n \"mm_aus_str_txt_submit\": \"suchen\",\r\n }\r\n r = requests.post(url, data=args)\r\n mm_ses.feed(r.text)\r\n\r\n # select street\r\n args = {\r\n \"mm_ses\": mm_ses.value,\r\n \"xxx\": 1,\r\n \"mm_frm_str_sel\": self._mm_frm_str_sel,\r\n \"mm_aus_str_sel_submit\": \"weiter\",\r\n }\r\n r = requests.post(url, data=args)\r\n mm_ses.feed(r.text)\n```\n\n\n### Relevant Configuration\n\n```YAML\nWe do not have hacs_waste_collection_schedule installed.\n```\n\n\n### Checklist Source Error\n\n- [X] Use the example parameters for your source (often available in the documentation) (don't forget to restart Home Assistant after changing the configuration)\n- [X] Checked that the website of your service provider is still working\n- [X] Tested my attributes on the service provider website (if possible)\n- [X] I have tested with the latest version of the integration (master) (for HACS in the 3 dot menu of the integration click on \"Redownload\" and choose master as version)\n\n### Checklist Sensor Error\n\n- [X] Checked in the Home Assistant Calendar tab if the event names match the types names (if types argument is used)\n\n### Required\n\n- [X] I have searched past (closed AND opened) issues to see if this bug has already been reported, and it hasn't been.\n- [X] I understand that people give their precious time for free, and thus I've done my very best to make this problem as easy as possible to investigate.\n", "before_files": [{"content": "from html.parser import HTMLParser\n\nimport requests\nfrom waste_collection_schedule import Collection # type: ignore[attr-defined]\nfrom 
waste_collection_schedule.service.ICS import ICS\nfrom waste_collection_schedule.service.MuellmaxDe import SERVICE_MAP\n\nTITLE = \"M\u00fcllmax\"\nDESCRIPTION = \"Source for M\u00fcllmax waste collection.\"\nURL = \"https://www.muellmax.de\"\n\n\ndef EXTRA_INFO():\n return [{\"title\": s[\"title\"], \"url\": s[\"url\"]} for s in SERVICE_MAP]\n\n\nTEST_CASES = {\n \"Rhein-Sieg-Kreis, Alfter\": {\n \"service\": \"Rsa\",\n \"mm_frm_ort_sel\": \"Alfter\",\n \"mm_frm_str_sel\": \"Ahrweg (105-Ende/94-Ende)\",\n },\n \"M\u00fcnster, Achatiusweg\": {\"service\": \"Awm\", \"mm_frm_str_sel\": \"Achatiusweg\"},\n}\n\n\n# Parser for HTML checkbox\nclass InputCheckboxParser(HTMLParser):\n def __init__(self, startswith):\n super().__init__()\n self._startswith = startswith\n self._value = {}\n\n @property\n def value(self):\n return self._value\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n if d.get(\"name\", \"\").startswith(self._startswith):\n self._value[d[\"name\"]] = d.get(\"value\")\n\n\n# Parser for HTML input (hidden) text\nclass InputTextParser(HTMLParser):\n def __init__(self, **identifiers):\n super().__init__()\n self._identifiers = identifiers\n self._value = None\n\n @property\n def value(self):\n return self._value\n\n def handle_starttag(self, tag, attrs):\n if tag == \"input\":\n d = dict(attrs)\n for key, value in self._identifiers.items():\n if key not in d or d[key] != value:\n return\n self._value = d.get(\"value\")\n\n\nclass Source:\n def __init__(\n self, service, mm_frm_ort_sel=None, mm_frm_str_sel=None, mm_frm_hnr_sel=None\n ):\n self._service = service\n self._mm_frm_ort_sel = mm_frm_ort_sel\n self._mm_frm_str_sel = mm_frm_str_sel\n self._mm_frm_hnr_sel = mm_frm_hnr_sel\n self._ics = ICS()\n\n def fetch(self):\n mm_ses = InputTextParser(name=\"mm_ses\")\n\n url = f\"https://www.muellmax.de/abfallkalender/{self._service.lower()}/res/{self._service}Start.php\"\n r = requests.get(url)\n mm_ses.feed(r.text)\n\n # select \"Abfuhrtermine\", returns ort or an empty street search field\n args = {\"mm_ses\": mm_ses.value, \"mm_aus_ort.x\": 0, \"mm_aus_ort.x\": 0}\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n if self._mm_frm_ort_sel is not None:\n # select city\n args = {\n \"mm_ses\": mm_ses.value,\n \"xxx\": 1,\n \"mm_frm_ort_sel\": self._mm_frm_ort_sel,\n \"mm_aus_ort_submit\": \"weiter\",\n }\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n if self._mm_frm_str_sel is not None:\n # show street selection page\n args = {\n \"mm_ses\": mm_ses.value,\n \"xxx\": 1,\n \"mm_frm_str_name\": \"\",\n \"mm_aus_str_txt_submit\": \"suchen\",\n }\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n # select street\n args = {\n \"mm_ses\": mm_ses.value,\n \"xxx\": 1,\n \"mm_frm_str_sel\": self._mm_frm_str_sel,\n \"mm_aus_str_sel_submit\": \"weiter\",\n }\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n if self._mm_frm_hnr_sel is not None:\n # select house number\n args = {\n \"mm_ses\": mm_ses.value,\n \"xxx\": 1,\n \"mm_frm_hnr_sel\": self._mm_frm_hnr_sel,\n \"mm_aus_hnr_sel_submit\": \"weiter\",\n }\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n # select to get ical\n args = {\"mm_ses\": mm_ses.value, \"xxx\": 1, \"mm_ica_auswahl\": \"iCalendar-Datei\"}\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n mm_frm_fra = InputCheckboxParser(startswith=\"mm_frm_fra\")\n mm_frm_fra.feed(r.text)\n\n # get ics file\n args = {\"mm_ses\": mm_ses.value, \"xxx\": 1, \"mm_frm_type\": 
\"termine\"}\n args.update(mm_frm_fra.value)\n args.update({\"mm_ica_gen\": \"iCalendar-Datei laden\"})\n r = requests.post(url, data=args)\n mm_ses.feed(r.text)\n\n entries = []\n\n # parse ics file\n dates = self._ics.convert(r.text)\n\n entries = []\n for d in dates:\n entries.append(Collection(d[0], d[1]))\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/muellmax_de.py"}]} | 2,721 | 280 |
gh_patches_debug_13448 | rasdani/github-patches | git_diff | huggingface__diffusers-1932 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Error] got an unexpected keyword argument `eta`
### Describe the bug
When I try to sample using the DDIM pipeline, an error occurs:
```bash
python utils/ddim.py
0%| | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
File "utils/ddim.py", line 10, in <module>
image = ddim(num_inference_steps=50).images[0]
File "/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
return func(*args, **kwargs)
File "/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/pipelines/ddim/pipeline_ddim.py", line 129, in __call__
image = self.scheduler.step(
File "/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/schedulers/scheduling_ddpm.py", line 259, in step
predict_epsilon = deprecate("predict_epsilon", "0.12.0", message, take_from=kwargs)
File "/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/utils/deprecation_utils.py", line 43, in deprecate
raise TypeError(f"{function} in {filename} line {line_number-1} got an unexpected keyword argument `{key}`")
TypeError: step in /home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/schedulers/scheduling_ddpm.py line 258 got an unexpected keyword argument `eta`
```
Is there anything I can do?
Thanks!
### Reproduction
I used the same code as the typical example:
```python
from diffusers import DDIMPipeline
model_id = "/home/sr5/se91.kim/AMP/Diffusers/models/google/ddpm-cifar10-32"
# load model and scheduler
ddim = DDIMPipeline.from_pretrained(model_id)
# run pipeline in inference (sample random noise and denoise)
image = ddim(num_inference_steps=50).images[0]
# save image
image.save("ddim_generated_image.png")
```
### Logs
_No response_
### System Info
I saw a closed issue (https://github.com/huggingface/diffusers/issues/170), but the diffusers package in my virtual env is already the latest version (0.12.dev):
```bash
$ pip list
Package Version
------------------ ------------
accelerate 0.15.0
certifi 2020.12.5
chardet 4.0.0
diffusers 0.12.0.dev0
filelock 3.0.12
fsspec 2022.11.0
huggingface-hub 0.11.1
idna 2.10
importlib-metadata 3.7.3
numpy 1.20.1
packaging 20.9
Pillow 8.1.2
pip 21.0.1
psutil 5.8.0
pyarrow 10.0.1
pyparsing 2.4.7
PyYAML 5.4.1
regex 2021.3.17
requests 2.25.1
setuptools 54.1.2
torch 1.9.1+cu111
torchaudio 0.9.1
torchvision 0.10.1+cu111
tqdm 4.64.1
typing-extensions 3.7.4.3
urllib3 1.26.4
wheel 0.36.2
zipp 3.4.1
```
I tried this under two versions of diffusers, 0.12.dev and 0.11.1, and both give the same error message.
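A hedged workaround sketch consistent with the traceback above: give the pipeline a DDIM scheduler, whose `step()` accepts `eta`, instead of the DDPM scheduler bundled with the checkpoint. This assumes the standard `DDIMScheduler.from_config` API and is not an official fix:

```python
from diffusers import DDIMPipeline, DDIMScheduler

model_id = "google/ddpm-cifar10-32"  # or a local copy of the checkpoint
ddim = DDIMPipeline.from_pretrained(model_id)

# Swap the checkpoint's DDPM scheduler for a DDIM scheduler built from the
# same config, so that step(..., eta=...) is understood.
ddim.scheduler = DDIMScheduler.from_config(ddim.scheduler.config)

image = ddim(num_inference_steps=50).images[0]
image.save("ddim_generated_image.png")
```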
</issue>
<code>
[start of src/diffusers/pipelines/ddim/pipeline_ddim.py]
1 # Copyright 2022 The HuggingFace Team. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import List, Optional, Tuple, Union
16
17 import torch
18
19 from ...utils import deprecate, randn_tensor
20 from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
21
22
23 class DDIMPipeline(DiffusionPipeline):
24 r"""
25 This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
26 library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
27
28 Parameters:
29 unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.
30 scheduler ([`SchedulerMixin`]):
31 A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of
32 [`DDPMScheduler`], or [`DDIMScheduler`].
33 """
34
35 def __init__(self, unet, scheduler):
36 super().__init__()
37 self.register_modules(unet=unet, scheduler=scheduler)
38
39 @torch.no_grad()
40 def __call__(
41 self,
42 batch_size: int = 1,
43 generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
44 eta: float = 0.0,
45 num_inference_steps: int = 50,
46 use_clipped_model_output: Optional[bool] = None,
47 output_type: Optional[str] = "pil",
48 return_dict: bool = True,
49 ) -> Union[ImagePipelineOutput, Tuple]:
50 r"""
51 Args:
52 batch_size (`int`, *optional*, defaults to 1):
53 The number of images to generate.
54 generator (`torch.Generator`, *optional*):
55 One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
56 to make generation deterministic.
57 eta (`float`, *optional*, defaults to 0.0):
58 The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM).
59 num_inference_steps (`int`, *optional*, defaults to 50):
60 The number of denoising steps. More denoising steps usually lead to a higher quality image at the
61 expense of slower inference.
62 use_clipped_model_output (`bool`, *optional*, defaults to `None`):
63 if `True` or `False`, see documentation for `DDIMScheduler.step`. If `None`, nothing is passed
64 downstream to the scheduler. So use `None` for schedulers which don't support this argument.
65 output_type (`str`, *optional*, defaults to `"pil"`):
66 The output format of the generate image. Choose between
67 [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
68 return_dict (`bool`, *optional*, defaults to `True`):
69 Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.
70
71 Returns:
72 [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if `return_dict` is
73 True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images.
74 """
75
76 if (
77 generator is not None
78 and isinstance(generator, torch.Generator)
79 and generator.device.type != self.device.type
80 and self.device.type != "mps"
81 ):
82 message = (
83 f"The `generator` device is `{generator.device}` and does not match the pipeline "
84 f"device `{self.device}`, so the `generator` will be ignored. "
85 f'Please use `generator=torch.Generator(device="{self.device}")` instead.'
86 )
87 deprecate(
88 "generator.device == 'cpu'",
89 "0.13.0",
90 message,
91 )
92 generator = None
93
94 # Sample gaussian noise to begin loop
95 if isinstance(self.unet.sample_size, int):
96 image_shape = (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size)
97 else:
98 image_shape = (batch_size, self.unet.in_channels, *self.unet.sample_size)
99
100 if isinstance(generator, list) and len(generator) != batch_size:
101 raise ValueError(
102 f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
103 f" size of {batch_size}. Make sure the batch size matches the length of the generators."
104 )
105
106 image = randn_tensor(image_shape, generator=generator, device=self.device, dtype=self.unet.dtype)
107
108 # set step values
109 self.scheduler.set_timesteps(num_inference_steps)
110
111 for t in self.progress_bar(self.scheduler.timesteps):
112 # 1. predict noise model_output
113 model_output = self.unet(image, t).sample
114
115 # 2. predict previous mean of image x_t-1 and add variance depending on eta
116 # eta corresponds to η in paper and should be between [0, 1]
117 # do x_t -> x_t-1
118 image = self.scheduler.step(
119 model_output, t, image, eta=eta, use_clipped_model_output=use_clipped_model_output, generator=generator
120 ).prev_sample
121
122 image = (image / 2 + 0.5).clamp(0, 1)
123 image = image.cpu().permute(0, 2, 3, 1).numpy()
124 if output_type == "pil":
125 image = self.numpy_to_pil(image)
126
127 if not return_dict:
128 return (image,)
129
130 return ImagePipelineOutput(images=image)
131
[end of src/diffusers/pipelines/ddim/pipeline_ddim.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/diffusers/pipelines/ddim/pipeline_ddim.py b/src/diffusers/pipelines/ddim/pipeline_ddim.py
--- a/src/diffusers/pipelines/ddim/pipeline_ddim.py
+++ b/src/diffusers/pipelines/ddim/pipeline_ddim.py
@@ -16,6 +16,7 @@
import torch
+from ...schedulers import DDIMScheduler
from ...utils import deprecate, randn_tensor
from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput
@@ -34,6 +35,10 @@
def __init__(self, unet, scheduler):
super().__init__()
+
+ # make sure scheduler can always be converted to DDIM
+ scheduler = DDIMScheduler.from_config(scheduler.config)
+
self.register_modules(unet=unet, scheduler=scheduler)
@torch.no_grad()
| {"golden_diff": "diff --git a/src/diffusers/pipelines/ddim/pipeline_ddim.py b/src/diffusers/pipelines/ddim/pipeline_ddim.py\n--- a/src/diffusers/pipelines/ddim/pipeline_ddim.py\n+++ b/src/diffusers/pipelines/ddim/pipeline_ddim.py\n@@ -16,6 +16,7 @@\n \n import torch\n \n+from ...schedulers import DDIMScheduler\n from ...utils import deprecate, randn_tensor\n from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput\n \n@@ -34,6 +35,10 @@\n \n def __init__(self, unet, scheduler):\n super().__init__()\n+\n+ # make sure scheduler can always be converted to DDIM\n+ scheduler = DDIMScheduler.from_config(scheduler.config)\n+\n self.register_modules(unet=unet, scheduler=scheduler)\n \n @torch.no_grad()\n", "issue": "[Error] got an unexpected keyword argument `eta`\n### Describe the bug\n\nWhen I try to make a sampling using DDIM pipeline, an error occurs\r\n```bash\r\npython utils/ddim.py\r\n 0%| | 0/50 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"utils/ddim.py\", line 10, in <module>\r\n image = ddim(num_inference_steps=50).images[0]\r\n File \"/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/torch/autograd/grad_mode.py\", line 28, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/pipelines/ddim/pipeline_ddim.py\", line 129, in __call__\r\n image = self.scheduler.step(\r\n File \"/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/schedulers/scheduling_ddpm.py\", line 259, in step\r\n predict_epsilon = deprecate(\"predict_epsilon\", \"0.12.0\", message, take_from=kwargs)\r\n File \"/home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/utils/deprecation_utils.py\", line 43, in deprecate\r\n raise TypeError(f\"{function} in {filename} line {line_number-1} got an unexpected keyword argument `{key}`\")\r\nTypeError: step in /home/sr5/se91.kim/.venv/diffusers/lib/python3.8/site-packages/diffusers/schedulers/scheduling_ddpm.py line 258 got an unexpected keyword argument `eta`\r\n```\r\n\r\nIs there anything I can do? \r\n\r\nThnaks!\n\n### Reproduction\n\nI used code lines the same as the typical example code:\r\n```python\r\nfrom diffusers import DDIMPipeline\r\n\r\nmodel_id = \"/home/sr5/se91.kim/AMP/Diffusers/models/google/ddpm-cifar10-32\"\r\n\r\n# load model and scheduler\r\nddim = DDIMPipeline.from_pretrained(model_id)\r\n\r\n# run pipeline in inference (sample random noise and denoise)\r\nimage = ddim(num_inference_steps=50).images[0]\r\n\r\n# save image\r\nimage.save(\"ddim_generated_image.png\")\r\n```\n\n### Logs\n\n_No response_\n\n### System Info\n\nI saw a closed issue (https://github.com/huggingface/diffusers/issues/170) but the diffusers in my virtual env. 
is already of the latest version (0.12.dev):\r\n```bash\r\n$ pip list\r\nPackage Version\r\n------------------ ------------\r\naccelerate 0.15.0\r\ncertifi 2020.12.5\r\nchardet 4.0.0\r\ndiffusers 0.12.0.dev0\r\nfilelock 3.0.12\r\nfsspec 2022.11.0\r\nhuggingface-hub 0.11.1\r\nidna 2.10\r\nimportlib-metadata 3.7.3\r\nnumpy 1.20.1\r\npackaging 20.9\r\nPillow 8.1.2\r\npip 21.0.1\r\npsutil 5.8.0\r\npyarrow 10.0.1\r\npyparsing 2.4.7\r\nPyYAML 5.4.1\r\nregex 2021.3.17\r\nrequests 2.25.1\r\nsetuptools 54.1.2\r\ntorch 1.9.1+cu111\r\ntorchaudio 0.9.1\r\ntorchvision 0.10.1+cu111\r\ntqdm 4.64.1\r\ntyping-extensions 3.7.4.3\r\nurllib3 1.26.4\r\nwheel 0.36.2\r\nzipp 3.4.1\r\n```\r\nI tried under two versions of diffusers; 0.12.dev and 0.11.1, and both give the same error message.\n", "before_files": [{"content": "# Copyright 2022 The HuggingFace Team. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import List, Optional, Tuple, Union\n\nimport torch\n\nfrom ...utils import deprecate, randn_tensor\nfrom ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput\n\n\nclass DDIMPipeline(DiffusionPipeline):\n r\"\"\"\n This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the\n library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)\n\n Parameters:\n unet ([`UNet2DModel`]): U-Net architecture to denoise the encoded image.\n scheduler ([`SchedulerMixin`]):\n A scheduler to be used in combination with `unet` to denoise the encoded image. Can be one of\n [`DDPMScheduler`], or [`DDIMScheduler`].\n \"\"\"\n\n def __init__(self, unet, scheduler):\n super().__init__()\n self.register_modules(unet=unet, scheduler=scheduler)\n\n @torch.no_grad()\n def __call__(\n self,\n batch_size: int = 1,\n generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,\n eta: float = 0.0,\n num_inference_steps: int = 50,\n use_clipped_model_output: Optional[bool] = None,\n output_type: Optional[str] = \"pil\",\n return_dict: bool = True,\n ) -> Union[ImagePipelineOutput, Tuple]:\n r\"\"\"\n Args:\n batch_size (`int`, *optional*, defaults to 1):\n The number of images to generate.\n generator (`torch.Generator`, *optional*):\n One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)\n to make generation deterministic.\n eta (`float`, *optional*, defaults to 0.0):\n The eta parameter which controls the scale of the variance (0 is DDIM and 1 is one type of DDPM).\n num_inference_steps (`int`, *optional*, defaults to 50):\n The number of denoising steps. More denoising steps usually lead to a higher quality image at the\n expense of slower inference.\n use_clipped_model_output (`bool`, *optional*, defaults to `None`):\n if `True` or `False`, see documentation for `DDIMScheduler.step`. If `None`, nothing is passed\n downstream to the scheduler. 
So use `None` for schedulers which don't support this argument.\n output_type (`str`, *optional*, defaults to `\"pil\"`):\n The output format of the generate image. Choose between\n [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.\n return_dict (`bool`, *optional*, defaults to `True`):\n Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple.\n\n Returns:\n [`~pipelines.ImagePipelineOutput`] or `tuple`: [`~pipelines.utils.ImagePipelineOutput`] if `return_dict` is\n True, otherwise a `tuple. When returning a tuple, the first element is a list with the generated images.\n \"\"\"\n\n if (\n generator is not None\n and isinstance(generator, torch.Generator)\n and generator.device.type != self.device.type\n and self.device.type != \"mps\"\n ):\n message = (\n f\"The `generator` device is `{generator.device}` and does not match the pipeline \"\n f\"device `{self.device}`, so the `generator` will be ignored. \"\n f'Please use `generator=torch.Generator(device=\"{self.device}\")` instead.'\n )\n deprecate(\n \"generator.device == 'cpu'\",\n \"0.13.0\",\n message,\n )\n generator = None\n\n # Sample gaussian noise to begin loop\n if isinstance(self.unet.sample_size, int):\n image_shape = (batch_size, self.unet.in_channels, self.unet.sample_size, self.unet.sample_size)\n else:\n image_shape = (batch_size, self.unet.in_channels, *self.unet.sample_size)\n\n if isinstance(generator, list) and len(generator) != batch_size:\n raise ValueError(\n f\"You have passed a list of generators of length {len(generator)}, but requested an effective batch\"\n f\" size of {batch_size}. Make sure the batch size matches the length of the generators.\"\n )\n\n image = randn_tensor(image_shape, generator=generator, device=self.device, dtype=self.unet.dtype)\n\n # set step values\n self.scheduler.set_timesteps(num_inference_steps)\n\n for t in self.progress_bar(self.scheduler.timesteps):\n # 1. predict noise model_output\n model_output = self.unet(image, t).sample\n\n # 2. predict previous mean of image x_t-1 and add variance depending on eta\n # eta corresponds to \u03b7 in paper and should be between [0, 1]\n # do x_t -> x_t-1\n image = self.scheduler.step(\n model_output, t, image, eta=eta, use_clipped_model_output=use_clipped_model_output, generator=generator\n ).prev_sample\n\n image = (image / 2 + 0.5).clamp(0, 1)\n image = image.cpu().permute(0, 2, 3, 1).numpy()\n if output_type == \"pil\":\n image = self.numpy_to_pil(image)\n\n if not return_dict:\n return (image,)\n\n return ImagePipelineOutput(images=image)\n", "path": "src/diffusers/pipelines/ddim/pipeline_ddim.py"}]} | 3,157 | 196 |
gh_patches_debug_23873 | rasdani/github-patches | git_diff | cloudtools__troposphere-186 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Invalid JSON generated with SecurityGroupIngress
Ref: https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template
Invalid format generated:
``` json
"SecurityGroupIngress": [
{
"Properties": {
"CidrIp": "0.0.0.0/0",
"FromPort": "0",
"IpProtocol": "-1",
"ToPort": "65535"
},
"Type": "AWS::EC2::SecurityGroupIngress"
}
]
```
With the above template AWS will complain:
```
Encountered unsupported property Type
```
Correct format:
``` json
"SecurityGroupIngress": [
{
"CidrIp": "0.0.0.0/0",
"FromPort": "0",
"IpProtocol": "-1",
"ToPort": "65535"
}
]
```
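A hedged troposphere sketch that yields the correct inline form shown above, using `SecurityGroupRule` for embedded rules rather than the standalone `SecurityGroupIngress` resource (class and property names assume current troposphere conventions):

```python
from troposphere.ec2 import SecurityGroup, SecurityGroupRule

security_group = SecurityGroup(
    "InstanceSecurityGroup",
    GroupDescription="Allow all traffic",
    SecurityGroupIngress=[
        # An inline rule serializes to just its properties,
        # without a "Type"/"Properties" wrapper.
        SecurityGroupRule(
            IpProtocol="-1",
            FromPort="0",
            ToPort="65535",
            CidrIp="0.0.0.0/0",
        ),
    ],
)
```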
</issue>
<code>
[start of examples/RedshiftClusterInVpc.py]
1 # Converted from Redshift.template located at:
2 # http://aws.amazon.com/cloudformation/aws-cloudformation-templates/
3
4 from troposphere import Template, Parameter, Ref, Equals
5 from troposphere import If, Output, Join, GetAtt
6 from troposphere.redshift import Cluster, ClusterParameterGroup
7 from troposphere.redshift import AmazonRedshiftParameter, ClusterSubnetGroup
8 from troposphere.ec2 import VPC, Subnet, InternetGateway, VPCGatewayAttachment
9 from troposphere.ec2 import SecurityGroup, SecurityGroupIngress
10
11
12 t = Template()
13
14 t.add_version("2010-09-09")
15
16 t.add_description(
17 "AWS CloudFormation Sample Template: Redshift cluster in a VPC")
18
19 dbname = t.add_parameter(Parameter(
20 "DatabaseName",
21 Description="The name of the first database to be created when the "
22 "redshift cluster is created",
23 Type="String",
24 Default="defaultdb",
25 AllowedPattern="([a-z]|[0-9])+",
26 ))
27
28 clustertype = t.add_parameter(Parameter(
29 "ClusterType",
30 Description="The type of the cluster",
31 Type="String",
32 Default="single-node",
33 AllowedValues=[
34 "single-node",
35 "multi-mode"
36 ],
37 ))
38
39 numberofnodes = t.add_parameter(Parameter(
40 "NumberOfNodes",
41 Description="The number of compute nodes in the redshift cluster. "
42 "When cluster type is specified as: 1) single-node, the NumberOfNodes "
43 "parameter should be specified as 1, 2) multi-node, the NumberOfNodes "
44 "parameter should be greater than 1",
45 Type="Number",
46 Default="1",
47 ))
48
49 nodetype = t.add_parameter(Parameter(
50 "NodeType",
51 Description="The node type to be provisioned for the redshift cluster",
52 Type="String",
53 Default="dw2.large",
54 ))
55
56 masterusername = t.add_parameter(Parameter(
57 "MasterUsername",
58 Description="The user name associated with the master user account for "
59 "the redshift cluster that is being created",
60 Type="String",
61 Default="defaultuser",
62 AllowedPattern="([a-z])([a-z]|[0-9])*",
63 NoEcho=True,
64 ))
65
66 masteruserpassword = t.add_parameter(Parameter(
67 "MasterUserPassword",
68 Description="The password associated with the master user account for the "
69 "redshift cluster that is being created.",
70 Type="String",
71 NoEcho=True,
72 ))
73
74 conditions = {
75 "IsMultiNodeCluster": Equals(
76 Ref("ClusterType"),
77 "multi-mode"
78 ),
79 }
80
81 for k in conditions:
82 t.add_condition(k, conditions[k])
83
84 redshiftcluster = t.add_resource(Cluster(
85 "RedshiftCluster",
86 ClusterType=Ref("ClusterType"),
87 NumberOfNodes=If("IsMultiNodeCluster",
88 Ref("NumberOfNodes"), Ref("AWS::NoValue")),
89 NodeType=Ref("NodeType"),
90 DBName=Ref("DatabaseName"),
91 MasterUsername=Ref("MasterUsername"),
92 MasterUserPassword=Ref("MasterUserPassword"),
93 ClusterParameterGroupName=Ref("RedshiftClusterParameterGroup"),
94 VpcSecurityGroupIds=Ref("SecurityGroup"),
95 ClusterSubnetGroupName=Ref("RedshiftClusterSubnetGroup"),
96 ))
97
98 amazonredshiftparameter1 = AmazonRedshiftParameter(
99 "AmazonRedshiftParameter1",
100 ParameterName="enable_user_activity_logging",
101 ParameterValue="true",
102 )
103
104 redshiftclusterparametergroup = t.add_resource(ClusterParameterGroup(
105 "RedshiftClusterParameterGroup",
106 Description="Cluster parameter group",
107 ParameterGroupFamily="redshift-1.0",
108 Parameters=[amazonredshiftparameter1],
109 ))
110
111 redshiftclustersubnetgroup = t.add_resource(ClusterSubnetGroup(
112 "RedshiftClusterSubnetGroup",
113 Description="Cluster subnet group",
114 SubnetIds=Ref("Subnet"),
115 ))
116
117 vpc = t.add_resource(VPC(
118 "VPC",
119 CidrBlock="10.0.0.0/16",
120 ))
121
122 subnet = t.add_resource(Subnet(
123 "Subnet",
124 CidrBlock="10.0.0.0/24",
125 VpcId=Ref("VPC"),
126 ))
127
128 internetgateway = t.add_resource(InternetGateway(
129 "InternetGateway",
130 ))
131
132 gatewayattachment = t.add_resource(VPCGatewayAttachment(
133 "GatewayAttachment",
134 VpcId=Ref("VPC"),
135 InternetGatewayId=Ref("InternetGateway"),
136 ))
137
138 securitygroupingress1 = SecurityGroupIngress(
139 "SecurityGroupIngress1",
140 CidrIp="10.0.0.0/16",
141 FromPort="80",
142 ToPort="80",
143 IpProtocol="tcp",
144 )
145
146 securitygroup = t.add_resource(SecurityGroup(
147 "SecurityGroup",
148 GroupDescription="Security Group",
149 SecurityGroupIngress=[securitygroupingress1],
150 VpcId=Ref("VPC"),
151 ))
152
153 t.add_output(Output(
154 "ClusterEndpoint",
155 Value=Join(":", [GetAtt(redshiftcluster, "Endpoint.Address"),
156 GetAtt(redshiftcluster, "Endpoint.Port")]),
157 ))
158
159 print(t.to_json())
160
[end of examples/RedshiftClusterInVpc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/RedshiftClusterInVpc.py b/examples/RedshiftClusterInVpc.py
--- a/examples/RedshiftClusterInVpc.py
+++ b/examples/RedshiftClusterInVpc.py
@@ -6,7 +6,7 @@
from troposphere.redshift import Cluster, ClusterParameterGroup
from troposphere.redshift import AmazonRedshiftParameter, ClusterSubnetGroup
from troposphere.ec2 import VPC, Subnet, InternetGateway, VPCGatewayAttachment
-from troposphere.ec2 import SecurityGroup, SecurityGroupIngress
+from troposphere.ec2 import SecurityGroup, SecurityGroupRule
t = Template()
@@ -135,18 +135,18 @@
InternetGatewayId=Ref("InternetGateway"),
))
-securitygroupingress1 = SecurityGroupIngress(
- "SecurityGroupIngress1",
- CidrIp="10.0.0.0/16",
- FromPort="80",
- ToPort="80",
- IpProtocol="tcp",
-)
-
securitygroup = t.add_resource(SecurityGroup(
"SecurityGroup",
GroupDescription="Security Group",
- SecurityGroupIngress=[securitygroupingress1],
+ SecurityGroupIngress=[
+ SecurityGroupRule(
+ "SecurityGroupIngress1",
+ CidrIp="10.0.0.0/16",
+ FromPort="80",
+ ToPort="80",
+ IpProtocol="tcp",
+ )
+ ],
VpcId=Ref("VPC"),
))
| {"golden_diff": "diff --git a/examples/RedshiftClusterInVpc.py b/examples/RedshiftClusterInVpc.py\n--- a/examples/RedshiftClusterInVpc.py\n+++ b/examples/RedshiftClusterInVpc.py\n@@ -6,7 +6,7 @@\n from troposphere.redshift import Cluster, ClusterParameterGroup\n from troposphere.redshift import AmazonRedshiftParameter, ClusterSubnetGroup\n from troposphere.ec2 import VPC, Subnet, InternetGateway, VPCGatewayAttachment\n-from troposphere.ec2 import SecurityGroup, SecurityGroupIngress\n+from troposphere.ec2 import SecurityGroup, SecurityGroupRule\n \n \n t = Template()\n@@ -135,18 +135,18 @@\n InternetGatewayId=Ref(\"InternetGateway\"),\n ))\n \n-securitygroupingress1 = SecurityGroupIngress(\n- \"SecurityGroupIngress1\",\n- CidrIp=\"10.0.0.0/16\",\n- FromPort=\"80\",\n- ToPort=\"80\",\n- IpProtocol=\"tcp\",\n-)\n-\n securitygroup = t.add_resource(SecurityGroup(\n \"SecurityGroup\",\n GroupDescription=\"Security Group\",\n- SecurityGroupIngress=[securitygroupingress1],\n+ SecurityGroupIngress=[\n+ SecurityGroupRule(\n+ \"SecurityGroupIngress1\",\n+ CidrIp=\"10.0.0.0/16\",\n+ FromPort=\"80\",\n+ ToPort=\"80\",\n+ IpProtocol=\"tcp\",\n+ )\n+ ],\n VpcId=Ref(\"VPC\"),\n ))\n", "issue": "Invalid json generated with SecurityGroupIngress\nRef: https://s3-us-west-2.amazonaws.com/cloudformation-templates-us-west-2/EC2InstanceWithSecurityGroupSample.template\n\nInvalid format generated:\n\n``` json\n\"SecurityGroupIngress\": [\n {\n \"Properties\": {\n \"CidrIp\": \"0.0.0.0/0\",\n \"FromPort\": \"0\",\n \"IpProtocol\": \"-1\",\n \"ToPort\": \"65535\"\n },\n \"Type\": \"AWS::EC2::SecurityGroupIngress\"\n }\n ]\n```\n\nWith the above template AWS will complain:\n\n```\nEncountered unsupported property Type\n```\n\nCorrect format:\n\n``` json\n\"SecurityGroupIngress\": [\n {\n \"CidrIp\": \"0.0.0.0/0\",\n \"FromPort\": \"0\",\n \"IpProtocol\": \"-1\",\n \"ToPort\": \"65535\"\n }\n ]\n```\n\n", "before_files": [{"content": "# Converted from Redshift.template located at:\n# http://aws.amazon.com/cloudformation/aws-cloudformation-templates/\n\nfrom troposphere import Template, Parameter, Ref, Equals\nfrom troposphere import If, Output, Join, GetAtt\nfrom troposphere.redshift import Cluster, ClusterParameterGroup\nfrom troposphere.redshift import AmazonRedshiftParameter, ClusterSubnetGroup\nfrom troposphere.ec2 import VPC, Subnet, InternetGateway, VPCGatewayAttachment\nfrom troposphere.ec2 import SecurityGroup, SecurityGroupIngress\n\n\nt = Template()\n\nt.add_version(\"2010-09-09\")\n\nt.add_description(\n \"AWS CloudFormation Sample Template: Redshift cluster in a VPC\")\n\ndbname = t.add_parameter(Parameter(\n \"DatabaseName\",\n Description=\"The name of the first database to be created when the \"\n \"redshift cluster is created\",\n Type=\"String\",\n Default=\"defaultdb\",\n AllowedPattern=\"([a-z]|[0-9])+\",\n))\n\nclustertype = t.add_parameter(Parameter(\n \"ClusterType\",\n Description=\"The type of the cluster\",\n Type=\"String\",\n Default=\"single-node\",\n AllowedValues=[\n \"single-node\",\n \"multi-mode\"\n ],\n))\n\nnumberofnodes = t.add_parameter(Parameter(\n \"NumberOfNodes\",\n Description=\"The number of compute nodes in the redshift cluster. 
\"\n \"When cluster type is specified as: 1) single-node, the NumberOfNodes \"\n \"parameter should be specified as 1, 2) multi-node, the NumberOfNodes \"\n \"parameter should be greater than 1\",\n Type=\"Number\",\n Default=\"1\",\n))\n\nnodetype = t.add_parameter(Parameter(\n \"NodeType\",\n Description=\"The node type to be provisioned for the redshift cluster\",\n Type=\"String\",\n Default=\"dw2.large\",\n))\n\nmasterusername = t.add_parameter(Parameter(\n \"MasterUsername\",\n Description=\"The user name associated with the master user account for \"\n \"the redshift cluster that is being created\",\n Type=\"String\",\n Default=\"defaultuser\",\n AllowedPattern=\"([a-z])([a-z]|[0-9])*\",\n NoEcho=True,\n))\n\nmasteruserpassword = t.add_parameter(Parameter(\n \"MasterUserPassword\",\n Description=\"The password associated with the master user account for the \"\n \"redshift cluster that is being created.\",\n Type=\"String\",\n NoEcho=True,\n))\n\nconditions = {\n \"IsMultiNodeCluster\": Equals(\n Ref(\"ClusterType\"),\n \"multi-mode\"\n ),\n}\n\nfor k in conditions:\n t.add_condition(k, conditions[k])\n\nredshiftcluster = t.add_resource(Cluster(\n \"RedshiftCluster\",\n ClusterType=Ref(\"ClusterType\"),\n NumberOfNodes=If(\"IsMultiNodeCluster\",\n Ref(\"NumberOfNodes\"), Ref(\"AWS::NoValue\")),\n NodeType=Ref(\"NodeType\"),\n DBName=Ref(\"DatabaseName\"),\n MasterUsername=Ref(\"MasterUsername\"),\n MasterUserPassword=Ref(\"MasterUserPassword\"),\n ClusterParameterGroupName=Ref(\"RedshiftClusterParameterGroup\"),\n VpcSecurityGroupIds=Ref(\"SecurityGroup\"),\n ClusterSubnetGroupName=Ref(\"RedshiftClusterSubnetGroup\"),\n))\n\namazonredshiftparameter1 = AmazonRedshiftParameter(\n \"AmazonRedshiftParameter1\",\n ParameterName=\"enable_user_activity_logging\",\n ParameterValue=\"true\",\n)\n\nredshiftclusterparametergroup = t.add_resource(ClusterParameterGroup(\n \"RedshiftClusterParameterGroup\",\n Description=\"Cluster parameter group\",\n ParameterGroupFamily=\"redshift-1.0\",\n Parameters=[amazonredshiftparameter1],\n))\n\nredshiftclustersubnetgroup = t.add_resource(ClusterSubnetGroup(\n \"RedshiftClusterSubnetGroup\",\n Description=\"Cluster subnet group\",\n SubnetIds=Ref(\"Subnet\"),\n))\n\nvpc = t.add_resource(VPC(\n \"VPC\",\n CidrBlock=\"10.0.0.0/16\",\n))\n\nsubnet = t.add_resource(Subnet(\n \"Subnet\",\n CidrBlock=\"10.0.0.0/24\",\n VpcId=Ref(\"VPC\"),\n))\n\ninternetgateway = t.add_resource(InternetGateway(\n \"InternetGateway\",\n))\n\ngatewayattachment = t.add_resource(VPCGatewayAttachment(\n \"GatewayAttachment\",\n VpcId=Ref(\"VPC\"),\n InternetGatewayId=Ref(\"InternetGateway\"),\n))\n\nsecuritygroupingress1 = SecurityGroupIngress(\n \"SecurityGroupIngress1\",\n CidrIp=\"10.0.0.0/16\",\n FromPort=\"80\",\n ToPort=\"80\",\n IpProtocol=\"tcp\",\n)\n\nsecuritygroup = t.add_resource(SecurityGroup(\n \"SecurityGroup\",\n GroupDescription=\"Security Group\",\n SecurityGroupIngress=[securitygroupingress1],\n VpcId=Ref(\"VPC\"),\n))\n\nt.add_output(Output(\n \"ClusterEndpoint\",\n Value=Join(\":\", [GetAtt(redshiftcluster, \"Endpoint.Address\"),\n GetAtt(redshiftcluster, \"Endpoint.Port\")]),\n))\n\nprint(t.to_json())\n", "path": "examples/RedshiftClusterInVpc.py"}]} | 2,242 | 348 |
gh_patches_debug_30936 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-2994 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Login user after password reset notify events with Anonymous User
## BUG/PROBLEM REPORT
The option **Login user after password reset** allows a user that just performed a password reset to automatically log in after the process is complete.
One of the two events (**UserLoggedInEvent**, **UserInitialLoginInEvent**) is triggered by this process, but with the Anonymous User instead of the user that just performed the password reset.
### What I did:
- Newly created Plone site (no addons)
- **Login user after password reset** selected on */@@security-controlpanel*
- Create a new user
- Request a user password reset
- Follow the generated link
### What I expect to happen:
**UserLoggedInEvent** or **UserInitialLoginInEvent** should be triggered with the newly logged in user.
### What actually happened:
**UserLoggedInEvent** and **UserInitialLoginInEvent** are triggered with **<SpecialUser 'Anonymous User'>**.
### What version of Plone/ Addons I am using:
* Plone 5.2
* No addons
</issue>
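A sketch of the direction the fix takes (mirroring the diff further down): instead of asking the security manager for the current user — which is still Anonymous at this point — look the member up by the login name submitted with the reset form and notify the events with that user. The helper name comes from `Products.CMFPlone.RegistrationTool`; this is an illustration, not a drop-in patch.

``` python
from AccessControl.SecurityManagement import getSecurityManager
from Products.CMFPlone.RegistrationTool import get_member_by_login_name
from Products.PlonePAS.events import UserInitialLoginInEvent
from Products.PlonePAS.events import UserLoggedInEvent
from zope.event import notify


def notify_login_events(context, userid):
    """Notify login events for the user who just reset the password."""
    member = get_member_by_login_name(context, userid, False)
    if member is not None:
        user = member.getUser()
    else:
        # Fallback: the previous behaviour, which reports Anonymous here.
        user = getSecurityManager().getUser()

    if user.getProperty('login_time', None) is None:
        notify(UserInitialLoginInEvent(user))
    else:
        notify(UserLoggedInEvent(user))
```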
<code>
[start of Products/CMFPlone/browser/login/password_reset.py]
1 # -*- coding: utf-8 -*-
2 from AccessControl.SecurityManagement import getSecurityManager
3 from email.header import Header
4 from plone.app.layout.navigation.interfaces import INavigationRoot
5 from plone.memoize import view
6 from plone.registry.interfaces import IRegistry
7 from Products.CMFCore.utils import getToolByName
8 from Products.CMFPlone import PloneMessageFactory as _
9 from Products.CMFPlone.interfaces import IPasswordResetToolView
10 from Products.CMFPlone.interfaces.controlpanel import IMailSchema
11 from Products.CMFPlone.PasswordResetTool import ExpiredRequestError
12 from Products.CMFPlone.PasswordResetTool import InvalidRequestError
13 from Products.CMFPlone.utils import safe_unicode
14 from Products.CMFPlone.utils import safeToInt
15 from Products.Five import BrowserView
16 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
17 from Products.PlonePAS.events import UserInitialLoginInEvent
18 from Products.PlonePAS.events import UserLoggedInEvent
19 from Products.PluggableAuthService.interfaces.plugins import ICredentialsUpdatePlugin # noqa
20 from Products.statusmessages.interfaces import IStatusMessage
21 from zope.component import getMultiAdapter
22 from zope.component import getUtility
23 from zope.event import notify
24 from zope.i18n import translate
25 from zope.interface import implementer
26 from zope.publisher.interfaces import IPublishTraverse
27
28
29 @implementer(IPasswordResetToolView)
30 class PasswordResetToolView(BrowserView):
31
32 @view.memoize_contextless
33 def portal_state(self):
34 """ return portal_state of plone.app.layout
35 """
36 return getMultiAdapter((self.context, self.request),
37 name=u"plone_portal_state")
38
39 def encode_mail_header(self, text):
40 """ Encodes text into correctly encoded email header """
41 return Header(safe_unicode(text), 'utf-8')
42
43 def encoded_mail_sender(self):
44 """ returns encoded version of Portal name <portal_email> """
45 registry = getUtility(IRegistry)
46 mail_settings = registry.forInterface(IMailSchema, prefix="plone")
47 from_ = mail_settings.email_from_name
48 mail = mail_settings.email_from_address
49 return '"%s" <%s>' % (self.encode_mail_header(from_).encode(), mail)
50
51 def registered_notify_subject(self):
52 portal_name = self.portal_state().portal_title()
53 return translate(
54 _(
55 u'mailtemplate_user_account_info',
56 default=u'User Account Information for ${portal_name}',
57 mapping={'portal_name': safe_unicode(portal_name)},
58 ),
59 context=self.request,
60 )
61
62 def mail_password_subject(self):
63 return translate(
64 _(
65 u'mailtemplate_subject_resetpasswordrequest',
66 default=u'Password reset request',
67 ),
68 context=self.request,
69 )
70
71 def construct_url(self, randomstring):
72 return '%s/passwordreset/%s' % (
73 self.portal_state().navigation_root_url(), randomstring)
74
75 def expiration_timeout(self):
76 pw_tool = getToolByName(self.context, 'portal_password_reset')
77 timeout = int(pw_tool.getExpirationTimeout() or 0)
78 return timeout * 24 # timeout is in days, but templates want in hours.
79
80
81 @implementer(IPublishTraverse)
82 class PasswordResetView(BrowserView):
83 """ """
84
85 invalid = ViewPageTemplateFile('templates/pwreset_invalid.pt')
86 expired = ViewPageTemplateFile('templates/pwreset_expired.pt')
87 finish = ViewPageTemplateFile('templates/pwreset_finish.pt')
88 form = ViewPageTemplateFile('templates/pwreset_form.pt')
89 subpath = None
90
91 def _auto_login(self, userid, password):
92 aclu = getToolByName(self.context, 'acl_users')
93 for name, plugin in aclu.plugins.listPlugins(ICredentialsUpdatePlugin):
94 plugin.updateCredentials(
95 self.request,
96 self.request.response,
97 userid,
98 password
99 )
100 user = getSecurityManager().getUser()
101 login_time = user.getProperty('login_time', None)
102 if login_time is None:
103 notify(UserInitialLoginInEvent(user))
104 else:
105 notify(UserLoggedInEvent(user))
106
107 IStatusMessage(self.request).addStatusMessage(
108 _(
109 'password_reset_successful',
110 default='Password reset successful, '
111 'you are logged in now!',
112 ),
113 'info',
114 )
115 url = INavigationRoot(self.context).absolute_url()
116 self.request.response.redirect(url)
117 return
118
119 def _reset_password(self, pw_tool, randomstring):
120 state = self.getErrors()
121 if state:
122 return self.form()
123 userid = self.request.form.get('userid')
124 password = self.request.form.get('password')
125 try:
126 pw_tool.resetPassword(userid, randomstring, password)
127 except ExpiredRequestError:
128 return self.expired()
129 except InvalidRequestError:
130 return self.invalid()
131 except RuntimeError:
132 return self.invalid()
133 registry = getUtility(IRegistry)
134 if registry.get('plone.autologin_after_password_reset', False):
135 return self._auto_login(userid, password)
136 return self.finish()
137
138 def __call__(self):
139 if self.subpath:
140 # Try traverse subpath first:
141 randomstring = self.subpath[0]
142 else:
143 randomstring = self.request.get('key', None)
144
145 pw_tool = getToolByName(self.context, 'portal_password_reset')
146 if self.request.method == 'POST':
147 return self._reset_password(pw_tool, randomstring)
148 try:
149 pw_tool.verifyKey(randomstring)
150 except InvalidRequestError:
151 return self.invalid()
152 except ExpiredRequestError:
153 return self.expired()
154 return self.form()
155
156 def publishTraverse(self, request, name):
157 if self.subpath is None:
158 self.subpath = []
159 self.subpath.append(name)
160 return self
161
162 def getErrors(self):
163 if self.request.method != 'POST':
164 return
165 password = self.request.form.get('password')
166 password2 = self.request.form.get('password2')
167 userid = self.request.form.get('userid')
168 reg_tool = getToolByName(self.context, 'portal_registration')
169 pw_fail = reg_tool.testPasswordValidity(password, password2)
170 state = {}
171 if pw_fail:
172 state['password'] = pw_fail
173
174 # Determine if we're checking userids or not
175 pw_tool = getToolByName(self.context, 'portal_password_reset')
176 if not pw_tool.checkUser():
177 return state
178
179 if not userid:
180 state['userid'] = _(
181 'This field is required, please provide some information.',
182 )
183 if state:
184 state['status'] = 'failure'
185 state['portal_status_message'] = _(
186 'Please correct the indicated errors.',
187 )
188 return state
189
190 def login_url(self):
191 portal_state = getMultiAdapter((self.context, self.request),
192 name=u"plone_portal_state")
193 return '{0}/login?__ac_name={1}'.format(
194 portal_state.navigation_root_url(),
195 self.request.form.get('userid', ''))
196
197 def expiration_timeout(self):
198 pw_tool = getToolByName(self.context, 'portal_password_reset')
199 timeout = int(pw_tool.getExpirationTimeout() or 0)
200 return timeout * 24 # timeout is in days, but templates want in hours.
201
202
203 class ExplainPWResetToolView(BrowserView):
204 """ """
205
206 def timeout_days(self):
207 return self.context.getExpirationTimeout()
208
209 def user_check(self):
210 return self.context._user_check and 'checked' or None
211
212 @property
213 def stats(self):
214 """Return a dictionary like so:
215 {"open":3, "expired":0}
216 about the number of open and expired reset requests.
217 """
218 # count expired reset requests by creating a list of it
219 bad = len([1 for expiry in self.context._requests.values()
220 if self.context.expired(expiry)])
221 # open reset requests are all requests without the expired ones
222 good = len(self.context._requests) - bad
223 return {"open": good, "expired": bad}
224
225 def __call__(self):
226 if self.request.method == 'POST':
227 timeout_days = safeToInt(self.request.get('timeout_days'), 7)
228 self.context.setExpirationTimeout(timeout_days)
229 self.context._user_check = bool(
230 self.request.get('user_check', False),
231 )
232 return self.index()
233
[end of Products/CMFPlone/browser/login/password_reset.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/browser/login/password_reset.py b/Products/CMFPlone/browser/login/password_reset.py
--- a/Products/CMFPlone/browser/login/password_reset.py
+++ b/Products/CMFPlone/browser/login/password_reset.py
@@ -10,6 +10,7 @@
from Products.CMFPlone.interfaces.controlpanel import IMailSchema
from Products.CMFPlone.PasswordResetTool import ExpiredRequestError
from Products.CMFPlone.PasswordResetTool import InvalidRequestError
+from Products.CMFPlone.RegistrationTool import get_member_by_login_name
from Products.CMFPlone.utils import safe_unicode
from Products.CMFPlone.utils import safeToInt
from Products.Five import BrowserView
@@ -89,7 +90,8 @@
subpath = None
def _auto_login(self, userid, password):
- aclu = getToolByName(self.context, 'acl_users')
+ context = self.context
+ aclu = getToolByName(context, 'acl_users')
for name, plugin in aclu.plugins.listPlugins(ICredentialsUpdatePlugin):
plugin.updateCredentials(
self.request,
@@ -97,7 +99,16 @@
userid,
password
)
- user = getSecurityManager().getUser()
+
+ member = get_member_by_login_name(context, userid, False)
+
+ if member:
+ user = member.getUser()
+ else:
+ # Fallback in case we cannot find a user
+ # with the given userid
+ user = getSecurityManager().getUser()
+
login_time = user.getProperty('login_time', None)
if login_time is None:
notify(UserInitialLoginInEvent(user))
| {"golden_diff": "diff --git a/Products/CMFPlone/browser/login/password_reset.py b/Products/CMFPlone/browser/login/password_reset.py\n--- a/Products/CMFPlone/browser/login/password_reset.py\n+++ b/Products/CMFPlone/browser/login/password_reset.py\n@@ -10,6 +10,7 @@\n from Products.CMFPlone.interfaces.controlpanel import IMailSchema\n from Products.CMFPlone.PasswordResetTool import ExpiredRequestError\n from Products.CMFPlone.PasswordResetTool import InvalidRequestError\n+from Products.CMFPlone.RegistrationTool import get_member_by_login_name\n from Products.CMFPlone.utils import safe_unicode\n from Products.CMFPlone.utils import safeToInt\n from Products.Five import BrowserView\n@@ -89,7 +90,8 @@\n subpath = None\n \n def _auto_login(self, userid, password):\n- aclu = getToolByName(self.context, 'acl_users')\n+ context = self.context\n+ aclu = getToolByName(context, 'acl_users')\n for name, plugin in aclu.plugins.listPlugins(ICredentialsUpdatePlugin):\n plugin.updateCredentials(\n self.request,\n@@ -97,7 +99,16 @@\n userid,\n password\n )\n- user = getSecurityManager().getUser()\n+\n+ member = get_member_by_login_name(context, userid, False)\n+\n+ if member:\n+ user = member.getUser()\n+ else:\n+ # Fallback in case we cannot find a user\n+ # with the given userid\n+ user = getSecurityManager().getUser()\n+\n login_time = user.getProperty('login_time', None)\n if login_time is None:\n notify(UserInitialLoginInEvent(user))\n", "issue": "Login user after password reset notify events with Anonymous User\n## BUG/PROBLEM REPORT\r\n\r\nThe option **Login user after password reset** allows a user that just performed a password reset to automatically log in after the .process is complete.\r\n\r\nOne of the two events (**UserLoggedInEvent**, **UserInitialLoginInEvent**) is triggered by this process, but with the Anonymous User instead of the user that just performed the password request.\r\n\r\n### What I did:\r\n\r\n- Newly created Plone site (no addons)\r\n- **Login user after password reset** selected on */@@security-controlpanel*\r\n- Create a new user\r\n- Request a reset user password\r\n- Follow the generated link\r\n\r\n### What I expect to happen:\r\n\r\n**UserLoggedInEvent** or **UserInitialLoginInEvent** should be triggered with the newly logged in user.\r\n\r\n### What actually happened:\r\n\r\n**UserLoggedInEvent** and **UserInitialLoginInEvent** are triggrered with **<SpecialUser 'Anonymous User'>**.\r\n\r\n### What version of Plone/ Addons I am using:\r\n\r\n* Plone 5.2\r\n* No addons\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom AccessControl.SecurityManagement import getSecurityManager\nfrom email.header import Header\nfrom plone.app.layout.navigation.interfaces import INavigationRoot\nfrom plone.memoize import view\nfrom plone.registry.interfaces import IRegistry\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone import PloneMessageFactory as _\nfrom Products.CMFPlone.interfaces import IPasswordResetToolView\nfrom Products.CMFPlone.interfaces.controlpanel import IMailSchema\nfrom Products.CMFPlone.PasswordResetTool import ExpiredRequestError\nfrom Products.CMFPlone.PasswordResetTool import InvalidRequestError\nfrom Products.CMFPlone.utils import safe_unicode\nfrom Products.CMFPlone.utils import safeToInt\nfrom Products.Five import BrowserView\nfrom Products.Five.browser.pagetemplatefile import ViewPageTemplateFile\nfrom Products.PlonePAS.events import UserInitialLoginInEvent\nfrom Products.PlonePAS.events import UserLoggedInEvent\nfrom 
Products.PluggableAuthService.interfaces.plugins import ICredentialsUpdatePlugin # noqa\nfrom Products.statusmessages.interfaces import IStatusMessage\nfrom zope.component import getMultiAdapter\nfrom zope.component import getUtility\nfrom zope.event import notify\nfrom zope.i18n import translate\nfrom zope.interface import implementer\nfrom zope.publisher.interfaces import IPublishTraverse\n\n\n@implementer(IPasswordResetToolView)\nclass PasswordResetToolView(BrowserView):\n\n @view.memoize_contextless\n def portal_state(self):\n \"\"\" return portal_state of plone.app.layout\n \"\"\"\n return getMultiAdapter((self.context, self.request),\n name=u\"plone_portal_state\")\n\n def encode_mail_header(self, text):\n \"\"\" Encodes text into correctly encoded email header \"\"\"\n return Header(safe_unicode(text), 'utf-8')\n\n def encoded_mail_sender(self):\n \"\"\" returns encoded version of Portal name <portal_email> \"\"\"\n registry = getUtility(IRegistry)\n mail_settings = registry.forInterface(IMailSchema, prefix=\"plone\")\n from_ = mail_settings.email_from_name\n mail = mail_settings.email_from_address\n return '\"%s\" <%s>' % (self.encode_mail_header(from_).encode(), mail)\n\n def registered_notify_subject(self):\n portal_name = self.portal_state().portal_title()\n return translate(\n _(\n u'mailtemplate_user_account_info',\n default=u'User Account Information for ${portal_name}',\n mapping={'portal_name': safe_unicode(portal_name)},\n ),\n context=self.request,\n )\n\n def mail_password_subject(self):\n return translate(\n _(\n u'mailtemplate_subject_resetpasswordrequest',\n default=u'Password reset request',\n ),\n context=self.request,\n )\n\n def construct_url(self, randomstring):\n return '%s/passwordreset/%s' % (\n self.portal_state().navigation_root_url(), randomstring)\n\n def expiration_timeout(self):\n pw_tool = getToolByName(self.context, 'portal_password_reset')\n timeout = int(pw_tool.getExpirationTimeout() or 0)\n return timeout * 24 # timeout is in days, but templates want in hours.\n\n\n@implementer(IPublishTraverse)\nclass PasswordResetView(BrowserView):\n \"\"\" \"\"\"\n\n invalid = ViewPageTemplateFile('templates/pwreset_invalid.pt')\n expired = ViewPageTemplateFile('templates/pwreset_expired.pt')\n finish = ViewPageTemplateFile('templates/pwreset_finish.pt')\n form = ViewPageTemplateFile('templates/pwreset_form.pt')\n subpath = None\n\n def _auto_login(self, userid, password):\n aclu = getToolByName(self.context, 'acl_users')\n for name, plugin in aclu.plugins.listPlugins(ICredentialsUpdatePlugin):\n plugin.updateCredentials(\n self.request,\n self.request.response,\n userid,\n password\n )\n user = getSecurityManager().getUser()\n login_time = user.getProperty('login_time', None)\n if login_time is None:\n notify(UserInitialLoginInEvent(user))\n else:\n notify(UserLoggedInEvent(user))\n\n IStatusMessage(self.request).addStatusMessage(\n _(\n 'password_reset_successful',\n default='Password reset successful, '\n 'you are logged in now!',\n ),\n 'info',\n )\n url = INavigationRoot(self.context).absolute_url()\n self.request.response.redirect(url)\n return\n\n def _reset_password(self, pw_tool, randomstring):\n state = self.getErrors()\n if state:\n return self.form()\n userid = self.request.form.get('userid')\n password = self.request.form.get('password')\n try:\n pw_tool.resetPassword(userid, randomstring, password)\n except ExpiredRequestError:\n return self.expired()\n except InvalidRequestError:\n return self.invalid()\n except RuntimeError:\n return 
self.invalid()\n registry = getUtility(IRegistry)\n if registry.get('plone.autologin_after_password_reset', False):\n return self._auto_login(userid, password)\n return self.finish()\n\n def __call__(self):\n if self.subpath:\n # Try traverse subpath first:\n randomstring = self.subpath[0]\n else:\n randomstring = self.request.get('key', None)\n\n pw_tool = getToolByName(self.context, 'portal_password_reset')\n if self.request.method == 'POST':\n return self._reset_password(pw_tool, randomstring)\n try:\n pw_tool.verifyKey(randomstring)\n except InvalidRequestError:\n return self.invalid()\n except ExpiredRequestError:\n return self.expired()\n return self.form()\n\n def publishTraverse(self, request, name):\n if self.subpath is None:\n self.subpath = []\n self.subpath.append(name)\n return self\n\n def getErrors(self):\n if self.request.method != 'POST':\n return\n password = self.request.form.get('password')\n password2 = self.request.form.get('password2')\n userid = self.request.form.get('userid')\n reg_tool = getToolByName(self.context, 'portal_registration')\n pw_fail = reg_tool.testPasswordValidity(password, password2)\n state = {}\n if pw_fail:\n state['password'] = pw_fail\n\n # Determine if we're checking userids or not\n pw_tool = getToolByName(self.context, 'portal_password_reset')\n if not pw_tool.checkUser():\n return state\n\n if not userid:\n state['userid'] = _(\n 'This field is required, please provide some information.',\n )\n if state:\n state['status'] = 'failure'\n state['portal_status_message'] = _(\n 'Please correct the indicated errors.',\n )\n return state\n\n def login_url(self):\n portal_state = getMultiAdapter((self.context, self.request),\n name=u\"plone_portal_state\")\n return '{0}/login?__ac_name={1}'.format(\n portal_state.navigation_root_url(),\n self.request.form.get('userid', ''))\n\n def expiration_timeout(self):\n pw_tool = getToolByName(self.context, 'portal_password_reset')\n timeout = int(pw_tool.getExpirationTimeout() or 0)\n return timeout * 24 # timeout is in days, but templates want in hours.\n\n\nclass ExplainPWResetToolView(BrowserView):\n \"\"\" \"\"\"\n\n def timeout_days(self):\n return self.context.getExpirationTimeout()\n\n def user_check(self):\n return self.context._user_check and 'checked' or None\n\n @property\n def stats(self):\n \"\"\"Return a dictionary like so:\n {\"open\":3, \"expired\":0}\n about the number of open and expired reset requests.\n \"\"\"\n # count expired reset requests by creating a list of it\n bad = len([1 for expiry in self.context._requests.values()\n if self.context.expired(expiry)])\n # open reset requests are all requests without the expired ones\n good = len(self.context._requests) - bad\n return {\"open\": good, \"expired\": bad}\n\n def __call__(self):\n if self.request.method == 'POST':\n timeout_days = safeToInt(self.request.get('timeout_days'), 7)\n self.context.setExpirationTimeout(timeout_days)\n self.context._user_check = bool(\n self.request.get('user_check', False),\n )\n return self.index()\n", "path": "Products/CMFPlone/browser/login/password_reset.py"}]} | 3,158 | 376 |
gh_patches_debug_32183 | rasdani/github-patches | git_diff | pre-commit__pre-commit-529 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[RFC] Make the default of `pre-commit autoupdate` use `--tags-only`?
I find `--tags-only` to be much better than the default.
My proposal:
- Make the `--tags-only` behaviour the default behaviour
- Make `--tags-only` a noop argument which produces a warning and does the default
- Add a `--bleeding-edge` which does the current default behaviour
@chriskuehl thoughts?
</issue>
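A self-contained sketch of the CLI behaviour being proposed (it matches the shape of the diff further down, though the real change lives in `pre_commit/main.py`):

``` python
import argparse
import logging

logger = logging.getLogger('pre_commit')

parser = argparse.ArgumentParser(prog='pre-commit autoupdate')
parser.add_argument(
    '--tags-only', action='store_true',
    help='LEGACY: kept for compatibility, now a no-op.',
)
parser.add_argument(
    '--bleeding-edge', action='store_true',
    help='Update to the bleeding edge of `master` instead of the '
         'latest tagged version (the old default behaviour).',
)

args = parser.parse_args(['--tags-only'])
if args.tags_only:
    logger.warning('--tags-only is the default')

# autoupdate(...) would now receive tags_only=True unless --bleeding-edge is given.
tags_only = not args.bleeding_edge
```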
<code>
[start of pre_commit/main.py]
1 from __future__ import unicode_literals
2
3 import argparse
4 import os
5 import sys
6
7 import pre_commit.constants as C
8 from pre_commit import color
9 from pre_commit import five
10 from pre_commit import git
11 from pre_commit.commands.autoupdate import autoupdate
12 from pre_commit.commands.clean import clean
13 from pre_commit.commands.install_uninstall import install
14 from pre_commit.commands.install_uninstall import install_hooks
15 from pre_commit.commands.install_uninstall import uninstall
16 from pre_commit.commands.run import run
17 from pre_commit.commands.sample_config import sample_config
18 from pre_commit.error_handler import error_handler
19 from pre_commit.logging_handler import add_logging_handler
20 from pre_commit.runner import Runner
21
22
23 # https://github.com/pre-commit/pre-commit/issues/217
24 # On OSX, making a virtualenv using pyvenv at . causes `virtualenv` and `pip`
25 # to install packages to the wrong place. We don't want anything to deal with
26 # pyvenv
27 os.environ.pop('__PYVENV_LAUNCHER__', None)
28
29
30 def _add_color_option(parser):
31 parser.add_argument(
32 '--color', default='auto', type=color.use_color,
33 metavar='{' + ','.join(color.COLOR_CHOICES) + '}',
34 help='Whether to use color in output. Defaults to `%(default)s`.',
35 )
36
37
38 def _add_config_option(parser):
39 parser.add_argument(
40 '-c', '--config', default='.pre-commit-config.yaml',
41 help='Path to alternate config file'
42 )
43
44
45 def main(argv=None):
46 argv = argv if argv is not None else sys.argv[1:]
47 argv = [five.to_text(arg) for arg in argv]
48 parser = argparse.ArgumentParser()
49
50 # http://stackoverflow.com/a/8521644/812183
51 parser.add_argument(
52 '-V', '--version',
53 action='version',
54 version='%(prog)s {}'.format(C.VERSION),
55 )
56
57 subparsers = parser.add_subparsers(dest='command')
58
59 install_parser = subparsers.add_parser(
60 'install', help='Install the pre-commit script.',
61 )
62 _add_color_option(install_parser)
63 _add_config_option(install_parser)
64 install_parser.add_argument(
65 '-f', '--overwrite', action='store_true',
66 help='Overwrite existing hooks / remove migration mode.',
67 )
68 install_parser.add_argument(
69 '--install-hooks', action='store_true',
70 help=(
71 'Whether to install hook environments for all environments '
72 'in the config file.'
73 ),
74 )
75 install_parser.add_argument(
76 '-t', '--hook-type', choices=('pre-commit', 'pre-push'),
77 default='pre-commit',
78 )
79 install_parser.add_argument(
80 '--allow-missing-config', action='store_true', default=False,
81 help=(
82 'Whether to allow a missing `pre-config` configuration file '
83 'or exit with a failure code.'
84 ),
85 )
86
87 install_hooks_parser = subparsers.add_parser(
88 'install-hooks',
89 help=(
90 'Install hook environments for all environments in the config '
91 'file. You may find `pre-commit install --install-hooks` more '
92 'useful.'
93 ),
94 )
95 _add_color_option(install_hooks_parser)
96 _add_config_option(install_hooks_parser)
97
98 uninstall_parser = subparsers.add_parser(
99 'uninstall', help='Uninstall the pre-commit script.',
100 )
101 _add_color_option(uninstall_parser)
102 _add_config_option(uninstall_parser)
103 uninstall_parser.add_argument(
104 '-t', '--hook-type', choices=('pre-commit', 'pre-push'),
105 default='pre-commit',
106 )
107
108 clean_parser = subparsers.add_parser(
109 'clean', help='Clean out pre-commit files.',
110 )
111 _add_color_option(clean_parser)
112 _add_config_option(clean_parser)
113 autoupdate_parser = subparsers.add_parser(
114 'autoupdate',
115 help="Auto-update pre-commit config to the latest repos' versions.",
116 )
117 _add_color_option(autoupdate_parser)
118 _add_config_option(autoupdate_parser)
119 autoupdate_parser.add_argument(
120 '--tags-only', action='store_true', help='Update to tags only.',
121 )
122
123 run_parser = subparsers.add_parser('run', help='Run hooks.')
124 _add_color_option(run_parser)
125 _add_config_option(run_parser)
126 run_parser.add_argument('hook', nargs='?', help='A single hook-id to run')
127 run_parser.add_argument(
128 '--no-stash', default=False, action='store_true',
129 help='Use this option to prevent auto stashing of unstaged files.',
130 )
131 run_parser.add_argument(
132 '--verbose', '-v', action='store_true', default=False,
133 )
134 run_parser.add_argument(
135 '--origin', '-o',
136 help="The origin branch's commit_id when using `git push`.",
137 )
138 run_parser.add_argument(
139 '--source', '-s',
140 help="The remote branch's commit_id when using `git push`.",
141 )
142 run_parser.add_argument(
143 '--allow-unstaged-config', default=False, action='store_true',
144 help=(
145 'Allow an unstaged config to be present. Note that this will '
146 'be stashed before parsing unless --no-stash is specified.'
147 ),
148 )
149 run_parser.add_argument(
150 '--hook-stage', choices=('commit', 'push'), default='commit',
151 help='The stage during which the hook is fired e.g. commit or push.',
152 )
153 run_parser.add_argument(
154 '--show-diff-on-failure', action='store_true',
155 help='When hooks fail, run `git diff` directly afterward.',
156 )
157 run_mutex_group = run_parser.add_mutually_exclusive_group(required=False)
158 run_mutex_group.add_argument(
159 '--all-files', '-a', action='store_true', default=False,
160 help='Run on all the files in the repo. Implies --no-stash.',
161 )
162 run_mutex_group.add_argument(
163 '--files', nargs='*', default=[],
164 help='Specific filenames to run hooks on.',
165 )
166
167 sample_config_parser = subparsers.add_parser(
168 'sample-config', help='Produce a sample {} file'.format(C.CONFIG_FILE),
169 )
170 _add_color_option(sample_config_parser)
171 _add_config_option(sample_config_parser)
172
173 help = subparsers.add_parser(
174 'help', help='Show help for a specific command.',
175 )
176 help.add_argument('help_cmd', nargs='?', help='Command to show help for.')
177
178 # Argparse doesn't really provide a way to use a `default` subparser
179 if len(argv) == 0:
180 argv = ['run']
181 args = parser.parse_args(argv)
182 if args.command == 'run':
183 args.files = [
184 os.path.relpath(os.path.abspath(filename), git.get_root())
185 for filename in args.files
186 ]
187
188 if args.command == 'help':
189 if args.help_cmd:
190 parser.parse_args([args.help_cmd, '--help'])
191 else:
192 parser.parse_args(['--help'])
193
194 with error_handler():
195 add_logging_handler(args.color)
196 runner = Runner.create(args.config)
197 git.check_for_cygwin_mismatch()
198
199 if args.command == 'install':
200 return install(
201 runner, overwrite=args.overwrite, hooks=args.install_hooks,
202 hook_type=args.hook_type,
203 skip_on_missing_conf=args.allow_missing_config,
204 )
205 elif args.command == 'install-hooks':
206 return install_hooks(runner)
207 elif args.command == 'uninstall':
208 return uninstall(runner, hook_type=args.hook_type)
209 elif args.command == 'clean':
210 return clean(runner)
211 elif args.command == 'autoupdate':
212 return autoupdate(runner, args.tags_only)
213 elif args.command == 'run':
214 return run(runner, args)
215 elif args.command == 'sample-config':
216 return sample_config()
217 else:
218 raise NotImplementedError(
219 'Command {} not implemented.'.format(args.command)
220 )
221
222 raise AssertionError(
223 'Command {} failed to exit with a returncode'.format(args.command)
224 )
225
226
227 if __name__ == '__main__':
228 exit(main())
229
[end of pre_commit/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/main.py b/pre_commit/main.py
--- a/pre_commit/main.py
+++ b/pre_commit/main.py
@@ -1,6 +1,7 @@
from __future__ import unicode_literals
import argparse
+import logging
import os
import sys
@@ -20,6 +21,8 @@
from pre_commit.runner import Runner
+logger = logging.getLogger('pre_commit')
+
# https://github.com/pre-commit/pre-commit/issues/217
# On OSX, making a virtualenv using pyvenv at . causes `virtualenv` and `pip`
# to install packages to the wrong place. We don't want anything to deal with
@@ -117,7 +120,14 @@
_add_color_option(autoupdate_parser)
_add_config_option(autoupdate_parser)
autoupdate_parser.add_argument(
- '--tags-only', action='store_true', help='Update to tags only.',
+ '--tags-only', action='store_true', help='LEGACY: for compatibility',
+ )
+ autoupdate_parser.add_argument(
+ '--bleeding-edge', action='store_true',
+ help=(
+ 'Update to the bleeding edge of `master` instead of the latest '
+ 'tagged version (the default behavior).'
+ ),
)
run_parser = subparsers.add_parser('run', help='Run hooks.')
@@ -209,7 +219,9 @@
elif args.command == 'clean':
return clean(runner)
elif args.command == 'autoupdate':
- return autoupdate(runner, args.tags_only)
+ if args.tags_only:
+ logger.warning('--tags-only is the default')
+ return autoupdate(runner, tags_only=not args.bleeding_edge)
elif args.command == 'run':
return run(runner, args)
elif args.command == 'sample-config':
| {"golden_diff": "diff --git a/pre_commit/main.py b/pre_commit/main.py\n--- a/pre_commit/main.py\n+++ b/pre_commit/main.py\n@@ -1,6 +1,7 @@\n from __future__ import unicode_literals\n \n import argparse\n+import logging\n import os\n import sys\n \n@@ -20,6 +21,8 @@\n from pre_commit.runner import Runner\n \n \n+logger = logging.getLogger('pre_commit')\n+\n # https://github.com/pre-commit/pre-commit/issues/217\n # On OSX, making a virtualenv using pyvenv at . causes `virtualenv` and `pip`\n # to install packages to the wrong place. We don't want anything to deal with\n@@ -117,7 +120,14 @@\n _add_color_option(autoupdate_parser)\n _add_config_option(autoupdate_parser)\n autoupdate_parser.add_argument(\n- '--tags-only', action='store_true', help='Update to tags only.',\n+ '--tags-only', action='store_true', help='LEGACY: for compatibility',\n+ )\n+ autoupdate_parser.add_argument(\n+ '--bleeding-edge', action='store_true',\n+ help=(\n+ 'Update to the bleeding edge of `master` instead of the latest '\n+ 'tagged version (the default behavior).'\n+ ),\n )\n \n run_parser = subparsers.add_parser('run', help='Run hooks.')\n@@ -209,7 +219,9 @@\n elif args.command == 'clean':\n return clean(runner)\n elif args.command == 'autoupdate':\n- return autoupdate(runner, args.tags_only)\n+ if args.tags_only:\n+ logger.warning('--tags-only is the default')\n+ return autoupdate(runner, tags_only=not args.bleeding_edge)\n elif args.command == 'run':\n return run(runner, args)\n elif args.command == 'sample-config':\n", "issue": "[RFC] Make the default of `pre-commit autoupdate` use `--tags-only`?\nI find that `--tags-only` to be much better than the default.\r\n\r\nMy proposal:\r\n\r\n- Make the `--tags-only` behaviour the default behaviour\r\n- Make `--tags-only` a noop argument which produces a warning and does the default\r\n- Add a `--bleeding-edge` which does the current default behaviour\r\n\r\n@chriskuehl thoughts?\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport argparse\nimport os\nimport sys\n\nimport pre_commit.constants as C\nfrom pre_commit import color\nfrom pre_commit import five\nfrom pre_commit import git\nfrom pre_commit.commands.autoupdate import autoupdate\nfrom pre_commit.commands.clean import clean\nfrom pre_commit.commands.install_uninstall import install\nfrom pre_commit.commands.install_uninstall import install_hooks\nfrom pre_commit.commands.install_uninstall import uninstall\nfrom pre_commit.commands.run import run\nfrom pre_commit.commands.sample_config import sample_config\nfrom pre_commit.error_handler import error_handler\nfrom pre_commit.logging_handler import add_logging_handler\nfrom pre_commit.runner import Runner\n\n\n# https://github.com/pre-commit/pre-commit/issues/217\n# On OSX, making a virtualenv using pyvenv at . causes `virtualenv` and `pip`\n# to install packages to the wrong place. We don't want anything to deal with\n# pyvenv\nos.environ.pop('__PYVENV_LAUNCHER__', None)\n\n\ndef _add_color_option(parser):\n parser.add_argument(\n '--color', default='auto', type=color.use_color,\n metavar='{' + ','.join(color.COLOR_CHOICES) + '}',\n help='Whether to use color in output. 
Defaults to `%(default)s`.',\n )\n\n\ndef _add_config_option(parser):\n parser.add_argument(\n '-c', '--config', default='.pre-commit-config.yaml',\n help='Path to alternate config file'\n )\n\n\ndef main(argv=None):\n argv = argv if argv is not None else sys.argv[1:]\n argv = [five.to_text(arg) for arg in argv]\n parser = argparse.ArgumentParser()\n\n # http://stackoverflow.com/a/8521644/812183\n parser.add_argument(\n '-V', '--version',\n action='version',\n version='%(prog)s {}'.format(C.VERSION),\n )\n\n subparsers = parser.add_subparsers(dest='command')\n\n install_parser = subparsers.add_parser(\n 'install', help='Install the pre-commit script.',\n )\n _add_color_option(install_parser)\n _add_config_option(install_parser)\n install_parser.add_argument(\n '-f', '--overwrite', action='store_true',\n help='Overwrite existing hooks / remove migration mode.',\n )\n install_parser.add_argument(\n '--install-hooks', action='store_true',\n help=(\n 'Whether to install hook environments for all environments '\n 'in the config file.'\n ),\n )\n install_parser.add_argument(\n '-t', '--hook-type', choices=('pre-commit', 'pre-push'),\n default='pre-commit',\n )\n install_parser.add_argument(\n '--allow-missing-config', action='store_true', default=False,\n help=(\n 'Whether to allow a missing `pre-config` configuration file '\n 'or exit with a failure code.'\n ),\n )\n\n install_hooks_parser = subparsers.add_parser(\n 'install-hooks',\n help=(\n 'Install hook environments for all environments in the config '\n 'file. You may find `pre-commit install --install-hooks` more '\n 'useful.'\n ),\n )\n _add_color_option(install_hooks_parser)\n _add_config_option(install_hooks_parser)\n\n uninstall_parser = subparsers.add_parser(\n 'uninstall', help='Uninstall the pre-commit script.',\n )\n _add_color_option(uninstall_parser)\n _add_config_option(uninstall_parser)\n uninstall_parser.add_argument(\n '-t', '--hook-type', choices=('pre-commit', 'pre-push'),\n default='pre-commit',\n )\n\n clean_parser = subparsers.add_parser(\n 'clean', help='Clean out pre-commit files.',\n )\n _add_color_option(clean_parser)\n _add_config_option(clean_parser)\n autoupdate_parser = subparsers.add_parser(\n 'autoupdate',\n help=\"Auto-update pre-commit config to the latest repos' versions.\",\n )\n _add_color_option(autoupdate_parser)\n _add_config_option(autoupdate_parser)\n autoupdate_parser.add_argument(\n '--tags-only', action='store_true', help='Update to tags only.',\n )\n\n run_parser = subparsers.add_parser('run', help='Run hooks.')\n _add_color_option(run_parser)\n _add_config_option(run_parser)\n run_parser.add_argument('hook', nargs='?', help='A single hook-id to run')\n run_parser.add_argument(\n '--no-stash', default=False, action='store_true',\n help='Use this option to prevent auto stashing of unstaged files.',\n )\n run_parser.add_argument(\n '--verbose', '-v', action='store_true', default=False,\n )\n run_parser.add_argument(\n '--origin', '-o',\n help=\"The origin branch's commit_id when using `git push`.\",\n )\n run_parser.add_argument(\n '--source', '-s',\n help=\"The remote branch's commit_id when using `git push`.\",\n )\n run_parser.add_argument(\n '--allow-unstaged-config', default=False, action='store_true',\n help=(\n 'Allow an unstaged config to be present. Note that this will '\n 'be stashed before parsing unless --no-stash is specified.'\n ),\n )\n run_parser.add_argument(\n '--hook-stage', choices=('commit', 'push'), default='commit',\n help='The stage during which the hook is fired e.g. 
commit or push.',\n )\n run_parser.add_argument(\n '--show-diff-on-failure', action='store_true',\n help='When hooks fail, run `git diff` directly afterward.',\n )\n run_mutex_group = run_parser.add_mutually_exclusive_group(required=False)\n run_mutex_group.add_argument(\n '--all-files', '-a', action='store_true', default=False,\n help='Run on all the files in the repo. Implies --no-stash.',\n )\n run_mutex_group.add_argument(\n '--files', nargs='*', default=[],\n help='Specific filenames to run hooks on.',\n )\n\n sample_config_parser = subparsers.add_parser(\n 'sample-config', help='Produce a sample {} file'.format(C.CONFIG_FILE),\n )\n _add_color_option(sample_config_parser)\n _add_config_option(sample_config_parser)\n\n help = subparsers.add_parser(\n 'help', help='Show help for a specific command.',\n )\n help.add_argument('help_cmd', nargs='?', help='Command to show help for.')\n\n # Argparse doesn't really provide a way to use a `default` subparser\n if len(argv) == 0:\n argv = ['run']\n args = parser.parse_args(argv)\n if args.command == 'run':\n args.files = [\n os.path.relpath(os.path.abspath(filename), git.get_root())\n for filename in args.files\n ]\n\n if args.command == 'help':\n if args.help_cmd:\n parser.parse_args([args.help_cmd, '--help'])\n else:\n parser.parse_args(['--help'])\n\n with error_handler():\n add_logging_handler(args.color)\n runner = Runner.create(args.config)\n git.check_for_cygwin_mismatch()\n\n if args.command == 'install':\n return install(\n runner, overwrite=args.overwrite, hooks=args.install_hooks,\n hook_type=args.hook_type,\n skip_on_missing_conf=args.allow_missing_config,\n )\n elif args.command == 'install-hooks':\n return install_hooks(runner)\n elif args.command == 'uninstall':\n return uninstall(runner, hook_type=args.hook_type)\n elif args.command == 'clean':\n return clean(runner)\n elif args.command == 'autoupdate':\n return autoupdate(runner, args.tags_only)\n elif args.command == 'run':\n return run(runner, args)\n elif args.command == 'sample-config':\n return sample_config()\n else:\n raise NotImplementedError(\n 'Command {} not implemented.'.format(args.command)\n )\n\n raise AssertionError(\n 'Command {} failed to exit with a returncode'.format(args.command)\n )\n\n\nif __name__ == '__main__':\n exit(main())\n", "path": "pre_commit/main.py"}]} | 2,961 | 420 |
gh_patches_debug_1902 | rasdani/github-patches | git_diff | hpcaitech__ColossalAI-2777 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[tensor] fix some unittests
[BUG]: Wrong import in `zero/sharded_optim/_utils.py`
### 🐛 Describe the bug
In issue #2774, @malfet pointed out that we should not use `torch._six` to import `inf` and should import it from `torch` instead. However, there is a small mistake in PR #2775: it uses the invalid `torch.six` module to import `inf`. We should fix this typo.
### Environment
_No response_
</issue>
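The fix implied by the report is a one-line import change in `colossalai/zero/sharded_optim/_utils.py`; the sketch below shows the intended import (`torch.inf` is what #2774 recommends, while `torch.six` does not exist):

``` python
# Broken: `torch.six` is not a real module (and #2774 says `torch._six` should not be used).
# from torch.six import inf

# Fixed: import the constant directly from torch.
from torch import inf

# Typical use, e.g. when computing or clipping gradient norms:
norm_type = float(inf)
```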
<code>
[start of colossalai/zero/sharded_optim/_utils.py]
1 import math
2 from typing import Optional
3
4 import torch
5 import torch.distributed as dist
6 from torch.six import inf
7 from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
8
9 from colossalai.tensor import ColoParameter
10 from colossalai.utils import is_model_parallel_parameter
11
12
13 def flatten(input_):
14 return _flatten_dense_tensors(input_)
15
16
17 def unflatten(flat, tensors):
18 return _unflatten_dense_tensors(flat, tensors)
19
20
21 def count_numel(tensor_list):
22 res = 0
23 for tensor in tensor_list:
24 res += tensor.numel()
25 return res
26
27
28 def calculate_padding(numel, unit_size):
29 remainder = numel % unit_size
30 return unit_size - remainder if remainder else remainder
31
32
33 def shuffle_by_round_robin(tensor_list, num_partitions):
34 partitions = dict()
35
36 for tensor_idx, tensor in enumerate(tensor_list):
37 partition_to_go = tensor_idx % num_partitions
38 if partition_to_go not in partitions:
39 partitions[partition_to_go] = []
40 partitions[partition_to_go].append(dict(tensor=tensor, index=tensor_idx))
41
42 partitions_count = len(partitions)
43 new_tensor_list = []
44 tensor_index_mapping = dict()
45
46 for partition_id in range(partitions_count):
47 partition_tensors = partitions[partition_id]
48 for item in partition_tensors:
49 tensor_index_mapping[item['index']] = len(new_tensor_list)
50 new_tensor_list.append(item['tensor'])
51
52 return new_tensor_list, tensor_index_mapping
53
54
55 # create a flat tensor aligned at the alignment boundary
56 def flatten_dense_tensors_with_padding(tensor_list, unit_size):
57 num_elements = count_numel(tensor_list)
58 padding = calculate_padding(num_elements, unit_size=unit_size)
59
60 if padding > 0:
61 pad_tensor = torch.zeros(padding, device=tensor_list[0].device, dtype=tensor_list[0].dtype)
62 padded_tensor_list = tensor_list + [pad_tensor]
63 else:
64 padded_tensor_list = tensor_list
65
66 return flatten(padded_tensor_list)
67
68
69 def is_nccl_aligned(tensor):
70 return tensor.data_ptr() % 4 == 0
71
72
73 def get_grad_accumulate_object(tensor):
74 """
75 Return the AccumulateGrad of the input tensor
76 """
77
78 # grad_fn reference:
79 # https://discuss.pytorch.org/t/in-the-grad-fn-i-find-a-next-functions-but-i-dont-understand-the-meaning-of-the-attribute/24463
80 # expand_as reference: https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html#torch.Tensor.expand
81 #
82 # `next_functions` will return the backward graph where
83 # the first element is the AccumulateGrad of the leaf nodes.
84 # we want to get the AccumulateGrad of the input tensor instead of the leaf
85 # node in the whole computation graph.
86 # Therefore, we call expand_as to create a dummy graph
87 # where tensor_tmp and tensor indeed point to the same object.
88 # You can check this by print(tensor.data_ptr() == tensor_tmp.data_ptr())
89 tensor_tmp = tensor.expand_as(tensor)
90 grad_acc_obj = tensor_tmp.grad_fn.next_functions[0][0]
91 return grad_acc_obj
92
93
94 def split_half_float_double(tensor_list):
95 dtypes = ["torch.cuda.HalfTensor", "torch.cuda.FloatTensor", "torch.cuda.DoubleTensor", "torch.cuda.BFloat16Tensor"]
96 buckets = []
97 for i, dtype in enumerate(dtypes):
98 bucket = [t for t in tensor_list if t.type() == dtype]
99 if bucket:
100 buckets.append(bucket)
101 return buckets
102
103
104 def reduce_tensor_dp_group(tensor: torch.Tensor,
105 dtype: Optional[torch.dtype] = None,
106 dst_local_rank: Optional[int] = None,
107 dst_global_rank: Optional[int] = None,
108 group: Optional[dist.ProcessGroup] = None):
109 """
110 Reduce the tensor in the data parallel process group
111
112 :param tensor: A tensor object to reduce/all-reduce
113 :param dtype: The data type used in communication
114 :param dst_rank: The source rank for reduce. If dst_rank is None,
115 :param parallel_mode: Communication parallel mode
116 all-reduce will be used instead of reduce. Default is None.
117
118 :type tensor: torch.Tensor
119 :type dtype: torch.dtype, optional
120 :type dst_rank: int, optional
121 :type pg: ProcessGroup, optional
122 """
123 # use the original dtype
124 if dtype is None:
125 dtype = tensor.dtype
126
127 # cast the data to specified dtype for reduce/all-reduce
128 if tensor.dtype != dtype:
129 tensor_to_reduce = tensor.to(dtype)
130 else:
131 tensor_to_reduce = tensor
132
133 world_size = dist.get_world_size(group=group)
134 tensor_to_reduce.div_(world_size)
135
136 # if rank is None, all reduce will be used
137 # else, reduce is used
138 use_all_reduce = dst_local_rank is None
139
140 if use_all_reduce:
141 dist.all_reduce(tensor_to_reduce, group=group)
142 else:
143 dist.reduce(tensor=tensor_to_reduce, dst=dst_global_rank, group=group)
144
145 # recover the original dtype
146 if tensor.dtype != dtype and tensor is not tensor_to_reduce:
147 local_rank = dist.get_rank(group=group)
148 if use_all_reduce or dst_local_rank == local_rank:
149 tensor.copy_(tensor_to_reduce)
150
151 return tensor
152
153
154 def has_inf_or_nan(tensor):
155 try:
156 # if tensor is half, the .float() incurs an additional deep copy, but it's necessary if
157 # Pytorch's .sum() creates a one-element tensor of the same type as tensor
158 # (which is true for some recent version of pytorch).
159 tensor_sum = float(tensor.float().sum())
160 # More efficient version that can be used if .sum() returns a Python scalar
161 # tensor_sum = float(tensor.sum())
162 except RuntimeError as instance:
163 # We want to check if inst is actually an overflow exception.
164 # RuntimeError could come from a different error.
165 # If so, we still want the exception to propagate.
166 if "value cannot be converted" not in instance.args[0]:
167 raise
168 return True
169 else:
170 if tensor_sum == float('inf') or tensor_sum == -float('inf') or tensor_sum != tensor_sum:
171 return True
172 return False
173
174
175 def release_param_grad(tensor_list):
176 for tensor in tensor_list:
177 tensor.grad = None
178
179
180 def calculate_global_norm_from_list(norm_list):
181 """ Compute total from a list of norms
182 """
183 total_norm = 0.0
184 for norm in norm_list:
185 total_norm += norm**2.0
186 return math.sqrt(total_norm)
187
188
189 def compute_norm(gradients, params, dp_group, mp_group, norm_type=2):
190 """Clips gradient norm of an iterable of parameters.
191 This is adapted from torch.nn.utils.clip_grad.clip_grad_norm_ and
192 added functionality to handle model parallel parameters. Note that
193 the gradients are modified in place.
194 Arguments:
195 parameters (Iterable[Tensor] or Tensor): an iterable of Tensors or a
196 single Tensor that will have gradients normalized
197 max_norm (float or int): max norm of the gradients
198 norm_type (float or int): type of the used p-norm. Can be ``'inf'`` for
199 infinity norm.
200 Returns:
201 Total norm of the parameters (viewed as a single vector).
202 """
203
204 if mp_group is None:
205 mp_rank = 0
206 else:
207 mp_rank = dist.get_rank(mp_group)
208
209 norm_type = float(norm_type)
210 if norm_type == inf:
211 total_norm = max(g.data.abs().max() for g in gradients)
212 total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])
213 dist.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.MAX, group=dp_group)
214
215 # Take max across all GPUs.
216 if mp_group is not None:
217 dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.MAX)
218 total_norm = total_norm_cuda[0].item()
219 else:
220 total_norm = 0.0
221 # if dist.get_rank() == 0:
222 # logger.info(f"Total Norm beginning {total_norm}")
223
224 for g, p in zip(gradients, params):
225 # Pipeline parallelism may replicate parameters. Avoid multi-counting.
226 tp_param_flag = False
227 if is_model_parallel_parameter(p) or (isinstance(p, ColoParameter) and not p.is_replicate()):
228 tp_param_flag = True
229 if tp_param_flag or mp_rank == 0:
230 param_norm = g.data.double().norm(2)
231 total_norm += param_norm.item()**2
232
233 # Sum across all model parallel GPUs.
234 total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])
235 torch.distributed.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=dp_group)
236
237 if mp_group is not None:
238 dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=mp_group)
239
240 total_norm = total_norm_cuda[0].item()**(1. / norm_type)
241
242 if total_norm == float('inf') or total_norm == -float('inf') or total_norm != total_norm:
243 total_norm = -1
244
245 return total_norm
246
247
248 def sync_param(flat_tensor, tensor_list):
249 """
250 Synchronize the flattened tensor and unflattened tensor list. When
251 a list of tensor are flattened with `torch._utils._unflatten_dense_tensors`,
252 a new tensor is created. Thus, the flat tensor and original tensor list do not
253 share the same memory space. This function will update the tensor list so that
254 they point to the same value.
255
256 :param flat_tensor: A flat tensor obtained by calling `torch._utils._unflatten_dense_tensors` on a tensor lsit
257 :param tensor_list: A list of tensors corresponding to the flattened tensor
258 :type flat_tensor: torch.Tensor
259 :type tensor_list: List[torch.Tensor]
260 """
261 updated_params = unflatten(flat_tensor, tensor_list)
262
263 # update the tensor data
264 for p, q in zip(tensor_list, updated_params):
265 p.data = q.data
266
[end of colossalai/zero/sharded_optim/_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/colossalai/zero/sharded_optim/_utils.py b/colossalai/zero/sharded_optim/_utils.py
--- a/colossalai/zero/sharded_optim/_utils.py
+++ b/colossalai/zero/sharded_optim/_utils.py
@@ -3,7 +3,7 @@
import torch
import torch.distributed as dist
-from torch.six import inf
+from torch import inf
from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors
from colossalai.tensor import ColoParameter
| {"golden_diff": "diff --git a/colossalai/zero/sharded_optim/_utils.py b/colossalai/zero/sharded_optim/_utils.py\n--- a/colossalai/zero/sharded_optim/_utils.py\n+++ b/colossalai/zero/sharded_optim/_utils.py\n@@ -3,7 +3,7 @@\n \n import torch\n import torch.distributed as dist\n-from torch.six import inf\n+from torch import inf\n from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n \n from colossalai.tensor import ColoParameter\n", "issue": "[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[tensor] fix some unittests\n\n[BUG]: Wrong import in `zero/sharded_optim/_utils.py`\n### \ud83d\udc1b Describe the bug\n\nIn issue #2774 , thanks @malfet for pointing out that we should not use `torch._six` to import `inf` and use `torch` to import `inf` instead, however, there is a small mistake in PR #2775 use an invalid `torch.six` module to import `inf`. We should fix this typo.\n\n### Environment\n\n_No response_\n", "before_files": [{"content": "import math\nfrom typing import Optional\n\nimport torch\nimport torch.distributed as dist\nfrom torch.six import inf\nfrom torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors\n\nfrom colossalai.tensor import ColoParameter\nfrom colossalai.utils import is_model_parallel_parameter\n\n\ndef flatten(input_):\n return _flatten_dense_tensors(input_)\n\n\ndef unflatten(flat, tensors):\n return _unflatten_dense_tensors(flat, tensors)\n\n\ndef count_numel(tensor_list):\n res = 0\n for tensor in tensor_list:\n res += tensor.numel()\n return res\n\n\ndef calculate_padding(numel, unit_size):\n remainder = numel % unit_size\n return unit_size - remainder if remainder else remainder\n\n\ndef shuffle_by_round_robin(tensor_list, num_partitions):\n partitions = dict()\n\n for tensor_idx, tensor in enumerate(tensor_list):\n partition_to_go = tensor_idx % num_partitions\n if partition_to_go not in partitions:\n partitions[partition_to_go] = []\n partitions[partition_to_go].append(dict(tensor=tensor, index=tensor_idx))\n\n partitions_count = len(partitions)\n new_tensor_list = []\n tensor_index_mapping = dict()\n\n for partition_id in range(partitions_count):\n partition_tensors = partitions[partition_id]\n for item in partition_tensors:\n tensor_index_mapping[item['index']] = len(new_tensor_list)\n new_tensor_list.append(item['tensor'])\n\n return new_tensor_list, tensor_index_mapping\n\n\n# create a flat tensor aligned at the alignment boundary\ndef flatten_dense_tensors_with_padding(tensor_list, unit_size):\n num_elements = count_numel(tensor_list)\n padding = calculate_padding(num_elements, unit_size=unit_size)\n\n if padding > 0:\n pad_tensor = torch.zeros(padding, device=tensor_list[0].device, dtype=tensor_list[0].dtype)\n padded_tensor_list = tensor_list + [pad_tensor]\n else:\n padded_tensor_list = tensor_list\n\n return flatten(padded_tensor_list)\n\n\ndef is_nccl_aligned(tensor):\n return tensor.data_ptr() % 4 == 0\n\n\ndef get_grad_accumulate_object(tensor):\n \"\"\"\n Return the AccumulateGrad of the input tensor\n \"\"\"\n\n # grad_fn reference:\n # https://discuss.pytorch.org/t/in-the-grad-fn-i-find-a-next-functions-but-i-dont-understand-the-meaning-of-the-attribute/24463\n # expand_as reference: https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html#torch.Tensor.expand\n #\n # `next_functions` will return the backward graph where\n # the first element is the AccumulateGrad of the leaf nodes.\n # we want to get the AccumulateGrad of the input tensor instead of the leaf\n # node in the whole 
computation graph.\n # Therefore, we call expand_as to create a dummy graph\n # where tensor_tmp and tensor indeed point to the same object.\n # You can check this by print(tensor.data_ptr() == tensor_tmp.data_ptr())\n tensor_tmp = tensor.expand_as(tensor)\n grad_acc_obj = tensor_tmp.grad_fn.next_functions[0][0]\n return grad_acc_obj\n\n\ndef split_half_float_double(tensor_list):\n dtypes = [\"torch.cuda.HalfTensor\", \"torch.cuda.FloatTensor\", \"torch.cuda.DoubleTensor\", \"torch.cuda.BFloat16Tensor\"]\n buckets = []\n for i, dtype in enumerate(dtypes):\n bucket = [t for t in tensor_list if t.type() == dtype]\n if bucket:\n buckets.append(bucket)\n return buckets\n\n\ndef reduce_tensor_dp_group(tensor: torch.Tensor,\n dtype: Optional[torch.dtype] = None,\n dst_local_rank: Optional[int] = None,\n dst_global_rank: Optional[int] = None,\n group: Optional[dist.ProcessGroup] = None):\n \"\"\"\n Reduce the tensor in the data parallel process group\n\n :param tensor: A tensor object to reduce/all-reduce\n :param dtype: The data type used in communication\n :param dst_rank: The source rank for reduce. If dst_rank is None,\n :param parallel_mode: Communication parallel mode\n all-reduce will be used instead of reduce. Default is None.\n\n :type tensor: torch.Tensor\n :type dtype: torch.dtype, optional\n :type dst_rank: int, optional\n :type pg: ProcessGroup, optional\n \"\"\"\n # use the original dtype\n if dtype is None:\n dtype = tensor.dtype\n\n # cast the data to specified dtype for reduce/all-reduce\n if tensor.dtype != dtype:\n tensor_to_reduce = tensor.to(dtype)\n else:\n tensor_to_reduce = tensor\n\n world_size = dist.get_world_size(group=group)\n tensor_to_reduce.div_(world_size)\n\n # if rank is None, all reduce will be used\n # else, reduce is used\n use_all_reduce = dst_local_rank is None\n\n if use_all_reduce:\n dist.all_reduce(tensor_to_reduce, group=group)\n else:\n dist.reduce(tensor=tensor_to_reduce, dst=dst_global_rank, group=group)\n\n # recover the original dtype\n if tensor.dtype != dtype and tensor is not tensor_to_reduce:\n local_rank = dist.get_rank(group=group)\n if use_all_reduce or dst_local_rank == local_rank:\n tensor.copy_(tensor_to_reduce)\n\n return tensor\n\n\ndef has_inf_or_nan(tensor):\n try:\n # if tensor is half, the .float() incurs an additional deep copy, but it's necessary if\n # Pytorch's .sum() creates a one-element tensor of the same type as tensor\n # (which is true for some recent version of pytorch).\n tensor_sum = float(tensor.float().sum())\n # More efficient version that can be used if .sum() returns a Python scalar\n # tensor_sum = float(tensor.sum())\n except RuntimeError as instance:\n # We want to check if inst is actually an overflow exception.\n # RuntimeError could come from a different error.\n # If so, we still want the exception to propagate.\n if \"value cannot be converted\" not in instance.args[0]:\n raise\n return True\n else:\n if tensor_sum == float('inf') or tensor_sum == -float('inf') or tensor_sum != tensor_sum:\n return True\n return False\n\n\ndef release_param_grad(tensor_list):\n for tensor in tensor_list:\n tensor.grad = None\n\n\ndef calculate_global_norm_from_list(norm_list):\n \"\"\" Compute total from a list of norms\n \"\"\"\n total_norm = 0.0\n for norm in norm_list:\n total_norm += norm**2.0\n return math.sqrt(total_norm)\n\n\ndef compute_norm(gradients, params, dp_group, mp_group, norm_type=2):\n \"\"\"Clips gradient norm of an iterable of parameters.\n This is adapted from torch.nn.utils.clip_grad.clip_grad_norm_ 
and\n added functionality to handle model parallel parameters. Note that\n the gradients are modified in place.\n Arguments:\n parameters (Iterable[Tensor] or Tensor): an iterable of Tensors or a\n single Tensor that will have gradients normalized\n max_norm (float or int): max norm of the gradients\n norm_type (float or int): type of the used p-norm. Can be ``'inf'`` for\n infinity norm.\n Returns:\n Total norm of the parameters (viewed as a single vector).\n \"\"\"\n\n if mp_group is None:\n mp_rank = 0\n else:\n mp_rank = dist.get_rank(mp_group)\n\n norm_type = float(norm_type)\n if norm_type == inf:\n total_norm = max(g.data.abs().max() for g in gradients)\n total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])\n dist.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.MAX, group=dp_group)\n\n # Take max across all GPUs.\n if mp_group is not None:\n dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.MAX)\n total_norm = total_norm_cuda[0].item()\n else:\n total_norm = 0.0\n # if dist.get_rank() == 0:\n # logger.info(f\"Total Norm beginning {total_norm}\")\n\n for g, p in zip(gradients, params):\n # Pipeline parallelism may replicate parameters. Avoid multi-counting.\n tp_param_flag = False\n if is_model_parallel_parameter(p) or (isinstance(p, ColoParameter) and not p.is_replicate()):\n tp_param_flag = True\n if tp_param_flag or mp_rank == 0:\n param_norm = g.data.double().norm(2)\n total_norm += param_norm.item()**2\n\n # Sum across all model parallel GPUs.\n total_norm_cuda = torch.cuda.FloatTensor([float(total_norm)])\n torch.distributed.all_reduce(total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=dp_group)\n\n if mp_group is not None:\n dist.all_reduce(tensor=total_norm_cuda, op=torch.distributed.ReduceOp.SUM, group=mp_group)\n\n total_norm = total_norm_cuda[0].item()**(1. / norm_type)\n\n if total_norm == float('inf') or total_norm == -float('inf') or total_norm != total_norm:\n total_norm = -1\n\n return total_norm\n\n\ndef sync_param(flat_tensor, tensor_list):\n \"\"\"\n Synchronize the flattened tensor and unflattened tensor list. When\n a list of tensor are flattened with `torch._utils._unflatten_dense_tensors`,\n a new tensor is created. Thus, the flat tensor and original tensor list do not\n share the same memory space. This function will update the tensor list so that\n they point to the same value.\n\n :param flat_tensor: A flat tensor obtained by calling `torch._utils._unflatten_dense_tensors` on a tensor lsit\n :param tensor_list: A list of tensors corresponding to the flattened tensor\n :type flat_tensor: torch.Tensor\n :type tensor_list: List[torch.Tensor]\n \"\"\"\n updated_params = unflatten(flat_tensor, tensor_list)\n\n # update the tensor data\n for p, q in zip(tensor_list, updated_params):\n p.data = q.data\n", "path": "colossalai/zero/sharded_optim/_utils.py"}]} | 3,609 | 122 |
gh_patches_debug_10059 | rasdani/github-patches | git_diff | scrapy__scrapy-5269 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ItemLoader: support non-TextResponse
At the moment, `ItemLoader(response=response)` fails if `response` is not a `TextResponse` instance.
Passing a binary response can still be useful, though. For example, to allow processors to access the response from their loader context, and hence be able to report the source URL (`response.url`) when reporting input issues.
</issue>
<code>
[start of scrapy/loader/__init__.py]
1 """
2 Item Loader
3
4 See documentation in docs/topics/loaders.rst
5 """
6 import itemloaders
7
8 from scrapy.item import Item
9 from scrapy.selector import Selector
10
11
12 class ItemLoader(itemloaders.ItemLoader):
13 """
14 A user-friendly abstraction to populate an :ref:`item <topics-items>` with data
15 by applying :ref:`field processors <topics-loaders-processors>` to scraped data.
16 When instantiated with a ``selector`` or a ``response`` it supports
17 data extraction from web pages using :ref:`selectors <topics-selectors>`.
18
19 :param item: The item instance to populate using subsequent calls to
20 :meth:`~ItemLoader.add_xpath`, :meth:`~ItemLoader.add_css`,
21 or :meth:`~ItemLoader.add_value`.
22 :type item: scrapy.item.Item
23
24 :param selector: The selector to extract data from, when using the
25 :meth:`add_xpath`, :meth:`add_css`, :meth:`replace_xpath`, or
26 :meth:`replace_css` method.
27 :type selector: :class:`~scrapy.selector.Selector` object
28
29 :param response: The response used to construct the selector using the
30 :attr:`default_selector_class`, unless the selector argument is given,
31 in which case this argument is ignored.
32 :type response: :class:`~scrapy.http.Response` object
33
34 If no item is given, one is instantiated automatically using the class in
35 :attr:`default_item_class`.
36
37 The item, selector, response and remaining keyword arguments are
38 assigned to the Loader context (accessible through the :attr:`context` attribute).
39
40 .. attribute:: item
41
42 The item object being parsed by this Item Loader.
43 This is mostly used as a property so, when attempting to override this
44 value, you may want to check out :attr:`default_item_class` first.
45
46 .. attribute:: context
47
48 The currently active :ref:`Context <loaders-context>` of this Item Loader.
49
50 .. attribute:: default_item_class
51
52 An :ref:`item <topics-items>` class (or factory), used to instantiate
53 items when not given in the ``__init__`` method.
54
55 .. attribute:: default_input_processor
56
57 The default input processor to use for those fields which don't specify
58 one.
59
60 .. attribute:: default_output_processor
61
62 The default output processor to use for those fields which don't specify
63 one.
64
65 .. attribute:: default_selector_class
66
67 The class used to construct the :attr:`selector` of this
68 :class:`ItemLoader`, if only a response is given in the ``__init__`` method.
69 If a selector is given in the ``__init__`` method this attribute is ignored.
70 This attribute is sometimes overridden in subclasses.
71
72 .. attribute:: selector
73
74 The :class:`~scrapy.selector.Selector` object to extract data from.
75 It's either the selector given in the ``__init__`` method or one created from
76 the response given in the ``__init__`` method using the
77 :attr:`default_selector_class`. This attribute is meant to be
78 read-only.
79 """
80
81 default_item_class = Item
82 default_selector_class = Selector
83
84 def __init__(self, item=None, selector=None, response=None, parent=None, **context):
85 if selector is None and response is not None:
86 selector = self.default_selector_class(response)
87 context.update(response=response)
88 super().__init__(item=item, selector=selector, parent=parent, **context)
89
[end of scrapy/loader/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/loader/__init__.py b/scrapy/loader/__init__.py
--- a/scrapy/loader/__init__.py
+++ b/scrapy/loader/__init__.py
@@ -83,6 +83,9 @@
def __init__(self, item=None, selector=None, response=None, parent=None, **context):
if selector is None and response is not None:
- selector = self.default_selector_class(response)
+ try:
+ selector = self.default_selector_class(response)
+ except AttributeError:
+ selector = None
context.update(response=response)
super().__init__(item=item, selector=selector, parent=parent, **context)
| {"golden_diff": "diff --git a/scrapy/loader/__init__.py b/scrapy/loader/__init__.py\n--- a/scrapy/loader/__init__.py\n+++ b/scrapy/loader/__init__.py\n@@ -83,6 +83,9 @@\n \n def __init__(self, item=None, selector=None, response=None, parent=None, **context):\n if selector is None and response is not None:\n- selector = self.default_selector_class(response)\n+ try:\n+ selector = self.default_selector_class(response)\n+ except AttributeError:\n+ selector = None\n context.update(response=response)\n super().__init__(item=item, selector=selector, parent=parent, **context)\n", "issue": "ItemLoader: support non-TextResponse\nAt the moment, `ItemLoader(response=response)` fails if `response` is not a `TextResponse` instance.\r\n\r\nPassing a binary response can still be useful, though. For example, to allow processors to access the response from their loader context, and hence be able to report the source URL (`response.url`) when reporting input issues.\n", "before_files": [{"content": "\"\"\"\nItem Loader\n\nSee documentation in docs/topics/loaders.rst\n\"\"\"\nimport itemloaders\n\nfrom scrapy.item import Item\nfrom scrapy.selector import Selector\n\n\nclass ItemLoader(itemloaders.ItemLoader):\n \"\"\"\n A user-friendly abstraction to populate an :ref:`item <topics-items>` with data\n by applying :ref:`field processors <topics-loaders-processors>` to scraped data.\n When instantiated with a ``selector`` or a ``response`` it supports\n data extraction from web pages using :ref:`selectors <topics-selectors>`.\n\n :param item: The item instance to populate using subsequent calls to\n :meth:`~ItemLoader.add_xpath`, :meth:`~ItemLoader.add_css`,\n or :meth:`~ItemLoader.add_value`.\n :type item: scrapy.item.Item\n\n :param selector: The selector to extract data from, when using the\n :meth:`add_xpath`, :meth:`add_css`, :meth:`replace_xpath`, or\n :meth:`replace_css` method.\n :type selector: :class:`~scrapy.selector.Selector` object\n\n :param response: The response used to construct the selector using the\n :attr:`default_selector_class`, unless the selector argument is given,\n in which case this argument is ignored.\n :type response: :class:`~scrapy.http.Response` object\n\n If no item is given, one is instantiated automatically using the class in\n :attr:`default_item_class`.\n\n The item, selector, response and remaining keyword arguments are\n assigned to the Loader context (accessible through the :attr:`context` attribute).\n\n .. attribute:: item\n\n The item object being parsed by this Item Loader.\n This is mostly used as a property so, when attempting to override this\n value, you may want to check out :attr:`default_item_class` first.\n\n .. attribute:: context\n\n The currently active :ref:`Context <loaders-context>` of this Item Loader.\n\n .. attribute:: default_item_class\n\n An :ref:`item <topics-items>` class (or factory), used to instantiate\n items when not given in the ``__init__`` method.\n\n .. attribute:: default_input_processor\n\n The default input processor to use for those fields which don't specify\n one.\n\n .. attribute:: default_output_processor\n\n The default output processor to use for those fields which don't specify\n one.\n\n .. attribute:: default_selector_class\n\n The class used to construct the :attr:`selector` of this\n :class:`ItemLoader`, if only a response is given in the ``__init__`` method.\n If a selector is given in the ``__init__`` method this attribute is ignored.\n This attribute is sometimes overridden in subclasses.\n\n .. 
attribute:: selector\n\n The :class:`~scrapy.selector.Selector` object to extract data from.\n It's either the selector given in the ``__init__`` method or one created from\n the response given in the ``__init__`` method using the\n :attr:`default_selector_class`. This attribute is meant to be\n read-only.\n \"\"\"\n\n default_item_class = Item\n default_selector_class = Selector\n\n def __init__(self, item=None, selector=None, response=None, parent=None, **context):\n if selector is None and response is not None:\n selector = self.default_selector_class(response)\n context.update(response=response)\n super().__init__(item=item, selector=selector, parent=parent, **context)\n", "path": "scrapy/loader/__init__.py"}]} | 1,545 | 146 |
gh_patches_debug_19233 | rasdani/github-patches | git_diff | chainer__chainer-5692 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`backprop_step` only partially documented
The `backprop_step` function in `chainer/_backprop_utils.py` is documented but lacks an explanation for the `func` argument. https://github.com/chainer/chainer/blob/master/chainer/_backprop_utils.py#L73-L89
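
One possible wording for the missing entry, kept in the style of the surrounding docstring (illustrative only; the final upstream text may differ):

```python
def backprop_step(func, target_input_indexes, grad_outputs, grad_inputs):
    """Accumulates gradients of a FunctionNode.

    Args:
        func (~chainer.FunctionNode): The function node whose input
            gradients are being accumulated by this call.
    """
```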
</issue>
<code>
[start of chainer/_backprop_utils.py]
1 import six
2
3 import chainer
4
5
6 def _reduce(grad_list):
7 if not grad_list:
8 return None
9 if len(grad_list) >= 2:
10 grad_list[:] = [chainer.functions.add(*grad_list)]
11 return grad_list[0]
12
13
14 def _pure(grad):
15 return [] if grad is None else [grad]
16
17
18 def _pop_or_none(grad_list):
19 return grad_list.pop() if grad_list else None
20
21
22 class GradTable(object):
23
24 """Dict of nodes to references of gradients
25
26 The gradients are stored as references to them in the backprop process. The
27 current implementation uses lists. Keep the lengths of lists <= 1 for the
28 strict accumulation of gradients. Leave them to accumulate gradients
29 lazily.
30
31 Args:
32 load_if_new (bool): read ``grad_var`` of node when the node has not
33 been added.
34
35 """
36
37 def __init__(self, load_if_new=False):
38 self.grads = {}
39 self._load_if_new = load_if_new
40
41 def __setitem__(self, node, grad):
42 assert node is not None
43 self.grads[node] = _pure(grad)
44
45 def get_as_list(self, node):
46 assert node is not None
47 grads = self.grads
48 if node not in grads:
49 if self._load_if_new and node.creator_node is None:
50 node._check_old_style_gradient()
51 # accumulate the gradient only if the node is a leaf
52 grads[node] = _pure(node.grad_var)
53 else:
54 grads[node] = []
55 return grads[node]
56
57 def pop(self, node):
58 if node is None:
59 return None
60 grads = self.grads
61 if node in grads:
62 return _reduce(grads.pop(node))
63 if self._load_if_new:
64 return node.grad_var
65 else:
66 return None
67
68 def assert_no_grads(self):
69 for gx in self.grads.values():
70 assert gx == []
71
72
73 def backprop_step(
74 func, target_input_indexes, grad_outputs, grad_inputs):
75 """Accumulates gradients of a FunctionNode
76
77 This routine is used by :meth:`chainer.Variable.backward` and
78 :func:`chainer.grad`.
79
80 Args:
81 target_input_indexes (tuple of int): Sorted indices of the input
82 variables w.r.t. which the gradients are required. It is
83 guaranteed that this tuple contains at least one element.
84 grad_outputs (tuple of Variable): Gradients w.r.t. the output
85 variables. If the gradient w.r.t. an output variable is not
86 given, the corresponding element is ``None``.
87 grad_inputs (dict): References of radients w.r.t. the input variables.
88
89 """
90 is_debug = chainer.is_debug()
91 if is_debug:
92 assert isinstance(target_input_indexes, tuple)
93 assert target_input_indexes == tuple(sorted(target_input_indexes))
94 assert isinstance(grad_outputs, tuple)
95 if func.backward_accumulate.__code__ \
96 is not chainer.FunctionNode.backward_accumulate.__code__:
97 # backward_accumulate is overridden
98 grad_inputs_tuple = tuple([
99 _pop_or_none(grad_inputs[func.inputs[i]])
100 for i in target_input_indexes
101 ])
102 gxs = func.backward_accumulate(
103 target_input_indexes, grad_outputs, grad_inputs_tuple)
104 else: # otherwise, backward should be overridden
105 gxs = func.backward(
106 target_input_indexes, grad_outputs)
107
108 if is_debug:
109 for gx in gxs:
110 if not (gx is None or isinstance(gx, chainer.Variable)):
111 raise ValueError(func._get_error_message(
112 'type of gradients returned from backward is '
113 'incorrect: '
114 '{} != expected {}'.format(
115 type(gx), chainer.Variable)))
116
117 len_gxs = len(gxs)
118 if len_gxs == len(func.inputs):
119 gxs = tuple([gxs[i] for i in target_input_indexes])
120 elif len_gxs != len(target_input_indexes):
121 msg = 'number of gradients returned from backward is incorrect: '
122 if len(func.inputs) == len(target_input_indexes):
123 msg += (
124 '%s != expected %s' % (len_gxs, len(func.inputs)))
125 else:
126 msg += (
127 '%s != expected %s or %s'
128 % (len_gxs, len(func.inputs), len(target_input_indexes)))
129 raise ValueError(func._get_error_message(msg))
130
131 for i, gx in six.moves.zip(target_input_indexes, gxs):
132 if gx is not None:
133 grad_inputs[func.inputs[i]].append(gx)
134
135 if is_debug:
136 node_x = func.inputs[i]
137 g_input_list = grad_inputs[node_x]
138 if gx.shape != node_x.shape:
139 raise ValueError(func._get_error_message(
140 'shape of gradients returned from backward is '
141 'incorrect: '
142 'input-index={}, actual {} != expected {}'.format(
143 i, gx.shape, node_x.shape)))
144 if gx is not None and g_input_list:
145 g_input = g_input_list[0]
146 if gx.shape != g_input.shape:
147 raise ValueError(func._get_error_message(
148 'shape of gradients returned from backward is '
149 'incorrect: '
150 'input-index={}, actual {} != expected {}'.format(
151 i, gx.shape, g_input.shape)))
152 if gx.dtype != g_input.dtype:
153 raise ValueError(func._get_error_message(
154 'dtype of gradients returned from backward is '
155 'incorrect: '
156 'input-index={}, actual {} != expected {}'.format(
157 i, gx.dtype, g_input.dtype)))
158 del gxs
159
160 if is_debug:
161 # each grad is a list of variables
162 # iter_gxs expands it as a sequence of variables.
163 def iter_gxs(gxs):
164 for gx in gxs:
165 for gx_elem in gx:
166 yield gx_elem
167
168 for gx in iter_gxs(grad_inputs.values()):
169 if chainer.backend._contains_nan(gx.data):
170 raise RuntimeError(
171 'NaN is detected on backward computation of {}'
172 .format(func.label))
173
174 if not func.lazy_grad_sum:
175 for gx in grad_inputs.values():
176 _reduce(gx)
177
[end of chainer/_backprop_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/chainer/_backprop_utils.py b/chainer/_backprop_utils.py
--- a/chainer/_backprop_utils.py
+++ b/chainer/_backprop_utils.py
@@ -78,13 +78,16 @@
:func:`chainer.grad`.
Args:
- target_input_indexes (tuple of int): Sorted indices of the input
- variables w.r.t. which the gradients are required. It is
- guaranteed that this tuple contains at least one element.
+ func (~chainer.FunctionNode): The function for which gradients are
+ accumulated.
+ target_input_indexes (tuple of int): Sorted indices of the inputs
+ that require gradients. It is guaranteed that this tuple contains
+ at least one element.
grad_outputs (tuple of Variable): Gradients w.r.t. the output
variables. If the gradient w.r.t. an output variable is not
given, the corresponding element is ``None``.
- grad_inputs (dict): References of radients w.r.t. the input variables.
+ grad_inputs (dict): References of the gradients w.r.t. the input
+ variables.
"""
is_debug = chainer.is_debug()
| {"golden_diff": "diff --git a/chainer/_backprop_utils.py b/chainer/_backprop_utils.py\n--- a/chainer/_backprop_utils.py\n+++ b/chainer/_backprop_utils.py\n@@ -78,13 +78,16 @@\n :func:`chainer.grad`.\n \n Args:\n- target_input_indexes (tuple of int): Sorted indices of the input\n- variables w.r.t. which the gradients are required. It is\n- guaranteed that this tuple contains at least one element.\n+ func (~chainer.FunctionNode): The function for which gradients are\n+ accumulated.\n+ target_input_indexes (tuple of int): Sorted indices of the inputs\n+ that require gradients. It is guaranteed that this tuple contains\n+ at least one element.\n grad_outputs (tuple of Variable): Gradients w.r.t. the output\n variables. If the gradient w.r.t. an output variable is not\n given, the corresponding element is ``None``.\n- grad_inputs (dict): References of radients w.r.t. the input variables.\n+ grad_inputs (dict): References of the gradients w.r.t. the input\n+ variables.\n \n \"\"\"\n is_debug = chainer.is_debug()\n", "issue": "`backprop_step` only partially documented\nThe `backprop_step` function in `chainer/_backprop_utils.py` is documented but misses an explanation for the `func` argument. https://github.com/chainer/chainer/blob/master/chainer/_backprop_utils.py#L73-L89\n", "before_files": [{"content": "import six\n\nimport chainer\n\n\ndef _reduce(grad_list):\n if not grad_list:\n return None\n if len(grad_list) >= 2:\n grad_list[:] = [chainer.functions.add(*grad_list)]\n return grad_list[0]\n\n\ndef _pure(grad):\n return [] if grad is None else [grad]\n\n\ndef _pop_or_none(grad_list):\n return grad_list.pop() if grad_list else None\n\n\nclass GradTable(object):\n\n \"\"\"Dict of nodes to references of gradients\n\n The gradients are stored as references to them in the backprop process. The\n current implementation uses lists. Keep the lengths of lists <= 1 for the\n strict accumulation of gradients. Leave them to accumulate gradients\n lazily.\n\n Args:\n load_if_new (bool): read ``grad_var`` of node when the node has not\n been added.\n\n \"\"\"\n\n def __init__(self, load_if_new=False):\n self.grads = {}\n self._load_if_new = load_if_new\n\n def __setitem__(self, node, grad):\n assert node is not None\n self.grads[node] = _pure(grad)\n\n def get_as_list(self, node):\n assert node is not None\n grads = self.grads\n if node not in grads:\n if self._load_if_new and node.creator_node is None:\n node._check_old_style_gradient()\n # accumulate the gradient only if the node is a leaf\n grads[node] = _pure(node.grad_var)\n else:\n grads[node] = []\n return grads[node]\n\n def pop(self, node):\n if node is None:\n return None\n grads = self.grads\n if node in grads:\n return _reduce(grads.pop(node))\n if self._load_if_new:\n return node.grad_var\n else:\n return None\n\n def assert_no_grads(self):\n for gx in self.grads.values():\n assert gx == []\n\n\ndef backprop_step(\n func, target_input_indexes, grad_outputs, grad_inputs):\n \"\"\"Accumulates gradients of a FunctionNode\n\n This routine is used by :meth:`chainer.Variable.backward` and\n :func:`chainer.grad`.\n\n Args:\n target_input_indexes (tuple of int): Sorted indices of the input\n variables w.r.t. which the gradients are required. It is\n guaranteed that this tuple contains at least one element.\n grad_outputs (tuple of Variable): Gradients w.r.t. the output\n variables. If the gradient w.r.t. an output variable is not\n given, the corresponding element is ``None``.\n grad_inputs (dict): References of radients w.r.t. 
the input variables.\n\n \"\"\"\n is_debug = chainer.is_debug()\n if is_debug:\n assert isinstance(target_input_indexes, tuple)\n assert target_input_indexes == tuple(sorted(target_input_indexes))\n assert isinstance(grad_outputs, tuple)\n if func.backward_accumulate.__code__ \\\n is not chainer.FunctionNode.backward_accumulate.__code__:\n # backward_accumulate is overridden\n grad_inputs_tuple = tuple([\n _pop_or_none(grad_inputs[func.inputs[i]])\n for i in target_input_indexes\n ])\n gxs = func.backward_accumulate(\n target_input_indexes, grad_outputs, grad_inputs_tuple)\n else: # otherwise, backward should be overridden\n gxs = func.backward(\n target_input_indexes, grad_outputs)\n\n if is_debug:\n for gx in gxs:\n if not (gx is None or isinstance(gx, chainer.Variable)):\n raise ValueError(func._get_error_message(\n 'type of gradients returned from backward is '\n 'incorrect: '\n '{} != expected {}'.format(\n type(gx), chainer.Variable)))\n\n len_gxs = len(gxs)\n if len_gxs == len(func.inputs):\n gxs = tuple([gxs[i] for i in target_input_indexes])\n elif len_gxs != len(target_input_indexes):\n msg = 'number of gradients returned from backward is incorrect: '\n if len(func.inputs) == len(target_input_indexes):\n msg += (\n '%s != expected %s' % (len_gxs, len(func.inputs)))\n else:\n msg += (\n '%s != expected %s or %s'\n % (len_gxs, len(func.inputs), len(target_input_indexes)))\n raise ValueError(func._get_error_message(msg))\n\n for i, gx in six.moves.zip(target_input_indexes, gxs):\n if gx is not None:\n grad_inputs[func.inputs[i]].append(gx)\n\n if is_debug:\n node_x = func.inputs[i]\n g_input_list = grad_inputs[node_x]\n if gx.shape != node_x.shape:\n raise ValueError(func._get_error_message(\n 'shape of gradients returned from backward is '\n 'incorrect: '\n 'input-index={}, actual {} != expected {}'.format(\n i, gx.shape, node_x.shape)))\n if gx is not None and g_input_list:\n g_input = g_input_list[0]\n if gx.shape != g_input.shape:\n raise ValueError(func._get_error_message(\n 'shape of gradients returned from backward is '\n 'incorrect: '\n 'input-index={}, actual {} != expected {}'.format(\n i, gx.shape, g_input.shape)))\n if gx.dtype != g_input.dtype:\n raise ValueError(func._get_error_message(\n 'dtype of gradients returned from backward is '\n 'incorrect: '\n 'input-index={}, actual {} != expected {}'.format(\n i, gx.dtype, g_input.dtype)))\n del gxs\n\n if is_debug:\n # each grad is a list of variables\n # iter_gxs expands it as a sequence of variables.\n def iter_gxs(gxs):\n for gx in gxs:\n for gx_elem in gx:\n yield gx_elem\n\n for gx in iter_gxs(grad_inputs.values()):\n if chainer.backend._contains_nan(gx.data):\n raise RuntimeError(\n 'NaN is detected on backward computation of {}'\n .format(func.label))\n\n if not func.lazy_grad_sum:\n for gx in grad_inputs.values():\n _reduce(gx)\n", "path": "chainer/_backprop_utils.py"}]} | 2,361 | 264 |
gh_patches_debug_5555 | rasdani/github-patches | git_diff | getredash__redash-4638 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
error running query : ** '>' is not supported between instance of NoneType and 'int'
Issue Summary:
Database = Oracle 12c
`select count(*) from table `
throwing the following error
`error running query : ** '>' is not supported between instance of NoneType and 'int'`
Redash v9.0.0-alpha(dev)
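
The failing comparison is easy to reproduce outside Redash; presumably the Oracle driver reports no scale for the `COUNT(*)` column, so the runner's numeric-scale check trips over `None` (illustrative sketch, variable names assumed):

```python
scale = None  # what cx_Oracle appears to report for an expression column like COUNT(*)

try:
    scale > 0  # the kind of check used when mapping NUMBER columns to int/float
except TypeError as exc:
    print(exc)  # Python 3: '>' not supported between instances of 'NoneType' and 'int'
```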
</issue>
<code>
[start of redash/query_runner/oracle.py]
1 import logging
2
3 from redash.utils import json_dumps, json_loads
4 from redash.query_runner import *
5
6 try:
7 import cx_Oracle
8
9 TYPES_MAP = {
10 cx_Oracle.DATETIME: TYPE_DATETIME,
11 cx_Oracle.CLOB: TYPE_STRING,
12 cx_Oracle.LOB: TYPE_STRING,
13 cx_Oracle.FIXED_CHAR: TYPE_STRING,
14 cx_Oracle.FIXED_NCHAR: TYPE_STRING,
15 cx_Oracle.INTERVAL: TYPE_DATETIME,
16 cx_Oracle.LONG_STRING: TYPE_STRING,
17 cx_Oracle.NATIVE_FLOAT: TYPE_FLOAT,
18 cx_Oracle.NCHAR: TYPE_STRING,
19 cx_Oracle.NUMBER: TYPE_FLOAT,
20 cx_Oracle.ROWID: TYPE_INTEGER,
21 cx_Oracle.STRING: TYPE_STRING,
22 cx_Oracle.TIMESTAMP: TYPE_DATETIME,
23 }
24
25 ENABLED = True
26 except ImportError:
27 ENABLED = False
28
29 logger = logging.getLogger(__name__)
30
31
32 class Oracle(BaseSQLQueryRunner):
33 noop_query = "SELECT 1 FROM dual"
34
35 @classmethod
36 def get_col_type(cls, col_type, scale):
37 if col_type == cx_Oracle.NUMBER:
38 return TYPE_FLOAT if scale > 0 else TYPE_INTEGER
39 else:
40 return TYPES_MAP.get(col_type, None)
41
42 @classmethod
43 def enabled(cls):
44 return ENABLED
45
46 @classmethod
47 def configuration_schema(cls):
48 return {
49 "type": "object",
50 "properties": {
51 "user": {"type": "string"},
52 "password": {"type": "string"},
53 "host": {"type": "string"},
54 "port": {"type": "number"},
55 "servicename": {"type": "string", "title": "DSN Service Name"},
56 },
57 "required": ["servicename", "user", "password", "host", "port"],
58 "secret": ["password"],
59 }
60
61 @classmethod
62 def type(cls):
63 return "oracle"
64
65 def __init__(self, configuration):
66 super(Oracle, self).__init__(configuration)
67
68 dsn = cx_Oracle.makedsn(
69 self.configuration["host"],
70 self.configuration["port"],
71 service_name=self.configuration["servicename"],
72 )
73
74 self.connection_string = "{}/{}@{}".format(
75 self.configuration["user"], self.configuration["password"], dsn
76 )
77
78 def _get_tables(self, schema):
79 query = """
80 SELECT
81 all_tab_cols.OWNER,
82 all_tab_cols.TABLE_NAME,
83 all_tab_cols.COLUMN_NAME
84 FROM all_tab_cols
85 WHERE all_tab_cols.OWNER NOT IN('SYS','SYSTEM','ORDSYS','CTXSYS','WMSYS','MDSYS','ORDDATA','XDB','OUTLN','DMSYS','DSSYS','EXFSYS','LBACSYS','TSMSYS')
86 """
87
88 results, error = self.run_query(query, None)
89
90 if error is not None:
91 raise Exception("Failed getting schema.")
92
93 results = json_loads(results)
94
95 for row in results["rows"]:
96 if row["OWNER"] != None:
97 table_name = "{}.{}".format(row["OWNER"], row["TABLE_NAME"])
98 else:
99 table_name = row["TABLE_NAME"]
100
101 if table_name not in schema:
102 schema[table_name] = {"name": table_name, "columns": []}
103
104 schema[table_name]["columns"].append(row["COLUMN_NAME"])
105
106 return list(schema.values())
107
108 @classmethod
109 def _convert_number(cls, value):
110 try:
111 return int(value)
112 except:
113 return value
114
115 @classmethod
116 def output_handler(cls, cursor, name, default_type, length, precision, scale):
117 if default_type in (cx_Oracle.CLOB, cx_Oracle.LOB):
118 return cursor.var(cx_Oracle.LONG_STRING, 80000, cursor.arraysize)
119
120 if default_type in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR):
121 return cursor.var(str, length, cursor.arraysize)
122
123 if default_type == cx_Oracle.NUMBER:
124 if scale <= 0:
125 return cursor.var(
126 cx_Oracle.STRING,
127 255,
128 outconverter=Oracle._convert_number,
129 arraysize=cursor.arraysize,
130 )
131
132 def run_query(self, query, user):
133 connection = cx_Oracle.connect(self.connection_string)
134 connection.outputtypehandler = Oracle.output_handler
135
136 cursor = connection.cursor()
137
138 try:
139 cursor.execute(query)
140 rows_count = cursor.rowcount
141 if cursor.description is not None:
142 columns = self.fetch_columns(
143 [
144 (i[0], Oracle.get_col_type(i[1], i[5]))
145 for i in cursor.description
146 ]
147 )
148 rows = [
149 dict(zip((column["name"] for column in columns), row))
150 for row in cursor
151 ]
152 data = {"columns": columns, "rows": rows}
153 error = None
154 json_data = json_dumps(data)
155 else:
156 columns = [{"name": "Row(s) Affected", "type": "TYPE_INTEGER"}]
157 rows = [{"Row(s) Affected": rows_count}]
158 data = {"columns": columns, "rows": rows}
159 json_data = json_dumps(data)
160 connection.commit()
161 except cx_Oracle.DatabaseError as err:
162 error = "Query failed. {}.".format(str(err))
163 json_data = None
164 except KeyboardInterrupt:
165 connection.cancel()
166 error = "Query cancelled by user."
167 json_data = None
168 finally:
169 connection.close()
170
171 return json_data, error
172
173
174 register(Oracle)
175
[end of redash/query_runner/oracle.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/redash/query_runner/oracle.py b/redash/query_runner/oracle.py
--- a/redash/query_runner/oracle.py
+++ b/redash/query_runner/oracle.py
@@ -35,7 +35,11 @@
@classmethod
def get_col_type(cls, col_type, scale):
if col_type == cx_Oracle.NUMBER:
- return TYPE_FLOAT if scale > 0 else TYPE_INTEGER
+ if scale is None:
+ return TYPE_INTEGER
+ if scale > 0:
+ return TYPE_FLOAT
+ return TYPE_INTEGER
else:
return TYPES_MAP.get(col_type, None)
| {"golden_diff": "diff --git a/redash/query_runner/oracle.py b/redash/query_runner/oracle.py\n--- a/redash/query_runner/oracle.py\n+++ b/redash/query_runner/oracle.py\n@@ -35,7 +35,11 @@\n @classmethod\n def get_col_type(cls, col_type, scale):\n if col_type == cx_Oracle.NUMBER:\n- return TYPE_FLOAT if scale > 0 else TYPE_INTEGER\n+ if scale is None:\n+ return TYPE_INTEGER\n+ if scale > 0:\n+ return TYPE_FLOAT\n+ return TYPE_INTEGER\n else:\n return TYPES_MAP.get(col_type, None)\n", "issue": "error running query : ** '>' is not supported between instance of NoneType and 'int'\nIssue Summary:\r\nDatabase = Oracle 12c\r\n\r\n`select count(*) from table `\r\n\r\nthrowing the following error\r\n\r\n`error running query : ** '>' is not supported between instance of NoneType and 'int'`\r\n\r\nRedash v9.0.0-alpha(dev)\r\n\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom redash.utils import json_dumps, json_loads\nfrom redash.query_runner import *\n\ntry:\n import cx_Oracle\n\n TYPES_MAP = {\n cx_Oracle.DATETIME: TYPE_DATETIME,\n cx_Oracle.CLOB: TYPE_STRING,\n cx_Oracle.LOB: TYPE_STRING,\n cx_Oracle.FIXED_CHAR: TYPE_STRING,\n cx_Oracle.FIXED_NCHAR: TYPE_STRING,\n cx_Oracle.INTERVAL: TYPE_DATETIME,\n cx_Oracle.LONG_STRING: TYPE_STRING,\n cx_Oracle.NATIVE_FLOAT: TYPE_FLOAT,\n cx_Oracle.NCHAR: TYPE_STRING,\n cx_Oracle.NUMBER: TYPE_FLOAT,\n cx_Oracle.ROWID: TYPE_INTEGER,\n cx_Oracle.STRING: TYPE_STRING,\n cx_Oracle.TIMESTAMP: TYPE_DATETIME,\n }\n\n ENABLED = True\nexcept ImportError:\n ENABLED = False\n\nlogger = logging.getLogger(__name__)\n\n\nclass Oracle(BaseSQLQueryRunner):\n noop_query = \"SELECT 1 FROM dual\"\n\n @classmethod\n def get_col_type(cls, col_type, scale):\n if col_type == cx_Oracle.NUMBER:\n return TYPE_FLOAT if scale > 0 else TYPE_INTEGER\n else:\n return TYPES_MAP.get(col_type, None)\n\n @classmethod\n def enabled(cls):\n return ENABLED\n\n @classmethod\n def configuration_schema(cls):\n return {\n \"type\": \"object\",\n \"properties\": {\n \"user\": {\"type\": \"string\"},\n \"password\": {\"type\": \"string\"},\n \"host\": {\"type\": \"string\"},\n \"port\": {\"type\": \"number\"},\n \"servicename\": {\"type\": \"string\", \"title\": \"DSN Service Name\"},\n },\n \"required\": [\"servicename\", \"user\", \"password\", \"host\", \"port\"],\n \"secret\": [\"password\"],\n }\n\n @classmethod\n def type(cls):\n return \"oracle\"\n\n def __init__(self, configuration):\n super(Oracle, self).__init__(configuration)\n\n dsn = cx_Oracle.makedsn(\n self.configuration[\"host\"],\n self.configuration[\"port\"],\n service_name=self.configuration[\"servicename\"],\n )\n\n self.connection_string = \"{}/{}@{}\".format(\n self.configuration[\"user\"], self.configuration[\"password\"], dsn\n )\n\n def _get_tables(self, schema):\n query = \"\"\"\n SELECT\n all_tab_cols.OWNER,\n all_tab_cols.TABLE_NAME,\n all_tab_cols.COLUMN_NAME\n FROM all_tab_cols\n WHERE all_tab_cols.OWNER NOT IN('SYS','SYSTEM','ORDSYS','CTXSYS','WMSYS','MDSYS','ORDDATA','XDB','OUTLN','DMSYS','DSSYS','EXFSYS','LBACSYS','TSMSYS')\n \"\"\"\n\n results, error = self.run_query(query, None)\n\n if error is not None:\n raise Exception(\"Failed getting schema.\")\n\n results = json_loads(results)\n\n for row in results[\"rows\"]:\n if row[\"OWNER\"] != None:\n table_name = \"{}.{}\".format(row[\"OWNER\"], row[\"TABLE_NAME\"])\n else:\n table_name = row[\"TABLE_NAME\"]\n\n if table_name not in schema:\n schema[table_name] = {\"name\": table_name, \"columns\": []}\n\n 
schema[table_name][\"columns\"].append(row[\"COLUMN_NAME\"])\n\n return list(schema.values())\n\n @classmethod\n def _convert_number(cls, value):\n try:\n return int(value)\n except:\n return value\n\n @classmethod\n def output_handler(cls, cursor, name, default_type, length, precision, scale):\n if default_type in (cx_Oracle.CLOB, cx_Oracle.LOB):\n return cursor.var(cx_Oracle.LONG_STRING, 80000, cursor.arraysize)\n\n if default_type in (cx_Oracle.STRING, cx_Oracle.FIXED_CHAR):\n return cursor.var(str, length, cursor.arraysize)\n\n if default_type == cx_Oracle.NUMBER:\n if scale <= 0:\n return cursor.var(\n cx_Oracle.STRING,\n 255,\n outconverter=Oracle._convert_number,\n arraysize=cursor.arraysize,\n )\n\n def run_query(self, query, user):\n connection = cx_Oracle.connect(self.connection_string)\n connection.outputtypehandler = Oracle.output_handler\n\n cursor = connection.cursor()\n\n try:\n cursor.execute(query)\n rows_count = cursor.rowcount\n if cursor.description is not None:\n columns = self.fetch_columns(\n [\n (i[0], Oracle.get_col_type(i[1], i[5]))\n for i in cursor.description\n ]\n )\n rows = [\n dict(zip((column[\"name\"] for column in columns), row))\n for row in cursor\n ]\n data = {\"columns\": columns, \"rows\": rows}\n error = None\n json_data = json_dumps(data)\n else:\n columns = [{\"name\": \"Row(s) Affected\", \"type\": \"TYPE_INTEGER\"}]\n rows = [{\"Row(s) Affected\": rows_count}]\n data = {\"columns\": columns, \"rows\": rows}\n json_data = json_dumps(data)\n connection.commit()\n except cx_Oracle.DatabaseError as err:\n error = \"Query failed. {}.\".format(str(err))\n json_data = None\n except KeyboardInterrupt:\n connection.cancel()\n error = \"Query cancelled by user.\"\n json_data = None\n finally:\n connection.close()\n\n return json_data, error\n\n\nregister(Oracle)\n", "path": "redash/query_runner/oracle.py"}]} | 2,260 | 140 |
gh_patches_debug_41424 | rasdani/github-patches | git_diff | wagtail__wagtail-291 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Login screen should redirect somewhere appropriate if visited while already logged in
Rather confusingly, visiting the login screen while already logged in presents the user with a login form again. It should redirect them to the dashboard, I'd suggest.
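
A minimal sketch of the suggested behaviour for a Django login view (hypothetical view and template names; `wagtailadmin_home` matches the `LOGIN_REDIRECT_URL` used in the test settings further down):

```python
from django.shortcuts import redirect, render

def login(request):
    # A user who is already signed in with admin access goes straight to the
    # dashboard instead of being shown the login form again.
    if request.user.has_perm('wagtailadmin.access_admin'):
        return redirect('wagtailadmin_home')
    return render(request, 'wagtailadmin/login.html')
```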
</issue>
<code>
[start of runtests.py]
1 #!/usr/bin/env python
2 import sys
3 import os
4 import shutil
5
6 from django.conf import settings, global_settings
7 from django.core.management import execute_from_command_line
8
9 WAGTAIL_ROOT = os.path.dirname(__file__)
10 STATIC_ROOT = os.path.join(WAGTAIL_ROOT, 'test-static')
11 MEDIA_ROOT = os.path.join(WAGTAIL_ROOT, 'test-media')
12
13 if not settings.configured:
14
15 try:
16 import elasticutils
17 has_elasticsearch = True
18 except ImportError:
19 has_elasticsearch = False
20
21 WAGTAILSEARCH_BACKENDS = {
22 'default': {
23 'BACKEND': 'wagtail.wagtailsearch.backends.db.DBSearch',
24 }
25 }
26 if has_elasticsearch:
27 WAGTAILSEARCH_BACKENDS['elasticsearch'] = {
28 'BACKEND': 'wagtail.wagtailsearch.backends.elasticsearch.ElasticSearch',
29 'TIMEOUT': 10,
30 'max_retries': 1,
31 }
32
33 settings.configure(
34 DATABASES={
35 'default': {
36 'ENGINE': os.environ.get('DATABASE_ENGINE', 'django.db.backends.postgresql_psycopg2'),
37 'NAME': 'wagtaildemo',
38 'USER': os.environ.get('DATABASE_USER', 'postgres'),
39 }
40 },
41 ROOT_URLCONF='wagtail.tests.urls',
42 STATIC_URL='/static/',
43 STATIC_ROOT=STATIC_ROOT,
44 MEDIA_ROOT=MEDIA_ROOT,
45 USE_TZ=True,
46 STATICFILES_FINDERS=(
47 'django.contrib.staticfiles.finders.AppDirectoriesFinder',
48 'compressor.finders.CompressorFinder',
49 ),
50 TEMPLATE_CONTEXT_PROCESSORS=global_settings.TEMPLATE_CONTEXT_PROCESSORS + (
51 'django.core.context_processors.request',
52 ),
53 MIDDLEWARE_CLASSES=(
54 'django.middleware.common.CommonMiddleware',
55 'django.contrib.sessions.middleware.SessionMiddleware',
56 'django.middleware.csrf.CsrfViewMiddleware',
57 'django.contrib.auth.middleware.AuthenticationMiddleware',
58 'django.contrib.messages.middleware.MessageMiddleware',
59 'django.middleware.clickjacking.XFrameOptionsMiddleware',
60
61 'wagtail.wagtailcore.middleware.SiteMiddleware',
62
63 'wagtail.wagtailredirects.middleware.RedirectMiddleware',
64 ),
65 INSTALLED_APPS=[
66 'django.contrib.contenttypes',
67 'django.contrib.sessions',
68 'django.contrib.auth',
69 'django.contrib.messages',
70 'django.contrib.staticfiles',
71 'django.contrib.admin',
72
73 'taggit',
74 'south',
75 'compressor',
76
77 'wagtail.wagtailcore',
78 'wagtail.wagtailadmin',
79 'wagtail.wagtaildocs',
80 'wagtail.wagtailsnippets',
81 'wagtail.wagtailusers',
82 'wagtail.wagtailimages',
83 'wagtail.wagtailembeds',
84 'wagtail.wagtailsearch',
85 'wagtail.wagtailredirects',
86 'wagtail.wagtailforms',
87 'wagtail.tests',
88 ],
89
90 # Using DatabaseCache to make sure that the cache is cleared between tests.
91 # This prevents false-positives in some wagtail core tests where we are
92 # changing the 'wagtail_root_paths' key which may cause future tests to fail.
93 CACHES = {
94 'default': {
95 'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
96 'LOCATION': 'cache',
97 }
98 },
99 PASSWORD_HASHERS=(
100 'django.contrib.auth.hashers.MD5PasswordHasher', # don't use the intentionally slow default password hasher
101 ),
102 COMPRESS_ENABLED=False, # disable compression so that we can run tests on the content of the compress tag
103 WAGTAILSEARCH_BACKENDS=WAGTAILSEARCH_BACKENDS,
104 WAGTAIL_SITE_NAME='Test Site',
105 LOGIN_REDIRECT_URL='wagtailadmin_home',
106 )
107
108
109 def runtests():
110 argv = sys.argv[:1] + ['test'] + sys.argv[1:]
111 try:
112 execute_from_command_line(argv)
113 finally:
114 shutil.rmtree(STATIC_ROOT, ignore_errors=True)
115 shutil.rmtree(MEDIA_ROOT, ignore_errors=True)
116
117
118 if __name__ == '__main__':
119 runtests()
120
[end of runtests.py]
[start of wagtail/wagtailadmin/views/account.py]
1 from django.conf import settings
2 from django.shortcuts import render, redirect
3 from django.contrib import messages
4 from django.contrib.auth.forms import SetPasswordForm
5 from django.contrib.auth.decorators import permission_required
6 from django.contrib.auth.views import logout as auth_logout
7 from django.utils.translation import ugettext as _
8
9 @permission_required('wagtailadmin.access_admin')
10 def account(request):
11 return render(request, 'wagtailadmin/account/account.html', {
12 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),
13 })
14
15
16 @permission_required('wagtailadmin.access_admin')
17 def change_password(request):
18 can_change_password = request.user.has_usable_password()
19
20 if can_change_password:
21 if request.POST:
22 form = SetPasswordForm(request.user, request.POST)
23
24 if form.is_valid():
25 form.save()
26
27 messages.success(request, _("Your password has been changed successfully!"))
28 return redirect('wagtailadmin_account')
29 else:
30 form = SetPasswordForm(request.user)
31 else:
32 form = None
33
34 return render(request, 'wagtailadmin/account/change_password.html', {
35 'form': form,
36 'can_change_password': can_change_password,
37 })
38
39
40 def logout(request):
41 response = auth_logout(request, next_page = 'wagtailadmin_login')
42
43 # By default, logging out will generate a fresh sessionid cookie. We want to use the
44 # absence of sessionid as an indication that front-end pages are being viewed by a
45 # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.
46 response.delete_cookie(settings.SESSION_COOKIE_NAME,
47 domain=settings.SESSION_COOKIE_DOMAIN,
48 path=settings.SESSION_COOKIE_PATH)
49
50 # HACK: pretend that the session hasn't been modified, so that SessionMiddleware
51 # won't override the above and write a new cookie.
52 request.session.modified = False
53
54 return response
55
[end of wagtail/wagtailadmin/views/account.py]
[start of wagtail/wagtailadmin/urls.py]
1 from django.conf.urls import url
2 from django.conf import settings
3
4 from wagtail.wagtailadmin.forms import LoginForm, PasswordResetForm
5 from wagtail.wagtailadmin.views import account, chooser, home, pages, tags, userbar
6 from wagtail.wagtailadmin import hooks
7
8 urlpatterns = [
9 url(
10 r'^login/$', 'django.contrib.auth.views.login', {
11 'template_name': 'wagtailadmin/login.html',
12 'authentication_form': LoginForm,
13 'extra_context': {'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)},
14 }, name='wagtailadmin_login'
15 ),
16
17 # Password reset
18 url(
19 r'^password_reset/$', 'django.contrib.auth.views.password_reset', {
20 'template_name': 'wagtailadmin/account/password_reset/form.html',
21 'email_template_name': 'wagtailadmin/account/password_reset/email.txt',
22 'subject_template_name': 'wagtailadmin/account/password_reset/email_subject.txt',
23 'password_reset_form': PasswordResetForm,
24 }, name='password_reset'
25 ),
26 url(
27 r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done', {
28 'template_name': 'wagtailadmin/account/password_reset/done.html'
29 }, name='password_reset_done'
30 ),
31 url(
32 r'^password_reset/confirm/(?P<uidb64>[0-9A-Za-z_\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',
33 'django.contrib.auth.views.password_reset_confirm',
34 {'template_name': 'wagtailadmin/account/password_reset/confirm.html'},
35 name='password_reset_confirm',
36 ),
37 url(
38 r'^password_reset/complete/$', 'django.contrib.auth.views.password_reset_complete',
39 {'template_name': 'wagtailadmin/account/password_reset/complete.html'},
40 name='password_reset_complete'
41 ),
42 ]
43
44 urlpatterns += [
45 url(r'^$', home.home, name='wagtailadmin_home'),
46
47 url(r'^failwhale/$', home.error_test, name='wagtailadmin_error_test'),
48
49 url(r'^pages/$', pages.index, name='wagtailadmin_explore_root'),
50 url(r'^pages/(\d+)/$', pages.index, name='wagtailadmin_explore'),
51
52 url(r'^pages/new/(\w+)/(\w+)/(\d+)/$', pages.create, name='wagtailadmin_pages_create'),
53 url(r'^pages/new/(\w+)/(\w+)/(\d+)/preview/$', pages.preview_on_create, name='wagtailadmin_pages_preview_on_create'),
54 url(r'^pages/usage/(\w+)/(\w+)/$', pages.content_type_use, name='wagtailadmin_pages_type_use'),
55
56 url(r'^pages/(\d+)/edit/$', pages.edit, name='wagtailadmin_pages_edit'),
57 url(r'^pages/(\d+)/edit/preview/$', pages.preview_on_edit, name='wagtailadmin_pages_preview_on_edit'),
58
59 url(r'^pages/preview_placeholder/$', pages.preview_placeholder, name='wagtailadmin_pages_preview_placeholder'),
60
61 url(r'^pages/(\d+)/view_draft/$', pages.view_draft, name='wagtailadmin_pages_view_draft'),
62 url(r'^pages/(\d+)/add_subpage/$', pages.add_subpage, name='wagtailadmin_pages_add_subpage'),
63 url(r'^pages/(\d+)/delete/$', pages.delete, name='wagtailadmin_pages_delete'),
64 url(r'^pages/(\d+)/unpublish/$', pages.unpublish, name='wagtailadmin_pages_unpublish'),
65
66 url(r'^pages/search/$', pages.search, name='wagtailadmin_pages_search'),
67
68 url(r'^pages/(\d+)/move/$', pages.move_choose_destination, name='wagtailadmin_pages_move'),
69 url(r'^pages/(\d+)/move/(\d+)/$', pages.move_choose_destination, name='wagtailadmin_pages_move_choose_destination'),
70 url(r'^pages/(\d+)/move/(\d+)/confirm/$', pages.move_confirm, name='wagtailadmin_pages_move_confirm'),
71 url(r'^pages/(\d+)/set_position/$', pages.set_page_position, name='wagtailadmin_pages_set_page_position'),
72
73 url(r'^pages/moderation/(\d+)/approve/$', pages.approve_moderation, name='wagtailadmin_pages_approve_moderation'),
74 url(r'^pages/moderation/(\d+)/reject/$', pages.reject_moderation, name='wagtailadmin_pages_reject_moderation'),
75 url(r'^pages/moderation/(\d+)/preview/$', pages.preview_for_moderation, name='wagtailadmin_pages_preview_for_moderation'),
76
77 url(r'^choose-page/$', chooser.browse, name='wagtailadmin_choose_page'),
78 url(r'^choose-page/(\d+)/$', chooser.browse, name='wagtailadmin_choose_page_child'),
79 url(r'^choose-external-link/$', chooser.external_link, name='wagtailadmin_choose_page_external_link'),
80 url(r'^choose-email-link/$', chooser.email_link, name='wagtailadmin_choose_page_email_link'),
81
82 url(r'^tag-autocomplete/$', tags.autocomplete, name='wagtailadmin_tag_autocomplete'),
83
84 url(r'^account/$', account.account, name='wagtailadmin_account'),
85 url(r'^account/change_password/$', account.change_password, name='wagtailadmin_account_change_password'),
86 url(r'^logout/$', account.logout, name='wagtailadmin_logout'),
87
88 url(r'^userbar/(\d+)/$', userbar.for_frontend, name='wagtailadmin_userbar_frontend'),
89 url(r'^userbar/moderation/(\d+)/$', userbar.for_moderation, name='wagtailadmin_userbar_moderation'),
90 ]
91
92
93 # Import additional urlpatterns from any apps that define a register_admin_urls hook
94 for fn in hooks.get_hooks('register_admin_urls'):
95 urls = fn()
96 if urls:
97 urlpatterns += urls
98
[end of wagtail/wagtailadmin/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/runtests.py b/runtests.py
--- a/runtests.py
+++ b/runtests.py
@@ -103,6 +103,7 @@
WAGTAILSEARCH_BACKENDS=WAGTAILSEARCH_BACKENDS,
WAGTAIL_SITE_NAME='Test Site',
LOGIN_REDIRECT_URL='wagtailadmin_home',
+ LOGIN_URL='wagtailadmin_login',
)
diff --git a/wagtail/wagtailadmin/urls.py b/wagtail/wagtailadmin/urls.py
--- a/wagtail/wagtailadmin/urls.py
+++ b/wagtail/wagtailadmin/urls.py
@@ -5,15 +5,8 @@
from wagtail.wagtailadmin.views import account, chooser, home, pages, tags, userbar
from wagtail.wagtailadmin import hooks
-urlpatterns = [
- url(
- r'^login/$', 'django.contrib.auth.views.login', {
- 'template_name': 'wagtailadmin/login.html',
- 'authentication_form': LoginForm,
- 'extra_context': {'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)},
- }, name='wagtailadmin_login'
- ),
+urlpatterns = [
# Password reset
url(
r'^password_reset/$', 'django.contrib.auth.views.password_reset', {
@@ -81,6 +74,7 @@
url(r'^tag-autocomplete/$', tags.autocomplete, name='wagtailadmin_tag_autocomplete'),
+ url(r'^login/$', account.login, name='wagtailadmin_login'),
url(r'^account/$', account.account, name='wagtailadmin_account'),
url(r'^account/change_password/$', account.change_password, name='wagtailadmin_account_change_password'),
url(r'^logout/$', account.logout, name='wagtailadmin_logout'),
@@ -90,6 +84,13 @@
]
+# This is here to make sure that 'django.contrib.auth.views.login' is reversed correctly
+# It must be placed after 'wagtailadmin_login' to prevent this from being used
+urlpatterns += [
+ url(r'^login/$', 'django.contrib.auth.views.login'),
+]
+
+
# Import additional urlpatterns from any apps that define a register_admin_urls hook
for fn in hooks.get_hooks('register_admin_urls'):
urls = fn()
diff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py
--- a/wagtail/wagtailadmin/views/account.py
+++ b/wagtail/wagtailadmin/views/account.py
@@ -3,8 +3,13 @@
from django.contrib import messages
from django.contrib.auth.forms import SetPasswordForm
from django.contrib.auth.decorators import permission_required
-from django.contrib.auth.views import logout as auth_logout
+from django.contrib.auth.views import logout as auth_logout, login as auth_login
from django.utils.translation import ugettext as _
+from django.views.decorators.debug import sensitive_post_parameters
+from django.views.decorators.cache import never_cache
+
+from wagtail.wagtailadmin import forms
+
@permission_required('wagtailadmin.access_admin')
def account(request):
@@ -37,6 +42,21 @@
})
+@sensitive_post_parameters()
+@never_cache
+def login(request):
+ if request.user.is_authenticated():
+ return redirect('wagtailadmin_home')
+ else:
+ return auth_login(request,
+ template_name='wagtailadmin/login.html',
+ authentication_form=forms.LoginForm,
+ extra_context={
+ 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),
+ },
+ )
+
+
def logout(request):
response = auth_logout(request, next_page = 'wagtailadmin_login')
| {"golden_diff": "diff --git a/runtests.py b/runtests.py\n--- a/runtests.py\n+++ b/runtests.py\n@@ -103,6 +103,7 @@\n WAGTAILSEARCH_BACKENDS=WAGTAILSEARCH_BACKENDS,\n WAGTAIL_SITE_NAME='Test Site',\n LOGIN_REDIRECT_URL='wagtailadmin_home',\n+ LOGIN_URL='wagtailadmin_login',\n )\n \n \ndiff --git a/wagtail/wagtailadmin/urls.py b/wagtail/wagtailadmin/urls.py\n--- a/wagtail/wagtailadmin/urls.py\n+++ b/wagtail/wagtailadmin/urls.py\n@@ -5,15 +5,8 @@\n from wagtail.wagtailadmin.views import account, chooser, home, pages, tags, userbar\n from wagtail.wagtailadmin import hooks\n \n-urlpatterns = [\n- url(\n- r'^login/$', 'django.contrib.auth.views.login', {\n- 'template_name': 'wagtailadmin/login.html',\n- 'authentication_form': LoginForm,\n- 'extra_context': {'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)},\n- }, name='wagtailadmin_login'\n- ),\n \n+urlpatterns = [\n # Password reset\n url(\n r'^password_reset/$', 'django.contrib.auth.views.password_reset', {\n@@ -81,6 +74,7 @@\n \n url(r'^tag-autocomplete/$', tags.autocomplete, name='wagtailadmin_tag_autocomplete'),\n \n+ url(r'^login/$', account.login, name='wagtailadmin_login'),\n url(r'^account/$', account.account, name='wagtailadmin_account'),\n url(r'^account/change_password/$', account.change_password, name='wagtailadmin_account_change_password'),\n url(r'^logout/$', account.logout, name='wagtailadmin_logout'),\n@@ -90,6 +84,13 @@\n ]\n \n \n+# This is here to make sure that 'django.contrib.auth.views.login' is reversed correctly\n+# It must be placed after 'wagtailadmin_login' to prevent this from being used\n+urlpatterns += [\n+ url(r'^login/$', 'django.contrib.auth.views.login'),\n+]\n+\n+\n # Import additional urlpatterns from any apps that define a register_admin_urls hook\n for fn in hooks.get_hooks('register_admin_urls'):\n urls = fn()\ndiff --git a/wagtail/wagtailadmin/views/account.py b/wagtail/wagtailadmin/views/account.py\n--- a/wagtail/wagtailadmin/views/account.py\n+++ b/wagtail/wagtailadmin/views/account.py\n@@ -3,8 +3,13 @@\n from django.contrib import messages\n from django.contrib.auth.forms import SetPasswordForm\n from django.contrib.auth.decorators import permission_required\n-from django.contrib.auth.views import logout as auth_logout\n+from django.contrib.auth.views import logout as auth_logout, login as auth_login\n from django.utils.translation import ugettext as _ \n+from django.views.decorators.debug import sensitive_post_parameters\n+from django.views.decorators.cache import never_cache\n+\n+from wagtail.wagtailadmin import forms\n+\n \n @permission_required('wagtailadmin.access_admin')\n def account(request):\n@@ -37,6 +42,21 @@\n })\n \n \n+@sensitive_post_parameters()\n+@never_cache\n+def login(request):\n+ if request.user.is_authenticated():\n+ return redirect('wagtailadmin_home')\n+ else:\n+ return auth_login(request,\n+ template_name='wagtailadmin/login.html',\n+ authentication_form=forms.LoginForm,\n+ extra_context={\n+ 'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True),\n+ },\n+ )\n+\n+\n def logout(request):\n response = auth_logout(request, next_page = 'wagtailadmin_login')\n", "issue": "Login screen should redirect somewhere appropriate if visited while already logged in\nRather confusing, visiting the login screen while already logged in presents the user with a login form again. 
It should redirect them to the dashboard, I'd suggest.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nimport sys\nimport os\nimport shutil\n\nfrom django.conf import settings, global_settings\nfrom django.core.management import execute_from_command_line\n\nWAGTAIL_ROOT = os.path.dirname(__file__)\nSTATIC_ROOT = os.path.join(WAGTAIL_ROOT, 'test-static')\nMEDIA_ROOT = os.path.join(WAGTAIL_ROOT, 'test-media')\n\nif not settings.configured:\n\n try:\n import elasticutils\n has_elasticsearch = True\n except ImportError:\n has_elasticsearch = False\n\n WAGTAILSEARCH_BACKENDS = {\n 'default': {\n 'BACKEND': 'wagtail.wagtailsearch.backends.db.DBSearch',\n }\n }\n if has_elasticsearch:\n WAGTAILSEARCH_BACKENDS['elasticsearch'] = {\n 'BACKEND': 'wagtail.wagtailsearch.backends.elasticsearch.ElasticSearch',\n 'TIMEOUT': 10,\n 'max_retries': 1,\n }\n\n settings.configure(\n DATABASES={\n 'default': {\n 'ENGINE': os.environ.get('DATABASE_ENGINE', 'django.db.backends.postgresql_psycopg2'),\n 'NAME': 'wagtaildemo',\n 'USER': os.environ.get('DATABASE_USER', 'postgres'),\n }\n },\n ROOT_URLCONF='wagtail.tests.urls',\n STATIC_URL='/static/',\n STATIC_ROOT=STATIC_ROOT,\n MEDIA_ROOT=MEDIA_ROOT,\n USE_TZ=True,\n STATICFILES_FINDERS=(\n 'django.contrib.staticfiles.finders.AppDirectoriesFinder',\n 'compressor.finders.CompressorFinder',\n ),\n TEMPLATE_CONTEXT_PROCESSORS=global_settings.TEMPLATE_CONTEXT_PROCESSORS + (\n 'django.core.context_processors.request',\n ),\n MIDDLEWARE_CLASSES=(\n 'django.middleware.common.CommonMiddleware',\n 'django.contrib.sessions.middleware.SessionMiddleware',\n 'django.middleware.csrf.CsrfViewMiddleware',\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\n 'django.contrib.messages.middleware.MessageMiddleware',\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\n\n 'wagtail.wagtailcore.middleware.SiteMiddleware',\n\n 'wagtail.wagtailredirects.middleware.RedirectMiddleware',\n ),\n INSTALLED_APPS=[\n 'django.contrib.contenttypes',\n 'django.contrib.sessions',\n 'django.contrib.auth',\n 'django.contrib.messages',\n 'django.contrib.staticfiles',\n 'django.contrib.admin',\n\n 'taggit',\n 'south',\n 'compressor',\n\n 'wagtail.wagtailcore',\n 'wagtail.wagtailadmin',\n 'wagtail.wagtaildocs',\n 'wagtail.wagtailsnippets',\n 'wagtail.wagtailusers',\n 'wagtail.wagtailimages',\n 'wagtail.wagtailembeds',\n 'wagtail.wagtailsearch',\n 'wagtail.wagtailredirects',\n 'wagtail.wagtailforms',\n 'wagtail.tests',\n ],\n\n # Using DatabaseCache to make sure that the cache is cleared between tests.\n # This prevents false-positives in some wagtail core tests where we are\n # changing the 'wagtail_root_paths' key which may cause future tests to fail.\n CACHES = {\n 'default': {\n 'BACKEND': 'django.core.cache.backends.db.DatabaseCache',\n 'LOCATION': 'cache',\n }\n },\n PASSWORD_HASHERS=(\n 'django.contrib.auth.hashers.MD5PasswordHasher', # don't use the intentionally slow default password hasher\n ),\n COMPRESS_ENABLED=False, # disable compression so that we can run tests on the content of the compress tag\n WAGTAILSEARCH_BACKENDS=WAGTAILSEARCH_BACKENDS,\n WAGTAIL_SITE_NAME='Test Site',\n LOGIN_REDIRECT_URL='wagtailadmin_home',\n )\n\n\ndef runtests():\n argv = sys.argv[:1] + ['test'] + sys.argv[1:]\n try:\n execute_from_command_line(argv)\n finally:\n shutil.rmtree(STATIC_ROOT, ignore_errors=True)\n shutil.rmtree(MEDIA_ROOT, ignore_errors=True)\n\n\nif __name__ == '__main__':\n runtests()\n", "path": "runtests.py"}, {"content": "from django.conf import settings\nfrom 
django.shortcuts import render, redirect\nfrom django.contrib import messages\nfrom django.contrib.auth.forms import SetPasswordForm\nfrom django.contrib.auth.decorators import permission_required\nfrom django.contrib.auth.views import logout as auth_logout\nfrom django.utils.translation import ugettext as _ \n\n@permission_required('wagtailadmin.access_admin')\ndef account(request):\n return render(request, 'wagtailadmin/account/account.html', {\n 'show_change_password': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True) and request.user.has_usable_password(),\n })\n\n\n@permission_required('wagtailadmin.access_admin')\ndef change_password(request):\n can_change_password = request.user.has_usable_password()\n\n if can_change_password:\n if request.POST:\n form = SetPasswordForm(request.user, request.POST)\n\n if form.is_valid():\n form.save()\n\n messages.success(request, _(\"Your password has been changed successfully!\"))\n return redirect('wagtailadmin_account')\n else:\n form = SetPasswordForm(request.user)\n else:\n form = None\n\n return render(request, 'wagtailadmin/account/change_password.html', {\n 'form': form,\n 'can_change_password': can_change_password,\n })\n\n\ndef logout(request):\n response = auth_logout(request, next_page = 'wagtailadmin_login')\n\n # By default, logging out will generate a fresh sessionid cookie. We want to use the\n # absence of sessionid as an indication that front-end pages are being viewed by a\n # non-logged-in user and are therefore cacheable, so we forcibly delete the cookie here.\n response.delete_cookie(settings.SESSION_COOKIE_NAME,\n domain=settings.SESSION_COOKIE_DOMAIN,\n path=settings.SESSION_COOKIE_PATH)\n\n # HACK: pretend that the session hasn't been modified, so that SessionMiddleware\n # won't override the above and write a new cookie.\n request.session.modified = False\n\n return response\n", "path": "wagtail/wagtailadmin/views/account.py"}, {"content": "from django.conf.urls import url\nfrom django.conf import settings\n\nfrom wagtail.wagtailadmin.forms import LoginForm, PasswordResetForm\nfrom wagtail.wagtailadmin.views import account, chooser, home, pages, tags, userbar\nfrom wagtail.wagtailadmin import hooks\n\nurlpatterns = [\n url(\n r'^login/$', 'django.contrib.auth.views.login', {\n 'template_name': 'wagtailadmin/login.html',\n 'authentication_form': LoginForm,\n 'extra_context': {'show_password_reset': getattr(settings, 'WAGTAIL_PASSWORD_MANAGEMENT_ENABLED', True)},\n }, name='wagtailadmin_login'\n ),\n\n # Password reset\n url(\n r'^password_reset/$', 'django.contrib.auth.views.password_reset', {\n 'template_name': 'wagtailadmin/account/password_reset/form.html',\n 'email_template_name': 'wagtailadmin/account/password_reset/email.txt',\n 'subject_template_name': 'wagtailadmin/account/password_reset/email_subject.txt',\n 'password_reset_form': PasswordResetForm,\n }, name='password_reset'\n ),\n url(\n r'^password_reset/done/$', 'django.contrib.auth.views.password_reset_done', {\n 'template_name': 'wagtailadmin/account/password_reset/done.html'\n }, name='password_reset_done'\n ),\n url(\n r'^password_reset/confirm/(?P<uidb64>[0-9A-Za-z_\\-]+)/(?P<token>[0-9A-Za-z]{1,13}-[0-9A-Za-z]{1,20})/$',\n 'django.contrib.auth.views.password_reset_confirm',\n {'template_name': 'wagtailadmin/account/password_reset/confirm.html'},\n name='password_reset_confirm',\n ),\n url(\n r'^password_reset/complete/$', 'django.contrib.auth.views.password_reset_complete',\n {'template_name': 
'wagtailadmin/account/password_reset/complete.html'},\n name='password_reset_complete'\n ),\n]\n\nurlpatterns += [\n url(r'^$', home.home, name='wagtailadmin_home'),\n\n url(r'^failwhale/$', home.error_test, name='wagtailadmin_error_test'),\n\n url(r'^pages/$', pages.index, name='wagtailadmin_explore_root'),\n url(r'^pages/(\\d+)/$', pages.index, name='wagtailadmin_explore'),\n\n url(r'^pages/new/(\\w+)/(\\w+)/(\\d+)/$', pages.create, name='wagtailadmin_pages_create'),\n url(r'^pages/new/(\\w+)/(\\w+)/(\\d+)/preview/$', pages.preview_on_create, name='wagtailadmin_pages_preview_on_create'),\n url(r'^pages/usage/(\\w+)/(\\w+)/$', pages.content_type_use, name='wagtailadmin_pages_type_use'),\n\n url(r'^pages/(\\d+)/edit/$', pages.edit, name='wagtailadmin_pages_edit'),\n url(r'^pages/(\\d+)/edit/preview/$', pages.preview_on_edit, name='wagtailadmin_pages_preview_on_edit'),\n\n url(r'^pages/preview_placeholder/$', pages.preview_placeholder, name='wagtailadmin_pages_preview_placeholder'),\n\n url(r'^pages/(\\d+)/view_draft/$', pages.view_draft, name='wagtailadmin_pages_view_draft'),\n url(r'^pages/(\\d+)/add_subpage/$', pages.add_subpage, name='wagtailadmin_pages_add_subpage'),\n url(r'^pages/(\\d+)/delete/$', pages.delete, name='wagtailadmin_pages_delete'),\n url(r'^pages/(\\d+)/unpublish/$', pages.unpublish, name='wagtailadmin_pages_unpublish'),\n\n url(r'^pages/search/$', pages.search, name='wagtailadmin_pages_search'),\n\n url(r'^pages/(\\d+)/move/$', pages.move_choose_destination, name='wagtailadmin_pages_move'),\n url(r'^pages/(\\d+)/move/(\\d+)/$', pages.move_choose_destination, name='wagtailadmin_pages_move_choose_destination'),\n url(r'^pages/(\\d+)/move/(\\d+)/confirm/$', pages.move_confirm, name='wagtailadmin_pages_move_confirm'),\n url(r'^pages/(\\d+)/set_position/$', pages.set_page_position, name='wagtailadmin_pages_set_page_position'),\n\n url(r'^pages/moderation/(\\d+)/approve/$', pages.approve_moderation, name='wagtailadmin_pages_approve_moderation'),\n url(r'^pages/moderation/(\\d+)/reject/$', pages.reject_moderation, name='wagtailadmin_pages_reject_moderation'),\n url(r'^pages/moderation/(\\d+)/preview/$', pages.preview_for_moderation, name='wagtailadmin_pages_preview_for_moderation'),\n\n url(r'^choose-page/$', chooser.browse, name='wagtailadmin_choose_page'),\n url(r'^choose-page/(\\d+)/$', chooser.browse, name='wagtailadmin_choose_page_child'),\n url(r'^choose-external-link/$', chooser.external_link, name='wagtailadmin_choose_page_external_link'),\n url(r'^choose-email-link/$', chooser.email_link, name='wagtailadmin_choose_page_email_link'),\n\n url(r'^tag-autocomplete/$', tags.autocomplete, name='wagtailadmin_tag_autocomplete'),\n\n url(r'^account/$', account.account, name='wagtailadmin_account'),\n url(r'^account/change_password/$', account.change_password, name='wagtailadmin_account_change_password'),\n url(r'^logout/$', account.logout, name='wagtailadmin_logout'),\n\n url(r'^userbar/(\\d+)/$', userbar.for_frontend, name='wagtailadmin_userbar_frontend'),\n url(r'^userbar/moderation/(\\d+)/$', userbar.for_moderation, name='wagtailadmin_userbar_moderation'),\n]\n\n\n# Import additional urlpatterns from any apps that define a register_admin_urls hook\nfor fn in hooks.get_hooks('register_admin_urls'):\n urls = fn()\n if urls:\n urlpatterns += urls\n", "path": "wagtail/wagtailadmin/urls.py"}]} | 3,766 | 824 |
gh_patches_debug_23646 | rasdani/github-patches | git_diff | nipy__nipype-2066 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ENH: ICA_AROMA: make current working directory default output directory
Changes proposed in this pull request
- Make the default output directory the current working directory so you don't have to specify an out_dir, which I believe makes ICA_AROMA more Node friendly.
</issue>
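As a rough illustration of the proposal, the interface can fall back to the current working directory whenever `out_dir` is left unset; the sketch below mirrors the `_list_outputs`/`_gen_filename` pattern in the listing that follows and uses nipype's `isdefined` helper — it is a sketch of the idea, not a definitive implementation.

```python
# Sketch only: default the output directory to os.getcwd() when inputs.out_dir
# was never set (isdefined is nipype's "was this trait assigned?" check).
import os
from nipype.interfaces.base import isdefined


class ICA_AROMASketch(object):
    """Illustrative stand-in for the real interface class below."""

    def _list_outputs(self):
        outputs = self.output_spec().get()
        if isdefined(self.inputs.out_dir):
            outputs['out_dir'] = os.path.abspath(self.inputs.out_dir)
        else:
            # No out_dir given: use the current working directory.
            outputs['out_dir'] = os.getcwd()
        return outputs
```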
<code>
[start of nipype/interfaces/fsl/ICA_AROMA.py]
1 # -*- coding: utf-8 -*-
2 # emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
3 # vi: set ft=python sts=4 ts=4 sw=4 et:
4 """This commandline module provides classes for interfacing with the
5 `ICA-AROMA.py<https://github.com/rhr-pruim/ICA-AROMA>`_ command line tool.
6 Change directory to provide relative paths for doctests
7 >>> import os
8 >>> filepath = os.path.dirname(os.path.realpath(__file__))
9 >>> datadir = os.path.realpath(os.path.join(filepath,
10 ... '../../testing/data'))
11 >>> os.chdir(datadir)
12 """
13
14 from __future__ import print_function, division, unicode_literals, absolute_import
15 from ..base import (TraitedSpec, CommandLineInputSpec, CommandLine,
16 File, Directory, traits)
17 import os
18
19
20 class ICA_AROMAInputSpec(CommandLineInputSpec):
21 feat_dir = Directory(exists=True, mandatory=True,
22 argstr='-feat %s',
23 xor=['in_file', 'mat_file', 'fnirt_warp_file', 'motion_parameters'],
24 desc='If a feat directory exists and temporal filtering '
25 'has not been run yet, ICA_AROMA can use the files in '
26 'this directory.')
27 in_file = File(exists=True, mandatory=True,
28 argstr='-i %s', xor=['feat_dir'],
29 desc='volume to be denoised')
30 out_dir = Directory('out', genfile=True,
31 argstr='-o %s',
32 desc='output directory')
33 mask = File(exists=True, argstr='-m %s', xor=['feat_dir'],
34 desc='path/name volume mask')
35 dim = traits.Int(argstr='-dim %d',
36 desc='Dimensionality reduction when running '
37 'MELODIC (defualt is automatic estimation)')
38 TR = traits.Float(argstr='-tr %.3f',
39 desc='TR in seconds. If this is not specified '
40 'the TR will be extracted from the '
41 'header of the fMRI nifti file.')
42 melodic_dir = Directory(exists=True, argstr='-meldir %s',
43 desc='path to MELODIC directory if MELODIC has already been run')
44 mat_file = File(exists=True, argstr='-affmat %s', xor=['feat_dir'],
45 desc='path/name of the mat-file describing the '
46 'affine registration (e.g. FSL FLIRT) of the '
47 'functional data to structural space (.mat file)')
48 fnirt_warp_file = File(exists=True, argstr='-warp %s', xor=['feat_dir'],
49 desc='File name of the warp-file describing '
50 'the non-linear registration (e.g. FSL FNIRT) '
51 'of the structural data to MNI152 space (.nii.gz)')
52 motion_parameters = File(exists=True, mandatory=True,
53 argstr='-mc %s', xor=['feat_dir'],
54 desc='motion parameters file')
55 denoise_type = traits.Enum('nonaggr', 'aggr', 'both', 'no', usedefault=True,
56 mandatory=True, argstr='-den %s',
57 desc='Type of denoising strategy:\n'
58 '-none: only classification, no denoising\n'
59 '-nonaggr (default): non-aggresssive denoising, i.e. partial component regression\n'
60 '-aggr: aggressive denoising, i.e. full component regression\n'
61 '-both: both aggressive and non-aggressive denoising (two outputs)')
62
63 class ICA_AROMAOutputSpec(TraitedSpec):
64 aggr_denoised_file = File(exists=True,
65 desc='if generated: aggressively denoised volume')
66 nonaggr_denoised_file = File(exists=True,
67 desc='if generated: non aggressively denoised volume' )
68 out_dir = Directory(exists=True,
69 desc='directory contains (in addition to the denoised files): '
70 'melodic.ica + classified_motion_components + '
71 'classification_overview + feature_scores + melodic_ic_mni)')
72
73 class ICA_AROMA(CommandLine):
74 """
75 Interface for the ICA_AROMA.py script.
76
77 ICA-AROMA (i.e. 'ICA-based Automatic Removal Of Motion Artifacts') concerns
78 a data-driven method to identify and remove motion-related independent
79 components from fMRI data. To that end it exploits a small, but robust
80 set of theoretically motivated features, preventing the need for classifier
81 re-training and therefore providing direct and easy applicability.
82
83 See link for further documentation: https://github.com/rhr-pruim/ICA-AROMA
84
85 Example
86 -------
87
88 >>> from nipype.interfaces.fsl import ICA_AROMA
89 >>> from nipype.testing import example_data
90 >>> AROMA_obj = ICA_AROMA.ICA_AROMA()
91 >>> AROMA_obj.inputs.in_file = 'functional.nii'
92 >>> AROMA_obj.inputs.mat_file = 'func_to_struct.mat'
93 >>> AROMA_obj.inputs.fnirt_warp_file = 'warpfield.nii'
94 >>> AROMA_obj.inputs.motion_parameters = 'fsl_mcflirt_movpar.txt'
95 >>> AROMA_obj.inputs.mask = 'mask.nii.gz'
96 >>> AROMA_obj.inputs.denoise_type = 'both'
97 >>> AROMA_obj.inputs.out_dir = 'ICA_testout'
98 >>> AROMA_obj.cmdline # doctest: +ALLOW_UNICODE
99 'ICA_AROMA.py -den both -warp warpfield.nii -i functional.nii -m mask.nii.gz -affmat func_to_struct.mat -mc fsl_mcflirt_movpar.txt -o ICA_testout'
100 """
101 _cmd = 'ICA_AROMA.py'
102 input_spec = ICA_AROMAInputSpec
103 output_spec = ICA_AROMAOutputSpec
104
105 def _list_outputs(self):
106 outputs = self.output_spec().get()
107 if isdefined(self.inputs.out_dir):
108 outputs['out_dir'] = os.path.abspath(self.inputs.out_dir)
109 else:
110 outputs['out_dir'] = self._gen_filename('out_dir')
111 out_dir = outputs['out_dir']
112
113 if self.inputs.denoise_type in ('aggr', 'both'):
114 outputs['aggr_denoised_file'] = os.path.join(out_dir, 'denoised_func_data_aggr.nii.gz')
115 if self.inputs.denoise_type in ('nonaggr', 'both'):
116 outputs['nonaggr_denoised_file'] = os.path.join(out_dir, 'denoised_func_data_nonaggr.nii.gz')
117 return outputs
118
119 def _gen_filename(self, name):
120 if name == 'out_dir':
121 return os.getcwd()
122
123
124
[end of nipype/interfaces/fsl/ICA_AROMA.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nipype/interfaces/fsl/ICA_AROMA.py b/nipype/interfaces/fsl/ICA_AROMA.py
--- a/nipype/interfaces/fsl/ICA_AROMA.py
+++ b/nipype/interfaces/fsl/ICA_AROMA.py
@@ -13,7 +13,7 @@
from __future__ import print_function, division, unicode_literals, absolute_import
from ..base import (TraitedSpec, CommandLineInputSpec, CommandLine,
- File, Directory, traits)
+ File, Directory, traits, isdefined)
import os
@@ -109,7 +109,7 @@
else:
outputs['out_dir'] = self._gen_filename('out_dir')
out_dir = outputs['out_dir']
-
+
if self.inputs.denoise_type in ('aggr', 'both'):
outputs['aggr_denoised_file'] = os.path.join(out_dir, 'denoised_func_data_aggr.nii.gz')
if self.inputs.denoise_type in ('nonaggr', 'both'):
@@ -119,5 +119,3 @@
def _gen_filename(self, name):
if name == 'out_dir':
return os.getcwd()
-
-
| {"golden_diff": "diff --git a/nipype/interfaces/fsl/ICA_AROMA.py b/nipype/interfaces/fsl/ICA_AROMA.py\n--- a/nipype/interfaces/fsl/ICA_AROMA.py\n+++ b/nipype/interfaces/fsl/ICA_AROMA.py\n@@ -13,7 +13,7 @@\n \n from __future__ import print_function, division, unicode_literals, absolute_import\n from ..base import (TraitedSpec, CommandLineInputSpec, CommandLine,\n- File, Directory, traits)\n+ File, Directory, traits, isdefined)\n import os\n \n \n@@ -109,7 +109,7 @@\n else:\n outputs['out_dir'] = self._gen_filename('out_dir')\n out_dir = outputs['out_dir']\n- \n+\n if self.inputs.denoise_type in ('aggr', 'both'):\n outputs['aggr_denoised_file'] = os.path.join(out_dir, 'denoised_func_data_aggr.nii.gz')\n if self.inputs.denoise_type in ('nonaggr', 'both'):\n@@ -119,5 +119,3 @@\n def _gen_filename(self, name):\n if name == 'out_dir':\n return os.getcwd()\n-\n-\n", "issue": "ENH: ICA_AROMA: make current working directory default output directory\nChanges proposed in this pull request\r\n- Make the default output directory the current working directory so you don't have to specify an out_dir, which I believe makes ICA_AROMA more Node friendly.\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-\n# vi: set ft=python sts=4 ts=4 sw=4 et:\n\"\"\"This commandline module provides classes for interfacing with the\n`ICA-AROMA.py<https://github.com/rhr-pruim/ICA-AROMA>`_ command line tool.\n Change directory to provide relative paths for doctests\n >>> import os\n >>> filepath = os.path.dirname(os.path.realpath(__file__))\n >>> datadir = os.path.realpath(os.path.join(filepath,\n ... '../../testing/data'))\n >>> os.chdir(datadir)\n\"\"\"\n\nfrom __future__ import print_function, division, unicode_literals, absolute_import\nfrom ..base import (TraitedSpec, CommandLineInputSpec, CommandLine,\n File, Directory, traits)\nimport os\n\n\nclass ICA_AROMAInputSpec(CommandLineInputSpec):\n feat_dir = Directory(exists=True, mandatory=True,\n argstr='-feat %s',\n xor=['in_file', 'mat_file', 'fnirt_warp_file', 'motion_parameters'],\n desc='If a feat directory exists and temporal filtering '\n 'has not been run yet, ICA_AROMA can use the files in '\n 'this directory.')\n in_file = File(exists=True, mandatory=True,\n argstr='-i %s', xor=['feat_dir'],\n desc='volume to be denoised')\n out_dir = Directory('out', genfile=True,\n argstr='-o %s',\n desc='output directory')\n mask = File(exists=True, argstr='-m %s', xor=['feat_dir'],\n desc='path/name volume mask')\n dim = traits.Int(argstr='-dim %d',\n desc='Dimensionality reduction when running '\n 'MELODIC (defualt is automatic estimation)')\n TR = traits.Float(argstr='-tr %.3f',\n desc='TR in seconds. If this is not specified '\n 'the TR will be extracted from the '\n 'header of the fMRI nifti file.')\n melodic_dir = Directory(exists=True, argstr='-meldir %s',\n desc='path to MELODIC directory if MELODIC has already been run')\n mat_file = File(exists=True, argstr='-affmat %s', xor=['feat_dir'],\n desc='path/name of the mat-file describing the '\n 'affine registration (e.g. FSL FLIRT) of the '\n 'functional data to structural space (.mat file)')\n fnirt_warp_file = File(exists=True, argstr='-warp %s', xor=['feat_dir'],\n desc='File name of the warp-file describing '\n 'the non-linear registration (e.g. 
FSL FNIRT) '\n 'of the structural data to MNI152 space (.nii.gz)')\n motion_parameters = File(exists=True, mandatory=True,\n argstr='-mc %s', xor=['feat_dir'],\n desc='motion parameters file')\n denoise_type = traits.Enum('nonaggr', 'aggr', 'both', 'no', usedefault=True,\n mandatory=True, argstr='-den %s',\n desc='Type of denoising strategy:\\n'\n '-none: only classification, no denoising\\n'\n '-nonaggr (default): non-aggresssive denoising, i.e. partial component regression\\n'\n '-aggr: aggressive denoising, i.e. full component regression\\n'\n '-both: both aggressive and non-aggressive denoising (two outputs)')\n\nclass ICA_AROMAOutputSpec(TraitedSpec):\n aggr_denoised_file = File(exists=True,\n desc='if generated: aggressively denoised volume')\n nonaggr_denoised_file = File(exists=True,\n desc='if generated: non aggressively denoised volume' )\n out_dir = Directory(exists=True,\n desc='directory contains (in addition to the denoised files): '\n 'melodic.ica + classified_motion_components + '\n 'classification_overview + feature_scores + melodic_ic_mni)')\n\nclass ICA_AROMA(CommandLine):\n \"\"\"\n Interface for the ICA_AROMA.py script.\n\n ICA-AROMA (i.e. 'ICA-based Automatic Removal Of Motion Artifacts') concerns\n a data-driven method to identify and remove motion-related independent\n components from fMRI data. To that end it exploits a small, but robust\n set of theoretically motivated features, preventing the need for classifier\n re-training and therefore providing direct and easy applicability.\n\n See link for further documentation: https://github.com/rhr-pruim/ICA-AROMA\n\n Example\n -------\n\n >>> from nipype.interfaces.fsl import ICA_AROMA\n >>> from nipype.testing import example_data\n >>> AROMA_obj = ICA_AROMA.ICA_AROMA()\n >>> AROMA_obj.inputs.in_file = 'functional.nii'\n >>> AROMA_obj.inputs.mat_file = 'func_to_struct.mat'\n >>> AROMA_obj.inputs.fnirt_warp_file = 'warpfield.nii'\n >>> AROMA_obj.inputs.motion_parameters = 'fsl_mcflirt_movpar.txt'\n >>> AROMA_obj.inputs.mask = 'mask.nii.gz'\n >>> AROMA_obj.inputs.denoise_type = 'both'\n >>> AROMA_obj.inputs.out_dir = 'ICA_testout'\n >>> AROMA_obj.cmdline # doctest: +ALLOW_UNICODE\n 'ICA_AROMA.py -den both -warp warpfield.nii -i functional.nii -m mask.nii.gz -affmat func_to_struct.mat -mc fsl_mcflirt_movpar.txt -o ICA_testout'\n \"\"\"\n _cmd = 'ICA_AROMA.py'\n input_spec = ICA_AROMAInputSpec\n output_spec = ICA_AROMAOutputSpec\n\n def _list_outputs(self):\n outputs = self.output_spec().get()\n if isdefined(self.inputs.out_dir):\n outputs['out_dir'] = os.path.abspath(self.inputs.out_dir)\n else:\n outputs['out_dir'] = self._gen_filename('out_dir')\n out_dir = outputs['out_dir']\n \n if self.inputs.denoise_type in ('aggr', 'both'):\n outputs['aggr_denoised_file'] = os.path.join(out_dir, 'denoised_func_data_aggr.nii.gz')\n if self.inputs.denoise_type in ('nonaggr', 'both'):\n outputs['nonaggr_denoised_file'] = os.path.join(out_dir, 'denoised_func_data_nonaggr.nii.gz')\n return outputs\n\n def _gen_filename(self, name):\n if name == 'out_dir':\n return os.getcwd()\n\n \n", "path": "nipype/interfaces/fsl/ICA_AROMA.py"}]} | 2,346 | 270 |
gh_patches_debug_18556 | rasdani/github-patches | git_diff | beetbox__beets-2762 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ImageMagick applies maxwidth to longest edge instead of to width
The [current invocation](https://github.com/beetbox/beets/blob/2120cf68c61649c22c14f20d83bd28d758720557/beets/util/artresizer.py#L96-L100) of ImageMagick applies the ```maxwidth``` parameter of ```fetchart``` to an image's longer edge instead of to its width. Possible solution suggested by Adrian [on the forum](https://discourse.beets.io/t/fetchart-taming-cover-art-resolution/206/12?u=dorade).
</issue>
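For context, ImageMagick's geometry string `WIDTHx>` constrains only the width (the `>` flag means "only shrink if the image is larger"), letting the height follow the aspect ratio — which is the behaviour the issue asks for. A hedged sketch of such an invocation, assuming `convert` is available on the PATH:

```python
# Sketch only: shrink an image to at most `maxwidth` pixels wide,
# preserving the aspect ratio (height is derived automatically).
import subprocess


def im_resize_sketch(maxwidth, path_in, path_out):
    subprocess.check_call([
        'convert', path_in,
        '-resize', '{0}x>'.format(maxwidth),
        path_out,
    ])
```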
<code>
[start of beets/util/artresizer.py]
1 # -*- coding: utf-8 -*-
2 # This file is part of beets.
3 # Copyright 2016, Fabrice Laporte
4 #
5 # Permission is hereby granted, free of charge, to any person obtaining
6 # a copy of this software and associated documentation files (the
7 # "Software"), to deal in the Software without restriction, including
8 # without limitation the rights to use, copy, modify, merge, publish,
9 # distribute, sublicense, and/or sell copies of the Software, and to
10 # permit persons to whom the Software is furnished to do so, subject to
11 # the following conditions:
12 #
13 # The above copyright notice and this permission notice shall be
14 # included in all copies or substantial portions of the Software.
15
16 """Abstraction layer to resize images using PIL, ImageMagick, or a
17 public resizing proxy if neither is available.
18 """
19 from __future__ import division, absolute_import, print_function
20
21 import subprocess
22 import os
23 import re
24 from tempfile import NamedTemporaryFile
25 from six.moves.urllib.parse import urlencode
26 from beets import logging
27 from beets import util
28 import six
29
30 # Resizing methods
31 PIL = 1
32 IMAGEMAGICK = 2
33 WEBPROXY = 3
34
35 if util.SNI_SUPPORTED:
36 PROXY_URL = 'https://images.weserv.nl/'
37 else:
38 PROXY_URL = 'http://images.weserv.nl/'
39
40 log = logging.getLogger('beets')
41
42
43 def resize_url(url, maxwidth):
44 """Return a proxied image URL that resizes the original image to
45 maxwidth (preserving aspect ratio).
46 """
47 return '{0}?{1}'.format(PROXY_URL, urlencode({
48 'url': url.replace('http://', ''),
49 'w': maxwidth,
50 }))
51
52
53 def temp_file_for(path):
54 """Return an unused filename with the same extension as the
55 specified path.
56 """
57 ext = os.path.splitext(path)[1]
58 with NamedTemporaryFile(suffix=util.py3_path(ext), delete=False) as f:
59 return util.bytestring_path(f.name)
60
61
62 def pil_resize(maxwidth, path_in, path_out=None):
63 """Resize using Python Imaging Library (PIL). Return the output path
64 of resized image.
65 """
66 path_out = path_out or temp_file_for(path_in)
67 from PIL import Image
68 log.debug(u'artresizer: PIL resizing {0} to {1}',
69 util.displayable_path(path_in), util.displayable_path(path_out))
70
71 try:
72 im = Image.open(util.syspath(path_in))
73 size = maxwidth, maxwidth
74 im.thumbnail(size, Image.ANTIALIAS)
75 im.save(path_out)
76 return path_out
77 except IOError:
78 log.error(u"PIL cannot create thumbnail for '{0}'",
79 util.displayable_path(path_in))
80 return path_in
81
82
83 def im_resize(maxwidth, path_in, path_out=None):
84 """Resize using ImageMagick's ``convert`` tool.
85 Return the output path of resized image.
86 """
87 path_out = path_out or temp_file_for(path_in)
88 log.debug(u'artresizer: ImageMagick resizing {0} to {1}',
89 util.displayable_path(path_in), util.displayable_path(path_out))
90
91 # "-resize widthxheight>" shrinks images with dimension(s) larger
92 # than the corresponding width and/or height dimension(s). The >
93 # "only shrink" flag is prefixed by ^ escape char for Windows
94 # compatibility.
95 try:
96 util.command_output([
97 'convert', util.syspath(path_in, prefix=False),
98 '-resize', '{0}x^>'.format(maxwidth),
99 util.syspath(path_out, prefix=False),
100 ])
101 except subprocess.CalledProcessError:
102 log.warning(u'artresizer: IM convert failed for {0}',
103 util.displayable_path(path_in))
104 return path_in
105 return path_out
106
107
108 BACKEND_FUNCS = {
109 PIL: pil_resize,
110 IMAGEMAGICK: im_resize,
111 }
112
113
114 def pil_getsize(path_in):
115 from PIL import Image
116 try:
117 im = Image.open(util.syspath(path_in))
118 return im.size
119 except IOError as exc:
120 log.error(u"PIL could not read file {}: {}",
121 util.displayable_path(path_in), exc)
122
123
124 def im_getsize(path_in):
125 cmd = ['identify', '-format', '%w %h',
126 util.syspath(path_in, prefix=False)]
127 try:
128 out = util.command_output(cmd)
129 except subprocess.CalledProcessError as exc:
130 log.warning(u'ImageMagick size query failed')
131 log.debug(
132 u'`convert` exited with (status {}) when '
133 u'getting size with command {}:\n{}',
134 exc.returncode, cmd, exc.output.strip()
135 )
136 return
137 try:
138 return tuple(map(int, out.split(b' ')))
139 except IndexError:
140 log.warning(u'Could not understand IM output: {0!r}', out)
141
142
143 BACKEND_GET_SIZE = {
144 PIL: pil_getsize,
145 IMAGEMAGICK: im_getsize,
146 }
147
148
149 class Shareable(type):
150 """A pseudo-singleton metaclass that allows both shared and
151 non-shared instances. The ``MyClass.shared`` property holds a
152 lazily-created shared instance of ``MyClass`` while calling
153 ``MyClass()`` to construct a new object works as usual.
154 """
155 def __init__(self, name, bases, dict):
156 super(Shareable, self).__init__(name, bases, dict)
157 self._instance = None
158
159 @property
160 def shared(self):
161 if self._instance is None:
162 self._instance = self()
163 return self._instance
164
165
166 class ArtResizer(six.with_metaclass(Shareable, object)):
167 """A singleton class that performs image resizes.
168 """
169
170 def __init__(self):
171 """Create a resizer object with an inferred method.
172 """
173 self.method = self._check_method()
174 log.debug(u"artresizer: method is {0}", self.method)
175 self.can_compare = self._can_compare()
176
177 def resize(self, maxwidth, path_in, path_out=None):
178 """Manipulate an image file according to the method, returning a
179 new path. For PIL or IMAGEMAGIC methods, resizes the image to a
180 temporary file. For WEBPROXY, returns `path_in` unmodified.
181 """
182 if self.local:
183 func = BACKEND_FUNCS[self.method[0]]
184 return func(maxwidth, path_in, path_out)
185 else:
186 return path_in
187
188 def proxy_url(self, maxwidth, url):
189 """Modifies an image URL according the method, returning a new
190 URL. For WEBPROXY, a URL on the proxy server is returned.
191 Otherwise, the URL is returned unmodified.
192 """
193 if self.local:
194 return url
195 else:
196 return resize_url(url, maxwidth)
197
198 @property
199 def local(self):
200 """A boolean indicating whether the resizing method is performed
201 locally (i.e., PIL or ImageMagick).
202 """
203 return self.method[0] in BACKEND_FUNCS
204
205 def get_size(self, path_in):
206 """Return the size of an image file as an int couple (width, height)
207 in pixels.
208
209 Only available locally
210 """
211 if self.local:
212 func = BACKEND_GET_SIZE[self.method[0]]
213 return func(path_in)
214
215 def _can_compare(self):
216 """A boolean indicating whether image comparison is available"""
217
218 return self.method[0] == IMAGEMAGICK and self.method[1] > (6, 8, 7)
219
220 @staticmethod
221 def _check_method():
222 """Return a tuple indicating an available method and its version."""
223 version = get_im_version()
224 if version:
225 return IMAGEMAGICK, version
226
227 version = get_pil_version()
228 if version:
229 return PIL, version
230
231 return WEBPROXY, (0)
232
233
234 def get_im_version():
235 """Return Image Magick version or None if it is unavailable
236 Try invoking ImageMagick's "convert".
237 """
238 try:
239 out = util.command_output(['convert', '--version'])
240
241 if b'imagemagick' in out.lower():
242 pattern = br".+ (\d+)\.(\d+)\.(\d+).*"
243 match = re.search(pattern, out)
244 if match:
245 return (int(match.group(1)),
246 int(match.group(2)),
247 int(match.group(3)))
248 return (0,)
249
250 except (subprocess.CalledProcessError, OSError) as exc:
251 log.debug(u'ImageMagick check `convert --version` failed: {}', exc)
252 return None
253
254
255 def get_pil_version():
256 """Return Image Magick version or None if it is unavailable
257 Try importing PIL."""
258 try:
259 __import__('PIL', fromlist=[str('Image')])
260 return (0,)
261 except ImportError:
262 return None
263
[end of beets/util/artresizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/beets/util/artresizer.py b/beets/util/artresizer.py
--- a/beets/util/artresizer.py
+++ b/beets/util/artresizer.py
@@ -88,14 +88,13 @@
log.debug(u'artresizer: ImageMagick resizing {0} to {1}',
util.displayable_path(path_in), util.displayable_path(path_out))
- # "-resize widthxheight>" shrinks images with dimension(s) larger
- # than the corresponding width and/or height dimension(s). The >
- # "only shrink" flag is prefixed by ^ escape char for Windows
- # compatibility.
+ # "-resize WIDTHx>" shrinks images with the width larger
+ # than the given width while maintaining the aspect ratio
+ # with regards to the height.
try:
util.command_output([
'convert', util.syspath(path_in, prefix=False),
- '-resize', '{0}x^>'.format(maxwidth),
+ '-resize', '{0}x>'.format(maxwidth),
util.syspath(path_out, prefix=False),
])
except subprocess.CalledProcessError:
| {"golden_diff": "diff --git a/beets/util/artresizer.py b/beets/util/artresizer.py\n--- a/beets/util/artresizer.py\n+++ b/beets/util/artresizer.py\n@@ -88,14 +88,13 @@\n log.debug(u'artresizer: ImageMagick resizing {0} to {1}',\n util.displayable_path(path_in), util.displayable_path(path_out))\n \n- # \"-resize widthxheight>\" shrinks images with dimension(s) larger\n- # than the corresponding width and/or height dimension(s). The >\n- # \"only shrink\" flag is prefixed by ^ escape char for Windows\n- # compatibility.\n+ # \"-resize WIDTHx>\" shrinks images with the width larger\n+ # than the given width while maintaining the aspect ratio\n+ # with regards to the height.\n try:\n util.command_output([\n 'convert', util.syspath(path_in, prefix=False),\n- '-resize', '{0}x^>'.format(maxwidth),\n+ '-resize', '{0}x>'.format(maxwidth),\n util.syspath(path_out, prefix=False),\n ])\n except subprocess.CalledProcessError:\n", "issue": "ImageMagick applies maxwidth to longest edge instead of to width\nThe [current invocation](https://github.com/beetbox/beets/blob/2120cf68c61649c22c14f20d83bd28d758720557/beets/util/artresizer.py#L96-L100) of ImageMagick applies the ```maxwidth``` parameter of ```fetchart``` to an image's longer edge instead of to its width. Possible solution suggested by Adrian [on the forum](https://discourse.beets.io/t/fetchart-taming-cover-art-resolution/206/12?u=dorade).\r\n\r\n\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This file is part of beets.\n# Copyright 2016, Fabrice Laporte\n#\n# Permission is hereby granted, free of charge, to any person obtaining\n# a copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish,\n# distribute, sublicense, and/or sell copies of the Software, and to\n# permit persons to whom the Software is furnished to do so, subject to\n# the following conditions:\n#\n# The above copyright notice and this permission notice shall be\n# included in all copies or substantial portions of the Software.\n\n\"\"\"Abstraction layer to resize images using PIL, ImageMagick, or a\npublic resizing proxy if neither is available.\n\"\"\"\nfrom __future__ import division, absolute_import, print_function\n\nimport subprocess\nimport os\nimport re\nfrom tempfile import NamedTemporaryFile\nfrom six.moves.urllib.parse import urlencode\nfrom beets import logging\nfrom beets import util\nimport six\n\n# Resizing methods\nPIL = 1\nIMAGEMAGICK = 2\nWEBPROXY = 3\n\nif util.SNI_SUPPORTED:\n PROXY_URL = 'https://images.weserv.nl/'\nelse:\n PROXY_URL = 'http://images.weserv.nl/'\n\nlog = logging.getLogger('beets')\n\n\ndef resize_url(url, maxwidth):\n \"\"\"Return a proxied image URL that resizes the original image to\n maxwidth (preserving aspect ratio).\n \"\"\"\n return '{0}?{1}'.format(PROXY_URL, urlencode({\n 'url': url.replace('http://', ''),\n 'w': maxwidth,\n }))\n\n\ndef temp_file_for(path):\n \"\"\"Return an unused filename with the same extension as the\n specified path.\n \"\"\"\n ext = os.path.splitext(path)[1]\n with NamedTemporaryFile(suffix=util.py3_path(ext), delete=False) as f:\n return util.bytestring_path(f.name)\n\n\ndef pil_resize(maxwidth, path_in, path_out=None):\n \"\"\"Resize using Python Imaging Library (PIL). 
Return the output path\n of resized image.\n \"\"\"\n path_out = path_out or temp_file_for(path_in)\n from PIL import Image\n log.debug(u'artresizer: PIL resizing {0} to {1}',\n util.displayable_path(path_in), util.displayable_path(path_out))\n\n try:\n im = Image.open(util.syspath(path_in))\n size = maxwidth, maxwidth\n im.thumbnail(size, Image.ANTIALIAS)\n im.save(path_out)\n return path_out\n except IOError:\n log.error(u\"PIL cannot create thumbnail for '{0}'\",\n util.displayable_path(path_in))\n return path_in\n\n\ndef im_resize(maxwidth, path_in, path_out=None):\n \"\"\"Resize using ImageMagick's ``convert`` tool.\n Return the output path of resized image.\n \"\"\"\n path_out = path_out or temp_file_for(path_in)\n log.debug(u'artresizer: ImageMagick resizing {0} to {1}',\n util.displayable_path(path_in), util.displayable_path(path_out))\n\n # \"-resize widthxheight>\" shrinks images with dimension(s) larger\n # than the corresponding width and/or height dimension(s). The >\n # \"only shrink\" flag is prefixed by ^ escape char for Windows\n # compatibility.\n try:\n util.command_output([\n 'convert', util.syspath(path_in, prefix=False),\n '-resize', '{0}x^>'.format(maxwidth),\n util.syspath(path_out, prefix=False),\n ])\n except subprocess.CalledProcessError:\n log.warning(u'artresizer: IM convert failed for {0}',\n util.displayable_path(path_in))\n return path_in\n return path_out\n\n\nBACKEND_FUNCS = {\n PIL: pil_resize,\n IMAGEMAGICK: im_resize,\n}\n\n\ndef pil_getsize(path_in):\n from PIL import Image\n try:\n im = Image.open(util.syspath(path_in))\n return im.size\n except IOError as exc:\n log.error(u\"PIL could not read file {}: {}\",\n util.displayable_path(path_in), exc)\n\n\ndef im_getsize(path_in):\n cmd = ['identify', '-format', '%w %h',\n util.syspath(path_in, prefix=False)]\n try:\n out = util.command_output(cmd)\n except subprocess.CalledProcessError as exc:\n log.warning(u'ImageMagick size query failed')\n log.debug(\n u'`convert` exited with (status {}) when '\n u'getting size with command {}:\\n{}',\n exc.returncode, cmd, exc.output.strip()\n )\n return\n try:\n return tuple(map(int, out.split(b' ')))\n except IndexError:\n log.warning(u'Could not understand IM output: {0!r}', out)\n\n\nBACKEND_GET_SIZE = {\n PIL: pil_getsize,\n IMAGEMAGICK: im_getsize,\n}\n\n\nclass Shareable(type):\n \"\"\"A pseudo-singleton metaclass that allows both shared and\n non-shared instances. The ``MyClass.shared`` property holds a\n lazily-created shared instance of ``MyClass`` while calling\n ``MyClass()`` to construct a new object works as usual.\n \"\"\"\n def __init__(self, name, bases, dict):\n super(Shareable, self).__init__(name, bases, dict)\n self._instance = None\n\n @property\n def shared(self):\n if self._instance is None:\n self._instance = self()\n return self._instance\n\n\nclass ArtResizer(six.with_metaclass(Shareable, object)):\n \"\"\"A singleton class that performs image resizes.\n \"\"\"\n\n def __init__(self):\n \"\"\"Create a resizer object with an inferred method.\n \"\"\"\n self.method = self._check_method()\n log.debug(u\"artresizer: method is {0}\", self.method)\n self.can_compare = self._can_compare()\n\n def resize(self, maxwidth, path_in, path_out=None):\n \"\"\"Manipulate an image file according to the method, returning a\n new path. For PIL or IMAGEMAGIC methods, resizes the image to a\n temporary file. 
For WEBPROXY, returns `path_in` unmodified.\n \"\"\"\n if self.local:\n func = BACKEND_FUNCS[self.method[0]]\n return func(maxwidth, path_in, path_out)\n else:\n return path_in\n\n def proxy_url(self, maxwidth, url):\n \"\"\"Modifies an image URL according the method, returning a new\n URL. For WEBPROXY, a URL on the proxy server is returned.\n Otherwise, the URL is returned unmodified.\n \"\"\"\n if self.local:\n return url\n else:\n return resize_url(url, maxwidth)\n\n @property\n def local(self):\n \"\"\"A boolean indicating whether the resizing method is performed\n locally (i.e., PIL or ImageMagick).\n \"\"\"\n return self.method[0] in BACKEND_FUNCS\n\n def get_size(self, path_in):\n \"\"\"Return the size of an image file as an int couple (width, height)\n in pixels.\n\n Only available locally\n \"\"\"\n if self.local:\n func = BACKEND_GET_SIZE[self.method[0]]\n return func(path_in)\n\n def _can_compare(self):\n \"\"\"A boolean indicating whether image comparison is available\"\"\"\n\n return self.method[0] == IMAGEMAGICK and self.method[1] > (6, 8, 7)\n\n @staticmethod\n def _check_method():\n \"\"\"Return a tuple indicating an available method and its version.\"\"\"\n version = get_im_version()\n if version:\n return IMAGEMAGICK, version\n\n version = get_pil_version()\n if version:\n return PIL, version\n\n return WEBPROXY, (0)\n\n\ndef get_im_version():\n \"\"\"Return Image Magick version or None if it is unavailable\n Try invoking ImageMagick's \"convert\".\n \"\"\"\n try:\n out = util.command_output(['convert', '--version'])\n\n if b'imagemagick' in out.lower():\n pattern = br\".+ (\\d+)\\.(\\d+)\\.(\\d+).*\"\n match = re.search(pattern, out)\n if match:\n return (int(match.group(1)),\n int(match.group(2)),\n int(match.group(3)))\n return (0,)\n\n except (subprocess.CalledProcessError, OSError) as exc:\n log.debug(u'ImageMagick check `convert --version` failed: {}', exc)\n return None\n\n\ndef get_pil_version():\n \"\"\"Return Image Magick version or None if it is unavailable\n Try importing PIL.\"\"\"\n try:\n __import__('PIL', fromlist=[str('Image')])\n return (0,)\n except ImportError:\n return None\n", "path": "beets/util/artresizer.py"}]} | 3,333 | 253 |
gh_patches_debug_9737 | rasdani/github-patches | git_diff | unionai-oss__pandera-1190 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove incorrect(?) warning for `register_check_method()` in docs
#### Location of the documentation
https://pandera.readthedocs.io/en/latest/reference/generated/pandera.extensions.html
#### Documentation problem
It's documented for `register_check_method()` that
> **Warning**
> This is the legacy method for registering check methods. Use the `register_check()` decorator instead.
I can't see any reference to `register_check()` in the docs or the repo, so I assume this is an outdated warning and that `register_check_method()` is in fact the de facto function for registering custom checks.
Might I be missing something? Maybe this warning is supposed to refer to another function.
#### Suggested fix for documentation
Remove the warning, as it's seemingly not warranted (at least at this moment).
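
For reference, a minimal sketch of how `register_check_method()` is typically used (the check name `is_between` and its statistics are illustrative, not lifted from the pandera docs):

```python
import pandas as pd
import pandera as pa
from pandera import extensions

# Register a custom vectorized check; "statistics" are the keyword-only
# arguments that parametrize the check.
@extensions.register_check_method(statistics=["min_value", "max_value"])
def is_between(pandas_obj, *, min_value, max_value):
    return (min_value <= pandas_obj) & (pandas_obj <= max_value)

# The registered check becomes available on the Check namespace.
schema = pa.DataFrameSchema(
    {"col": pa.Column(int, pa.Check.is_between(min_value=1, max_value=10))}
)
print(schema.validate(pd.DataFrame({"col": [3, 7]})))
```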
</issue>
<code>
[start of pandera/api/extensions.py]
1 """Extensions module."""
2
3 import inspect
4 import warnings
5 from enum import Enum
6 from functools import partial, wraps
7 from inspect import signature
8 from typing import Callable, List, Optional, Tuple, Type, Union
9
10 import pandas as pd
11 import typing_inspect
12
13 from pandera.api.checks import Check
14 from pandera.api.hypotheses import Hypothesis
15 from pandera.strategies.base_strategies import STRATEGY_DISPATCHER
16
17
18 class BuiltinCheckRegistrationError(Exception):
19 """
20 Exception raised when registering a built-in check implementation but the
21 default check function implementation hasn't been registered with
22 :py:meth:`~flytekit.core.base.BaseCheck.register_builtin_check_fn`.
23 """
24
25
26 # pylint: disable=too-many-locals
27 def register_builtin_check(
28 fn=None,
29 strategy: Optional[Callable] = None,
30 _check_cls: Type = Check,
31 **outer_kwargs,
32 ):
33 """Register a check method to the Check namespace.
34
35 This is the primary way for extending the Check api to define additional
36 built-in checks.
37 """
38
39 if fn is None:
40 return partial(
41 register_builtin_check,
42 strategy=strategy,
43 _check_cls=_check_cls,
44 **outer_kwargs,
45 )
46
47 name = fn.__name__
48
49 # see if the check function is already registered
50 check_fn = _check_cls.CHECK_FUNCTION_REGISTRY.get(name)
51 fn_sig = signature(fn)
52
53 # register the check strategy for this particular check, identified
54 # by the check `name`, and the data type of the check function. This
55 # supports Union types. Also assume that the data type of the data
56 # object to validate is the first argument.
57 data_type = [*fn_sig.parameters.values()][0].annotation
58
59 if typing_inspect.get_origin(data_type) is Tuple:
60 data_type, *_ = typing_inspect.get_args(data_type)
61
62 if typing_inspect.get_origin(data_type) is Union:
63 data_types = typing_inspect.get_args(data_type)
64 else:
65 data_types = (data_type,)
66
67 if strategy is not None:
68 for dt in data_types:
69 STRATEGY_DISPATCHER[(name, dt)] = strategy
70
71 if check_fn is None: # pragma: no cover
72 raise BuiltinCheckRegistrationError(
73 f"Check '{name}' doesn't have a base check implementation. "
74 f"You need to create a stub method in the {_check_cls} class and "
75 "then register a base check function implementation with the "
76 f"{_check_cls}.register_builtin_check_fn method.\n"
77 "See the `pandera.api.base.builtin_checks` and "
78 "`pandera.backends.pandas.builtin_checks` modules as an example."
79 )
80
81 check_fn.register(fn) # type: ignore
82
83 return fn
84
85
86 def register_builtin_hypothesis(**kwargs):
87 """Register a new hypothesis."""
88 return partial(
89 register_builtin_check,
90 _check_cls=Hypothesis,
91 **kwargs,
92 )
93
94
95 # --------------------------------
96 # CUSTOM CHECK REGISTRATION METHOD
97 # --------------------------------
98 #
99 # The `register_check_method` decorator is the legacy method for registering
100 # custom checks and will slated for deprecation after merging the core
101 # internals overhaul.
102
103
104 class CheckType(Enum):
105 """Check types for registered check methods."""
106
107 VECTORIZED = 1 #: Check applied to a Series or DataFrame
108 ELEMENT_WISE = 2 #: Check applied to an element of a Series or DataFrame
109 GROUPBY = 3 #: Check applied to dictionary of Series or DataFrames.
110
111
112 def register_check_statistics(statistics_args):
113 """Decorator to set statistics based on Check method."""
114
115 def register_check_statistics_decorator(class_method):
116 @wraps(class_method)
117 def _wrapper(cls, *args, **kwargs):
118 args = list(args)
119 arg_names = inspect.getfullargspec(class_method).args[1:]
120 if not arg_names:
121 arg_names = statistics_args
122 args_dict = {**dict(zip(arg_names, args)), **kwargs}
123 check = class_method(cls, *args, **kwargs)
124 check.statistics = {
125 stat: args_dict.get(stat) for stat in statistics_args
126 }
127 check.statistics_args = statistics_args
128 return check
129
130 return _wrapper
131
132 return register_check_statistics_decorator
133
134
135 def register_check_method(
136 check_fn=None,
137 *,
138 statistics: Optional[List[str]] = None,
139 supported_types: Union[type, Tuple, List] = (pd.DataFrame, pd.Series),
140 check_type: Union[CheckType, str] = "vectorized",
141 strategy=None,
142 ):
143 """Registers a function as a :class:`~pandera.api.checks.Check` method.
144
145 See the :ref:`user guide<extensions>` for more details.
146
147 .. warning::
148
149 This is the legacy method for registering check methods. Use the
150 :py:func:`~pandera.api.extensions.register_check` decorator instead.
151
152 :param check_fn: check function to register. The function should take one
153 positional argument for the object to validate and additional
154 keyword-only arguments for the check statistics.
155 :param statistics: list of keyword-only arguments in the ``check_fn``,
156 which serve as the statistics needed to serialize/de-serialize the
157 check and generate data if a ``strategy`` function is provided.
158 :param supported_types: the pandas type(s) supported by the check function.
159 Valid values are ``pd.DataFrame``, ``pd.Series``, or a list/tuple of
160 ``(pa.DataFrame, pa.Series)`` if both types are supported.
161 :param check_type: the expected input of the check function. Valid values
162 are :class:`~pandera.extensions.CheckType` enums or
163 ``{"vectorized", "element_wise", "groupby"}``. The input signature of
164 ``check_fn`` is determined by this argument:
165
166 - if ``vectorized``, the first positional argument of ``check_fn``
167 should be one of the ``supported_types``.
168 - if ``element_wise``, the first positional argument of ``check_fn``
169 should be a single scalar element in the pandas Series or DataFrame.
170 - if ``groupby``, the first positional argument of ``check_fn`` should
171 be a dictionary mapping group names to subsets of the Series or
172 DataFrame.
173
174 :param strategy: data-generation strategy associated with the check
175 function.
176 :return: register check function wrapper.
177 """
178
179 # pylint: disable=import-outside-toplevel
180 from pandera.strategies.pandas_strategies import register_check_strategy
181
182 if statistics is None:
183 statistics = []
184
185 if isinstance(check_type, str):
186 check_type = CheckType[check_type.upper()]
187
188 msg = (
189 "{} is not a valid input type for check_fn. You must specify one of "
190 "pandas.DataFrame, pandas.Series, or a tuple of both."
191 )
192 if isinstance(supported_types, list):
193 supported_types = tuple(supported_types)
194 elif not isinstance(supported_types, tuple):
195 supported_types = (supported_types,)
196
197 for supported_type in supported_types: # type: ignore
198 if supported_type not in {pd.DataFrame, pd.Series}:
199 raise TypeError(msg.format(supported_type))
200
201 if check_type is CheckType.ELEMENT_WISE and set(supported_types) != {
202 pd.DataFrame,
203 pd.Series,
204 }: # type: ignore
205 raise ValueError(
206 "Element-wise checks should support DataFrame and Series "
207 "validation. Use the default setting for the 'supported_types' "
208 "argument."
209 )
210
211 if check_fn is None:
212 return partial(
213 register_check_method,
214 statistics=statistics,
215 supported_types=supported_types,
216 check_type=check_type,
217 strategy=strategy,
218 )
219 else:
220 sig = signature(check_fn)
221 for statistic in statistics:
222 if statistic not in sig.parameters:
223 raise TypeError(
224 f"statistic '{statistic}' is not part of "
225 f"{check_fn.__name__}'s signature."
226 )
227
228 def register_check_wrapper(check_fn: Callable):
229 """Register a function as a :class:`~pandera.api.checks.Check` method."""
230
231 if hasattr(Check, check_fn.__name__):
232 raise ValueError(
233 f"method with name '{check_fn.__name__}' already defined. "
234 "Check methods must have a unique method name."
235 )
236
237 @wraps(check_fn)
238 def check_fn_wrapper(validate_obj, **kwargs):
239 """Wrapper for check_fn to validate inputs."""
240 return check_fn(validate_obj, **kwargs)
241
242 def validate_check_kwargs(check_kwargs):
243 msg = (
244 f"'{check_fn.__name__} has check_type={check_type}. "
245 "Providing the following arguments will have no effect: "
246 "{}. Remove these arguments to avoid this warning."
247 )
248
249 no_effect_args = {
250 CheckType.ELEMENT_WISE: ["element_wise", "groupby", "groups"],
251 CheckType.VECTORIZED: ["element_wise", "groupby", "groups"],
252 CheckType.GROUPBY: ["element_wise"],
253 }[check_type]
254
255 if any(arg in check_kwargs for arg in no_effect_args):
256 warnings.warn(msg.format(no_effect_args))
257 for arg in no_effect_args:
258 check_kwargs.pop(arg, None)
259
260 if check_type is CheckType.ELEMENT_WISE:
261 check_kwargs["element_wise"] = True
262
263 return check_kwargs
264
265 @register_check_statistics(statistics)
266 def check_method(cls, *args, **kwargs):
267 """Wrapper function that serves as the Check method."""
268 stats, check_kwargs = {}, {}
269
270 if args:
271 stats = dict(zip(statistics, args))
272
273 for k, v in kwargs.items():
274 if k in statistics:
275 stats[k] = v
276 else:
277 check_kwargs[k] = v
278
279 return cls(
280 partial(check_fn_wrapper, **stats),
281 name=check_fn.__name__,
282 **validate_check_kwargs(check_kwargs),
283 )
284
285 if strategy is not None:
286 check_method = register_check_strategy(strategy)(check_method)
287
288 Check.REGISTERED_CUSTOM_CHECKS[check_fn.__name__] = partial(
289 check_method, Check
290 )
291
292 return register_check_wrapper(check_fn)
293
[end of pandera/api/extensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pandera/api/extensions.py b/pandera/api/extensions.py
--- a/pandera/api/extensions.py
+++ b/pandera/api/extensions.py
@@ -144,11 +144,6 @@
See the :ref:`user guide<extensions>` for more details.
- .. warning::
-
- This is the legacy method for registering check methods. Use the
- :py:func:`~pandera.api.extensions.register_check` decorator instead.
-
:param check_fn: check function to register. The function should take one
positional argument for the object to validate and additional
keyword-only arguments for the check statistics.
| {"golden_diff": "diff --git a/pandera/api/extensions.py b/pandera/api/extensions.py\n--- a/pandera/api/extensions.py\n+++ b/pandera/api/extensions.py\n@@ -144,11 +144,6 @@\n \n See the :ref:`user guide<extensions>` for more details.\n \n- .. warning::\n-\n- This is the legacy method for registering check methods. Use the\n- :py:func:`~pandera.api.extensions.register_check` decorator instead.\n-\n :param check_fn: check function to register. The function should take one\n positional argument for the object to validate and additional\n keyword-only arguments for the check statistics.\n", "issue": "Remove incorrect(?) warning for `register_check_method()` in docs\n#### Location of the documentation\r\n\r\nhttps://pandera.readthedocs.io/en/latest/reference/generated/pandera.extensions.html\r\n\r\n#### Documentation problem\r\n\r\nIt's documented for `register_check_method()` that\r\n\r\n> **Warning**\r\n> This is the legacy method for registering check methods. Use the `register_check()` decorator instead.\r\n\r\nI can't see any reference to `register_check()` in the docs or the repo, so I assume this is an out-dated warning and `register_check_method()` is infact the defacto function for this kind of stuff. \r\n\r\nMight I be missing something? Maybe this warning is supposed to refer to another function.\r\n\r\n#### Suggested fix for documentation\r\n\r\nRemove warning as it's seemingly not warranted (at least in this moment).\n", "before_files": [{"content": "\"\"\"Extensions module.\"\"\"\n\nimport inspect\nimport warnings\nfrom enum import Enum\nfrom functools import partial, wraps\nfrom inspect import signature\nfrom typing import Callable, List, Optional, Tuple, Type, Union\n\nimport pandas as pd\nimport typing_inspect\n\nfrom pandera.api.checks import Check\nfrom pandera.api.hypotheses import Hypothesis\nfrom pandera.strategies.base_strategies import STRATEGY_DISPATCHER\n\n\nclass BuiltinCheckRegistrationError(Exception):\n \"\"\"\n Exception raised when registering a built-in check implementation but the\n default check function implementation hasn't been registered with\n :py:meth:`~flytekit.core.base.BaseCheck.register_builtin_check_fn`.\n \"\"\"\n\n\n# pylint: disable=too-many-locals\ndef register_builtin_check(\n fn=None,\n strategy: Optional[Callable] = None,\n _check_cls: Type = Check,\n **outer_kwargs,\n):\n \"\"\"Register a check method to the Check namespace.\n\n This is the primary way for extending the Check api to define additional\n built-in checks.\n \"\"\"\n\n if fn is None:\n return partial(\n register_builtin_check,\n strategy=strategy,\n _check_cls=_check_cls,\n **outer_kwargs,\n )\n\n name = fn.__name__\n\n # see if the check function is already registered\n check_fn = _check_cls.CHECK_FUNCTION_REGISTRY.get(name)\n fn_sig = signature(fn)\n\n # register the check strategy for this particular check, identified\n # by the check `name`, and the data type of the check function. This\n # supports Union types. 
Also assume that the data type of the data\n # object to validate is the first argument.\n data_type = [*fn_sig.parameters.values()][0].annotation\n\n if typing_inspect.get_origin(data_type) is Tuple:\n data_type, *_ = typing_inspect.get_args(data_type)\n\n if typing_inspect.get_origin(data_type) is Union:\n data_types = typing_inspect.get_args(data_type)\n else:\n data_types = (data_type,)\n\n if strategy is not None:\n for dt in data_types:\n STRATEGY_DISPATCHER[(name, dt)] = strategy\n\n if check_fn is None: # pragma: no cover\n raise BuiltinCheckRegistrationError(\n f\"Check '{name}' doesn't have a base check implementation. \"\n f\"You need to create a stub method in the {_check_cls} class and \"\n \"then register a base check function implementation with the \"\n f\"{_check_cls}.register_builtin_check_fn method.\\n\"\n \"See the `pandera.api.base.builtin_checks` and \"\n \"`pandera.backends.pandas.builtin_checks` modules as an example.\"\n )\n\n check_fn.register(fn) # type: ignore\n\n return fn\n\n\ndef register_builtin_hypothesis(**kwargs):\n \"\"\"Register a new hypothesis.\"\"\"\n return partial(\n register_builtin_check,\n _check_cls=Hypothesis,\n **kwargs,\n )\n\n\n# --------------------------------\n# CUSTOM CHECK REGISTRATION METHOD\n# --------------------------------\n#\n# The `register_check_method` decorator is the legacy method for registering\n# custom checks and will slated for deprecation after merging the core\n# internals overhaul.\n\n\nclass CheckType(Enum):\n \"\"\"Check types for registered check methods.\"\"\"\n\n VECTORIZED = 1 #: Check applied to a Series or DataFrame\n ELEMENT_WISE = 2 #: Check applied to an element of a Series or DataFrame\n GROUPBY = 3 #: Check applied to dictionary of Series or DataFrames.\n\n\ndef register_check_statistics(statistics_args):\n \"\"\"Decorator to set statistics based on Check method.\"\"\"\n\n def register_check_statistics_decorator(class_method):\n @wraps(class_method)\n def _wrapper(cls, *args, **kwargs):\n args = list(args)\n arg_names = inspect.getfullargspec(class_method).args[1:]\n if not arg_names:\n arg_names = statistics_args\n args_dict = {**dict(zip(arg_names, args)), **kwargs}\n check = class_method(cls, *args, **kwargs)\n check.statistics = {\n stat: args_dict.get(stat) for stat in statistics_args\n }\n check.statistics_args = statistics_args\n return check\n\n return _wrapper\n\n return register_check_statistics_decorator\n\n\ndef register_check_method(\n check_fn=None,\n *,\n statistics: Optional[List[str]] = None,\n supported_types: Union[type, Tuple, List] = (pd.DataFrame, pd.Series),\n check_type: Union[CheckType, str] = \"vectorized\",\n strategy=None,\n):\n \"\"\"Registers a function as a :class:`~pandera.api.checks.Check` method.\n\n See the :ref:`user guide<extensions>` for more details.\n\n .. warning::\n\n This is the legacy method for registering check methods. Use the\n :py:func:`~pandera.api.extensions.register_check` decorator instead.\n\n :param check_fn: check function to register. 
The function should take one\n positional argument for the object to validate and additional\n keyword-only arguments for the check statistics.\n :param statistics: list of keyword-only arguments in the ``check_fn``,\n which serve as the statistics needed to serialize/de-serialize the\n check and generate data if a ``strategy`` function is provided.\n :param supported_types: the pandas type(s) supported by the check function.\n Valid values are ``pd.DataFrame``, ``pd.Series``, or a list/tuple of\n ``(pa.DataFrame, pa.Series)`` if both types are supported.\n :param check_type: the expected input of the check function. Valid values\n are :class:`~pandera.extensions.CheckType` enums or\n ``{\"vectorized\", \"element_wise\", \"groupby\"}``. The input signature of\n ``check_fn`` is determined by this argument:\n\n - if ``vectorized``, the first positional argument of ``check_fn``\n should be one of the ``supported_types``.\n - if ``element_wise``, the first positional argument of ``check_fn``\n should be a single scalar element in the pandas Series or DataFrame.\n - if ``groupby``, the first positional argument of ``check_fn`` should\n be a dictionary mapping group names to subsets of the Series or\n DataFrame.\n\n :param strategy: data-generation strategy associated with the check\n function.\n :return: register check function wrapper.\n \"\"\"\n\n # pylint: disable=import-outside-toplevel\n from pandera.strategies.pandas_strategies import register_check_strategy\n\n if statistics is None:\n statistics = []\n\n if isinstance(check_type, str):\n check_type = CheckType[check_type.upper()]\n\n msg = (\n \"{} is not a valid input type for check_fn. You must specify one of \"\n \"pandas.DataFrame, pandas.Series, or a tuple of both.\"\n )\n if isinstance(supported_types, list):\n supported_types = tuple(supported_types)\n elif not isinstance(supported_types, tuple):\n supported_types = (supported_types,)\n\n for supported_type in supported_types: # type: ignore\n if supported_type not in {pd.DataFrame, pd.Series}:\n raise TypeError(msg.format(supported_type))\n\n if check_type is CheckType.ELEMENT_WISE and set(supported_types) != {\n pd.DataFrame,\n pd.Series,\n }: # type: ignore\n raise ValueError(\n \"Element-wise checks should support DataFrame and Series \"\n \"validation. Use the default setting for the 'supported_types' \"\n \"argument.\"\n )\n\n if check_fn is None:\n return partial(\n register_check_method,\n statistics=statistics,\n supported_types=supported_types,\n check_type=check_type,\n strategy=strategy,\n )\n else:\n sig = signature(check_fn)\n for statistic in statistics:\n if statistic not in sig.parameters:\n raise TypeError(\n f\"statistic '{statistic}' is not part of \"\n f\"{check_fn.__name__}'s signature.\"\n )\n\n def register_check_wrapper(check_fn: Callable):\n \"\"\"Register a function as a :class:`~pandera.api.checks.Check` method.\"\"\"\n\n if hasattr(Check, check_fn.__name__):\n raise ValueError(\n f\"method with name '{check_fn.__name__}' already defined. \"\n \"Check methods must have a unique method name.\"\n )\n\n @wraps(check_fn)\n def check_fn_wrapper(validate_obj, **kwargs):\n \"\"\"Wrapper for check_fn to validate inputs.\"\"\"\n return check_fn(validate_obj, **kwargs)\n\n def validate_check_kwargs(check_kwargs):\n msg = (\n f\"'{check_fn.__name__} has check_type={check_type}. \"\n \"Providing the following arguments will have no effect: \"\n \"{}. 
Remove these arguments to avoid this warning.\"\n )\n\n no_effect_args = {\n CheckType.ELEMENT_WISE: [\"element_wise\", \"groupby\", \"groups\"],\n CheckType.VECTORIZED: [\"element_wise\", \"groupby\", \"groups\"],\n CheckType.GROUPBY: [\"element_wise\"],\n }[check_type]\n\n if any(arg in check_kwargs for arg in no_effect_args):\n warnings.warn(msg.format(no_effect_args))\n for arg in no_effect_args:\n check_kwargs.pop(arg, None)\n\n if check_type is CheckType.ELEMENT_WISE:\n check_kwargs[\"element_wise\"] = True\n\n return check_kwargs\n\n @register_check_statistics(statistics)\n def check_method(cls, *args, **kwargs):\n \"\"\"Wrapper function that serves as the Check method.\"\"\"\n stats, check_kwargs = {}, {}\n\n if args:\n stats = dict(zip(statistics, args))\n\n for k, v in kwargs.items():\n if k in statistics:\n stats[k] = v\n else:\n check_kwargs[k] = v\n\n return cls(\n partial(check_fn_wrapper, **stats),\n name=check_fn.__name__,\n **validate_check_kwargs(check_kwargs),\n )\n\n if strategy is not None:\n check_method = register_check_strategy(strategy)(check_method)\n\n Check.REGISTERED_CUSTOM_CHECKS[check_fn.__name__] = partial(\n check_method, Check\n )\n\n return register_check_wrapper(check_fn)\n", "path": "pandera/api/extensions.py"}]} | 3,691 | 145 |
gh_patches_debug_18288 | rasdani/github-patches | git_diff | deepset-ai__haystack-6735 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Weights and score normalization for DocumentJoiner with reciprocal rank fusion - 2.x
Complete details in #5551.
Implemented for 1.x by @robpasternak in #5704.
We should port this improvement to 2.x.
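
For reference, a rough sketch of what the 1.x change does and what a 2.x port would need to mirror: weighted reciprocal rank fusion with the fused scores normalized into `[0, 1]` (function and variable names here are illustrative):

```python
from collections import defaultdict

def weighted_rrf(document_lists, weights=None, k=61):
    # Equal weights by default; weights are assumed to be normalized to sum to 1.
    weights = weights or [1 / len(document_lists)] * len(document_lists)
    scores, docs = defaultdict(float), {}
    for documents, weight in zip(document_lists, weights):
        for rank, doc in enumerate(documents):
            # Weighted rank-based contribution per input list.
            scores[doc.id] += (weight * len(document_lists)) / (k + rank)
            docs[doc.id] = doc
    # len(document_lists) / k is the best achievable score (ranked first in every
    # list with non-zero weight), so dividing by it normalizes scores to [0, 1].
    for doc_id in scores:
        scores[doc_id] /= len(document_lists) / k
    for doc in docs.values():
        doc.score = scores[doc.id]
    return list(docs.values())
```

With this in place, `DocumentJoiner(join_mode="reciprocal_rank_fusion", weights=[0.7, 0.3])` would weight the first input list more heavily instead of silently ignoring `weights`.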
</issue>
<code>
[start of haystack/components/joiners/document_joiner.py]
1 import itertools
2 import logging
3 from collections import defaultdict
4 from math import inf
5 from typing import List, Optional
6 from haystack.core.component.types import Variadic
7
8 from haystack import component, Document
9
10
11 logger = logging.getLogger(__name__)
12
13
14 @component
15 class DocumentJoiner:
16 """
17 A component that joins input lists of Documents from multiple connections and outputs them as one list.
18
19 The component allows multiple join modes:
20 * concatenate: Combine Documents from multiple components. Discards duplicate Documents.
21 Documents get their scores from the last component in the pipeline that assigns scores.
22 This join mode doesn't influence Document scores.
23 * merge: Merge scores of duplicate Documents coming from multiple components.
24 Optionally, you can assign a weight to the scores and set the top_k limit for this join mode.
25 You can also use this join mode to rerank retrieved Documents.
26 * reciprocal_rank_fusion: Combine Documents into a single list based on their ranking received from multiple components.
27
28 Example usage in a hybrid retrieval pipeline:
29 ```python
30 document_store = InMemoryDocumentStore()
31 p = Pipeline()
32 p.add_component(instance=InMemoryBM25Retriever(document_store=document_store), name="bm25_retriever")
33 p.add_component(
34 instance=SentenceTransformersTextEmbedder(model="sentence-transformers/all-MiniLM-L6-v2"),
35 name="text_embedder",
36 )
37 p.add_component(instance=InMemoryEmbeddingRetriever(document_store=document_store), name="embedding_retriever")
38 p.add_component(instance=DocumentJoiner(), name="joiner")
39 p.connect("bm25_retriever", "joiner")
40 p.connect("embedding_retriever", "joiner")
41 p.connect("text_embedder", "embedding_retriever")
42 query = "What is the capital of France?"
43 p.run(data={"bm25_retriever": {"query": query},
44 "text_embedder": {"text": query}})
45 ```
46 """
47
48 def __init__(
49 self,
50 join_mode: str = "concatenate",
51 weights: Optional[List[float]] = None,
52 top_k: Optional[int] = None,
53 sort_by_score: bool = True,
54 ):
55 """
56 Initialize the DocumentJoiner.
57
58 :param join_mode: Specifies the join mode to use. Available modes: `concatenate` to combine Documents from multiple Retrievers, `merge` to aggregate the scores of
59 individual Documents, `reciprocal_rank_fusion` to apply rank-based scoring.
60 :param weights: A component-wise list (the length of the list must be equal to the number of input components) of weights for
61 adjusting Document scores when using the `merge` join_mode. By default, equal weight is given
62 to each Retriever score. This param is not compatible with the `concatenate` join_mode.
63 :param top_k: The maximum number of Documents to be returned as output. By default, returns all Documents.
64 :param sort_by_score: Whether the output list of Documents should be sorted by Document scores in descending order.
65 By default, the output is sorted.
66 Documents without score are handled as if their score was -infinity.
67 """
68 if join_mode not in ["concatenate", "merge", "reciprocal_rank_fusion"]:
69 raise ValueError(f"DocumentJoiner component does not support '{join_mode}' join_mode.")
70 self.join_mode = join_mode
71 self.weights = [float(i) / sum(weights) for i in weights] if weights else None
72 self.top_k = top_k
73 self.sort_by_score = sort_by_score
74
75 @component.output_types(documents=List[Document])
76 def run(self, documents: Variadic[List[Document]]):
77 """
78 Run the DocumentJoiner. This method joins the input lists of Documents into one output list based on the join_mode specified during initialization.
79
80 :param documents: An arbitrary number of lists of Documents to join.
81 """
82 output_documents = []
83 if self.join_mode == "concatenate":
84 output_documents = self._concatenate(documents)
85 elif self.join_mode == "merge":
86 output_documents = self._merge(documents)
87 elif self.join_mode == "reciprocal_rank_fusion":
88 output_documents = self._reciprocal_rank_fusion(documents)
89
90 if self.sort_by_score:
91 output_documents = sorted(
92 output_documents, key=lambda doc: doc.score if doc.score is not None else -inf, reverse=True
93 )
94 if any(doc.score is None for doc in output_documents):
95 logger.info(
96 "Some of the Documents DocumentJoiner got have score=None. It was configured to sort Documents by "
97 "score, so those with score=None were sorted as if they had a score of -infinity."
98 )
99
100 if self.top_k:
101 output_documents = output_documents[: self.top_k]
102 return {"documents": output_documents}
103
104 def _concatenate(self, document_lists):
105 """
106 Concatenate multiple lists of Documents and return only the Document with the highest score for duplicate Documents.
107 """
108 output = []
109 docs_per_id = defaultdict(list)
110 for doc in itertools.chain.from_iterable(document_lists):
111 docs_per_id[doc.id].append(doc)
112 for docs in docs_per_id.values():
113 doc_with_best_score = max(docs, key=lambda doc: doc.score if doc.score else -inf)
114 output.append(doc_with_best_score)
115 return output
116
117 def _merge(self, document_lists):
118 """
119 Merge multiple lists of Documents and calculate a weighted sum of the scores of duplicate Documents.
120 """
121 scores_map = defaultdict(int)
122 documents_map = {}
123 weights = self.weights if self.weights else [1 / len(document_lists)] * len(document_lists)
124
125 for documents, weight in zip(document_lists, weights):
126 for doc in documents:
127 scores_map[doc.id] += (doc.score if doc.score else 0) * weight
128 documents_map[doc.id] = doc
129
130 for doc in documents_map.values():
131 doc.score = scores_map[doc.id]
132
133 return documents_map.values()
134
135 def _reciprocal_rank_fusion(self, document_lists):
136 """
137 Merge multiple lists of Documents and assign scores based on reciprocal rank fusion.
138 The constant k is set to 61 (60 was suggested by the original paper,
139 plus 1 as python lists are 0-based and the paper used 1-based ranking).
140 """
141 k = 61
142
143 scores_map = defaultdict(int)
144 documents_map = {}
145 for documents in document_lists:
146 for rank, doc in enumerate(documents):
147 scores_map[doc.id] += 1 / (k + rank)
148 documents_map[doc.id] = doc
149
150 for doc in documents_map.values():
151 doc.score = scores_map[doc.id]
152
153 return documents_map.values()
154
[end of haystack/components/joiners/document_joiner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/haystack/components/joiners/document_joiner.py b/haystack/components/joiners/document_joiner.py
--- a/haystack/components/joiners/document_joiner.py
+++ b/haystack/components/joiners/document_joiner.py
@@ -142,11 +142,19 @@
scores_map = defaultdict(int)
documents_map = {}
- for documents in document_lists:
+ weights = self.weights if self.weights else [1 / len(document_lists)] * len(document_lists)
+
+ # Calculate weighted reciprocal rank fusion score
+ for documents, weight in zip(document_lists, weights):
for rank, doc in enumerate(documents):
- scores_map[doc.id] += 1 / (k + rank)
+ scores_map[doc.id] += (weight * len(document_lists)) / (k + rank)
documents_map[doc.id] = doc
+ # Normalize scores. Note: len(results) / k is the maximum possible score,
+ # achieved by being ranked first in all doc lists with non-zero weight.
+ for id in scores_map:
+ scores_map[id] /= len(document_lists) / k
+
for doc in documents_map.values():
doc.score = scores_map[doc.id]
| {"golden_diff": "diff --git a/haystack/components/joiners/document_joiner.py b/haystack/components/joiners/document_joiner.py\n--- a/haystack/components/joiners/document_joiner.py\n+++ b/haystack/components/joiners/document_joiner.py\n@@ -142,11 +142,19 @@\n \n scores_map = defaultdict(int)\n documents_map = {}\n- for documents in document_lists:\n+ weights = self.weights if self.weights else [1 / len(document_lists)] * len(document_lists)\n+\n+ # Calculate weighted reciprocal rank fusion score\n+ for documents, weight in zip(document_lists, weights):\n for rank, doc in enumerate(documents):\n- scores_map[doc.id] += 1 / (k + rank)\n+ scores_map[doc.id] += (weight * len(document_lists)) / (k + rank)\n documents_map[doc.id] = doc\n \n+ # Normalize scores. Note: len(results) / k is the maximum possible score,\n+ # achieved by being ranked first in all doc lists with non-zero weight.\n+ for id in scores_map:\n+ scores_map[id] /= len(document_lists) / k\n+\n for doc in documents_map.values():\n doc.score = scores_map[doc.id]\n", "issue": "Weights and score normalization for DocumentJoiner with reciprocal rank fusion - 2.x\nComplete details in #5551.\r\nImplemented for 1.x by @robpasternak in #5704.\r\n\r\nWe should port this improvement to 2.x.\n", "before_files": [{"content": "import itertools\nimport logging\nfrom collections import defaultdict\nfrom math import inf\nfrom typing import List, Optional\nfrom haystack.core.component.types import Variadic\n\nfrom haystack import component, Document\n\n\nlogger = logging.getLogger(__name__)\n\n\n@component\nclass DocumentJoiner:\n \"\"\"\n A component that joins input lists of Documents from multiple connections and outputs them as one list.\n\n The component allows multiple join modes:\n * concatenate: Combine Documents from multiple components. Discards duplicate Documents.\n Documents get their scores from the last component in the pipeline that assigns scores.\n This join mode doesn't influence Document scores.\n * merge: Merge scores of duplicate Documents coming from multiple components.\n Optionally, you can assign a weight to the scores and set the top_k limit for this join mode.\n You can also use this join mode to rerank retrieved Documents.\n * reciprocal_rank_fusion: Combine Documents into a single list based on their ranking received from multiple components.\n\n Example usage in a hybrid retrieval pipeline:\n ```python\n document_store = InMemoryDocumentStore()\n p = Pipeline()\n p.add_component(instance=InMemoryBM25Retriever(document_store=document_store), name=\"bm25_retriever\")\n p.add_component(\n instance=SentenceTransformersTextEmbedder(model=\"sentence-transformers/all-MiniLM-L6-v2\"),\n name=\"text_embedder\",\n )\n p.add_component(instance=InMemoryEmbeddingRetriever(document_store=document_store), name=\"embedding_retriever\")\n p.add_component(instance=DocumentJoiner(), name=\"joiner\")\n p.connect(\"bm25_retriever\", \"joiner\")\n p.connect(\"embedding_retriever\", \"joiner\")\n p.connect(\"text_embedder\", \"embedding_retriever\")\n query = \"What is the capital of France?\"\n p.run(data={\"bm25_retriever\": {\"query\": query},\n \"text_embedder\": {\"text\": query}})\n ```\n \"\"\"\n\n def __init__(\n self,\n join_mode: str = \"concatenate\",\n weights: Optional[List[float]] = None,\n top_k: Optional[int] = None,\n sort_by_score: bool = True,\n ):\n \"\"\"\n Initialize the DocumentJoiner.\n\n :param join_mode: Specifies the join mode to use. 
Available modes: `concatenate` to combine Documents from multiple Retrievers, `merge` to aggregate the scores of\n individual Documents, `reciprocal_rank_fusion` to apply rank-based scoring.\n :param weights: A component-wise list (the length of the list must be equal to the number of input components) of weights for\n adjusting Document scores when using the `merge` join_mode. By default, equal weight is given\n to each Retriever score. This param is not compatible with the `concatenate` join_mode.\n :param top_k: The maximum number of Documents to be returned as output. By default, returns all Documents.\n :param sort_by_score: Whether the output list of Documents should be sorted by Document scores in descending order.\n By default, the output is sorted.\n Documents without score are handled as if their score was -infinity.\n \"\"\"\n if join_mode not in [\"concatenate\", \"merge\", \"reciprocal_rank_fusion\"]:\n raise ValueError(f\"DocumentJoiner component does not support '{join_mode}' join_mode.\")\n self.join_mode = join_mode\n self.weights = [float(i) / sum(weights) for i in weights] if weights else None\n self.top_k = top_k\n self.sort_by_score = sort_by_score\n\n @component.output_types(documents=List[Document])\n def run(self, documents: Variadic[List[Document]]):\n \"\"\"\n Run the DocumentJoiner. This method joins the input lists of Documents into one output list based on the join_mode specified during initialization.\n\n :param documents: An arbitrary number of lists of Documents to join.\n \"\"\"\n output_documents = []\n if self.join_mode == \"concatenate\":\n output_documents = self._concatenate(documents)\n elif self.join_mode == \"merge\":\n output_documents = self._merge(documents)\n elif self.join_mode == \"reciprocal_rank_fusion\":\n output_documents = self._reciprocal_rank_fusion(documents)\n\n if self.sort_by_score:\n output_documents = sorted(\n output_documents, key=lambda doc: doc.score if doc.score is not None else -inf, reverse=True\n )\n if any(doc.score is None for doc in output_documents):\n logger.info(\n \"Some of the Documents DocumentJoiner got have score=None. 
It was configured to sort Documents by \"\n \"score, so those with score=None were sorted as if they had a score of -infinity.\"\n )\n\n if self.top_k:\n output_documents = output_documents[: self.top_k]\n return {\"documents\": output_documents}\n\n def _concatenate(self, document_lists):\n \"\"\"\n Concatenate multiple lists of Documents and return only the Document with the highest score for duplicate Documents.\n \"\"\"\n output = []\n docs_per_id = defaultdict(list)\n for doc in itertools.chain.from_iterable(document_lists):\n docs_per_id[doc.id].append(doc)\n for docs in docs_per_id.values():\n doc_with_best_score = max(docs, key=lambda doc: doc.score if doc.score else -inf)\n output.append(doc_with_best_score)\n return output\n\n def _merge(self, document_lists):\n \"\"\"\n Merge multiple lists of Documents and calculate a weighted sum of the scores of duplicate Documents.\n \"\"\"\n scores_map = defaultdict(int)\n documents_map = {}\n weights = self.weights if self.weights else [1 / len(document_lists)] * len(document_lists)\n\n for documents, weight in zip(document_lists, weights):\n for doc in documents:\n scores_map[doc.id] += (doc.score if doc.score else 0) * weight\n documents_map[doc.id] = doc\n\n for doc in documents_map.values():\n doc.score = scores_map[doc.id]\n\n return documents_map.values()\n\n def _reciprocal_rank_fusion(self, document_lists):\n \"\"\"\n Merge multiple lists of Documents and assign scores based on reciprocal rank fusion.\n The constant k is set to 61 (60 was suggested by the original paper,\n plus 1 as python lists are 0-based and the paper used 1-based ranking).\n \"\"\"\n k = 61\n\n scores_map = defaultdict(int)\n documents_map = {}\n for documents in document_lists:\n for rank, doc in enumerate(documents):\n scores_map[doc.id] += 1 / (k + rank)\n documents_map[doc.id] = doc\n\n for doc in documents_map.values():\n doc.score = scores_map[doc.id]\n\n return documents_map.values()\n", "path": "haystack/components/joiners/document_joiner.py"}]} | 2,412 | 275 |
gh_patches_debug_41991 | rasdani/github-patches | git_diff | litestar-org__litestar-1428 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StaticFilesConfig and virtual directories
I'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem.
This is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). I think this condition should be relaxed to support virtual filesystems.
https://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32
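
A minimal sketch of the failure mode (the package name, its `static` folder, and the exact import path are assumptions for illustration):

```python
from importlib_resources import files
from starlite.config import StaticFilesConfig  # import path assumed from the linked revision

# Package data shipped inside "mypackage", possibly installed as a zip.
static_root = files("mypackage") / "static"
print(static_root.is_dir())  # True for the virtual tree

# Each entry in `directories` is validated as pydantic's DirectoryPath, i.e. an
# existing directory on the local filesystem, so a zipped/virtual location like
# this is rejected at config construction time.
config = StaticFilesConfig(path="/static", directories=[static_root])
```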
</issue>
<code>
[start of starlite/handlers/base.py]
1 from __future__ import annotations
2
3 from copy import copy
4 from inspect import Signature
5 from typing import TYPE_CHECKING, Any, Generic, Mapping, Sequence, TypeVar, cast
6
7 from starlite._signature.field import SignatureField
8 from starlite.exceptions import ImproperlyConfiguredException
9 from starlite.types import Dependencies, Empty, EmptyType, ExceptionHandlersMap, Guard, Middleware, TypeEncodersMap
10 from starlite.utils import AsyncCallable, Ref, get_name, normalize_path
11 from starlite.utils.helpers import unwrap_partial
12
13 __all__ = ("BaseRouteHandler",)
14
15
16 if TYPE_CHECKING:
17 from typing_extensions import Self
18
19 from starlite._signature.models import SignatureModel
20 from starlite.connection import ASGIConnection
21 from starlite.controller import Controller
22 from starlite.di import Provide
23 from starlite.params import ParameterKwarg
24 from starlite.router import Router
25 from starlite.types import AnyCallable, AsyncAnyCallable, ExceptionHandler
26 from starlite.types.composite_types import MaybePartial
27
28 T = TypeVar("T", bound="BaseRouteHandler")
29
30
31 class BaseRouteHandler(Generic[T]):
32 """Base route handler.
33
34 Serves as a subclass for all route handlers
35 """
36
37 fn: Ref[MaybePartial[AnyCallable]]
38 signature: Signature
39
40 __slots__ = (
41 "_resolved_dependencies",
42 "_resolved_guards",
43 "_resolved_layered_parameters",
44 "_resolved_signature_namespace",
45 "_resolved_type_encoders",
46 "dependencies",
47 "exception_handlers",
48 "fn",
49 "guards",
50 "middleware",
51 "name",
52 "opt",
53 "owner",
54 "paths",
55 "signature",
56 "signature_model",
57 "signature_namespace",
58 "type_encoders",
59 )
60
61 def __init__(
62 self,
63 path: str | Sequence[str] | None = None,
64 *,
65 dependencies: Dependencies | None = None,
66 exception_handlers: ExceptionHandlersMap | None = None,
67 guards: Sequence[Guard] | None = None,
68 middleware: Sequence[Middleware] | None = None,
69 name: str | None = None,
70 opt: Mapping[str, Any] | None = None,
71 signature_namespace: Mapping[str, Any] | None = None,
72 type_encoders: TypeEncodersMap | None = None,
73 **kwargs: Any,
74 ) -> None:
75 """Initialize ``HTTPRouteHandler``.
76
77 Args:
78 path: A path fragment for the route handler function or a sequence of path fragments. If not given defaults
79 to ``/``
80 dependencies: A string keyed mapping of dependency :class:`Provider <.di.Provide>` instances.
81 exception_handlers: A mapping of status codes and/or exception types to handler functions.
82 guards: A sequence of :class:`Guard <.types.Guard>` callables.
83 middleware: A sequence of :class:`Middleware <.types.Middleware>`.
84 name: A string identifying the route handler.
85 opt: A string keyed mapping of arbitrary values that can be accessed in :class:`Guards <.types.Guard>` or
86 wherever you have access to :class:`Request <.connection.Request>` or
87 :class:`ASGI Scope <.types.Scope>`.
88 signature_namespace: A mapping of names to types for use in forward reference resolution during signature modelling.
89 type_encoders: A mapping of types to callables that transform them into types supported for serialization.
90 **kwargs: Any additional kwarg - will be set in the opt dictionary.
91 """
92 self._resolved_dependencies: dict[str, Provide] | EmptyType = Empty
93 self._resolved_guards: list[Guard] | EmptyType = Empty
94 self._resolved_layered_parameters: dict[str, SignatureField] | EmptyType = Empty
95 self._resolved_signature_namespace: dict[str, Any] | EmptyType = Empty
96 self._resolved_type_encoders: TypeEncodersMap | EmptyType = Empty
97
98 self.dependencies = dependencies
99 self.exception_handlers = exception_handlers
100 self.guards = guards
101 self.middleware = middleware
102 self.name = name
103 self.opt = dict(opt or {})
104 self.owner: Controller | Router | None = None
105 self.signature_model: type[SignatureModel] | None = None
106 self.signature_namespace = signature_namespace or {}
107 self.paths = (
108 {normalize_path(p) for p in path}
109 if path and isinstance(path, list)
110 else {normalize_path(path or "/")} # type: ignore
111 )
112 self.opt.update(**kwargs)
113 self.type_encoders = type_encoders
114
115 def __call__(self, fn: AsyncAnyCallable) -> Self:
116 """Replace a function with itself."""
117 self.fn = Ref["MaybePartial[AsyncAnyCallable]"](fn)
118 self.signature = Signature.from_callable(fn)
119 self._validate_handler_function()
120 return self
121
122 @property
123 def handler_name(self) -> str:
124 """Get the name of the handler function.
125
126 Raises:
127 ImproperlyConfiguredException: if handler fn is not set.
128
129 Returns:
130 Name of the handler function
131 """
132 fn = getattr(self, "fn", None)
133 if not fn:
134 raise ImproperlyConfiguredException("cannot access handler name before setting the handler function")
135 return get_name(unwrap_partial(self.fn.value))
136
137 @property
138 def dependency_name_set(self) -> set[str]:
139 """Set of all dependency names provided in the handler's ownership layers."""
140 layered_dependencies = (layer.dependencies or {} for layer in self.ownership_layers)
141 return {name for layer in layered_dependencies for name in layer}
142
143 @property
144 def ownership_layers(self) -> list[T | Controller | Router]:
145 """Return the handler layers from the app down to the route handler.
146
147 ``app -> ... -> route handler``
148 """
149 layers = []
150
151 cur: Any = self
152 while cur:
153 layers.append(cur)
154 cur = cur.owner
155
156 return list(reversed(layers))
157
158 def resolve_type_encoders(self) -> TypeEncodersMap:
159 """Return a merged type_encoders mapping.
160
161 This method is memoized so the computation occurs only once.
162
163 Returns:
164 A dict of type encoders
165 """
166 if self._resolved_type_encoders is Empty:
167 self._resolved_type_encoders = {}
168
169 for layer in self.ownership_layers:
170 if type_encoders := getattr(layer, "type_encoders", None):
171 self._resolved_type_encoders.update(type_encoders)
172 return cast("TypeEncodersMap", self._resolved_type_encoders)
173
174 def resolve_layered_parameters(self) -> dict[str, SignatureField]:
175 """Return all parameters declared above the handler."""
176 if self._resolved_layered_parameters is Empty:
177 parameter_kwargs: dict[str, ParameterKwarg] = {}
178
179 for layer in self.ownership_layers:
180 parameter_kwargs.update(getattr(layer, "parameters", {}) or {})
181
182 self._resolved_layered_parameters = {
183 key: SignatureField.create(
184 name=key, field_type=parameter.value_type, default_value=parameter.default, kwarg_model=parameter
185 )
186 for key, parameter in parameter_kwargs.items()
187 }
188
189 return cast("dict[str, SignatureField]", self._resolved_layered_parameters)
190
191 def resolve_guards(self) -> list[Guard]:
192 """Return all guards in the handlers scope, starting from highest to current layer."""
193 if self._resolved_guards is Empty:
194 self._resolved_guards = []
195
196 for layer in self.ownership_layers:
197 self._resolved_guards.extend(layer.guards or [])
198
199 self._resolved_guards = cast("list[Guard]", [AsyncCallable(guard) for guard in self._resolved_guards])
200
201 return self._resolved_guards # type:ignore
202
203 def resolve_dependencies(self) -> dict[str, Provide]:
204 """Return all dependencies correlating to handler function's kwargs that exist in the handler's scope."""
205 if self._resolved_dependencies is Empty:
206 self._resolved_dependencies = {}
207
208 for layer in self.ownership_layers:
209 for key, value in (layer.dependencies or {}).items():
210 self._validate_dependency_is_unique(
211 dependencies=self._resolved_dependencies, key=key, provider=value
212 )
213 self._resolved_dependencies[key] = value
214
215 return cast("dict[str, Provide]", self._resolved_dependencies)
216
217 def resolve_middleware(self) -> list[Middleware]:
218 """Build the middleware stack for the RouteHandler and return it.
219
220 The middlewares are added from top to bottom (``app -> router -> controller -> route handler``) and then
221 reversed.
222 """
223 resolved_middleware: list[Middleware] = []
224 for layer in self.ownership_layers:
225 resolved_middleware.extend(layer.middleware or [])
226 return list(reversed(resolved_middleware))
227
228 def resolve_exception_handlers(self) -> ExceptionHandlersMap:
229 """Resolve the exception_handlers by starting from the route handler and moving up.
230
231 This method is memoized so the computation occurs only once.
232 """
233 resolved_exception_handlers: dict[int | type[Exception], ExceptionHandler] = {}
234 for layer in self.ownership_layers:
235 resolved_exception_handlers.update(layer.exception_handlers or {})
236 return resolved_exception_handlers
237
238 def resolve_opts(self) -> None:
239 """Build the route handler opt dictionary by going from top to bottom.
240
241 When merging keys from multiple layers, if the same key is defined by multiple layers, the value from the
242 layer closest to the response handler will take precedence.
243 """
244
245 opt: dict[str, Any] = {}
246 for layer in self.ownership_layers:
247 opt.update(layer.opt or {})
248
249 self.opt = opt
250
251 def resolve_signature_namespace(self) -> dict[str, Any]:
252 """Build the route handler signature namespace dictionary by going from top to bottom.
253
254 When merging keys from multiple layers, if the same key is defined by multiple layers, the value from the
255 layer closest to the response handler will take precedence.
256 """
257 if self._resolved_layered_parameters is Empty:
258 ns: dict[str, Any] = {}
259 for layer in self.ownership_layers:
260 ns.update(layer.signature_namespace)
261
262 self._resolved_signature_namespace = ns
263 return cast("dict[str, Any]", self._resolved_signature_namespace)
264
265 async def authorize_connection(self, connection: "ASGIConnection") -> None:
266 """Ensure the connection is authorized by running all the route guards in scope."""
267 for guard in self.resolve_guards():
268 await guard(connection, copy(self)) # type: ignore
269
270 @staticmethod
271 def _validate_dependency_is_unique(dependencies: dict[str, Provide], key: str, provider: Provide) -> None:
272 """Validate that a given provider has not been already defined under a different key."""
273 for dependency_key, value in dependencies.items():
274 if provider == value:
275 raise ImproperlyConfiguredException(
276 f"Provider for key {key} is already defined under the different key {dependency_key}. "
277 f"If you wish to override a provider, it must have the same key."
278 )
279
280 def _validate_handler_function(self) -> None:
281 """Validate the route handler function once set by inspecting its return annotations."""
282 if not getattr(self, "fn", None):
283 raise ImproperlyConfiguredException("Cannot call _validate_handler_function without first setting self.fn")
284
285 def __str__(self) -> str:
286 """Return a unique identifier for the route handler.
287
288 Returns:
289 A string
290 """
291 target = unwrap_partial(self.fn.value)
292 if not hasattr(target, "__qualname__"):
293 target = type(target)
294 return f"{target.__module__}.{target.__qualname__}"
295
[end of starlite/handlers/base.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlite/handlers/base.py b/starlite/handlers/base.py
--- a/starlite/handlers/base.py
+++ b/starlite/handlers/base.py
@@ -22,7 +22,7 @@
from starlite.di import Provide
from starlite.params import ParameterKwarg
from starlite.router import Router
- from starlite.types import AnyCallable, AsyncAnyCallable, ExceptionHandler
+ from starlite.types import AsyncAnyCallable, ExceptionHandler
from starlite.types.composite_types import MaybePartial
T = TypeVar("T", bound="BaseRouteHandler")
@@ -34,10 +34,10 @@
Serves as a subclass for all route handlers
"""
- fn: Ref[MaybePartial[AnyCallable]]
signature: Signature
__slots__ = (
+ "_fn",
"_resolved_dependencies",
"_resolved_guards",
"_resolved_layered_parameters",
@@ -45,7 +45,6 @@
"_resolved_type_encoders",
"dependencies",
"exception_handlers",
- "fn",
"guards",
"middleware",
"name",
@@ -114,11 +113,25 @@
def __call__(self, fn: AsyncAnyCallable) -> Self:
"""Replace a function with itself."""
- self.fn = Ref["MaybePartial[AsyncAnyCallable]"](fn)
+ self._fn = Ref["MaybePartial[AsyncAnyCallable]"](fn)
self.signature = Signature.from_callable(fn)
self._validate_handler_function()
return self
+ @property
+ def fn(self) -> Ref[MaybePartial[AsyncAnyCallable]]:
+ """Get the handler function.
+
+ Raises:
+ ImproperlyConfiguredException: if handler fn is not set.
+
+ Returns:
+ Handler function
+ """
+ if not hasattr(self, "_fn"):
+ raise ImproperlyConfiguredException("Handler has not decorated a function")
+ return self._fn
+
@property
def handler_name(self) -> str:
"""Get the name of the handler function.
@@ -129,9 +142,6 @@
Returns:
Name of the handler function
"""
- fn = getattr(self, "fn", None)
- if not fn:
- raise ImproperlyConfiguredException("cannot access handler name before setting the handler function")
return get_name(unwrap_partial(self.fn.value))
@property
@@ -279,8 +289,6 @@
def _validate_handler_function(self) -> None:
"""Validate the route handler function once set by inspecting its return annotations."""
- if not getattr(self, "fn", None):
- raise ImproperlyConfiguredException("Cannot call _validate_handler_function without first setting self.fn")
def __str__(self) -> str:
"""Return a unique identifier for the route handler.
@@ -288,6 +296,7 @@
Returns:
A string
"""
+ target: type[AsyncAnyCallable] | AsyncAnyCallable
target = unwrap_partial(self.fn.value)
if not hasattr(target, "__qualname__"):
target = type(target)
| {"golden_diff": "diff --git a/starlite/handlers/base.py b/starlite/handlers/base.py\n--- a/starlite/handlers/base.py\n+++ b/starlite/handlers/base.py\n@@ -22,7 +22,7 @@\n from starlite.di import Provide\n from starlite.params import ParameterKwarg\n from starlite.router import Router\n- from starlite.types import AnyCallable, AsyncAnyCallable, ExceptionHandler\n+ from starlite.types import AsyncAnyCallable, ExceptionHandler\n from starlite.types.composite_types import MaybePartial\n \n T = TypeVar(\"T\", bound=\"BaseRouteHandler\")\n@@ -34,10 +34,10 @@\n Serves as a subclass for all route handlers\n \"\"\"\n \n- fn: Ref[MaybePartial[AnyCallable]]\n signature: Signature\n \n __slots__ = (\n+ \"_fn\",\n \"_resolved_dependencies\",\n \"_resolved_guards\",\n \"_resolved_layered_parameters\",\n@@ -45,7 +45,6 @@\n \"_resolved_type_encoders\",\n \"dependencies\",\n \"exception_handlers\",\n- \"fn\",\n \"guards\",\n \"middleware\",\n \"name\",\n@@ -114,11 +113,25 @@\n \n def __call__(self, fn: AsyncAnyCallable) -> Self:\n \"\"\"Replace a function with itself.\"\"\"\n- self.fn = Ref[\"MaybePartial[AsyncAnyCallable]\"](fn)\n+ self._fn = Ref[\"MaybePartial[AsyncAnyCallable]\"](fn)\n self.signature = Signature.from_callable(fn)\n self._validate_handler_function()\n return self\n \n+ @property\n+ def fn(self) -> Ref[MaybePartial[AsyncAnyCallable]]:\n+ \"\"\"Get the handler function.\n+\n+ Raises:\n+ ImproperlyConfiguredException: if handler fn is not set.\n+\n+ Returns:\n+ Handler function\n+ \"\"\"\n+ if not hasattr(self, \"_fn\"):\n+ raise ImproperlyConfiguredException(\"Handler has not decorated a function\")\n+ return self._fn\n+\n @property\n def handler_name(self) -> str:\n \"\"\"Get the name of the handler function.\n@@ -129,9 +142,6 @@\n Returns:\n Name of the handler function\n \"\"\"\n- fn = getattr(self, \"fn\", None)\n- if not fn:\n- raise ImproperlyConfiguredException(\"cannot access handler name before setting the handler function\")\n return get_name(unwrap_partial(self.fn.value))\n \n @property\n@@ -279,8 +289,6 @@\n \n def _validate_handler_function(self) -> None:\n \"\"\"Validate the route handler function once set by inspecting its return annotations.\"\"\"\n- if not getattr(self, \"fn\", None):\n- raise ImproperlyConfiguredException(\"Cannot call _validate_handler_function without first setting self.fn\")\n \n def __str__(self) -> str:\n \"\"\"Return a unique identifier for the route handler.\n@@ -288,6 +296,7 @@\n Returns:\n A string\n \"\"\"\n+ target: type[AsyncAnyCallable] | AsyncAnyCallable\n target = unwrap_partial(self.fn.value)\n if not hasattr(target, \"__qualname__\"):\n target = type(target)\n", "issue": "StaticFilesConfig and virtual directories\nI'm trying to write a ``FileSystemProtocol`` to load files from the package data using [importlib_resources](https://importlib-resources.readthedocs.io/en/latest/using.html#). But because ``directories`` is defined as ``DirectoryPath``, pydantic checks if the given directories exist in the local filesystem. \r\n\r\nThis is not generally true, especially in any kind of virtual filesystem (e.g. a zipped package). 
I think this condition should be relaxed to support virtual filesystems.\r\n\r\nhttps://github.com/starlite-api/starlite/blob/9bb6dcd57c10a591377cf8e3a537e9292566d5b9/starlite/config/static_files.py#L32\n", "before_files": [{"content": "from __future__ import annotations\n\nfrom copy import copy\nfrom inspect import Signature\nfrom typing import TYPE_CHECKING, Any, Generic, Mapping, Sequence, TypeVar, cast\n\nfrom starlite._signature.field import SignatureField\nfrom starlite.exceptions import ImproperlyConfiguredException\nfrom starlite.types import Dependencies, Empty, EmptyType, ExceptionHandlersMap, Guard, Middleware, TypeEncodersMap\nfrom starlite.utils import AsyncCallable, Ref, get_name, normalize_path\nfrom starlite.utils.helpers import unwrap_partial\n\n__all__ = (\"BaseRouteHandler\",)\n\n\nif TYPE_CHECKING:\n from typing_extensions import Self\n\n from starlite._signature.models import SignatureModel\n from starlite.connection import ASGIConnection\n from starlite.controller import Controller\n from starlite.di import Provide\n from starlite.params import ParameterKwarg\n from starlite.router import Router\n from starlite.types import AnyCallable, AsyncAnyCallable, ExceptionHandler\n from starlite.types.composite_types import MaybePartial\n\nT = TypeVar(\"T\", bound=\"BaseRouteHandler\")\n\n\nclass BaseRouteHandler(Generic[T]):\n \"\"\"Base route handler.\n\n Serves as a subclass for all route handlers\n \"\"\"\n\n fn: Ref[MaybePartial[AnyCallable]]\n signature: Signature\n\n __slots__ = (\n \"_resolved_dependencies\",\n \"_resolved_guards\",\n \"_resolved_layered_parameters\",\n \"_resolved_signature_namespace\",\n \"_resolved_type_encoders\",\n \"dependencies\",\n \"exception_handlers\",\n \"fn\",\n \"guards\",\n \"middleware\",\n \"name\",\n \"opt\",\n \"owner\",\n \"paths\",\n \"signature\",\n \"signature_model\",\n \"signature_namespace\",\n \"type_encoders\",\n )\n\n def __init__(\n self,\n path: str | Sequence[str] | None = None,\n *,\n dependencies: Dependencies | None = None,\n exception_handlers: ExceptionHandlersMap | None = None,\n guards: Sequence[Guard] | None = None,\n middleware: Sequence[Middleware] | None = None,\n name: str | None = None,\n opt: Mapping[str, Any] | None = None,\n signature_namespace: Mapping[str, Any] | None = None,\n type_encoders: TypeEncodersMap | None = None,\n **kwargs: Any,\n ) -> None:\n \"\"\"Initialize ``HTTPRouteHandler``.\n\n Args:\n path: A path fragment for the route handler function or a sequence of path fragments. 
If not given defaults\n to ``/``\n dependencies: A string keyed mapping of dependency :class:`Provider <.di.Provide>` instances.\n exception_handlers: A mapping of status codes and/or exception types to handler functions.\n guards: A sequence of :class:`Guard <.types.Guard>` callables.\n middleware: A sequence of :class:`Middleware <.types.Middleware>`.\n name: A string identifying the route handler.\n opt: A string keyed mapping of arbitrary values that can be accessed in :class:`Guards <.types.Guard>` or\n wherever you have access to :class:`Request <.connection.Request>` or\n :class:`ASGI Scope <.types.Scope>`.\n signature_namespace: A mapping of names to types for use in forward reference resolution during signature modelling.\n type_encoders: A mapping of types to callables that transform them into types supported for serialization.\n **kwargs: Any additional kwarg - will be set in the opt dictionary.\n \"\"\"\n self._resolved_dependencies: dict[str, Provide] | EmptyType = Empty\n self._resolved_guards: list[Guard] | EmptyType = Empty\n self._resolved_layered_parameters: dict[str, SignatureField] | EmptyType = Empty\n self._resolved_signature_namespace: dict[str, Any] | EmptyType = Empty\n self._resolved_type_encoders: TypeEncodersMap | EmptyType = Empty\n\n self.dependencies = dependencies\n self.exception_handlers = exception_handlers\n self.guards = guards\n self.middleware = middleware\n self.name = name\n self.opt = dict(opt or {})\n self.owner: Controller | Router | None = None\n self.signature_model: type[SignatureModel] | None = None\n self.signature_namespace = signature_namespace or {}\n self.paths = (\n {normalize_path(p) for p in path}\n if path and isinstance(path, list)\n else {normalize_path(path or \"/\")} # type: ignore\n )\n self.opt.update(**kwargs)\n self.type_encoders = type_encoders\n\n def __call__(self, fn: AsyncAnyCallable) -> Self:\n \"\"\"Replace a function with itself.\"\"\"\n self.fn = Ref[\"MaybePartial[AsyncAnyCallable]\"](fn)\n self.signature = Signature.from_callable(fn)\n self._validate_handler_function()\n return self\n\n @property\n def handler_name(self) -> str:\n \"\"\"Get the name of the handler function.\n\n Raises:\n ImproperlyConfiguredException: if handler fn is not set.\n\n Returns:\n Name of the handler function\n \"\"\"\n fn = getattr(self, \"fn\", None)\n if not fn:\n raise ImproperlyConfiguredException(\"cannot access handler name before setting the handler function\")\n return get_name(unwrap_partial(self.fn.value))\n\n @property\n def dependency_name_set(self) -> set[str]:\n \"\"\"Set of all dependency names provided in the handler's ownership layers.\"\"\"\n layered_dependencies = (layer.dependencies or {} for layer in self.ownership_layers)\n return {name for layer in layered_dependencies for name in layer}\n\n @property\n def ownership_layers(self) -> list[T | Controller | Router]:\n \"\"\"Return the handler layers from the app down to the route handler.\n\n ``app -> ... 
-> route handler``\n \"\"\"\n layers = []\n\n cur: Any = self\n while cur:\n layers.append(cur)\n cur = cur.owner\n\n return list(reversed(layers))\n\n def resolve_type_encoders(self) -> TypeEncodersMap:\n \"\"\"Return a merged type_encoders mapping.\n\n This method is memoized so the computation occurs only once.\n\n Returns:\n A dict of type encoders\n \"\"\"\n if self._resolved_type_encoders is Empty:\n self._resolved_type_encoders = {}\n\n for layer in self.ownership_layers:\n if type_encoders := getattr(layer, \"type_encoders\", None):\n self._resolved_type_encoders.update(type_encoders)\n return cast(\"TypeEncodersMap\", self._resolved_type_encoders)\n\n def resolve_layered_parameters(self) -> dict[str, SignatureField]:\n \"\"\"Return all parameters declared above the handler.\"\"\"\n if self._resolved_layered_parameters is Empty:\n parameter_kwargs: dict[str, ParameterKwarg] = {}\n\n for layer in self.ownership_layers:\n parameter_kwargs.update(getattr(layer, \"parameters\", {}) or {})\n\n self._resolved_layered_parameters = {\n key: SignatureField.create(\n name=key, field_type=parameter.value_type, default_value=parameter.default, kwarg_model=parameter\n )\n for key, parameter in parameter_kwargs.items()\n }\n\n return cast(\"dict[str, SignatureField]\", self._resolved_layered_parameters)\n\n def resolve_guards(self) -> list[Guard]:\n \"\"\"Return all guards in the handlers scope, starting from highest to current layer.\"\"\"\n if self._resolved_guards is Empty:\n self._resolved_guards = []\n\n for layer in self.ownership_layers:\n self._resolved_guards.extend(layer.guards or [])\n\n self._resolved_guards = cast(\"list[Guard]\", [AsyncCallable(guard) for guard in self._resolved_guards])\n\n return self._resolved_guards # type:ignore\n\n def resolve_dependencies(self) -> dict[str, Provide]:\n \"\"\"Return all dependencies correlating to handler function's kwargs that exist in the handler's scope.\"\"\"\n if self._resolved_dependencies is Empty:\n self._resolved_dependencies = {}\n\n for layer in self.ownership_layers:\n for key, value in (layer.dependencies or {}).items():\n self._validate_dependency_is_unique(\n dependencies=self._resolved_dependencies, key=key, provider=value\n )\n self._resolved_dependencies[key] = value\n\n return cast(\"dict[str, Provide]\", self._resolved_dependencies)\n\n def resolve_middleware(self) -> list[Middleware]:\n \"\"\"Build the middleware stack for the RouteHandler and return it.\n\n The middlewares are added from top to bottom (``app -> router -> controller -> route handler``) and then\n reversed.\n \"\"\"\n resolved_middleware: list[Middleware] = []\n for layer in self.ownership_layers:\n resolved_middleware.extend(layer.middleware or [])\n return list(reversed(resolved_middleware))\n\n def resolve_exception_handlers(self) -> ExceptionHandlersMap:\n \"\"\"Resolve the exception_handlers by starting from the route handler and moving up.\n\n This method is memoized so the computation occurs only once.\n \"\"\"\n resolved_exception_handlers: dict[int | type[Exception], ExceptionHandler] = {}\n for layer in self.ownership_layers:\n resolved_exception_handlers.update(layer.exception_handlers or {})\n return resolved_exception_handlers\n\n def resolve_opts(self) -> None:\n \"\"\"Build the route handler opt dictionary by going from top to bottom.\n\n When merging keys from multiple layers, if the same key is defined by multiple layers, the value from the\n layer closest to the response handler will take precedence.\n \"\"\"\n\n opt: dict[str, Any] = 
{}\n for layer in self.ownership_layers:\n opt.update(layer.opt or {})\n\n self.opt = opt\n\n def resolve_signature_namespace(self) -> dict[str, Any]:\n \"\"\"Build the route handler signature namespace dictionary by going from top to bottom.\n\n When merging keys from multiple layers, if the same key is defined by multiple layers, the value from the\n layer closest to the response handler will take precedence.\n \"\"\"\n if self._resolved_layered_parameters is Empty:\n ns: dict[str, Any] = {}\n for layer in self.ownership_layers:\n ns.update(layer.signature_namespace)\n\n self._resolved_signature_namespace = ns\n return cast(\"dict[str, Any]\", self._resolved_signature_namespace)\n\n async def authorize_connection(self, connection: \"ASGIConnection\") -> None:\n \"\"\"Ensure the connection is authorized by running all the route guards in scope.\"\"\"\n for guard in self.resolve_guards():\n await guard(connection, copy(self)) # type: ignore\n\n @staticmethod\n def _validate_dependency_is_unique(dependencies: dict[str, Provide], key: str, provider: Provide) -> None:\n \"\"\"Validate that a given provider has not been already defined under a different key.\"\"\"\n for dependency_key, value in dependencies.items():\n if provider == value:\n raise ImproperlyConfiguredException(\n f\"Provider for key {key} is already defined under the different key {dependency_key}. \"\n f\"If you wish to override a provider, it must have the same key.\"\n )\n\n def _validate_handler_function(self) -> None:\n \"\"\"Validate the route handler function once set by inspecting its return annotations.\"\"\"\n if not getattr(self, \"fn\", None):\n raise ImproperlyConfiguredException(\"Cannot call _validate_handler_function without first setting self.fn\")\n\n def __str__(self) -> str:\n \"\"\"Return a unique identifier for the route handler.\n\n Returns:\n A string\n \"\"\"\n target = unwrap_partial(self.fn.value)\n if not hasattr(target, \"__qualname__\"):\n target = type(target)\n return f\"{target.__module__}.{target.__qualname__}\"\n", "path": "starlite/handlers/base.py"}]} | 3,986 | 712 |
gh_patches_debug_26094 | rasdani/github-patches | git_diff | mlflow__mlflow-922 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
REST API error with sklearn classifier
### System information
- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: I wrote a simple random forest classifier using sklearn and the iris dataset to test the mlflow workflow from scratch.
- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS 10.14.1
- **MLflow installed from (source or binary)**: source (pip install mlflow)
- **MLflow version (run ``mlflow --version``)**: 0.8.2
- **Python version**: 3.7
- **npm version (if running the dev UI)**:
- **Exact command to reproduce**: curl -X POST -H "Content-Type:application/json; format=pandas-split" --data '{"columns":["sepal_length_cm", "sepal_width_cm", "petal_length_cm", "petal_width_cm"],"data":[[5.3, 1.7, 3.5, 0.5]]}' http://127.0.0.1:1234/invocations
### Describe the problem
If sklearn classifier.predict() method returns a categorical variable, then there is a 500 internal server error returned after the above http request.
The error comes from the following function in `mlflow/utils/__init__.py`:
```
def ndarray2list(ndarray):
    """
    Convert n-dimensional numpy array into nested lists and convert the elements types to native
    python so that the list is json-able using standard json library.
    :param ndarray: numpy array
    :return: list representation of the numpy array with element types convereted to native python
    """
    if len(ndarray.shape) <= 1:
        return [x.item() for x in ndarray]
    return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]
```
`x.item()` fails if the array returned from the sklearn `classifier.predict()` method contains strings, as in the case of the iris dataset (labels are 'setosa', etc.). I corrected the error on my local install by changing the function as below:
```
def ndarray2list(ndarray):
    """
    Convert n-dimensional numpy array into nested lists and convert the elements types to native
    python so that the list is json-able using standard json library.
    :param ndarray: numpy array
    :return: list representation of the numpy array with element types convereted to native python
    """
    if len(ndarray.shape) <= 1:
        try:
            return [x.item() for x in ndarray]
        except:
            return [x for x in ndarray]
    return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]
```
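For reference, numpy's own `ndarray.tolist()` already performs this conversion for any dtype, including string and object arrays, and it returns nested lists of native Python scalars for multi-dimensional input. A minimal check (assuming only that numpy is installed):

```
import numpy as np

# String labels survive the conversion, where per-element .item() calls can fail.
print(np.array(["setosa", "versicolor"]).tolist())  # ['setosa', 'versicolor']

# Multi-dimensional arrays come back as nested lists of native Python types.
print(np.array([[1, 2], [3, 4]]).tolist())  # [[1, 2], [3, 4]]
```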
### Source code / logs
Server-side error traceback:
[2019-02-19 10:57:20,668] ERROR in app: Exception on /invocations [POST]
Traceback (most recent call last):
File "/anaconda3/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/anaconda3/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/anaconda3/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/anaconda3/lib/python3.7/site-packages/mlflow/server/handlers.py", line 68, in wrapper
return func(*args, **kwargs)
File "/anaconda3/lib/python3.7/site-packages/mlflow/pyfunc/scoring_server.py", line 185, in transformation
predictions = get_jsonable_obj(raw_predictions, pandas_orient="records")
File "/anaconda3/lib/python3.7/site-packages/mlflow/utils/__init__.py", line 34, in get_jsonable_obj
return ndarray2list(data)
File "/anaconda3/lib/python3.7/site-packages/mlflow/utils/__init__.py", line 20, in ndarray2list
return [x.item() for x in ndarray]
File "/anaconda3/lib/python3.7/site-packages/mlflow/utils/__init__.py", line 20, in <listcomp>
return [x.item() for x in ndarray]
AttributeError: 'str' object has no attribute 'item'
</issue>
<code>
[start of mlflow/utils/__init__.py]
1 from sys import version_info
2
3 import numpy as np
4 import pandas as pd
5
6
7 PYTHON_VERSION = "{major}.{minor}.{micro}".format(major=version_info.major,
8 minor=version_info.minor,
9 micro=version_info.micro)
10
11
12 def ndarray2list(ndarray):
13 """
14 Convert n-dimensional numpy array into nested lists and convert the elements types to native
15 python so that the list is json-able using standard json library.
16 :param ndarray: numpy array
17 :return: list representation of the numpy array with element types convereted to native python
18 """
19 if len(ndarray.shape) <= 1:
20 return [x.item() for x in ndarray]
21 return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]
22
23
24 def get_jsonable_obj(data, pandas_orient="records"):
25 """Attempt to make the data json-able via standard library.
26 Look for some commonly used types that are not jsonable and convert them into json-able ones.
27 Unknown data types are returned as is.
28
29 :param data: data to be converted, works with pandas and numpy, rest will be returned as is.
30 :param pandas_orient: If `data` is a Pandas DataFrame, it will be converted to a JSON
31 dictionary using this Pandas serialization orientation.
32 """
33 if isinstance(data, np.ndarray):
34 return ndarray2list(data)
35 if isinstance(data, pd.DataFrame):
36 return data.to_dict(orient=pandas_orient)
37 if isinstance(data, pd.Series):
38 return pd.DataFrame(data).to_dict(orient=pandas_orient)
39 else: # by default just return whatever this is and hope for the best
40 return data
41
42
43 def get_major_minor_py_version(py_version):
44 return ".".join(py_version.split(".")[:2])
45
46
47 def get_unique_resource_id(max_length=None):
48 """
49 Obtains a unique id that can be included in a resource name. This unique id is a valid
50 DNS subname.
51
52 :param max_length: The maximum length of the identifier
53 :return: A unique identifier that can be appended to a user-readable resource name to avoid
54 naming collisions.
55 """
56 import uuid
57 import base64
58 if max_length is not None and max_length <= 0:
59 raise ValueError(
60 "The specified maximum length for the unique resource id must be positive!")
61
62 uuid_bytes = uuid.uuid4().bytes
63 # Use base64 encoding to shorten the UUID length. Note that the replacement of the
64 # unsupported '+' symbol maintains uniqueness because the UUID byte string is of a fixed,
65 # 16-byte length
66 uuid_b64 = base64.b64encode(uuid_bytes)
67 if version_info >= (3, 0):
68 # In Python3, `uuid_b64` is a `bytes` object. It needs to be
69 # converted to a string
70 uuid_b64 = uuid_b64.decode("ascii")
71 unique_id = uuid_b64.rstrip('=\n').replace("/", "-").replace("+", "AB").lower()
72 if max_length is not None:
73 unique_id = unique_id[:int(max_length)]
74 return unique_id
75
[end of mlflow/utils/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mlflow/utils/__init__.py b/mlflow/utils/__init__.py
--- a/mlflow/utils/__init__.py
+++ b/mlflow/utils/__init__.py
@@ -9,18 +9,6 @@
micro=version_info.micro)
-def ndarray2list(ndarray):
- """
- Convert n-dimensional numpy array into nested lists and convert the elements types to native
- python so that the list is json-able using standard json library.
- :param ndarray: numpy array
- :return: list representation of the numpy array with element types convereted to native python
- """
- if len(ndarray.shape) <= 1:
- return [x.item() for x in ndarray]
- return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]
-
-
def get_jsonable_obj(data, pandas_orient="records"):
"""Attempt to make the data json-able via standard library.
Look for some commonly used types that are not jsonable and convert them into json-able ones.
@@ -31,7 +19,7 @@
dictionary using this Pandas serialization orientation.
"""
if isinstance(data, np.ndarray):
- return ndarray2list(data)
+ return data.tolist()
if isinstance(data, pd.DataFrame):
return data.to_dict(orient=pandas_orient)
if isinstance(data, pd.Series):
| {"golden_diff": "diff --git a/mlflow/utils/__init__.py b/mlflow/utils/__init__.py\n--- a/mlflow/utils/__init__.py\n+++ b/mlflow/utils/__init__.py\n@@ -9,18 +9,6 @@\n micro=version_info.micro)\n \n \n-def ndarray2list(ndarray):\n- \"\"\"\n- Convert n-dimensional numpy array into nested lists and convert the elements types to native\n- python so that the list is json-able using standard json library.\n- :param ndarray: numpy array\n- :return: list representation of the numpy array with element types convereted to native python\n- \"\"\"\n- if len(ndarray.shape) <= 1:\n- return [x.item() for x in ndarray]\n- return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]\n-\n-\n def get_jsonable_obj(data, pandas_orient=\"records\"):\n \"\"\"Attempt to make the data json-able via standard library.\n Look for some commonly used types that are not jsonable and convert them into json-able ones.\n@@ -31,7 +19,7 @@\n dictionary using this Pandas serialization orientation.\n \"\"\"\n if isinstance(data, np.ndarray):\n- return ndarray2list(data)\n+ return data.tolist()\n if isinstance(data, pd.DataFrame):\n return data.to_dict(orient=pandas_orient)\n if isinstance(data, pd.Series):\n", "issue": "REST API error with sklearn classifier\n### System information\r\n- **Have I written custom code (as opposed to using a stock example script provided in MLflow)**: I wrote a simple random forest classifier using sklearn and the iris dataset to test the mlflow workflow from scratch.\r\n- **OS Platform and Distribution (e.g., Linux Ubuntu 16.04)**: macOS 10.14.1\r\n- **MLflow installed from (source or binary)**: source (pip install mlflow)\r\n- **MLflow version (run ``mlflow --version``)**: 0.8.2\r\n- **Python version**: 3.7\r\n- **npm version (if running the dev UI):\r\n- **Exact command to reproduce**:curl -X POST -H \"Content-Type:application/json; format=pandas-split\" --data '{\"columns\":[\"sepal_length_cm\", \"sepal_width_cm\", \"petal_length_cm\", \"petal_width_cm\"],\"data\":[[5.3, 1.7, 3.5, 0.5]]}' http://127.0.0.1:1234/invocations\r\n\r\n### Describe the problem\r\nIf sklearn classifier.predict() method returns a categorical variable, then there is a 500 internal server error returned after the above http request.\r\n\r\nThe error is from the following function in mlflow/utils/__init.py__:\r\n\r\ndef ndarray2list(ndarray):\r\n \"\"\"\r\n Convert n-dimensional numpy array into nested lists and convert the elements types to native\r\n python so that the list is json-able using standard json library.\r\n :param ndarray: numpy array\r\n :return: list representation of the numpy array with element types convereted to native python\r\n \"\"\"\r\n if len(ndarray.shape) <= 1:\r\n return [x.item() for x in ndarray]\r\n return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]\r\n\r\nx.item() fails if the array returned from the sklearn classifier.predict() method contains strings, as in the case of the iris dataset (labels are 'setosa', etc). 
I corrected the error on my local install by changing the function as below:\r\n\r\ndef ndarray2list(ndarray):\r\n \"\"\"\r\n Convert n-dimensional numpy array into nested lists and convert the elements types to native\r\n python so that the list is json-able using standard json library.\r\n :param ndarray: numpy array\r\n :return: list representation of the numpy array with element types convereted to native python\r\n \"\"\"\r\n if len(ndarray.shape) <= 1:\r\n try:\r\n return [x.item() for x in ndarray]\r\n except:\r\n return [x for x in ndarray]\r\n return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]\r\n\r\n### Source code / logs\r\nserver side error traceback:\r\n[2019-02-19 10:57:20,668] ERROR in app: Exception on /invocations [POST]\r\nTraceback (most recent call last):\r\n File \"/anaconda3/lib/python3.7/site-packages/flask/app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/anaconda3/lib/python3.7/site-packages/flask/app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/anaconda3/lib/python3.7/site-packages/flask/app.py\", line 1718, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/anaconda3/lib/python3.7/site-packages/flask/_compat.py\", line 35, in reraise\r\n raise value\r\n File \"/anaconda3/lib/python3.7/site-packages/flask/app.py\", line 1813, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/anaconda3/lib/python3.7/site-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/anaconda3/lib/python3.7/site-packages/mlflow/server/handlers.py\", line 68, in wrapper\r\n return func(*args, **kwargs)\r\n File \"/anaconda3/lib/python3.7/site-packages/mlflow/pyfunc/scoring_server.py\", line 185, in transformation\r\n predictions = get_jsonable_obj(raw_predictions, pandas_orient=\"records\")\r\n File \"/anaconda3/lib/python3.7/site-packages/mlflow/utils/__init__.py\", line 34, in get_jsonable_obj\r\n return ndarray2list(data)\r\n File \"/anaconda3/lib/python3.7/site-packages/mlflow/utils/__init__.py\", line 20, in ndarray2list\r\n return [x.item() for x in ndarray]\r\n File \"/anaconda3/lib/python3.7/site-packages/mlflow/utils/__init__.py\", line 20, in <listcomp>\r\n return [x.item() for x in ndarray]\r\nAttributeError: 'str' object has no attribute 'item'\n", "before_files": [{"content": "from sys import version_info\n\nimport numpy as np\nimport pandas as pd\n\n\nPYTHON_VERSION = \"{major}.{minor}.{micro}\".format(major=version_info.major,\n minor=version_info.minor,\n micro=version_info.micro)\n\n\ndef ndarray2list(ndarray):\n \"\"\"\n Convert n-dimensional numpy array into nested lists and convert the elements types to native\n python so that the list is json-able using standard json library.\n :param ndarray: numpy array\n :return: list representation of the numpy array with element types convereted to native python\n \"\"\"\n if len(ndarray.shape) <= 1:\n return [x.item() for x in ndarray]\n return [ndarray2list(ndarray[i, :]) for i in range(0, ndarray.shape[0])]\n\n\ndef get_jsonable_obj(data, pandas_orient=\"records\"):\n \"\"\"Attempt to make the data json-able via standard library.\n Look for some commonly used types that are not jsonable and convert them into json-able ones.\n Unknown data types are returned as is.\n\n :param data: data to be converted, works with pandas and numpy, rest will be returned as is.\n :param pandas_orient: If `data` is a Pandas 
DataFrame, it will be converted to a JSON\n dictionary using this Pandas serialization orientation.\n \"\"\"\n if isinstance(data, np.ndarray):\n return ndarray2list(data)\n if isinstance(data, pd.DataFrame):\n return data.to_dict(orient=pandas_orient)\n if isinstance(data, pd.Series):\n return pd.DataFrame(data).to_dict(orient=pandas_orient)\n else: # by default just return whatever this is and hope for the best\n return data\n\n\ndef get_major_minor_py_version(py_version):\n return \".\".join(py_version.split(\".\")[:2])\n\n\ndef get_unique_resource_id(max_length=None):\n \"\"\"\n Obtains a unique id that can be included in a resource name. This unique id is a valid\n DNS subname.\n\n :param max_length: The maximum length of the identifier\n :return: A unique identifier that can be appended to a user-readable resource name to avoid\n naming collisions.\n \"\"\"\n import uuid\n import base64\n if max_length is not None and max_length <= 0:\n raise ValueError(\n \"The specified maximum length for the unique resource id must be positive!\")\n\n uuid_bytes = uuid.uuid4().bytes\n # Use base64 encoding to shorten the UUID length. Note that the replacement of the\n # unsupported '+' symbol maintains uniqueness because the UUID byte string is of a fixed,\n # 16-byte length\n uuid_b64 = base64.b64encode(uuid_bytes)\n if version_info >= (3, 0):\n # In Python3, `uuid_b64` is a `bytes` object. It needs to be\n # converted to a string\n uuid_b64 = uuid_b64.decode(\"ascii\")\n unique_id = uuid_b64.rstrip('=\\n').replace(\"/\", \"-\").replace(\"+\", \"AB\").lower()\n if max_length is not None:\n unique_id = unique_id[:int(max_length)]\n return unique_id\n", "path": "mlflow/utils/__init__.py"}]} | 2,471 | 306 |
gh_patches_debug_53283 | rasdani/github-patches | git_diff | holoviz__panel-1789 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Image not scaling dynamically
#### ALL software version info
```
Python 3.9.0
bokeh==2.2.3
notebook==5.7.9
panel==0.10.1
macOS Catalina 10.15.7
Chrome 85.0.4183.121
```
#### Description of expected behavior and the observed behavior
##### Expected
I should be able to scale the image up and down dynamically, both in a Jupyter notebook and when using the standalone server.
##### Observed
In the notebook, I can scale the image up and down as long as the width stays <= 300 pixels; I can't make the image larger than 300 pixels wide.
Using the standalone server, it looks like it scales just once (either up or down) and then gets stuck.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import panel as pn
def panel_logo(width=300):
# also happens with .jpg
return pn.panel("https://panel.holoviz.org/_static/logo_stacked.png", width=width)
pn.interact(panel_logo)
```
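The same symptom can be reproduced without `interact` by changing the pane's `width` after construction. A minimal sketch (assuming the panel version listed above):

```
import panel as pn

logo = pn.panel("https://panel.holoviz.org/_static/logo_stacked.png", width=300)
# With the bug, updating the size parameter does not trigger a re-render of the <img> tag.
logo.width = 500
```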
</issue>
<code>
[start of panel/pane/image.py]
1 """
2 Contains Image panes including renderers for PNG, SVG, GIF and JPG
3 file types.
4 """
5 from __future__ import absolute_import, division, unicode_literals
6
7 import base64
8
9 from io import BytesIO
10 from six import string_types
11
12 import param
13
14 from .markup import escape, DivPaneBase
15 from ..util import isfile, isurl
16
17
18 class ImageBase(DivPaneBase):
19 """
20 Encodes an image as base64 and wraps it in a Bokeh Div model.
21 This is an abstract base class that needs the image type
22 to be specified and specific code for determining the image shape.
23
24 The imgtype determines the filetype, extension, and MIME type for
25 this image. Each image type (png,jpg,gif) has a base class that
26 supports anything with a `_repr_X_` method (where X is `png`,
27 `gif`, etc.), a local file with the given file extension, or a
28 HTTP(S) url with the given extension. Subclasses of each type can
29 provide their own way of obtaining or generating a PNG.
30 """
31
32 alt_text = param.String(default=None, doc="""
33 alt text to add to the image tag. The alt text is shown when a
34 user cannot load or display the image.""")
35
36 link_url = param.String(default=None, doc="""
37 A link URL to make the image clickable and link to some other
38 website.""")
39
40 embed = param.Boolean(default=True, doc="""
41 Whether to embed the image as base64.""")
42
43 imgtype = 'None'
44
45 _rerender_params = ['alt_text', 'link_url', 'embed', 'object', 'style']
46
47 _target_transforms = {'object': """'<img src="' + value + '"></img>'"""}
48
49 __abstract = True
50
51 @classmethod
52 def applies(cls, obj):
53 imgtype = cls.imgtype
54 if hasattr(obj, '_repr_{}_'.format(imgtype)):
55 return True
56 if isinstance(obj, string_types):
57 if isfile(obj) and obj.endswith('.'+imgtype):
58 return True
59 if isurl(obj, [cls.imgtype]):
60 return True
61 elif isurl(obj, None):
62 return 0
63 if hasattr(obj, 'read'): # Check for file like object
64 return True
65 return False
66
67 def _type_error(self, object):
68 if isinstance(object, string_types):
69 raise ValueError("%s pane cannot parse string that is not a filename "
70 "or URL." % type(self).__name__)
71 super(ImageBase, self)._type_error(object)
72
73 def _img(self):
74 if hasattr(self.object, '_repr_{}_'.format(self.imgtype)):
75 return getattr(self.object, '_repr_' + self.imgtype + '_')()
76 if isinstance(self.object, string_types):
77 if isfile(self.object):
78 with open(self.object, 'rb') as f:
79 return f.read()
80 if hasattr(self.object, 'read'):
81 if hasattr(self.object, 'seek'):
82 self.object.seek(0)
83 return self.object.read()
84 if isurl(self.object, None):
85 import requests
86 r = requests.request(url=self.object, method='GET')
87 return r.content
88
89 def _b64(self):
90 data = self._img()
91 if not isinstance(data, bytes):
92 data = data.encode('utf-8')
93 b64 = base64.b64encode(data).decode("utf-8")
94 return "data:image/"+self.imgtype+f";base64,{b64}"
95
96 def _imgshape(self, data):
97 """Calculate and return image width,height"""
98 raise NotImplementedError
99
100 def _get_properties(self):
101 p = super(ImageBase, self)._get_properties()
102 if self.object is None:
103 return dict(p, text='<img></img>')
104 data = self._img()
105 if not isinstance(data, bytes):
106 data = base64.b64decode(data)
107 width, height = self._imgshape(data)
108 if self.width is not None:
109 if self.height is None:
110 height = int((self.width/width)*height)
111 else:
112 height = self.height
113 width = self.width
114 elif self.height is not None:
115 width = int((self.height/height)*width)
116 height = self.height
117 if not self.embed:
118 src = self.object
119 else:
120 b64 = base64.b64encode(data).decode("utf-8")
121 src = "data:image/"+self.imgtype+";base64,{b64}".format(b64=b64)
122
123 smode = self.sizing_mode
124 if smode in ['fixed', None]:
125 w, h = '%spx' % width, '%spx' % height
126 elif smode == 'stretch_both':
127 w, h = '100%', '100%'
128 elif smode == 'stretch_width':
129 w, h = '%spx' % width, '100%'
130 elif smode == 'stretch_height':
131 w, h = '100%', '%spx' % height
132 elif smode == 'scale_height':
133 w, h = 'auto', '100%'
134 else:
135 w, h = '100%', 'auto'
136
137 html = '<img src="{src}" width="{width}" height="{height}" alt="{alt}"></img>'.format(
138 src=src, width=w, height=h, alt=self.alt_text or '')
139
140 if self.link_url:
141 html = '<a href="{url}" target="_blank">{html}</a>'.format(
142 url=self.link_url, html=html)
143
144 return dict(p, width=width, height=height, text=escape(html))
145
146
147 class PNG(ImageBase):
148
149 imgtype = 'png'
150
151 @classmethod
152 def _imgshape(cls, data):
153 import struct
154 w, h = struct.unpack('>LL', data[16:24])
155 return int(w), int(h)
156
157
158 class GIF(ImageBase):
159
160 imgtype = 'gif'
161
162 @classmethod
163 def _imgshape(cls, data):
164 import struct
165 w, h = struct.unpack("<HH", data[6:10])
166 return int(w), int(h)
167
168
169 class JPG(ImageBase):
170
171 imgtype = 'jpg'
172
173 @classmethod
174 def _imgshape(cls, data):
175 import struct
176 b = BytesIO(data)
177 b.read(2)
178 c = b.read(1)
179 while (c and ord(c) != 0xDA):
180 while (ord(c) != 0xFF): c = b.read(1)
181 while (ord(c) == 0xFF): c = b.read(1)
182 if (ord(c) >= 0xC0 and ord(c) <= 0xC3):
183 b.read(3)
184 h, w = struct.unpack(">HH", b.read(4))
185 break
186 else:
187 b.read(int(struct.unpack(">H", b.read(2))[0])-2)
188 c = b.read(1)
189 return int(w), int(h)
190
191
192 class SVG(ImageBase):
193
194 encode = param.Boolean(default=False, doc="""
195 Whether to enable base64 encoding of the SVG, base64 encoded
196 SVGs do not support links.""")
197
198 imgtype = 'svg'
199
200 _rerender_params = ImageBase._rerender_params + ['encode']
201
202 @classmethod
203 def applies(cls, obj):
204 return (super(SVG, cls).applies(obj) or
205 (isinstance(obj, string_types) and obj.lstrip().startswith('<svg')))
206
207 def _type_error(self, object):
208 if isinstance(object, string_types):
209 raise ValueError("%s pane cannot parse string that is not a filename, "
210 "URL or a SVG XML contents." % type(self).__name__)
211 super(SVG, self)._type_error(object)
212
213 def _img(self):
214 if (isinstance(self.object, string_types) and
215 self.object.lstrip().startswith('<svg')):
216 return self.object
217 return super(SVG, self)._img()
218
219 def _b64(self):
220 data = self._img()
221 if not isinstance(data, bytes):
222 data = data.encode('utf-8')
223 b64 = base64.b64encode(data).decode("utf-8")
224 return f"data:image/svg+xml;base64,{b64}"
225
226 def _imgshape(self, data):
227 return (self.width, self.height)
228
229 def _get_properties(self):
230 p = super(ImageBase, self)._get_properties()
231 if self.object is None:
232 return dict(p, text='<img></img>')
233 data = self._img()
234 width, height = self._imgshape(data)
235 if not isinstance(data, bytes):
236 data = data.encode('utf-8')
237
238 if self.encode:
239 b64 = base64.b64encode(data).decode("utf-8")
240 src = "data:image/svg+xml;base64,{b64}".format(b64=b64)
241 html = "<img src='{src}' width={width} height={height}></img>".format(
242 src=src, width=width, height=height
243 )
244 else:
245 html = data.decode("utf-8")
246 return dict(p, width=width, height=height, text=escape(html))
247
[end of panel/pane/image.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/panel/pane/image.py b/panel/pane/image.py
--- a/panel/pane/image.py
+++ b/panel/pane/image.py
@@ -42,7 +42,7 @@
imgtype = 'None'
- _rerender_params = ['alt_text', 'link_url', 'embed', 'object', 'style']
+ _rerender_params = ['alt_text', 'link_url', 'embed', 'object', 'style', 'width', 'height']
_target_transforms = {'object': """'<img src="' + value + '"></img>'"""}
| {"golden_diff": "diff --git a/panel/pane/image.py b/panel/pane/image.py\n--- a/panel/pane/image.py\n+++ b/panel/pane/image.py\n@@ -42,7 +42,7 @@\n \n imgtype = 'None'\n \n- _rerender_params = ['alt_text', 'link_url', 'embed', 'object', 'style']\n+ _rerender_params = ['alt_text', 'link_url', 'embed', 'object', 'style', 'width', 'height']\n \n _target_transforms = {'object': \"\"\"'<img src=\"' + value + '\"></img>'\"\"\"}\n", "issue": "Image not scaling dynamically\n#### ALL software version info\r\n\r\n```\r\nPython 3.9.0\r\n\r\nbokeh==2.2.3\r\nnotebook==5.7.9\r\npanel==0.10.1\r\n\r\nmacOS Catalina 10.15.7\r\n\r\nChrome 85.0.4183.121\r\n```\r\n\r\n#### Description of expected behavior and the observed behavior\r\n\r\n##### Expected\r\n\r\nI should be able to scale image up and down dynamically in Jupyter Notebook and using the standalone server.\r\n\r\n##### Observed\r\n\r\nIn the notebook, I'm able to scale up and down <= 300 width. I can't make the image larger than 300 pixels wide.\r\n\r\nUsing the standalone server, it looks like it scales just once (either up or down) and then gets stuck.\r\n\r\n#### Complete, minimal, self-contained example code that reproduces the issue\r\n\r\n```python\r\nimport panel as pn\r\n\r\ndef panel_logo(width=300):\r\n # also happens with .jpg\r\n return pn.panel(\"https://panel.holoviz.org/_static/logo_stacked.png\", width=width)\r\n\r\npn.interact(panel_logo)\r\n```\n", "before_files": [{"content": "\"\"\"\nContains Image panes including renderers for PNG, SVG, GIF and JPG\nfile types.\n\"\"\"\nfrom __future__ import absolute_import, division, unicode_literals\n\nimport base64\n\nfrom io import BytesIO\nfrom six import string_types\n\nimport param\n\nfrom .markup import escape, DivPaneBase\nfrom ..util import isfile, isurl\n\n\nclass ImageBase(DivPaneBase):\n \"\"\"\n Encodes an image as base64 and wraps it in a Bokeh Div model.\n This is an abstract base class that needs the image type\n to be specified and specific code for determining the image shape.\n\n The imgtype determines the filetype, extension, and MIME type for\n this image. Each image type (png,jpg,gif) has a base class that\n supports anything with a `_repr_X_` method (where X is `png`,\n `gif`, etc.), a local file with the given file extension, or a\n HTTP(S) url with the given extension. Subclasses of each type can\n provide their own way of obtaining or generating a PNG.\n \"\"\"\n\n alt_text = param.String(default=None, doc=\"\"\"\n alt text to add to the image tag. 
The alt text is shown when a\n user cannot load or display the image.\"\"\")\n\n link_url = param.String(default=None, doc=\"\"\"\n A link URL to make the image clickable and link to some other\n website.\"\"\")\n\n embed = param.Boolean(default=True, doc=\"\"\"\n Whether to embed the image as base64.\"\"\")\n\n imgtype = 'None'\n\n _rerender_params = ['alt_text', 'link_url', 'embed', 'object', 'style']\n\n _target_transforms = {'object': \"\"\"'<img src=\"' + value + '\"></img>'\"\"\"}\n\n __abstract = True\n\n @classmethod\n def applies(cls, obj):\n imgtype = cls.imgtype\n if hasattr(obj, '_repr_{}_'.format(imgtype)):\n return True\n if isinstance(obj, string_types):\n if isfile(obj) and obj.endswith('.'+imgtype):\n return True\n if isurl(obj, [cls.imgtype]):\n return True\n elif isurl(obj, None):\n return 0\n if hasattr(obj, 'read'): # Check for file like object\n return True\n return False\n\n def _type_error(self, object):\n if isinstance(object, string_types):\n raise ValueError(\"%s pane cannot parse string that is not a filename \"\n \"or URL.\" % type(self).__name__)\n super(ImageBase, self)._type_error(object)\n\n def _img(self):\n if hasattr(self.object, '_repr_{}_'.format(self.imgtype)):\n return getattr(self.object, '_repr_' + self.imgtype + '_')()\n if isinstance(self.object, string_types):\n if isfile(self.object):\n with open(self.object, 'rb') as f:\n return f.read()\n if hasattr(self.object, 'read'):\n if hasattr(self.object, 'seek'):\n self.object.seek(0)\n return self.object.read()\n if isurl(self.object, None):\n import requests\n r = requests.request(url=self.object, method='GET')\n return r.content\n\n def _b64(self):\n data = self._img()\n if not isinstance(data, bytes):\n data = data.encode('utf-8')\n b64 = base64.b64encode(data).decode(\"utf-8\")\n return \"data:image/\"+self.imgtype+f\";base64,{b64}\"\n\n def _imgshape(self, data):\n \"\"\"Calculate and return image width,height\"\"\"\n raise NotImplementedError\n\n def _get_properties(self):\n p = super(ImageBase, self)._get_properties()\n if self.object is None:\n return dict(p, text='<img></img>')\n data = self._img()\n if not isinstance(data, bytes):\n data = base64.b64decode(data)\n width, height = self._imgshape(data)\n if self.width is not None:\n if self.height is None:\n height = int((self.width/width)*height)\n else:\n height = self.height\n width = self.width\n elif self.height is not None:\n width = int((self.height/height)*width)\n height = self.height\n if not self.embed:\n src = self.object\n else:\n b64 = base64.b64encode(data).decode(\"utf-8\")\n src = \"data:image/\"+self.imgtype+\";base64,{b64}\".format(b64=b64)\n\n smode = self.sizing_mode\n if smode in ['fixed', None]:\n w, h = '%spx' % width, '%spx' % height\n elif smode == 'stretch_both':\n w, h = '100%', '100%'\n elif smode == 'stretch_width':\n w, h = '%spx' % width, '100%'\n elif smode == 'stretch_height':\n w, h = '100%', '%spx' % height\n elif smode == 'scale_height':\n w, h = 'auto', '100%'\n else:\n w, h = '100%', 'auto'\n\n html = '<img src=\"{src}\" width=\"{width}\" height=\"{height}\" alt=\"{alt}\"></img>'.format(\n src=src, width=w, height=h, alt=self.alt_text or '')\n\n if self.link_url:\n html = '<a href=\"{url}\" target=\"_blank\">{html}</a>'.format(\n url=self.link_url, html=html)\n\n return dict(p, width=width, height=height, text=escape(html))\n\n\nclass PNG(ImageBase):\n\n imgtype = 'png'\n\n @classmethod\n def _imgshape(cls, data):\n import struct\n w, h = struct.unpack('>LL', data[16:24])\n return int(w), 
int(h)\n\n\nclass GIF(ImageBase):\n\n imgtype = 'gif'\n\n @classmethod\n def _imgshape(cls, data):\n import struct\n w, h = struct.unpack(\"<HH\", data[6:10])\n return int(w), int(h)\n\n\nclass JPG(ImageBase):\n\n imgtype = 'jpg'\n\n @classmethod\n def _imgshape(cls, data):\n import struct\n b = BytesIO(data)\n b.read(2)\n c = b.read(1)\n while (c and ord(c) != 0xDA):\n while (ord(c) != 0xFF): c = b.read(1)\n while (ord(c) == 0xFF): c = b.read(1)\n if (ord(c) >= 0xC0 and ord(c) <= 0xC3):\n b.read(3)\n h, w = struct.unpack(\">HH\", b.read(4))\n break\n else:\n b.read(int(struct.unpack(\">H\", b.read(2))[0])-2)\n c = b.read(1)\n return int(w), int(h)\n\n\nclass SVG(ImageBase):\n\n encode = param.Boolean(default=False, doc=\"\"\"\n Whether to enable base64 encoding of the SVG, base64 encoded\n SVGs do not support links.\"\"\")\n\n imgtype = 'svg'\n\n _rerender_params = ImageBase._rerender_params + ['encode']\n\n @classmethod\n def applies(cls, obj):\n return (super(SVG, cls).applies(obj) or\n (isinstance(obj, string_types) and obj.lstrip().startswith('<svg')))\n\n def _type_error(self, object):\n if isinstance(object, string_types):\n raise ValueError(\"%s pane cannot parse string that is not a filename, \"\n \"URL or a SVG XML contents.\" % type(self).__name__)\n super(SVG, self)._type_error(object)\n\n def _img(self):\n if (isinstance(self.object, string_types) and\n self.object.lstrip().startswith('<svg')):\n return self.object\n return super(SVG, self)._img()\n\n def _b64(self):\n data = self._img()\n if not isinstance(data, bytes):\n data = data.encode('utf-8')\n b64 = base64.b64encode(data).decode(\"utf-8\")\n return f\"data:image/svg+xml;base64,{b64}\"\n\n def _imgshape(self, data):\n return (self.width, self.height)\n\n def _get_properties(self):\n p = super(ImageBase, self)._get_properties()\n if self.object is None:\n return dict(p, text='<img></img>')\n data = self._img()\n width, height = self._imgshape(data)\n if not isinstance(data, bytes):\n data = data.encode('utf-8')\n\n if self.encode:\n b64 = base64.b64encode(data).decode(\"utf-8\")\n src = \"data:image/svg+xml;base64,{b64}\".format(b64=b64)\n html = \"<img src='{src}' width={width} height={height}></img>\".format(\n src=src, width=width, height=height\n )\n else:\n html = data.decode(\"utf-8\")\n return dict(p, width=width, height=height, text=escape(html))\n", "path": "panel/pane/image.py"}]} | 3,475 | 135 |
gh_patches_debug_29456 | rasdani/github-patches | git_diff | oppia__oppia-7287 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show skill mastery values in the topic viewer
Add a skill tab in the topic viewer that will show the skill mastery of all skills in that topic (once we have enough skill mastery information for each skill).
Milestone 3.2 in @sophiewu6's GSoC project
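For illustration only, the data such a tab would need is roughly a per-skill mapping of description and mastery; the field names and values below are hypothetical, and mastery can be missing when the learner is not logged in:

```
degrees_of_mastery = {"skill_id_1": 0.75, "skill_id_2": None}
skill_descriptions = {"skill_id_1": "Adding fractions", "skill_id_2": "Comparing fractions"}
```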
</issue>
<code>
[start of core/controllers/topic_viewer.py]
1 # Copyright 2018 The Oppia Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS-IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Controllers for the topic viewer page."""
16
17 from constants import constants
18 from core.controllers import acl_decorators
19 from core.controllers import base
20 from core.domain import story_fetchers
21 from core.domain import topic_fetchers
22 import feconf
23
24
25 class TopicViewerPage(base.BaseHandler):
26 """Renders the topic viewer page."""
27
28 @acl_decorators.can_access_topic_viewer_page
29 def get(self, _):
30 """Handles GET requests."""
31
32 if not constants.ENABLE_NEW_STRUCTURE_PLAYERS:
33 raise self.PageNotFoundException
34
35 self.render_template('dist/topic-viewer-page.mainpage.html')
36
37
38 class TopicPageDataHandler(base.BaseHandler):
39 """Manages the data that needs to be displayed to a learner on the topic
40 viewer page.
41 """
42 GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON
43
44 @acl_decorators.can_access_topic_viewer_page
45 def get(self, topic_name):
46 """Handles GET requests."""
47
48 if not constants.ENABLE_NEW_STRUCTURE_PLAYERS:
49 raise self.PageNotFoundException
50
51 topic = topic_fetchers.get_topic_by_name(topic_name)
52 canonical_story_ids = topic.get_canonical_story_ids(
53 include_only_published=True)
54 additional_story_ids = topic.get_additional_story_ids(
55 include_only_published=True)
56 canonical_story_summaries = [
57 story_fetchers.get_story_summary_by_id(
58 canonical_story_id) for canonical_story_id
59 in canonical_story_ids]
60
61 additional_story_summaries = [
62 story_fetchers.get_story_summary_by_id(
63 additional_story_id) for additional_story_id
64 in additional_story_ids]
65
66 canonical_story_dicts = [
67 summary.to_human_readable_dict() for summary
68 in canonical_story_summaries]
69
70 additional_story_dicts = [
71 summary.to_human_readable_dict() for summary
72 in additional_story_summaries]
73
74 uncategorized_skill_ids = topic.get_all_uncategorized_skill_ids()
75 subtopics = topic.get_all_subtopics()
76
77 self.values.update({
78 'topic_id': topic.id,
79 'topic_name': topic.name,
80 'canonical_story_dicts': canonical_story_dicts,
81 'additional_story_dicts': additional_story_dicts,
82 'uncategorized_skill_ids': uncategorized_skill_ids,
83 'subtopics': subtopics
84 })
85 self.render_json(self.values)
86
[end of core/controllers/topic_viewer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/controllers/topic_viewer.py b/core/controllers/topic_viewer.py
--- a/core/controllers/topic_viewer.py
+++ b/core/controllers/topic_viewer.py
@@ -17,6 +17,7 @@
from constants import constants
from core.controllers import acl_decorators
from core.controllers import base
+from core.domain import skill_services
from core.domain import story_fetchers
from core.domain import topic_fetchers
import feconf
@@ -74,12 +75,26 @@
uncategorized_skill_ids = topic.get_all_uncategorized_skill_ids()
subtopics = topic.get_all_subtopics()
+ assigned_skill_ids = topic.get_all_skill_ids()
+ skill_descriptions = skill_services.get_skill_descriptions_by_ids(
+ topic.id, assigned_skill_ids)
+
+ if self.user_id:
+ degrees_of_mastery = skill_services.get_multi_user_skill_mastery(
+ self.user_id, assigned_skill_ids)
+ else:
+ degrees_of_mastery = {}
+ for skill_id in assigned_skill_ids:
+ degrees_of_mastery[skill_id] = None
+
self.values.update({
'topic_id': topic.id,
'topic_name': topic.name,
'canonical_story_dicts': canonical_story_dicts,
'additional_story_dicts': additional_story_dicts,
'uncategorized_skill_ids': uncategorized_skill_ids,
- 'subtopics': subtopics
+ 'subtopics': subtopics,
+ 'degrees_of_mastery': degrees_of_mastery,
+ 'skill_descriptions': skill_descriptions
})
self.render_json(self.values)
| {"golden_diff": "diff --git a/core/controllers/topic_viewer.py b/core/controllers/topic_viewer.py\n--- a/core/controllers/topic_viewer.py\n+++ b/core/controllers/topic_viewer.py\n@@ -17,6 +17,7 @@\n from constants import constants\n from core.controllers import acl_decorators\n from core.controllers import base\n+from core.domain import skill_services\n from core.domain import story_fetchers\n from core.domain import topic_fetchers\n import feconf\n@@ -74,12 +75,26 @@\n uncategorized_skill_ids = topic.get_all_uncategorized_skill_ids()\n subtopics = topic.get_all_subtopics()\n \n+ assigned_skill_ids = topic.get_all_skill_ids()\n+ skill_descriptions = skill_services.get_skill_descriptions_by_ids(\n+ topic.id, assigned_skill_ids)\n+\n+ if self.user_id:\n+ degrees_of_mastery = skill_services.get_multi_user_skill_mastery(\n+ self.user_id, assigned_skill_ids)\n+ else:\n+ degrees_of_mastery = {}\n+ for skill_id in assigned_skill_ids:\n+ degrees_of_mastery[skill_id] = None\n+\n self.values.update({\n 'topic_id': topic.id,\n 'topic_name': topic.name,\n 'canonical_story_dicts': canonical_story_dicts,\n 'additional_story_dicts': additional_story_dicts,\n 'uncategorized_skill_ids': uncategorized_skill_ids,\n- 'subtopics': subtopics\n+ 'subtopics': subtopics,\n+ 'degrees_of_mastery': degrees_of_mastery,\n+ 'skill_descriptions': skill_descriptions\n })\n self.render_json(self.values)\n", "issue": "Show skill mastery values in the topic viewer\nAdd a skill tab in the topic viewer that will show skill mastery of all skills in that topic (Once we have enough skill mastery information for the skill)\r\n\r\nMilestone 3.2 in @sophiewu6 's GSoC project\n", "before_files": [{"content": "# Copyright 2018 The Oppia Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS-IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Controllers for the topic viewer page.\"\"\"\n\nfrom constants import constants\nfrom core.controllers import acl_decorators\nfrom core.controllers import base\nfrom core.domain import story_fetchers\nfrom core.domain import topic_fetchers\nimport feconf\n\n\nclass TopicViewerPage(base.BaseHandler):\n \"\"\"Renders the topic viewer page.\"\"\"\n\n @acl_decorators.can_access_topic_viewer_page\n def get(self, _):\n \"\"\"Handles GET requests.\"\"\"\n\n if not constants.ENABLE_NEW_STRUCTURE_PLAYERS:\n raise self.PageNotFoundException\n\n self.render_template('dist/topic-viewer-page.mainpage.html')\n\n\nclass TopicPageDataHandler(base.BaseHandler):\n \"\"\"Manages the data that needs to be displayed to a learner on the topic\n viewer page.\n \"\"\"\n GET_HANDLER_ERROR_RETURN_TYPE = feconf.HANDLER_TYPE_JSON\n\n @acl_decorators.can_access_topic_viewer_page\n def get(self, topic_name):\n \"\"\"Handles GET requests.\"\"\"\n\n if not constants.ENABLE_NEW_STRUCTURE_PLAYERS:\n raise self.PageNotFoundException\n\n topic = topic_fetchers.get_topic_by_name(topic_name)\n canonical_story_ids = topic.get_canonical_story_ids(\n include_only_published=True)\n additional_story_ids = topic.get_additional_story_ids(\n 
include_only_published=True)\n canonical_story_summaries = [\n story_fetchers.get_story_summary_by_id(\n canonical_story_id) for canonical_story_id\n in canonical_story_ids]\n\n additional_story_summaries = [\n story_fetchers.get_story_summary_by_id(\n additional_story_id) for additional_story_id\n in additional_story_ids]\n\n canonical_story_dicts = [\n summary.to_human_readable_dict() for summary\n in canonical_story_summaries]\n\n additional_story_dicts = [\n summary.to_human_readable_dict() for summary\n in additional_story_summaries]\n\n uncategorized_skill_ids = topic.get_all_uncategorized_skill_ids()\n subtopics = topic.get_all_subtopics()\n\n self.values.update({\n 'topic_id': topic.id,\n 'topic_name': topic.name,\n 'canonical_story_dicts': canonical_story_dicts,\n 'additional_story_dicts': additional_story_dicts,\n 'uncategorized_skill_ids': uncategorized_skill_ids,\n 'subtopics': subtopics\n })\n self.render_json(self.values)\n", "path": "core/controllers/topic_viewer.py"}]} | 1,374 | 340 |
gh_patches_debug_23586 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-1555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pymongo is not collecting the property: db.mongodb.collection
According to the [specs](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md#call-level-attributes-for-specific-technologies) -
mongodb should capture: "The collection being accessed within the database stated in db.name."
and save it in: `db.mongodb.collection`
**Steps to reproduce**
Instrument a client using PymongoInstrumentor().
Send a request to the db.
**What is the expected behavior?**
Produce a span with `db.mongodb.collection` value containing the collection name.
**What is the actual behavior?**
Produce a span without generating `db.mongodb.collection`.
**Example:**
Here is a simple code example:
```
PymongoInstrumentor().instrument()
client = MongoClient()
RECORD = {"test": "123"}
db = client["MongoDB_Database"]
collection = db["MongoDB_Collection"]
collection.find_one(RECORD)
```
and the result is missing the collection:
```
"attributes": {
"db.system": "mongodb",
"db.name": "MongoDB_Database",
"db.statement": "find",
"net.peer.name": "localhost",
"net.peer.port": 27017
}
```
If you can - assign this to me, thanks :)
</issue>
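For context, the missing attribute can be read straight off the `CommandStartedEvent`: for commands such as `find`, `insert` or `update`, the value stored under the command name is the collection being accessed. A minimal sketch of how `started()` could set it — the `DB_MONGODB_COLLECTION` constant comes from `opentelemetry.semconv.trace`, and the exact guard conditions here are an assumption, not the project's final implementation:

```python
# Hypothetical addition inside CommandTracer.started(), shown out of context.
# For commands like "find"/"insert"/"update", the value keyed by the command
# name is the collection name.
collection = event.command.get(event.command_name)
if span.is_recording() and collection:
    span.set_attribute(
        SpanAttributes.DB_MONGODB_COLLECTION, collection
    )
```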
<code>
[start of instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 The integration with MongoDB supports the `pymongo`_ library, it can be
17 enabled using the ``PymongoInstrumentor``.
18
19 .. _pymongo: https://pypi.org/project/pymongo
20
21 Usage
22 -----
23
24 .. code:: python
25
26 from pymongo import MongoClient
27 from opentelemetry.instrumentation.pymongo import PymongoInstrumentor
28
29 PymongoInstrumentor().instrument()
30 client = MongoClient()
31 db = client["MongoDB_Database"]
32 collection = db["MongoDB_Collection"]
33 collection.find_one()
34
35 API
36 ---
37 The `instrument` method accepts the following keyword args:
38
39 tracer_provider (TracerProvider) - an optional tracer provider
40 request_hook (Callable) -
41 a function with extra user-defined logic to be performed before querying mongodb
42 this function signature is: def request_hook(span: Span, event: CommandStartedEvent) -> None
43 response_hook (Callable) -
44 a function with extra user-defined logic to be performed after the query returns with a successful response
45 this function signature is: def response_hook(span: Span, event: CommandSucceededEvent) -> None
46 failed_hook (Callable) -
47 a function with extra user-defined logic to be performed after the query returns with a failed response
48 this function signature is: def failed_hook(span: Span, event: CommandFailedEvent) -> None
49
50 for example:
51
52 .. code: python
53
54 from opentelemetry.instrumentation.pymongo import PymongoInstrumentor
55 from pymongo import MongoClient
56
57 def request_hook(span, event):
58 # request hook logic
59
60 def response_hook(span, event):
61 # response hook logic
62
63 def failed_hook(span, event):
64 # failed hook logic
65
66 # Instrument pymongo with hooks
67 PymongoInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook, failed_hook=failed_hook)
68
69 # This will create a span with pymongo specific attributes, including custom attributes added from the hooks
70 client = MongoClient()
71 db = client["MongoDB_Database"]
72 collection = db["MongoDB_Collection"]
73 collection.find_one()
74
75 """
76 from logging import getLogger
77 from typing import Callable, Collection
78
79 from pymongo import monitoring
80
81 from opentelemetry import context
82 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
83 from opentelemetry.instrumentation.pymongo.package import _instruments
84 from opentelemetry.instrumentation.pymongo.version import __version__
85 from opentelemetry.instrumentation.utils import _SUPPRESS_INSTRUMENTATION_KEY
86 from opentelemetry.semconv.trace import DbSystemValues, SpanAttributes
87 from opentelemetry.trace import SpanKind, get_tracer
88 from opentelemetry.trace.span import Span
89 from opentelemetry.trace.status import Status, StatusCode
90
91 _LOG = getLogger(__name__)
92
93 RequestHookT = Callable[[Span, monitoring.CommandStartedEvent], None]
94 ResponseHookT = Callable[[Span, monitoring.CommandSucceededEvent], None]
95 FailedHookT = Callable[[Span, monitoring.CommandFailedEvent], None]
96
97
98 def dummy_callback(span, event):
99 ...
100
101
102 class CommandTracer(monitoring.CommandListener):
103 def __init__(
104 self,
105 tracer,
106 request_hook: RequestHookT = dummy_callback,
107 response_hook: ResponseHookT = dummy_callback,
108 failed_hook: FailedHookT = dummy_callback,
109 ):
110 self._tracer = tracer
111 self._span_dict = {}
112 self.is_enabled = True
113 self.start_hook = request_hook
114 self.success_hook = response_hook
115 self.failed_hook = failed_hook
116
117 def started(self, event: monitoring.CommandStartedEvent):
118 """Method to handle a pymongo CommandStartedEvent"""
119 if not self.is_enabled or context.get_value(
120 _SUPPRESS_INSTRUMENTATION_KEY
121 ):
122 return
123 command = event.command.get(event.command_name, "")
124 name = event.database_name
125 name += "." + event.command_name
126 statement = event.command_name
127 if command:
128 statement += " " + str(command)
129
130 try:
131 span = self._tracer.start_span(name, kind=SpanKind.CLIENT)
132 if span.is_recording():
133 span.set_attribute(
134 SpanAttributes.DB_SYSTEM, DbSystemValues.MONGODB.value
135 )
136 span.set_attribute(SpanAttributes.DB_NAME, event.database_name)
137 span.set_attribute(SpanAttributes.DB_STATEMENT, statement)
138 if event.connection_id is not None:
139 span.set_attribute(
140 SpanAttributes.NET_PEER_NAME, event.connection_id[0]
141 )
142 span.set_attribute(
143 SpanAttributes.NET_PEER_PORT, event.connection_id[1]
144 )
145 try:
146 self.start_hook(span, event)
147 except Exception as hook_exception: # noqa pylint: disable=broad-except
148 _LOG.exception(hook_exception)
149
150 # Add Span to dictionary
151 self._span_dict[_get_span_dict_key(event)] = span
152 except Exception as ex: # noqa pylint: disable=broad-except
153 if span is not None and span.is_recording():
154 span.set_status(Status(StatusCode.ERROR, str(ex)))
155 span.end()
156 self._pop_span(event)
157
158 def succeeded(self, event: monitoring.CommandSucceededEvent):
159 """Method to handle a pymongo CommandSucceededEvent"""
160 if not self.is_enabled or context.get_value(
161 _SUPPRESS_INSTRUMENTATION_KEY
162 ):
163 return
164 span = self._pop_span(event)
165 if span is None:
166 return
167 if span.is_recording():
168 try:
169 self.success_hook(span, event)
170 except Exception as hook_exception: # noqa pylint: disable=broad-except
171 _LOG.exception(hook_exception)
172 span.end()
173
174 def failed(self, event: monitoring.CommandFailedEvent):
175 """Method to handle a pymongo CommandFailedEvent"""
176 if not self.is_enabled or context.get_value(
177 _SUPPRESS_INSTRUMENTATION_KEY
178 ):
179 return
180 span = self._pop_span(event)
181 if span is None:
182 return
183 if span.is_recording():
184 span.set_status(Status(StatusCode.ERROR, event.failure))
185 try:
186 self.failed_hook(span, event)
187 except Exception as hook_exception: # noqa pylint: disable=broad-except
188 _LOG.exception(hook_exception)
189 span.end()
190
191 def _pop_span(self, event):
192 return self._span_dict.pop(_get_span_dict_key(event), None)
193
194
195 def _get_span_dict_key(event):
196 if event.connection_id is not None:
197 return event.request_id, event.connection_id
198 return event.request_id
199
200
201 class PymongoInstrumentor(BaseInstrumentor):
202 _commandtracer_instance = None # type CommandTracer
203 # The instrumentation for PyMongo is based on the event listener interface
204 # https://api.mongodb.com/python/current/api/pymongo/monitoring.html.
205 # This interface only allows to register listeners and does not provide
206 # an unregister API. In order to provide a mechanishm to disable
207 # instrumentation an enabled flag is implemented in CommandTracer,
208 # it's checked in the different listeners.
209
210 def instrumentation_dependencies(self) -> Collection[str]:
211 return _instruments
212
213 def _instrument(self, **kwargs):
214 """Integrate with pymongo to trace it using event listener.
215 https://api.mongodb.com/python/current/api/pymongo/monitoring.html
216
217 Args:
218 tracer_provider: The `TracerProvider` to use. If none is passed the
219 current configured one is used.
220 """
221
222 tracer_provider = kwargs.get("tracer_provider")
223 request_hook = kwargs.get("request_hook", dummy_callback)
224 response_hook = kwargs.get("response_hook", dummy_callback)
225 failed_hook = kwargs.get("failed_hook", dummy_callback)
226 # Create and register a CommandTracer only the first time
227 if self._commandtracer_instance is None:
228 tracer = get_tracer(__name__, __version__, tracer_provider)
229
230 self._commandtracer_instance = CommandTracer(
231 tracer,
232 request_hook=request_hook,
233 response_hook=response_hook,
234 failed_hook=failed_hook,
235 )
236 monitoring.register(self._commandtracer_instance)
237 # If already created, just enable it
238 self._commandtracer_instance.is_enabled = True
239
240 def _uninstrument(self, **kwargs):
241 if self._commandtracer_instance is not None:
242 self._commandtracer_instance.is_enabled = False
243
[end of instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py
@@ -126,6 +126,7 @@
statement = event.command_name
if command:
statement += " " + str(command)
+ collection = event.command.get(event.command_name)
try:
span = self._tracer.start_span(name, kind=SpanKind.CLIENT)
@@ -135,6 +136,10 @@
)
span.set_attribute(SpanAttributes.DB_NAME, event.database_name)
span.set_attribute(SpanAttributes.DB_STATEMENT, statement)
+ if collection:
+ span.set_attribute(
+ SpanAttributes.DB_MONGODB_COLLECTION, collection
+ )
if event.connection_id is not None:
span.set_attribute(
SpanAttributes.NET_PEER_NAME, event.connection_id[0]
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py\n@@ -126,6 +126,7 @@\n statement = event.command_name\n if command:\n statement += \" \" + str(command)\n+ collection = event.command.get(event.command_name)\n \n try:\n span = self._tracer.start_span(name, kind=SpanKind.CLIENT)\n@@ -135,6 +136,10 @@\n )\n span.set_attribute(SpanAttributes.DB_NAME, event.database_name)\n span.set_attribute(SpanAttributes.DB_STATEMENT, statement)\n+ if collection:\n+ span.set_attribute(\n+ SpanAttributes.DB_MONGODB_COLLECTION, collection\n+ )\n if event.connection_id is not None:\n span.set_attribute(\n SpanAttributes.NET_PEER_NAME, event.connection_id[0]\n", "issue": "pymongo is not collecting the property: db.mongodb.collection\nAccording to the [specs](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/trace/semantic_conventions/database.md#call-level-attributes-for-specific-technologies) - \r\nmongodb should capture: \"The collection being accessed within the database stated in db.name.\"\r\nand save it in: `db.mongodb.collection`\r\n\r\n**Steps to reproduce**\r\nInstrument a client using PymongoInstrumentor().\r\nSend a request to the db.\r\n\r\n**What is the expected behavior?**\r\nProduce a span with `db.mongodb.collection` value containing the collection name.\r\n\r\n**What is the actual behavior?**\r\nProduce a span without generating `db.mongodb.collection`.\r\n\r\n**Example:**\r\nHere is a simple code example:\r\n\r\n```\r\nPymongoInstrumentor().instrument()\r\nclient = MongoClient()\r\nRECORD = {\"test\": \"123\"}\r\ndb = client[\"MongoDB_Database\"]\r\ncollection = db[\"MongoDB_Collection\"]\r\ncollection.find_one(RECORD)\r\n```\r\n\r\nand the result is missing the collection:\r\n```\r\n\"attributes\": {\r\n \"db.system\": \"mongodb\",\r\n \"db.name\": \"MongoDB_Database\",\r\n \"db.statement\": \"find\",\r\n \"net.peer.name\": \"localhost\",\r\n \"net.peer.port\": 27017\r\n }\r\n```\r\n\r\nIf you can - assign this to me, thanks :)\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThe integration with MongoDB supports the `pymongo`_ library, it can be\nenabled using the ``PymongoInstrumentor``.\n\n.. _pymongo: https://pypi.org/project/pymongo\n\nUsage\n-----\n\n.. 
code:: python\n\n from pymongo import MongoClient\n from opentelemetry.instrumentation.pymongo import PymongoInstrumentor\n\n PymongoInstrumentor().instrument()\n client = MongoClient()\n db = client[\"MongoDB_Database\"]\n collection = db[\"MongoDB_Collection\"]\n collection.find_one()\n\nAPI\n---\nThe `instrument` method accepts the following keyword args:\n\ntracer_provider (TracerProvider) - an optional tracer provider\nrequest_hook (Callable) -\na function with extra user-defined logic to be performed before querying mongodb\nthis function signature is: def request_hook(span: Span, event: CommandStartedEvent) -> None\nresponse_hook (Callable) -\na function with extra user-defined logic to be performed after the query returns with a successful response\nthis function signature is: def response_hook(span: Span, event: CommandSucceededEvent) -> None\nfailed_hook (Callable) -\na function with extra user-defined logic to be performed after the query returns with a failed response\nthis function signature is: def failed_hook(span: Span, event: CommandFailedEvent) -> None\n\nfor example:\n\n.. code: python\n\n from opentelemetry.instrumentation.pymongo import PymongoInstrumentor\n from pymongo import MongoClient\n\n def request_hook(span, event):\n # request hook logic\n\n def response_hook(span, event):\n # response hook logic\n\n def failed_hook(span, event):\n # failed hook logic\n\n # Instrument pymongo with hooks\n PymongoInstrumentor().instrument(request_hook=request_hook, response_hook=response_hook, failed_hook=failed_hook)\n\n # This will create a span with pymongo specific attributes, including custom attributes added from the hooks\n client = MongoClient()\n db = client[\"MongoDB_Database\"]\n collection = db[\"MongoDB_Collection\"]\n collection.find_one()\n\n\"\"\"\nfrom logging import getLogger\nfrom typing import Callable, Collection\n\nfrom pymongo import monitoring\n\nfrom opentelemetry import context\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.instrumentation.pymongo.package import _instruments\nfrom opentelemetry.instrumentation.pymongo.version import __version__\nfrom opentelemetry.instrumentation.utils import _SUPPRESS_INSTRUMENTATION_KEY\nfrom opentelemetry.semconv.trace import DbSystemValues, SpanAttributes\nfrom opentelemetry.trace import SpanKind, get_tracer\nfrom opentelemetry.trace.span import Span\nfrom opentelemetry.trace.status import Status, StatusCode\n\n_LOG = getLogger(__name__)\n\nRequestHookT = Callable[[Span, monitoring.CommandStartedEvent], None]\nResponseHookT = Callable[[Span, monitoring.CommandSucceededEvent], None]\nFailedHookT = Callable[[Span, monitoring.CommandFailedEvent], None]\n\n\ndef dummy_callback(span, event):\n ...\n\n\nclass CommandTracer(monitoring.CommandListener):\n def __init__(\n self,\n tracer,\n request_hook: RequestHookT = dummy_callback,\n response_hook: ResponseHookT = dummy_callback,\n failed_hook: FailedHookT = dummy_callback,\n ):\n self._tracer = tracer\n self._span_dict = {}\n self.is_enabled = True\n self.start_hook = request_hook\n self.success_hook = response_hook\n self.failed_hook = failed_hook\n\n def started(self, event: monitoring.CommandStartedEvent):\n \"\"\"Method to handle a pymongo CommandStartedEvent\"\"\"\n if not self.is_enabled or context.get_value(\n _SUPPRESS_INSTRUMENTATION_KEY\n ):\n return\n command = event.command.get(event.command_name, \"\")\n name = event.database_name\n name += \".\" + event.command_name\n statement = event.command_name\n if 
command:\n statement += \" \" + str(command)\n\n try:\n span = self._tracer.start_span(name, kind=SpanKind.CLIENT)\n if span.is_recording():\n span.set_attribute(\n SpanAttributes.DB_SYSTEM, DbSystemValues.MONGODB.value\n )\n span.set_attribute(SpanAttributes.DB_NAME, event.database_name)\n span.set_attribute(SpanAttributes.DB_STATEMENT, statement)\n if event.connection_id is not None:\n span.set_attribute(\n SpanAttributes.NET_PEER_NAME, event.connection_id[0]\n )\n span.set_attribute(\n SpanAttributes.NET_PEER_PORT, event.connection_id[1]\n )\n try:\n self.start_hook(span, event)\n except Exception as hook_exception: # noqa pylint: disable=broad-except\n _LOG.exception(hook_exception)\n\n # Add Span to dictionary\n self._span_dict[_get_span_dict_key(event)] = span\n except Exception as ex: # noqa pylint: disable=broad-except\n if span is not None and span.is_recording():\n span.set_status(Status(StatusCode.ERROR, str(ex)))\n span.end()\n self._pop_span(event)\n\n def succeeded(self, event: monitoring.CommandSucceededEvent):\n \"\"\"Method to handle a pymongo CommandSucceededEvent\"\"\"\n if not self.is_enabled or context.get_value(\n _SUPPRESS_INSTRUMENTATION_KEY\n ):\n return\n span = self._pop_span(event)\n if span is None:\n return\n if span.is_recording():\n try:\n self.success_hook(span, event)\n except Exception as hook_exception: # noqa pylint: disable=broad-except\n _LOG.exception(hook_exception)\n span.end()\n\n def failed(self, event: monitoring.CommandFailedEvent):\n \"\"\"Method to handle a pymongo CommandFailedEvent\"\"\"\n if not self.is_enabled or context.get_value(\n _SUPPRESS_INSTRUMENTATION_KEY\n ):\n return\n span = self._pop_span(event)\n if span is None:\n return\n if span.is_recording():\n span.set_status(Status(StatusCode.ERROR, event.failure))\n try:\n self.failed_hook(span, event)\n except Exception as hook_exception: # noqa pylint: disable=broad-except\n _LOG.exception(hook_exception)\n span.end()\n\n def _pop_span(self, event):\n return self._span_dict.pop(_get_span_dict_key(event), None)\n\n\ndef _get_span_dict_key(event):\n if event.connection_id is not None:\n return event.request_id, event.connection_id\n return event.request_id\n\n\nclass PymongoInstrumentor(BaseInstrumentor):\n _commandtracer_instance = None # type CommandTracer\n # The instrumentation for PyMongo is based on the event listener interface\n # https://api.mongodb.com/python/current/api/pymongo/monitoring.html.\n # This interface only allows to register listeners and does not provide\n # an unregister API. In order to provide a mechanishm to disable\n # instrumentation an enabled flag is implemented in CommandTracer,\n # it's checked in the different listeners.\n\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n \"\"\"Integrate with pymongo to trace it using event listener.\n https://api.mongodb.com/python/current/api/pymongo/monitoring.html\n\n Args:\n tracer_provider: The `TracerProvider` to use. 
If none is passed the\n current configured one is used.\n \"\"\"\n\n tracer_provider = kwargs.get(\"tracer_provider\")\n request_hook = kwargs.get(\"request_hook\", dummy_callback)\n response_hook = kwargs.get(\"response_hook\", dummy_callback)\n failed_hook = kwargs.get(\"failed_hook\", dummy_callback)\n # Create and register a CommandTracer only the first time\n if self._commandtracer_instance is None:\n tracer = get_tracer(__name__, __version__, tracer_provider)\n\n self._commandtracer_instance = CommandTracer(\n tracer,\n request_hook=request_hook,\n response_hook=response_hook,\n failed_hook=failed_hook,\n )\n monitoring.register(self._commandtracer_instance)\n # If already created, just enable it\n self._commandtracer_instance.is_enabled = True\n\n def _uninstrument(self, **kwargs):\n if self._commandtracer_instance is not None:\n self._commandtracer_instance.is_enabled = False\n", "path": "instrumentation/opentelemetry-instrumentation-pymongo/src/opentelemetry/instrumentation/pymongo/__init__.py"}]} | 3,398 | 277 |
gh_patches_debug_14241 | rasdani/github-patches | git_diff | aws-cloudformation__cfn-lint-1774 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect rule: "E1019: Sub parameter should be an object of 1 or string for..."
*cfn-lint version: 0.40.0*
I am getting an incorrect error that [`E1019`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/rules.md#E1019) `Sub parameter should be an object of 1 or string for...` when using YAML:
```
- Fn::Sub:
- 'example-${Var}-${Var2}'
- Var: 123
Var2: 456
```
Official docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html
Official docs sample:
```
Fn::Sub:
- String
- Var1Name: Var1Value
Var2Name: Var2Value
```
</issue>
<code>
[start of src/cfnlint/rules/functions/Sub.py]
1 """
2 Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
3 SPDX-License-Identifier: MIT-0
4 """
5 import six
6 from cfnlint.helpers import PSEUDOPARAMS, VALID_PARAMETER_TYPES_LIST
7 from cfnlint.rules import CloudFormationLintRule
8 from cfnlint.rules import RuleMatch
9
10
11 class Sub(CloudFormationLintRule):
12 """Check if Sub values are correct"""
13 id = 'E1019'
14 shortdesc = 'Sub validation of parameters'
15 description = 'Making sure the sub function is properly configured'
16 source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'
17 tags = ['functions', 'sub']
18
19 def _test_string(self, cfn, sub_string, parameters, tree):
20 """Test if a string has appropriate parameters"""
21
22 matches = []
23 string_params = cfn.get_sub_parameters(sub_string)
24
25 for string_param in string_params:
26 if isinstance(string_param, (six.string_types)):
27 matches.extend(self._test_parameter(string_param, cfn, parameters, tree))
28
29 return matches
30
31 def _get_parameters(self, cfn):
32 """Get all Parameter Names"""
33 results = {}
34 parameters = cfn.template.get('Parameters', {})
35 if isinstance(parameters, dict):
36 for param_name, param_values in parameters.items():
37 # This rule isn't here to check the Types but we need
38 # something valid if it doesn't exist
39 results[param_name] = param_values.get('Type', 'String')
40
41 return results
42
43 def _test_parameters(self, parameters, cfn, tree):
44 """Check parameters for appropriate configuration"""
45
46 supported_functions = [
47 'Fn::Base64',
48 'Fn::FindInMap',
49 'Fn::GetAZs',
50 'Fn::GetAtt',
51 'Fn::If',
52 'Fn::ImportValue',
53 'Fn::Join',
54 'Fn::Select',
55 'Fn::Sub',
56 'Ref',
57 ]
58
59 matches = []
60 for parameter_name, parameter_value_obj in parameters.items():
61 param_tree = tree[:] + [parameter_name]
62 if isinstance(parameter_value_obj, dict):
63 if len(parameter_value_obj) == 1:
64 for key, value in parameter_value_obj.items():
65 if key not in supported_functions:
66 message = 'Sub parameter should use a valid function for {0}'
67 matches.append(RuleMatch(
68 param_tree, message.format('/'.join(map(str, tree)))))
69 elif key in ['Ref']:
70 matches.extend(self._test_parameter(value, cfn, {}, tree))
71 elif key in ['Fn::GetAtt']:
72 if isinstance(value, list):
73 # Only test this if all the items are a string
74 if_all_strings = True
75 for v in value:
76 if not isinstance(v, six.string_types):
77 # skip things got too complex
78 if_all_strings = False
79 if if_all_strings:
80 matches.extend(self._test_parameter(
81 '.'.join(value), cfn, {}, tree))
82 elif isinstance(value, six.string_types):
83 matches.extend(self._test_parameter(value, cfn, {}, tree))
84 else:
85 message = 'Sub parameter should be an object of 1 for {0}'
86 matches.append(RuleMatch(
87 param_tree, message.format('/'.join(map(str, tree)))))
88 elif not isinstance(parameter_value_obj, six.string_types):
89 message = 'Sub parameter should be an object of 1 or string for {0}'
90 matches.append(RuleMatch(
91 param_tree, message.format('/'.join(map(str, tree)))))
92
93 return matches
94
95 def _test_parameter(self, parameter, cfn, parameters, tree):
96 """ Test a parameter """
97
98 matches = []
99 get_atts = cfn.get_valid_getatts()
100
101 valid_params = list(PSEUDOPARAMS)
102 valid_params.extend(cfn.get_resource_names())
103 template_parameters = self._get_parameters(cfn)
104
105 for key, _ in parameters.items():
106 valid_params.append(key)
107
108 if parameter not in valid_params:
109 found = False
110 if parameter in template_parameters:
111 found = True
112 if template_parameters.get(parameter) in VALID_PARAMETER_TYPES_LIST:
113 message = 'Fn::Sub cannot use list {0} at {1}'
114 matches.append(RuleMatch(
115 tree, message.format(parameter, '/'.join(map(str, tree)))))
116 for resource, attributes in get_atts.items():
117 for attribute_name, attribute_values in attributes.items():
118 if resource == parameter.split('.')[0]:
119 if attribute_name == '*':
120 found = True
121 elif attribute_name == '.'.join(parameter.split('.')[1:]):
122 if attribute_values.get('Type') == 'List':
123 message = 'Fn::Sub cannot use list {0} at {1}'
124 matches.append(RuleMatch(
125 tree, message.format(parameter, '/'.join(map(str, tree)))))
126 found = True
127 else:
128 if attribute_name == parameter.split('.')[1] and attribute_values.get('Type') == 'Map':
129 found = True
130
131 if not found:
132 message = 'Parameter {0} for Fn::Sub not found at {1}'
133 matches.append(RuleMatch(
134 tree, message.format(parameter, '/'.join(map(str, tree)))))
135
136 return matches
137
138 def match(self, cfn):
139 matches = []
140
141 sub_objs = cfn.search_deep_keys('Fn::Sub')
142
143 for sub_obj in sub_objs:
144 sub_value_obj = sub_obj[-1]
145 tree = sub_obj[:-1]
146 if isinstance(sub_value_obj, six.string_types):
147 matches.extend(self._test_string(cfn, sub_value_obj, {}, tree))
148 elif isinstance(sub_value_obj, list):
149 if len(sub_value_obj) == 2:
150 sub_string = sub_value_obj[0]
151 parameters = sub_value_obj[1]
152 if not isinstance(sub_string, six.string_types):
153 message = 'Subs first element should be of type string for {0}'
154 matches.append(RuleMatch(
155 tree + [0], message.format('/'.join(map(str, tree)))))
156 if not isinstance(parameters, dict):
157 message = 'Subs second element should be an object for {0}'
158 matches.append(RuleMatch(
159 tree + [1], message.format('/'.join(map(str, tree)))))
160 else:
161 matches.extend(self._test_string(cfn, sub_string, parameters, tree + [0]))
162 matches.extend(self._test_parameters(parameters, cfn, tree))
163 else:
164 message = 'Sub should be an array of 2 for {0}'
165 matches.append(RuleMatch(
166 tree, message.format('/'.join(map(str, tree)))))
167 elif isinstance(sub_value_obj, dict):
168 if len(sub_value_obj) == 1:
169 for key, _ in sub_value_obj.items():
170 if not key == 'Fn::Transform':
171 message = 'Sub should be a string or array of 2 items for {0}'
172 matches.append(RuleMatch(
173 tree, message.format('/'.join(map(str, tree)))))
174 else:
175 message = 'Sub should be a string or array of 2 items for {0}'
176 matches.append(RuleMatch(
177 tree, message.format('/'.join(map(str, tree)))))
178 else:
179 message = 'Sub should be a string or array of 2 items for {0}'
180 matches.append(RuleMatch(
181 tree, message.format('/'.join(map(str, tree)))))
182
183 return matches
184
[end of src/cfnlint/rules/functions/Sub.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cfnlint/rules/functions/Sub.py b/src/cfnlint/rules/functions/Sub.py
--- a/src/cfnlint/rules/functions/Sub.py
+++ b/src/cfnlint/rules/functions/Sub.py
@@ -85,8 +85,8 @@
message = 'Sub parameter should be an object of 1 for {0}'
matches.append(RuleMatch(
param_tree, message.format('/'.join(map(str, tree)))))
- elif not isinstance(parameter_value_obj, six.string_types):
- message = 'Sub parameter should be an object of 1 or string for {0}'
+ elif isinstance(parameter_value_obj, list):
+ message = 'Sub parameter value should be a string for {0}'
matches.append(RuleMatch(
param_tree, message.format('/'.join(map(str, tree)))))
| {"golden_diff": "diff --git a/src/cfnlint/rules/functions/Sub.py b/src/cfnlint/rules/functions/Sub.py\n--- a/src/cfnlint/rules/functions/Sub.py\n+++ b/src/cfnlint/rules/functions/Sub.py\n@@ -85,8 +85,8 @@\n message = 'Sub parameter should be an object of 1 for {0}'\n matches.append(RuleMatch(\n param_tree, message.format('/'.join(map(str, tree)))))\n- elif not isinstance(parameter_value_obj, six.string_types):\n- message = 'Sub parameter should be an object of 1 or string for {0}'\n+ elif isinstance(parameter_value_obj, list):\n+ message = 'Sub parameter value should be a string for {0}'\n matches.append(RuleMatch(\n param_tree, message.format('/'.join(map(str, tree)))))\n", "issue": "Incorrect rule: \"E1019: Sub parameter should be an object of 1 or string for...\"\n*cfn-lint version: 0.40.0*\r\n\r\nI am getting an incorrect error that [`E1019`](https://github.com/aws-cloudformation/cfn-python-lint/blob/master/docs/rules.md#E1019)` Sub parameter should be an object of 1 or string for...` when using YAML:\r\n\r\n```\r\n - Fn::Sub:\r\n - 'example-${Var}-${Var2}'\r\n - Var: 123\r\n Var2: 456\r\n```\r\n\r\nOfficial docs: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html\r\n\r\nOfficial docs sample:\r\n```\r\nFn::Sub:\r\n - String\r\n - Var1Name: Var1Value\r\n Var2Name: Var2Value\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nCopyright Amazon.com, Inc. or its affiliates. All Rights Reserved.\nSPDX-License-Identifier: MIT-0\n\"\"\"\nimport six\nfrom cfnlint.helpers import PSEUDOPARAMS, VALID_PARAMETER_TYPES_LIST\nfrom cfnlint.rules import CloudFormationLintRule\nfrom cfnlint.rules import RuleMatch\n\n\nclass Sub(CloudFormationLintRule):\n \"\"\"Check if Sub values are correct\"\"\"\n id = 'E1019'\n shortdesc = 'Sub validation of parameters'\n description = 'Making sure the sub function is properly configured'\n source_url = 'https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference-sub.html'\n tags = ['functions', 'sub']\n\n def _test_string(self, cfn, sub_string, parameters, tree):\n \"\"\"Test if a string has appropriate parameters\"\"\"\n\n matches = []\n string_params = cfn.get_sub_parameters(sub_string)\n\n for string_param in string_params:\n if isinstance(string_param, (six.string_types)):\n matches.extend(self._test_parameter(string_param, cfn, parameters, tree))\n\n return matches\n\n def _get_parameters(self, cfn):\n \"\"\"Get all Parameter Names\"\"\"\n results = {}\n parameters = cfn.template.get('Parameters', {})\n if isinstance(parameters, dict):\n for param_name, param_values in parameters.items():\n # This rule isn't here to check the Types but we need\n # something valid if it doesn't exist\n results[param_name] = param_values.get('Type', 'String')\n\n return results\n\n def _test_parameters(self, parameters, cfn, tree):\n \"\"\"Check parameters for appropriate configuration\"\"\"\n\n supported_functions = [\n 'Fn::Base64',\n 'Fn::FindInMap',\n 'Fn::GetAZs',\n 'Fn::GetAtt',\n 'Fn::If',\n 'Fn::ImportValue',\n 'Fn::Join',\n 'Fn::Select',\n 'Fn::Sub',\n 'Ref',\n ]\n\n matches = []\n for parameter_name, parameter_value_obj in parameters.items():\n param_tree = tree[:] + [parameter_name]\n if isinstance(parameter_value_obj, dict):\n if len(parameter_value_obj) == 1:\n for key, value in parameter_value_obj.items():\n if key not in supported_functions:\n message = 'Sub parameter should use a valid function for {0}'\n matches.append(RuleMatch(\n param_tree, message.format('/'.join(map(str, 
tree)))))\n elif key in ['Ref']:\n matches.extend(self._test_parameter(value, cfn, {}, tree))\n elif key in ['Fn::GetAtt']:\n if isinstance(value, list):\n # Only test this if all the items are a string\n if_all_strings = True\n for v in value:\n if not isinstance(v, six.string_types):\n # skip things got too complex\n if_all_strings = False\n if if_all_strings:\n matches.extend(self._test_parameter(\n '.'.join(value), cfn, {}, tree))\n elif isinstance(value, six.string_types):\n matches.extend(self._test_parameter(value, cfn, {}, tree))\n else:\n message = 'Sub parameter should be an object of 1 for {0}'\n matches.append(RuleMatch(\n param_tree, message.format('/'.join(map(str, tree)))))\n elif not isinstance(parameter_value_obj, six.string_types):\n message = 'Sub parameter should be an object of 1 or string for {0}'\n matches.append(RuleMatch(\n param_tree, message.format('/'.join(map(str, tree)))))\n\n return matches\n\n def _test_parameter(self, parameter, cfn, parameters, tree):\n \"\"\" Test a parameter \"\"\"\n\n matches = []\n get_atts = cfn.get_valid_getatts()\n\n valid_params = list(PSEUDOPARAMS)\n valid_params.extend(cfn.get_resource_names())\n template_parameters = self._get_parameters(cfn)\n\n for key, _ in parameters.items():\n valid_params.append(key)\n\n if parameter not in valid_params:\n found = False\n if parameter in template_parameters:\n found = True\n if template_parameters.get(parameter) in VALID_PARAMETER_TYPES_LIST:\n message = 'Fn::Sub cannot use list {0} at {1}'\n matches.append(RuleMatch(\n tree, message.format(parameter, '/'.join(map(str, tree)))))\n for resource, attributes in get_atts.items():\n for attribute_name, attribute_values in attributes.items():\n if resource == parameter.split('.')[0]:\n if attribute_name == '*':\n found = True\n elif attribute_name == '.'.join(parameter.split('.')[1:]):\n if attribute_values.get('Type') == 'List':\n message = 'Fn::Sub cannot use list {0} at {1}'\n matches.append(RuleMatch(\n tree, message.format(parameter, '/'.join(map(str, tree)))))\n found = True\n else:\n if attribute_name == parameter.split('.')[1] and attribute_values.get('Type') == 'Map':\n found = True\n\n if not found:\n message = 'Parameter {0} for Fn::Sub not found at {1}'\n matches.append(RuleMatch(\n tree, message.format(parameter, '/'.join(map(str, tree)))))\n\n return matches\n\n def match(self, cfn):\n matches = []\n\n sub_objs = cfn.search_deep_keys('Fn::Sub')\n\n for sub_obj in sub_objs:\n sub_value_obj = sub_obj[-1]\n tree = sub_obj[:-1]\n if isinstance(sub_value_obj, six.string_types):\n matches.extend(self._test_string(cfn, sub_value_obj, {}, tree))\n elif isinstance(sub_value_obj, list):\n if len(sub_value_obj) == 2:\n sub_string = sub_value_obj[0]\n parameters = sub_value_obj[1]\n if not isinstance(sub_string, six.string_types):\n message = 'Subs first element should be of type string for {0}'\n matches.append(RuleMatch(\n tree + [0], message.format('/'.join(map(str, tree)))))\n if not isinstance(parameters, dict):\n message = 'Subs second element should be an object for {0}'\n matches.append(RuleMatch(\n tree + [1], message.format('/'.join(map(str, tree)))))\n else:\n matches.extend(self._test_string(cfn, sub_string, parameters, tree + [0]))\n matches.extend(self._test_parameters(parameters, cfn, tree))\n else:\n message = 'Sub should be an array of 2 for {0}'\n matches.append(RuleMatch(\n tree, message.format('/'.join(map(str, tree)))))\n elif isinstance(sub_value_obj, dict):\n if len(sub_value_obj) == 1:\n for key, _ in 
sub_value_obj.items():\n if not key == 'Fn::Transform':\n message = 'Sub should be a string or array of 2 items for {0}'\n matches.append(RuleMatch(\n tree, message.format('/'.join(map(str, tree)))))\n else:\n message = 'Sub should be a string or array of 2 items for {0}'\n matches.append(RuleMatch(\n tree, message.format('/'.join(map(str, tree)))))\n else:\n message = 'Sub should be a string or array of 2 items for {0}'\n matches.append(RuleMatch(\n tree, message.format('/'.join(map(str, tree)))))\n\n return matches\n", "path": "src/cfnlint/rules/functions/Sub.py"}]} | 2,773 | 174 |
gh_patches_debug_34814 | rasdani/github-patches | git_diff | dynaconf__dynaconf-825 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[RFC] Support multidoc yaml files
**Is your feature request related to a problem? Please describe.**
Sometimes it can be difficult or impossible to pass multiple files with config fragments. YAML supports multiple documents in one file, and `safe_load_all` from the PyYAML API loads them accordingly. It is a standard YAML feature; it would be nice to support it and make it usable in cases where passing one file (composited from several files) would be easier.
**Describe the solution you'd like**
Support `safe_load_all` as yaml loader.
**Describe alternatives you've considered**
Passing multiple files will do the job; however, it is not always straightforward.
**Additional context**
I have prepared a patch
</issue>
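For reference, the multi-document behaviour being requested is plain YAML semantics: `safe_load_all` yields one Python object per `---`-separated document. A small self-contained sketch using stock PyYAML (the document contents and keys are made up for illustration; dynaconf's vendored ruamel module exposes the same loader names):

```python
import yaml

multidoc = """\
---
default:
  name: fragment-one
---
default:
  port: 8080
"""

# safe_load_all returns a generator with one dict per YAML document
for doc in yaml.safe_load_all(multidoc):
    print(doc)
# {'default': {'name': 'fragment-one'}}
# {'default': {'port': 8080}}
```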
<code>
[start of dynaconf/loaders/yaml_loader.py]
1 from __future__ import annotations
2
3 import sys
4 from pathlib import Path
5 from typing import TextIO
6 from warnings import warn
7
8 from dynaconf import default_settings
9 from dynaconf.constants import YAML_EXTENSIONS
10 from dynaconf.loaders.base import BaseLoader
11 from dynaconf.utils import object_merge
12 from dynaconf.utils.parse_conf import try_to_encode
13 from dynaconf.vendor.ruamel import yaml
14
15 # Add support for Dynaconf Lazy values to YAML dumper
16 yaml.SafeDumper.yaml_representers[
17 None
18 ] = lambda self, data: yaml.representer.SafeRepresenter.represent_str(
19 self, try_to_encode(data)
20 )
21
22
23 def load(obj, env=None, silent=True, key=None, filename=None, validate=False):
24 """
25 Reads and loads in to "obj" a single key or all keys from source file.
26
27 :param obj: the settings instance
28 :param env: settings current env default='development'
29 :param silent: if errors should raise
30 :param key: if defined load a single key, else load all in env
31 :param filename: Optional custom filename to load
32 :return: None
33 """
34 # Resolve the loaders
35 # https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation
36 # Possible values are `safe_load, full_load, unsafe_load, load`
37 yaml_reader = getattr(
38 yaml, obj.get("YAML_LOADER_FOR_DYNACONF"), yaml.safe_load
39 )
40 if yaml_reader.__name__ == "unsafe_load": # pragma: no cover
41 warn(
42 "yaml.unsafe_load is deprecated."
43 " Please read https://msg.pyyaml.org/load for full details."
44 " Try to use full_load or safe_load."
45 )
46
47 loader = BaseLoader(
48 obj=obj,
49 env=env,
50 identifier="yaml",
51 extensions=YAML_EXTENSIONS,
52 file_reader=yaml_reader,
53 string_reader=yaml_reader,
54 validate=validate,
55 )
56 loader.load(
57 filename=filename,
58 key=key,
59 silent=silent,
60 )
61
62
63 def write(settings_path, settings_data, merge=True):
64 """Write data to a settings file.
65
66 :param settings_path: the filepath
67 :param settings_data: a dictionary with data
68 :param merge: boolean if existing file should be merged with new data
69 :param stdout: boolean if should output to stdout instead of file
70 """
71 settings_path = Path(settings_path)
72 if settings_path.exists() and merge: # pragma: no cover
73 with open(
74 str(settings_path), encoding=default_settings.ENCODING_FOR_DYNACONF
75 ) as open_file:
76 object_merge(yaml.safe_load(open_file), settings_data)
77
78 with open(
79 str(settings_path),
80 "w",
81 encoding=default_settings.ENCODING_FOR_DYNACONF,
82 ) as open_file:
83 yaml.dump(
84 settings_data,
85 open_file,
86 Dumper=yaml.dumper.SafeDumper,
87 explicit_start=True,
88 indent=2,
89 default_flow_style=False,
90 )
91
[end of dynaconf/loaders/yaml_loader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dynaconf/loaders/yaml_loader.py b/dynaconf/loaders/yaml_loader.py
--- a/dynaconf/loaders/yaml_loader.py
+++ b/dynaconf/loaders/yaml_loader.py
@@ -20,6 +20,41 @@
)
+class AllLoader(BaseLoader):
+ """YAML Loader to load multi doc files"""
+
+ @staticmethod
+ def _assign_data(data, source_file, content):
+ """Helper to iterate through all docs in a file"""
+ content = tuple(content)
+ if len(content) == 1:
+ data[source_file] = content[0]
+ elif len(content) > 1:
+ for i, doc in enumerate(content):
+ data[f"{source_file}[{i}]"] = doc
+
+ def get_source_data(self, files):
+ data = {}
+ for source_file in files:
+ if source_file.endswith(self.extensions):
+ try:
+ with open(source_file, **self.opener_params) as open_file:
+ content = self.file_reader(open_file)
+ self.obj._loaded_files.append(source_file)
+ self._assign_data(data, source_file, content)
+ except OSError as e:
+ if ".local." not in source_file:
+ warn(
+ f"{self.identifier}_loader: {source_file} "
+ f":{str(e)}"
+ )
+ else:
+ # for tests it is possible to pass string
+ content = self.string_reader(source_file)
+ self._assign_data(data, source_file, content)
+ return data
+
+
def load(obj, env=None, silent=True, key=None, filename=None, validate=False):
"""
Reads and loads in to "obj" a single key or all keys from source file.
@@ -33,7 +68,8 @@
"""
# Resolve the loaders
# https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation
- # Possible values are `safe_load, full_load, unsafe_load, load`
+ # Possible values are:
+ # `safe_load, full_load, unsafe_load, load, safe_load_all`
yaml_reader = getattr(
yaml, obj.get("YAML_LOADER_FOR_DYNACONF"), yaml.safe_load
)
@@ -44,7 +80,11 @@
" Try to use full_load or safe_load."
)
- loader = BaseLoader(
+ _loader = BaseLoader
+ if yaml_reader.__name__.endswith("_all"):
+ _loader = AllLoader
+
+ loader = _loader(
obj=obj,
env=env,
identifier="yaml",
| {"golden_diff": "diff --git a/dynaconf/loaders/yaml_loader.py b/dynaconf/loaders/yaml_loader.py\n--- a/dynaconf/loaders/yaml_loader.py\n+++ b/dynaconf/loaders/yaml_loader.py\n@@ -20,6 +20,41 @@\n )\n \n \n+class AllLoader(BaseLoader):\n+ \"\"\"YAML Loader to load multi doc files\"\"\"\n+\n+ @staticmethod\n+ def _assign_data(data, source_file, content):\n+ \"\"\"Helper to iterate through all docs in a file\"\"\"\n+ content = tuple(content)\n+ if len(content) == 1:\n+ data[source_file] = content[0]\n+ elif len(content) > 1:\n+ for i, doc in enumerate(content):\n+ data[f\"{source_file}[{i}]\"] = doc\n+\n+ def get_source_data(self, files):\n+ data = {}\n+ for source_file in files:\n+ if source_file.endswith(self.extensions):\n+ try:\n+ with open(source_file, **self.opener_params) as open_file:\n+ content = self.file_reader(open_file)\n+ self.obj._loaded_files.append(source_file)\n+ self._assign_data(data, source_file, content)\n+ except OSError as e:\n+ if \".local.\" not in source_file:\n+ warn(\n+ f\"{self.identifier}_loader: {source_file} \"\n+ f\":{str(e)}\"\n+ )\n+ else:\n+ # for tests it is possible to pass string\n+ content = self.string_reader(source_file)\n+ self._assign_data(data, source_file, content)\n+ return data\n+\n+\n def load(obj, env=None, silent=True, key=None, filename=None, validate=False):\n \"\"\"\n Reads and loads in to \"obj\" a single key or all keys from source file.\n@@ -33,7 +68,8 @@\n \"\"\"\n # Resolve the loaders\n # https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation\n- # Possible values are `safe_load, full_load, unsafe_load, load`\n+ # Possible values are:\n+ # `safe_load, full_load, unsafe_load, load, safe_load_all`\n yaml_reader = getattr(\n yaml, obj.get(\"YAML_LOADER_FOR_DYNACONF\"), yaml.safe_load\n )\n@@ -44,7 +80,11 @@\n \" Try to use full_load or safe_load.\"\n )\n \n- loader = BaseLoader(\n+ _loader = BaseLoader\n+ if yaml_reader.__name__.endswith(\"_all\"):\n+ _loader = AllLoader\n+\n+ loader = _loader(\n obj=obj,\n env=env,\n identifier=\"yaml\",\n", "issue": "[RFC] Support multidoc yaml files\n**Is your feature request related to a problem? Please describe.**\r\nSometimes it can be difficult or impossible to pass multiple files with config fragments. yaml support multiple documents in one file and `safe_load_all` from pyaml api loads that accordingly. 
It is standard yaml feature, it would be nice to support it and make in usable in cases when passing one file (composited from more files) would be easier.\r\n\r\n**Describe the solution you'd like**\r\nSupport `safe_load_all` as yaml loader.\r\n\r\n**Describe alternatives you've considered**\r\nPassing multiple files will do the work, however it doesn't have to be always straightforward.\r\n\r\n**Additional context**\r\nI have prepared a patch\r\n\n", "before_files": [{"content": "from __future__ import annotations\n\nimport sys\nfrom pathlib import Path\nfrom typing import TextIO\nfrom warnings import warn\n\nfrom dynaconf import default_settings\nfrom dynaconf.constants import YAML_EXTENSIONS\nfrom dynaconf.loaders.base import BaseLoader\nfrom dynaconf.utils import object_merge\nfrom dynaconf.utils.parse_conf import try_to_encode\nfrom dynaconf.vendor.ruamel import yaml\n\n# Add support for Dynaconf Lazy values to YAML dumper\nyaml.SafeDumper.yaml_representers[\n None\n] = lambda self, data: yaml.representer.SafeRepresenter.represent_str(\n self, try_to_encode(data)\n)\n\n\ndef load(obj, env=None, silent=True, key=None, filename=None, validate=False):\n \"\"\"\n Reads and loads in to \"obj\" a single key or all keys from source file.\n\n :param obj: the settings instance\n :param env: settings current env default='development'\n :param silent: if errors should raise\n :param key: if defined load a single key, else load all in env\n :param filename: Optional custom filename to load\n :return: None\n \"\"\"\n # Resolve the loaders\n # https://github.com/yaml/pyyaml/wiki/PyYAML-yaml.load(input)-Deprecation\n # Possible values are `safe_load, full_load, unsafe_load, load`\n yaml_reader = getattr(\n yaml, obj.get(\"YAML_LOADER_FOR_DYNACONF\"), yaml.safe_load\n )\n if yaml_reader.__name__ == \"unsafe_load\": # pragma: no cover\n warn(\n \"yaml.unsafe_load is deprecated.\"\n \" Please read https://msg.pyyaml.org/load for full details.\"\n \" Try to use full_load or safe_load.\"\n )\n\n loader = BaseLoader(\n obj=obj,\n env=env,\n identifier=\"yaml\",\n extensions=YAML_EXTENSIONS,\n file_reader=yaml_reader,\n string_reader=yaml_reader,\n validate=validate,\n )\n loader.load(\n filename=filename,\n key=key,\n silent=silent,\n )\n\n\ndef write(settings_path, settings_data, merge=True):\n \"\"\"Write data to a settings file.\n\n :param settings_path: the filepath\n :param settings_data: a dictionary with data\n :param merge: boolean if existing file should be merged with new data\n :param stdout: boolean if should output to stdout instead of file\n \"\"\"\n settings_path = Path(settings_path)\n if settings_path.exists() and merge: # pragma: no cover\n with open(\n str(settings_path), encoding=default_settings.ENCODING_FOR_DYNACONF\n ) as open_file:\n object_merge(yaml.safe_load(open_file), settings_data)\n\n with open(\n str(settings_path),\n \"w\",\n encoding=default_settings.ENCODING_FOR_DYNACONF,\n ) as open_file:\n yaml.dump(\n settings_data,\n open_file,\n Dumper=yaml.dumper.SafeDumper,\n explicit_start=True,\n indent=2,\n default_flow_style=False,\n )\n", "path": "dynaconf/loaders/yaml_loader.py"}]} | 1,539 | 601 |
gh_patches_debug_9478 | rasdani/github-patches | git_diff | bridgecrewio__checkov-548 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add new check: API Gateway V2 should have access logging enabled
AccessLogSettings: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
Terraform does not currently support this: https://github.com/terraform-providers/terraform-provider-aws/issues/7004
</issue>
<code>
[start of checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py]
1 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories
3 from checkov.common.models.consts import ANY_VALUE
4
5
6 class APIGatewayAccessLogging(BaseResourceValueCheck):
7
8 def __init__(self):
9 name = "Ensure API Gateway has Access Logging enabled"
10 id = "CKV_AWS_76"
11 supported_resources = ['aws_api_gateway_stage']
12 categories = [CheckCategories.LOGGING]
13 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
14
15 def get_inspected_key(self):
16 return "access_log_settings/[0]/destination_arn"
17
18 def get_expected_value(self):
19 return ANY_VALUE
20
21
22 check = APIGatewayAccessLogging()
23
[end of checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py b/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py
--- a/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py
+++ b/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py
@@ -8,7 +8,7 @@
def __init__(self):
name = "Ensure API Gateway has Access Logging enabled"
id = "CKV_AWS_76"
- supported_resources = ['aws_api_gateway_stage']
+ supported_resources = ['aws_api_gateway_stage', 'aws_apigatewayv2_stage']
categories = [CheckCategories.LOGGING]
super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py b/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py\n--- a/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py\n+++ b/checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py\n@@ -8,7 +8,7 @@\n def __init__(self):\n name = \"Ensure API Gateway has Access Logging enabled\"\n id = \"CKV_AWS_76\"\n- supported_resources = ['aws_api_gateway_stage']\n+ supported_resources = ['aws_api_gateway_stage', 'aws_apigatewayv2_stage']\n categories = [CheckCategories.LOGGING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n", "issue": "Add new check: API Gateway V2 should have access logging enabled \nAccessLogSettings: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html\r\n\r\nTerraform does not currently support this: https://github.com/terraform-providers/terraform-provider-aws/issues/7004\n", "before_files": [{"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories\nfrom checkov.common.models.consts import ANY_VALUE\n\n\nclass APIGatewayAccessLogging(BaseResourceValueCheck):\n\n def __init__(self):\n name = \"Ensure API Gateway has Access Logging enabled\"\n id = \"CKV_AWS_76\"\n supported_resources = ['aws_api_gateway_stage']\n categories = [CheckCategories.LOGGING]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return \"access_log_settings/[0]/destination_arn\"\n\n def get_expected_value(self):\n return ANY_VALUE\n\n\ncheck = APIGatewayAccessLogging()\n", "path": "checkov/terraform/checks/resource/aws/APIGatewayAccessLogging.py"}]} | 825 | 166 |
gh_patches_debug_1431 | rasdani/github-patches | git_diff | pyca__cryptography-4077 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
utils.int_from_bytes gives incorrect answers when passed "builtins.bytes" in python 2.7
```
$ mkvirtualenv repro
$ python --version
Python 2.7.12
$ pip install cryptography future
$ python
from cryptography import utils
from builtins import bytes
x = bytes.fromhex('deadbeef')
y = utils.int_from_bytes(x, 'big')
hex(y)
'0x6227deadbeef27'
```
The reason this happens is that `int_from_bytes` (in py27 mode) casts the passed-in value to `bytes`, which, in py27 mode, is an alias for `str`. Passing a `builtins.bytes` value to `str` somewhat insanely wraps the string with `b'` and `'`. These then get parsed by the rest of `int_from_bytes` as if they were part of the original byte string.
I think this is particularly unfortunate since all the "cryptography" functions say they accept and return `bytes` in their docstrings. Ideally it'd be compatible with all three definitions of `bytes`: the py27 alias to `str`, the one from "future", and the py3 one.
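A hedged sketch of one way to sidestep the `str()` round-trip (not part of the original report; the helper name is illustrative): `binascii.hexlify()` takes buffer-like input directly, so the `b'...'` wrapping shown above never happens, and it should behave the same for `str`, `bytearray`, and future's `bytes` on Python 2.7.
```
import binascii

def int_from_bytes_compat(data, byteorder, signed=False):
    # Only the big-endian, unsigned case used here is sketched.
    assert byteorder == 'big' and not signed
    return int(binascii.hexlify(data), 16)
```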
</issue>
<code>
[start of src/cryptography/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import abc
8 import binascii
9 import inspect
10 import sys
11 import warnings
12
13
14 # We use a UserWarning subclass, instead of DeprecationWarning, because CPython
15 # decided deprecation warnings should be invisble by default.
16 class CryptographyDeprecationWarning(UserWarning):
17 pass
18
19
20 # Several APIs were deprecated with no specific end-of-life date because of the
21 # ubiquity of their use. They should not be removed until we agree on when that
22 # cycle ends.
23 PersistentlyDeprecated = CryptographyDeprecationWarning
24 DeprecatedIn21 = CryptographyDeprecationWarning
25
26
27 def _check_bytes(name, value):
28 if not isinstance(value, bytes):
29 raise TypeError("{0} must be bytes".format(name))
30
31
32 def read_only_property(name):
33 return property(lambda self: getattr(self, name))
34
35
36 def register_interface(iface):
37 def register_decorator(klass):
38 verify_interface(iface, klass)
39 iface.register(klass)
40 return klass
41 return register_decorator
42
43
44 def register_interface_if(predicate, iface):
45 def register_decorator(klass):
46 if predicate:
47 verify_interface(iface, klass)
48 iface.register(klass)
49 return klass
50 return register_decorator
51
52
53 if hasattr(int, "from_bytes"):
54 int_from_bytes = int.from_bytes
55 else:
56 def int_from_bytes(data, byteorder, signed=False):
57 assert byteorder == 'big'
58 assert not signed
59
60 # call bytes() on data to allow the use of bytearrays
61 return int(bytes(data).encode('hex'), 16)
62
63
64 if hasattr(int, "to_bytes"):
65 def int_to_bytes(integer, length=None):
66 return integer.to_bytes(
67 length or (integer.bit_length() + 7) // 8 or 1, 'big'
68 )
69 else:
70 def int_to_bytes(integer, length=None):
71 hex_string = '%x' % integer
72 if length is None:
73 n = len(hex_string)
74 else:
75 n = length * 2
76 return binascii.unhexlify(hex_string.zfill(n + (n & 1)))
77
78
79 class InterfaceNotImplemented(Exception):
80 pass
81
82
83 if hasattr(inspect, "signature"):
84 signature = inspect.signature
85 else:
86 signature = inspect.getargspec
87
88
89 def verify_interface(iface, klass):
90 for method in iface.__abstractmethods__:
91 if not hasattr(klass, method):
92 raise InterfaceNotImplemented(
93 "{0} is missing a {1!r} method".format(klass, method)
94 )
95 if isinstance(getattr(iface, method), abc.abstractproperty):
96 # Can't properly verify these yet.
97 continue
98 sig = signature(getattr(iface, method))
99 actual = signature(getattr(klass, method))
100 if sig != actual:
101 raise InterfaceNotImplemented(
102 "{0}.{1}'s signature differs from the expected. Expected: "
103 "{2!r}. Received: {3!r}".format(
104 klass, method, sig, actual
105 )
106 )
107
108
109 # No longer needed as of 2.2, but retained because we have external consumers
110 # who use it.
111 def bit_length(x):
112 return x.bit_length()
113
114
115 class _DeprecatedValue(object):
116 def __init__(self, value, message, warning_class):
117 self.value = value
118 self.message = message
119 self.warning_class = warning_class
120
121
122 class _ModuleWithDeprecations(object):
123 def __init__(self, module):
124 self.__dict__["_module"] = module
125
126 def __getattr__(self, attr):
127 obj = getattr(self._module, attr)
128 if isinstance(obj, _DeprecatedValue):
129 warnings.warn(obj.message, obj.warning_class, stacklevel=2)
130 obj = obj.value
131 return obj
132
133 def __setattr__(self, attr, value):
134 setattr(self._module, attr, value)
135
136 def __delattr__(self, attr):
137 obj = getattr(self._module, attr)
138 if isinstance(obj, _DeprecatedValue):
139 warnings.warn(obj.message, obj.warning_class, stacklevel=2)
140
141 delattr(self._module, attr)
142
143 def __dir__(self):
144 return ["_module"] + dir(self._module)
145
146
147 def deprecated(value, module_name, message, warning_class):
148 module = sys.modules[module_name]
149 if not isinstance(module, _ModuleWithDeprecations):
150 sys.modules[module_name] = _ModuleWithDeprecations(module)
151 return _DeprecatedValue(value, message, warning_class)
152
153
154 def cached_property(func):
155 cached_name = "_cached_{0}".format(func)
156 sentinel = object()
157
158 def inner(instance):
159 cache = getattr(instance, cached_name, sentinel)
160 if cache is not sentinel:
161 return cache
162 result = func(instance)
163 setattr(instance, cached_name, result)
164 return result
165 return property(inner)
166
[end of src/cryptography/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py
--- a/src/cryptography/utils.py
+++ b/src/cryptography/utils.py
@@ -57,8 +57,7 @@
assert byteorder == 'big'
assert not signed
- # call bytes() on data to allow the use of bytearrays
- return int(bytes(data).encode('hex'), 16)
+ return int(binascii.hexlify(data), 16)
if hasattr(int, "to_bytes"):
| {"golden_diff": "diff --git a/src/cryptography/utils.py b/src/cryptography/utils.py\n--- a/src/cryptography/utils.py\n+++ b/src/cryptography/utils.py\n@@ -57,8 +57,7 @@\n assert byteorder == 'big'\n assert not signed\n \n- # call bytes() on data to allow the use of bytearrays\n- return int(bytes(data).encode('hex'), 16)\n+ return int(binascii.hexlify(data), 16)\n \n \n if hasattr(int, \"to_bytes\"):\n", "issue": "utils.int_from_bytes gives incorrect answers when passed \"builtins.bytes\" in python 2.7\n```\r\n$ mkvirtualenv repro\r\n$ python --version\r\nPython 2.7.12\r\n$ pip install cryptography future\r\n$ python\r\n\r\nfrom cryptography import utils\r\nfrom builtins import bytes\r\nx = bytes.fromhex('deadbeef')\r\ny = utils.int_from_bytes(x, 'big')\r\nhex(y)\r\n'0x6227deadbeef27'\r\n```\r\n\r\nThe reason this happens is that `int_from_bytes` (in py27 mode) casts the passed-in value to `bytes`, which, in py27 mode, is an alias for `str`. Passing a `builtins.bytes` value to `str` somewhat insanely wraps the string with `b'` and `'`. These then get parsed by the rest of `int_from_bytes` as if they were part of the original byte string.\r\n\r\nI think this is particularly unfortunate since all the \"cryptography\" functions say they accept and return `bytes` in their docstrings. Ideally it'd be compatible with all three definitions of `bytes`: the py27 alias to `str`, the one from \"future\", and the py3 one.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport abc\nimport binascii\nimport inspect\nimport sys\nimport warnings\n\n\n# We use a UserWarning subclass, instead of DeprecationWarning, because CPython\n# decided deprecation warnings should be invisble by default.\nclass CryptographyDeprecationWarning(UserWarning):\n pass\n\n\n# Several APIs were deprecated with no specific end-of-life date because of the\n# ubiquity of their use. 
They should not be removed until we agree on when that\n# cycle ends.\nPersistentlyDeprecated = CryptographyDeprecationWarning\nDeprecatedIn21 = CryptographyDeprecationWarning\n\n\ndef _check_bytes(name, value):\n if not isinstance(value, bytes):\n raise TypeError(\"{0} must be bytes\".format(name))\n\n\ndef read_only_property(name):\n return property(lambda self: getattr(self, name))\n\n\ndef register_interface(iface):\n def register_decorator(klass):\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\ndef register_interface_if(predicate, iface):\n def register_decorator(klass):\n if predicate:\n verify_interface(iface, klass)\n iface.register(klass)\n return klass\n return register_decorator\n\n\nif hasattr(int, \"from_bytes\"):\n int_from_bytes = int.from_bytes\nelse:\n def int_from_bytes(data, byteorder, signed=False):\n assert byteorder == 'big'\n assert not signed\n\n # call bytes() on data to allow the use of bytearrays\n return int(bytes(data).encode('hex'), 16)\n\n\nif hasattr(int, \"to_bytes\"):\n def int_to_bytes(integer, length=None):\n return integer.to_bytes(\n length or (integer.bit_length() + 7) // 8 or 1, 'big'\n )\nelse:\n def int_to_bytes(integer, length=None):\n hex_string = '%x' % integer\n if length is None:\n n = len(hex_string)\n else:\n n = length * 2\n return binascii.unhexlify(hex_string.zfill(n + (n & 1)))\n\n\nclass InterfaceNotImplemented(Exception):\n pass\n\n\nif hasattr(inspect, \"signature\"):\n signature = inspect.signature\nelse:\n signature = inspect.getargspec\n\n\ndef verify_interface(iface, klass):\n for method in iface.__abstractmethods__:\n if not hasattr(klass, method):\n raise InterfaceNotImplemented(\n \"{0} is missing a {1!r} method\".format(klass, method)\n )\n if isinstance(getattr(iface, method), abc.abstractproperty):\n # Can't properly verify these yet.\n continue\n sig = signature(getattr(iface, method))\n actual = signature(getattr(klass, method))\n if sig != actual:\n raise InterfaceNotImplemented(\n \"{0}.{1}'s signature differs from the expected. Expected: \"\n \"{2!r}. 
Received: {3!r}\".format(\n klass, method, sig, actual\n )\n )\n\n\n# No longer needed as of 2.2, but retained because we have external consumers\n# who use it.\ndef bit_length(x):\n return x.bit_length()\n\n\nclass _DeprecatedValue(object):\n def __init__(self, value, message, warning_class):\n self.value = value\n self.message = message\n self.warning_class = warning_class\n\n\nclass _ModuleWithDeprecations(object):\n def __init__(self, module):\n self.__dict__[\"_module\"] = module\n\n def __getattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n obj = obj.value\n return obj\n\n def __setattr__(self, attr, value):\n setattr(self._module, attr, value)\n\n def __delattr__(self, attr):\n obj = getattr(self._module, attr)\n if isinstance(obj, _DeprecatedValue):\n warnings.warn(obj.message, obj.warning_class, stacklevel=2)\n\n delattr(self._module, attr)\n\n def __dir__(self):\n return [\"_module\"] + dir(self._module)\n\n\ndef deprecated(value, module_name, message, warning_class):\n module = sys.modules[module_name]\n if not isinstance(module, _ModuleWithDeprecations):\n sys.modules[module_name] = _ModuleWithDeprecations(module)\n return _DeprecatedValue(value, message, warning_class)\n\n\ndef cached_property(func):\n cached_name = \"_cached_{0}\".format(func)\n sentinel = object()\n\n def inner(instance):\n cache = getattr(instance, cached_name, sentinel)\n if cache is not sentinel:\n return cache\n result = func(instance)\n setattr(instance, cached_name, result)\n return result\n return property(inner)\n", "path": "src/cryptography/utils.py"}]} | 2,279 | 111 |
gh_patches_debug_27329 | rasdani/github-patches | git_diff | pytorch__text-1467 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Should hide the symbols from the third party
I am integrating KenLM in torchaudio and realized that KenLM uses double-conversion like torchtext does.
In torchaudio we are hiding the symbols of the third party code with the `-fhidden` flag when compiling, but it turns out that torchtext does not do this. (And according to a conversation I had with @malfet about a year ago, PyTorch also hides the symbols of its own code, in addition to third party.)
Torchtext may want to do this in case client code imports the same package compiled differently.
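A hedged sketch of the usual toolchain knobs (where exactly they belong in torchtext's build scripts is an assumption here): GCC/Clang accept `-fvisibility=hidden` per compilation unit, and the CMake-built third-party libraries expose the same switch as `CMAKE_CXX_VISIBILITY_PRESET`.
```
import platform

# Extension build: make C++ symbols local unless explicitly exported.
extra_compile_args = ["-O3"]
if platform.system() != "Windows":
    extra_compile_args.append("-fvisibility=hidden")

# Equivalent switch for the CMake-built dependencies (re2, sentencepiece, ...).
cmake_args = ["-DCMAKE_CXX_VISIBILITY_PRESET=hidden"]
```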
## References:
- https://stackoverflow.com/a/22254251
- https://labjack.com/news/simple-cpp-symbol-visibility-demo
## Double conversion
```
nm torchtext/_torchtext.so| grep double_c | grep 'T __' | head -10
0000000000175a48 T __ZN17double_conversion13StrtodTrimmedENS_6VectorIKcEEi
000000000017175c T __ZN17double_conversion16PowersOfTenCache32GetCachedPowerForDecimalExponentEiPNS_5DiyFpEPi
00000000001716fc T __ZN17double_conversion16PowersOfTenCache36GetCachedPowerForBinaryExponentRangeEiiPNS_5DiyFpEPi
00000000001715bc T __ZN17double_conversion6Bignum11PlusCompareERKS0_S2_S2_
000000000016f7cc T __ZN17double_conversion6Bignum12AssignBignumERKS0_
000000000016f788 T __ZN17double_conversion6Bignum12AssignUInt16Et
000000000016f7a0 T __ZN17double_conversion6Bignum12AssignUInt64Ey
0000000000171188 T __ZN17double_conversion6Bignum13SubtractTimesERKS0_i
00000000001702e8 T __ZN17double_conversion6Bignum14SubtractBignumERKS0_
000000000016ff70 T __ZN17double_conversion6Bignum15AssignHexStringENS_6VectorIKcEE
```
## Sentencepiece
```
$ nm torchtext/_torchtext.so| grep sentencep | grep 'T __' | head -10
0000000000128718 T __ZN13sentencepiece10ModelProto12InternalSwapEPS0_
00000000001277fc T __ZN13sentencepiece10ModelProto14_InternalParseEPKcPN6google8protobuf8internal12ParseContextE
000000000012765c T __ZN13sentencepiece10ModelProto16default_instanceEv
0000000000128334 T __ZN13sentencepiece10ModelProto21CheckTypeAndMergeFromERKN6google8protobuf11MessageLiteE
00000000001276a0 T __ZN13sentencepiece10ModelProto5ClearEv
0000000000128620 T __ZN13sentencepiece10ModelProto8CopyFromERKS0_
0000000000127650 T __ZN13sentencepiece10ModelProto9ArenaDtorEPv
0000000000128338 T __ZN13sentencepiece10ModelProto9MergeFromERKS0_
0000000000127168 T __ZN13sentencepiece10ModelProto9_Internal12trainer_specEPKS0_
0000000000127178 T __ZN13sentencepiece10ModelProto9_Internal14self_test_dataEPKS0_
```
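A quick way to re-check after a rebuild — count the globally exported (`T`) symbols again; this assumes `nm` is on PATH and the extension lives at the path shown above:
```
import subprocess

out = subprocess.run(
    ["nm", "torchtext/_torchtext.so"],
    capture_output=True, text=True, check=True,
).stdout
exported = [line for line in out.splitlines()
            if " T " in line and ("double_conversion" in line or "sentencepiece" in line)]
print(len(exported))  # expected to drop to zero once symbols are hidden
```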
</issue>
<code>
[start of build_tools/setup_helpers/extension.py]
1 import os
2 import platform
3 import subprocess
4 from pathlib import Path
5
6 from torch.utils.cpp_extension import (
7 CppExtension,
8 BuildExtension as TorchBuildExtension
9 )
10
11 __all__ = [
12 'get_ext_modules',
13 'BuildExtension',
14 ]
15
16 _ROOT_DIR = Path(__file__).parent.parent.parent.resolve()
17 _CSRC_DIR = _ROOT_DIR / 'torchtext' / 'csrc'
18 _TP_BASE_DIR = _ROOT_DIR / 'third_party'
19 _TP_INSTALL_DIR = _TP_BASE_DIR / 'build'
20
21
22 def _get_eca(debug):
23 eca = []
24 if platform.system() == "Windows":
25 eca += ['/MT']
26 if debug:
27 eca += ["-O0", "-g"]
28 else:
29 if platform.system() == "Windows":
30 eca += ['-O2']
31 else:
32 eca += ["-O3"]
33 return eca
34
35
36 def _get_ela(debug):
37 ela = []
38 if debug:
39 if platform.system() == "Windows":
40 ela += ["/DEBUG:FULL"]
41 else:
42 ela += ["-O0", "-g"]
43 else:
44 if platform.system() != "Windows":
45 ela += ["-O3"]
46 return ela
47
48
49 def _get_srcs():
50 return [str(p) for p in _CSRC_DIR.glob('**/*.cpp')]
51
52
53 def _get_include_dirs():
54 return [
55 str(_CSRC_DIR),
56 str(_TP_INSTALL_DIR / 'include'),
57 ]
58
59
60 def _get_library_dirs():
61 return [
62 str(_TP_INSTALL_DIR / 'lib'),
63 str(_TP_INSTALL_DIR / 'lib64')
64 ]
65
66
67 def _get_libraries():
68 # NOTE: The order of the library listed bellow matters.
69 #
70 # For example, the symbol `sentencepiece::unigram::Model` is
71 # defined in sentencepiece but UNDEFINED in sentencepiece_train.
72 # GCC only remembers the last encountered symbol.
73 # Therefore placing 'sentencepiece_train' after 'sentencepiece' cause runtime error.
74 #
75 # $ nm third_party/build/lib/libsentencepiece_train.a | grep _ZTIN13sentencepiece7unigram5ModelE
76 # U _ZTIN13sentencepiece7unigram5ModelE
77 # $ nm third_party/build/lib/libsentencepiece.a | grep _ZTIN13sentencepiece7unigram5ModelE
78 # 0000000000000000 V _ZTIN13sentencepiece7unigram5ModelE
79 return [
80 'sentencepiece_train',
81 'sentencepiece',
82 're2',
83 'double-conversion'
84 ]
85
86
87 def _get_cxx11_abi():
88 try:
89 import torch
90 value = int(torch._C._GLIBCXX_USE_CXX11_ABI)
91 except ImportError:
92 value = 0
93 return '-D_GLIBCXX_USE_CXX11_ABI=' + str(value)
94
95
96 def _build_third_party(debug):
97 build_dir = _TP_BASE_DIR / 'build'
98 build_dir.mkdir(exist_ok=True)
99 build_env = os.environ.copy()
100 config = 'Debug' if debug else 'Release'
101 if platform.system() == 'Windows':
102 extra_args = [
103 '-GNinja',
104 ]
105 build_env.setdefault('CC', 'cl')
106 build_env.setdefault('CXX', 'cl')
107 else:
108 extra_args = ['-DCMAKE_CXX_FLAGS=-fPIC ' + _get_cxx11_abi()]
109 subprocess.run(
110 args=[
111 'cmake',
112 '-DBUILD_SHARED_LIBS=OFF',
113 '-DRE2_BUILD_TESTING=OFF',
114 '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON',
115 f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',
116 f'-DCMAKE_BUILD_TYPE={config}',
117 ] + extra_args + ['..'],
118 cwd=str(build_dir),
119 check=True,
120 env=build_env,
121 )
122 print('*** Command list Thirdparty ***')
123 with open(build_dir / 'compile_commands.json', 'r') as fileobj:
124 print(fileobj.read())
125 print('running cmake --build', flush=True)
126 subprocess.run(
127 args=['cmake', '--build', '.', '--target', 'install', '--config', config],
128 cwd=str(build_dir),
129 check=True,
130 env=build_env,
131 )
132
133
134 def _build_sentence_piece(debug):
135 build_dir = _TP_BASE_DIR / 'sentencepiece' / 'build'
136 build_dir.mkdir(exist_ok=True)
137 build_env = os.environ.copy()
138 config = 'Debug' if debug else 'Release'
139 if platform.system() == 'Windows':
140 extra_args = ['-GNinja']
141 build_env.setdefault('CC', 'cl')
142 build_env.setdefault('CXX', 'cl')
143 else:
144 extra_args = []
145 subprocess.run(
146 args=['cmake', '-DSPM_ENABLE_SHARED=OFF', f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',
147 '-DCMAKE_CXX_FLAGS=' + _get_cxx11_abi(),
148 f'-DCMAKE_BUILD_TYPE={config}'] + extra_args + ['..'],
149 cwd=str(build_dir),
150 check=True,
151 env=build_env,
152 )
153 subprocess.run(
154 args=['cmake', '--build', '.', '--target', 'install', '--config', config],
155 cwd=str(build_dir),
156 check=True,
157 env=build_env,
158 )
159
160
161 def _configure_third_party(debug):
162 _build_third_party(debug)
163 _build_sentence_piece(debug)
164
165
166 _EXT_NAME = 'torchtext._torchtext'
167
168
169 def get_ext_modules(debug=False):
170 return [
171 CppExtension(
172 _EXT_NAME,
173 _get_srcs(),
174 libraries=_get_libraries(),
175 include_dirs=_get_include_dirs(),
176 library_dirs=_get_library_dirs(),
177 extra_compile_args=_get_eca(debug),
178 extra_link_args=_get_ela(debug),
179 ),
180 ]
181
182
183 class BuildExtension(TorchBuildExtension):
184 def build_extension(self, ext):
185 if ext.name == _EXT_NAME:
186 _configure_third_party(self.debug)
187 super().build_extension(ext)
188
[end of build_tools/setup_helpers/extension.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/build_tools/setup_helpers/extension.py b/build_tools/setup_helpers/extension.py
--- a/build_tools/setup_helpers/extension.py
+++ b/build_tools/setup_helpers/extension.py
@@ -29,7 +29,7 @@
if platform.system() == "Windows":
eca += ['-O2']
else:
- eca += ["-O3"]
+ eca += ["-O3", "-fvisibility=hidden"]
return eca
@@ -114,6 +114,8 @@
'-DCMAKE_EXPORT_COMPILE_COMMANDS=ON',
f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',
f'-DCMAKE_BUILD_TYPE={config}',
+ '-DCMAKE_CXX_VISIBILITY_PRESET=hidden',
+ '-DCMAKE_POLICY_DEFAULT_CMP0063=NEW',
] + extra_args + ['..'],
cwd=str(build_dir),
check=True,
@@ -144,8 +146,11 @@
extra_args = []
subprocess.run(
args=['cmake', '-DSPM_ENABLE_SHARED=OFF', f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',
+ '-DCMAKE_CXX_VISIBILITY_PRESET=hidden',
'-DCMAKE_CXX_FLAGS=' + _get_cxx11_abi(),
+ '-DCMAKE_POLICY_DEFAULT_CMP0063=NEW',
f'-DCMAKE_BUILD_TYPE={config}'] + extra_args + ['..'],
+
cwd=str(build_dir),
check=True,
env=build_env,
| {"golden_diff": "diff --git a/build_tools/setup_helpers/extension.py b/build_tools/setup_helpers/extension.py\n--- a/build_tools/setup_helpers/extension.py\n+++ b/build_tools/setup_helpers/extension.py\n@@ -29,7 +29,7 @@\n if platform.system() == \"Windows\":\n eca += ['-O2']\n else:\n- eca += [\"-O3\"]\n+ eca += [\"-O3\", \"-fvisibility=hidden\"]\n return eca\n \n \n@@ -114,6 +114,8 @@\n '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON',\n f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',\n f'-DCMAKE_BUILD_TYPE={config}',\n+ '-DCMAKE_CXX_VISIBILITY_PRESET=hidden',\n+ '-DCMAKE_POLICY_DEFAULT_CMP0063=NEW',\n ] + extra_args + ['..'],\n cwd=str(build_dir),\n check=True,\n@@ -144,8 +146,11 @@\n extra_args = []\n subprocess.run(\n args=['cmake', '-DSPM_ENABLE_SHARED=OFF', f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',\n+ '-DCMAKE_CXX_VISIBILITY_PRESET=hidden',\n '-DCMAKE_CXX_FLAGS=' + _get_cxx11_abi(),\n+ '-DCMAKE_POLICY_DEFAULT_CMP0063=NEW',\n f'-DCMAKE_BUILD_TYPE={config}'] + extra_args + ['..'],\n+\n cwd=str(build_dir),\n check=True,\n env=build_env,\n", "issue": "Should hide the symbols from the third party\nI am integrating KenLM in torchaudio and realized that KenLM uses double-conversion like torchtext does.\r\n\r\nIn torchaudio we are hiding the symbols of third party with `-fhidden` flag with compiling, but it turns out that torchtext does not do this. (and according to the conversation I had with @malfet about a year ago, PyTorch also hides the symbol of their own code, in addition to third party.)\r\n\r\nTorchtext may want to do this in case client code imports the same package compiled differently.\r\n\r\n## References:\r\n- https://stackoverflow.com/a/22254251\r\n- https://labjack.com/news/simple-cpp-symbol-visibility-demo\r\n\r\n## Double conversion\r\n\r\n```\r\nnm torchtext/_torchtext.so| grep double_c | grep 'T __' | head -10\r\n0000000000175a48 T __ZN17double_conversion13StrtodTrimmedENS_6VectorIKcEEi\r\n000000000017175c T __ZN17double_conversion16PowersOfTenCache32GetCachedPowerForDecimalExponentEiPNS_5DiyFpEPi\r\n00000000001716fc T __ZN17double_conversion16PowersOfTenCache36GetCachedPowerForBinaryExponentRangeEiiPNS_5DiyFpEPi\r\n00000000001715bc T __ZN17double_conversion6Bignum11PlusCompareERKS0_S2_S2_\r\n000000000016f7cc T __ZN17double_conversion6Bignum12AssignBignumERKS0_\r\n000000000016f788 T __ZN17double_conversion6Bignum12AssignUInt16Et\r\n000000000016f7a0 T __ZN17double_conversion6Bignum12AssignUInt64Ey\r\n0000000000171188 T __ZN17double_conversion6Bignum13SubtractTimesERKS0_i\r\n00000000001702e8 T __ZN17double_conversion6Bignum14SubtractBignumERKS0_\r\n000000000016ff70 T __ZN17double_conversion6Bignum15AssignHexStringENS_6VectorIKcEE\r\n```\r\n\r\n## Sentencepiece\r\n\r\n```\r\n$ nm torchtext/_torchtext.so| grep sentencep | grep 'T __' | head -10\r\n0000000000128718 T __ZN13sentencepiece10ModelProto12InternalSwapEPS0_\r\n00000000001277fc T __ZN13sentencepiece10ModelProto14_InternalParseEPKcPN6google8protobuf8internal12ParseContextE\r\n000000000012765c T __ZN13sentencepiece10ModelProto16default_instanceEv\r\n0000000000128334 T __ZN13sentencepiece10ModelProto21CheckTypeAndMergeFromERKN6google8protobuf11MessageLiteE\r\n00000000001276a0 T __ZN13sentencepiece10ModelProto5ClearEv\r\n0000000000128620 T __ZN13sentencepiece10ModelProto8CopyFromERKS0_\r\n0000000000127650 T __ZN13sentencepiece10ModelProto9ArenaDtorEPv\r\n0000000000128338 T __ZN13sentencepiece10ModelProto9MergeFromERKS0_\r\n0000000000127168 T __ZN13sentencepiece10ModelProto9_Internal12trainer_specEPKS0_\r\n0000000000127178 T 
__ZN13sentencepiece10ModelProto9_Internal14self_test_dataEPKS0_\r\n```\r\n\r\n\n", "before_files": [{"content": "import os\nimport platform\nimport subprocess\nfrom pathlib import Path\n\nfrom torch.utils.cpp_extension import (\n CppExtension,\n BuildExtension as TorchBuildExtension\n)\n\n__all__ = [\n 'get_ext_modules',\n 'BuildExtension',\n]\n\n_ROOT_DIR = Path(__file__).parent.parent.parent.resolve()\n_CSRC_DIR = _ROOT_DIR / 'torchtext' / 'csrc'\n_TP_BASE_DIR = _ROOT_DIR / 'third_party'\n_TP_INSTALL_DIR = _TP_BASE_DIR / 'build'\n\n\ndef _get_eca(debug):\n eca = []\n if platform.system() == \"Windows\":\n eca += ['/MT']\n if debug:\n eca += [\"-O0\", \"-g\"]\n else:\n if platform.system() == \"Windows\":\n eca += ['-O2']\n else:\n eca += [\"-O3\"]\n return eca\n\n\ndef _get_ela(debug):\n ela = []\n if debug:\n if platform.system() == \"Windows\":\n ela += [\"/DEBUG:FULL\"]\n else:\n ela += [\"-O0\", \"-g\"]\n else:\n if platform.system() != \"Windows\":\n ela += [\"-O3\"]\n return ela\n\n\ndef _get_srcs():\n return [str(p) for p in _CSRC_DIR.glob('**/*.cpp')]\n\n\ndef _get_include_dirs():\n return [\n str(_CSRC_DIR),\n str(_TP_INSTALL_DIR / 'include'),\n ]\n\n\ndef _get_library_dirs():\n return [\n str(_TP_INSTALL_DIR / 'lib'),\n str(_TP_INSTALL_DIR / 'lib64')\n ]\n\n\ndef _get_libraries():\n # NOTE: The order of the library listed bellow matters.\n #\n # For example, the symbol `sentencepiece::unigram::Model` is\n # defined in sentencepiece but UNDEFINED in sentencepiece_train.\n # GCC only remembers the last encountered symbol.\n # Therefore placing 'sentencepiece_train' after 'sentencepiece' cause runtime error.\n #\n # $ nm third_party/build/lib/libsentencepiece_train.a | grep _ZTIN13sentencepiece7unigram5ModelE\n # U _ZTIN13sentencepiece7unigram5ModelE\n # $ nm third_party/build/lib/libsentencepiece.a | grep _ZTIN13sentencepiece7unigram5ModelE\n # 0000000000000000 V _ZTIN13sentencepiece7unigram5ModelE\n return [\n 'sentencepiece_train',\n 'sentencepiece',\n 're2',\n 'double-conversion'\n ]\n\n\ndef _get_cxx11_abi():\n try:\n import torch\n value = int(torch._C._GLIBCXX_USE_CXX11_ABI)\n except ImportError:\n value = 0\n return '-D_GLIBCXX_USE_CXX11_ABI=' + str(value)\n\n\ndef _build_third_party(debug):\n build_dir = _TP_BASE_DIR / 'build'\n build_dir.mkdir(exist_ok=True)\n build_env = os.environ.copy()\n config = 'Debug' if debug else 'Release'\n if platform.system() == 'Windows':\n extra_args = [\n '-GNinja',\n ]\n build_env.setdefault('CC', 'cl')\n build_env.setdefault('CXX', 'cl')\n else:\n extra_args = ['-DCMAKE_CXX_FLAGS=-fPIC ' + _get_cxx11_abi()]\n subprocess.run(\n args=[\n 'cmake',\n '-DBUILD_SHARED_LIBS=OFF',\n '-DRE2_BUILD_TESTING=OFF',\n '-DCMAKE_EXPORT_COMPILE_COMMANDS=ON',\n f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',\n f'-DCMAKE_BUILD_TYPE={config}',\n ] + extra_args + ['..'],\n cwd=str(build_dir),\n check=True,\n env=build_env,\n )\n print('*** Command list Thirdparty ***')\n with open(build_dir / 'compile_commands.json', 'r') as fileobj:\n print(fileobj.read())\n print('running cmake --build', flush=True)\n subprocess.run(\n args=['cmake', '--build', '.', '--target', 'install', '--config', config],\n cwd=str(build_dir),\n check=True,\n env=build_env,\n )\n\n\ndef _build_sentence_piece(debug):\n build_dir = _TP_BASE_DIR / 'sentencepiece' / 'build'\n build_dir.mkdir(exist_ok=True)\n build_env = os.environ.copy()\n config = 'Debug' if debug else 'Release'\n if platform.system() == 'Windows':\n extra_args = ['-GNinja']\n build_env.setdefault('CC', 'cl')\n 
build_env.setdefault('CXX', 'cl')\n else:\n extra_args = []\n subprocess.run(\n args=['cmake', '-DSPM_ENABLE_SHARED=OFF', f'-DCMAKE_INSTALL_PREFIX={_TP_INSTALL_DIR}',\n '-DCMAKE_CXX_FLAGS=' + _get_cxx11_abi(),\n f'-DCMAKE_BUILD_TYPE={config}'] + extra_args + ['..'],\n cwd=str(build_dir),\n check=True,\n env=build_env,\n )\n subprocess.run(\n args=['cmake', '--build', '.', '--target', 'install', '--config', config],\n cwd=str(build_dir),\n check=True,\n env=build_env,\n )\n\n\ndef _configure_third_party(debug):\n _build_third_party(debug)\n _build_sentence_piece(debug)\n\n\n_EXT_NAME = 'torchtext._torchtext'\n\n\ndef get_ext_modules(debug=False):\n return [\n CppExtension(\n _EXT_NAME,\n _get_srcs(),\n libraries=_get_libraries(),\n include_dirs=_get_include_dirs(),\n library_dirs=_get_library_dirs(),\n extra_compile_args=_get_eca(debug),\n extra_link_args=_get_ela(debug),\n ),\n ]\n\n\nclass BuildExtension(TorchBuildExtension):\n def build_extension(self, ext):\n if ext.name == _EXT_NAME:\n _configure_third_party(self.debug)\n super().build_extension(ext)\n", "path": "build_tools/setup_helpers/extension.py"}]} | 3,307 | 332 |
gh_patches_debug_26111 | rasdani/github-patches | git_diff | pytorch__ignite-1200 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pascal training bug with CM
## 🐛 Bug description
```
File "./code/scripts/training.py", line 252, in log_cm
cm = cm_metric.compute().numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
2020-07-14 03:01:54,416 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'nccl'
2020-07-14 03:01:54,417|training|ERROR|
```
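A minimal sketch of the pattern the error message asks for (a stand-in tensor replaces `cm_metric.compute()`, which the traceback shows returning a CUDA tensor): move the result to host memory with `.cpu()` before calling `.numpy()`.
```
import torch

cm = torch.zeros(21, 21, device="cuda" if torch.cuda.is_available() else "cpu")
cm = cm.cpu().numpy()  # without .cpu() this raises the TypeError above on a CUDA tensor
```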
## Environment
- PyTorch Version (e.g., 1.4):
- Ignite Version (e.g., 0.3.0):
- OS (e.g., Linux):
- How you installed Ignite (`conda`, `pip`, source):
- Python version:
- Any other relevant information:
</issue>
<code>
[start of examples/references/segmentation/pascal_voc2012/code/scripts/training.py]
1 # This a training script launched with py_config_runner
2 # It should obligatory contain `run(config, **kwargs)` method
3
4 from pathlib import Path
5 from collections.abc import Mapping
6
7 import torch
8
9 from apex import amp
10
11 import ignite
12 import ignite.distributed as idist
13 from ignite.contrib.engines import common
14 from ignite.engine import Engine, Events, create_supervised_evaluator
15 from ignite.handlers import DiskSaver
16 from ignite.metrics import ConfusionMatrix, IoU, mIoU
17 from ignite.utils import setup_logger
18
19 from py_config_runner.utils import set_seed
20 from py_config_runner.config_utils import get_params, TRAINVAL_CONFIG, assert_config
21
22 import sys
23
24 # Adds "code" folder to python path
25 sys.path.insert(0, Path(__file__).parent.parent.as_posix())
26
27 from utils.handlers import predictions_gt_images_handler
28 from utils import exp_tracking
29 from dataflow.datasets import VOCSegmentationOpencv
30
31
32 def initialize(config):
33
34 model = config.model.to(config.device)
35 optimizer = config.optimizer
36 # Setup Nvidia/Apex AMP
37 model, optimizer = amp.initialize(model, optimizer, opt_level=getattr(config, "fp16_opt_level", "O2"), num_losses=1)
38
39 # Adapt model to dist conf
40 model = idist.auto_model(model)
41
42 criterion = config.criterion.to(config.device)
43
44 return model, optimizer, criterion
45
46
47 def get_save_handler(config):
48 if exp_tracking.has_trains:
49 from ignite.contrib.handlers.trains_logger import TrainsSaver
50
51 return TrainsSaver(dirname=config.output_path.as_posix())
52
53 return DiskSaver(config.output_path.as_posix())
54
55
56 def create_trainer(model, optimizer, criterion, train_sampler, config, logger):
57 prepare_batch = config.prepare_batch
58 device = config.device
59
60 # Setup trainer
61 accumulation_steps = getattr(config, "accumulation_steps", 1)
62 model_output_transform = getattr(config, "model_output_transform", lambda x: x)
63
64 def train_update_function(engine, batch):
65
66 model.train()
67
68 x, y = prepare_batch(batch, device=device, non_blocking=True)
69 y_pred = model(x)
70 y_pred = model_output_transform(y_pred)
71 loss = criterion(y_pred, y)
72
73 if isinstance(loss, Mapping):
74 assert "supervised batch loss" in loss
75 loss_dict = loss
76 output = {k: v.item() for k, v in loss_dict.items()}
77 loss = loss_dict["supervised batch loss"] / accumulation_steps
78 else:
79 output = {"supervised batch loss": loss.item()}
80
81 with amp.scale_loss(loss, optimizer, loss_id=0) as scaled_loss:
82 scaled_loss.backward()
83
84 if engine.state.iteration % accumulation_steps == 0:
85 optimizer.step()
86 optimizer.zero_grad()
87
88 return output
89
90 output_names = getattr(config, "output_names", ["supervised batch loss",])
91 lr_scheduler = config.lr_scheduler
92
93 trainer = Engine(train_update_function)
94 trainer.logger = logger
95
96 to_save = {"model": model, "optimizer": optimizer, "lr_scheduler": lr_scheduler, "trainer": trainer, "amp": amp}
97
98 save_every_iters = getattr(config, "save_every_iters", 1000)
99
100 common.setup_common_training_handlers(
101 trainer,
102 train_sampler,
103 to_save=to_save,
104 save_every_iters=save_every_iters,
105 save_handler=get_save_handler(config),
106 lr_scheduler=lr_scheduler,
107 with_gpu_stats=exp_tracking.has_mlflow,
108 output_names=output_names,
109 with_pbars=False,
110 )
111
112 if idist.get_rank() == 0:
113 common.ProgressBar(persist=False).attach(trainer, metric_names="all")
114
115 return trainer
116
117
118 def create_evaluators(model, metrics, config):
119 model_output_transform = getattr(config, "model_output_transform", lambda x: x)
120
121 evaluator_args = dict(
122 model=model,
123 metrics=metrics,
124 device=config.device,
125 non_blocking=True,
126 prepare_batch=config.prepare_batch,
127 output_transform=lambda x, y, y_pred: (model_output_transform(y_pred), y,),
128 )
129 train_evaluator = create_supervised_evaluator(**evaluator_args)
130 evaluator = create_supervised_evaluator(**evaluator_args)
131
132 if idist.get_rank() == 0:
133 common.ProgressBar(desc="Evaluation (train)", persist=False).attach(train_evaluator)
134 common.ProgressBar(desc="Evaluation (val)", persist=False).attach(evaluator)
135
136 return evaluator, train_evaluator
137
138
139 def log_metrics(logger, epoch, elapsed, tag, metrics):
140 logger.info(
141 "\nEpoch {} - Evaluation time (seconds): {} - {} metrics:\n {}".format(
142 epoch, int(elapsed), tag, "\n".join(["\t{}: {}".format(k, v) for k, v in metrics.items()])
143 )
144 )
145
146
147 def log_basic_info(logger, config):
148
149 msg = "\n- PyTorch version: {}".format(torch.__version__)
150 msg += "\n- Ignite version: {}".format(ignite.__version__)
151 msg += "\n- Cuda device name: {}".format(torch.cuda.get_device_name(idist.get_local_rank()))
152
153 logger.info(msg)
154
155 if idist.get_world_size() > 1:
156 msg = "\nDistributed setting:"
157 msg += "\tbackend: {}".format(idist.backend())
158 msg += "\trank: {}".format(idist.get_rank())
159 msg += "\tworld size: {}".format(idist.get_world_size())
160 logger.info(msg)
161
162
163 def training(local_rank, config, logger=None):
164
165 if not getattr(config, "use_fp16", True):
166 raise RuntimeError("This training script uses by default fp16 AMP")
167
168 torch.backends.cudnn.benchmark = True
169
170 set_seed(config.seed + local_rank)
171
172 train_loader, val_loader, train_eval_loader = config.train_loader, config.val_loader, config.train_eval_loader
173
174 # Setup model, optimizer, criterion
175 model, optimizer, criterion = initialize(config)
176
177 # Setup trainer for this specific task
178 trainer = create_trainer(model, optimizer, criterion, train_loader.sampler, config, logger)
179
180 # Setup evaluators
181 num_classes = config.num_classes
182 cm_metric = ConfusionMatrix(num_classes=num_classes)
183
184 val_metrics = {
185 "IoU": IoU(cm_metric),
186 "mIoU_bg": mIoU(cm_metric),
187 }
188
189 if hasattr(config, "val_metrics") and isinstance(config.val_metrics, dict):
190 val_metrics.update(config.val_metrics)
191
192 evaluator, train_evaluator = create_evaluators(model, val_metrics, config)
193
194 @trainer.on(Events.EPOCH_COMPLETED(every=getattr(config, "val_interval", 1)) | Events.COMPLETED)
195 def run_validation():
196 epoch = trainer.state.epoch
197 state = train_evaluator.run(train_eval_loader)
198 log_metrics(logger, epoch, state.times["COMPLETED"], "Train", state.metrics)
199 state = evaluator.run(val_loader)
200 log_metrics(logger, epoch, state.times["COMPLETED"], "Test", state.metrics)
201
202 if getattr(config, "start_by_validation", False):
203 trainer.add_event_handler(Events.STARTED, run_validation)
204
205 score_metric_name = "mIoU_bg"
206
207 if hasattr(config, "es_patience"):
208 common.add_early_stopping_by_val_score(config.es_patience, evaluator, trainer, metric_name=score_metric_name)
209
210 # Store 3 best models by validation accuracy:
211 common.gen_save_best_models_by_val_score(
212 save_handler=get_save_handler(config),
213 evaluator=evaluator,
214 models=model,
215 metric_name=score_metric_name,
216 n_saved=3,
217 trainer=trainer,
218 tag="val",
219 )
220
221 if idist.get_rank() == 0:
222
223 tb_logger = common.setup_tb_logging(
224 config.output_path.as_posix(),
225 trainer,
226 optimizer,
227 evaluators={"training": train_evaluator, "validation": evaluator},
228 )
229
230 if not exp_tracking.has_trains:
231 exp_tracking_logger = exp_tracking.setup_logging(
232 trainer, optimizer, evaluators={"training": train_evaluator, "validation": evaluator}
233 )
234
235 # Log val predictions:
236 tb_logger.attach(
237 evaluator,
238 log_handler=predictions_gt_images_handler(
239 img_denormalize_fn=config.img_denormalize, n_images=15, another_engine=trainer, prefix_tag="validation"
240 ),
241 event_name=Events.ITERATION_COMPLETED(once=len(val_loader) // 2),
242 )
243
244 # Log confusion matrix to Trains:
245 if exp_tracking.has_trains:
246 from trains import Task
247
248 trains_logger = Task.current_task().get_logger()
249
250 @trainer.on(Events.COMPLETED)
251 def log_cm():
252 cm = cm_metric.compute().numpy()
253 cm = cm / (cm.sum(axis=1)[:, None] + 1e-15)
254 trains_logger.report_confusion_matrix(
255 title="Final Confusion Matrix",
256 series="cm-preds-gt",
257 matrix=cm,
258 iteration=trainer.state.iteration,
259 xlabels=VOCSegmentationOpencv.target_names,
260 ylabels=VOCSegmentationOpencv.target_names,
261 )
262
263 trainer.run(train_loader, max_epochs=config.num_epochs)
264
265 if idist.get_rank() == 0:
266 tb_logger.close()
267 if not exp_tracking.has_trains:
268 exp_tracking_logger.close()
269
270
271 def run(config, **kwargs):
272 """This is the main method to run the training. As this training script is launched with `py_config_runner`
273 it should obligatory contain `run(config, **kwargs)` method.
274
275 """
276
277 assert torch.cuda.is_available(), torch.cuda.is_available()
278 assert torch.backends.cudnn.enabled, "Nvidia/Amp requires cudnn backend to be enabled."
279
280 with idist.Parallel(backend="nccl") as parallel:
281
282 logger = setup_logger(name="Pascal-VOC12 Training", distributed_rank=idist.get_rank())
283
284 assert_config(config, TRAINVAL_CONFIG)
285 # The following attributes are automatically added by py_config_runner
286 assert hasattr(config, "config_filepath") and isinstance(config.config_filepath, Path)
287 assert hasattr(config, "script_filepath") and isinstance(config.script_filepath, Path)
288
289 if idist.get_rank() == 0 and exp_tracking.has_trains:
290 from trains import Task
291
292 task = Task.init("Pascal-VOC12 Training", config.config_filepath.stem)
293 task.connect_configuration(config.config_filepath.as_posix())
294
295 log_basic_info(logger, config)
296
297 config.output_path = Path(exp_tracking.get_output_path())
298 # dump python files to reproduce the run
299 exp_tracking.log_artifact(config.config_filepath.as_posix())
300 exp_tracking.log_artifact(config.script_filepath.as_posix())
301 exp_tracking.log_params(get_params(config, TRAINVAL_CONFIG))
302
303 try:
304 parallel.run(training, config, logger=logger)
305 except KeyboardInterrupt:
306 logger.info("Catched KeyboardInterrupt -> exit")
307 except Exception as e: # noqa
308 logger.exception("")
309 raise e
310
[end of examples/references/segmentation/pascal_voc2012/code/scripts/training.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/references/segmentation/pascal_voc2012/code/scripts/training.py b/examples/references/segmentation/pascal_voc2012/code/scripts/training.py
--- a/examples/references/segmentation/pascal_voc2012/code/scripts/training.py
+++ b/examples/references/segmentation/pascal_voc2012/code/scripts/training.py
@@ -179,7 +179,7 @@
# Setup evaluators
num_classes = config.num_classes
- cm_metric = ConfusionMatrix(num_classes=num_classes)
+ cm_metric = ConfusionMatrix(num_classes=num_classes, average="recall")
val_metrics = {
"IoU": IoU(cm_metric),
@@ -241,16 +241,17 @@
event_name=Events.ITERATION_COMPLETED(once=len(val_loader) // 2),
)
- # Log confusion matrix to Trains:
- if exp_tracking.has_trains:
- from trains import Task
+ # Log confusion matrix to Trains:
+ if exp_tracking.has_trains:
+
+ @trainer.on(Events.COMPLETED)
+ def compute_and_log_cm():
+ cm = cm_metric.compute().cpu().numpy()
- trains_logger = Task.current_task().get_logger()
+ if idist.get_rank() == 0:
+ from trains import Task
- @trainer.on(Events.COMPLETED)
- def log_cm():
- cm = cm_metric.compute().numpy()
- cm = cm / (cm.sum(axis=1)[:, None] + 1e-15)
+ trains_logger = Task.current_task().get_logger()
trains_logger.report_confusion_matrix(
title="Final Confusion Matrix",
series="cm-preds-gt",
| {"golden_diff": "diff --git a/examples/references/segmentation/pascal_voc2012/code/scripts/training.py b/examples/references/segmentation/pascal_voc2012/code/scripts/training.py\n--- a/examples/references/segmentation/pascal_voc2012/code/scripts/training.py\n+++ b/examples/references/segmentation/pascal_voc2012/code/scripts/training.py\n@@ -179,7 +179,7 @@\n \n # Setup evaluators\n num_classes = config.num_classes\n- cm_metric = ConfusionMatrix(num_classes=num_classes)\n+ cm_metric = ConfusionMatrix(num_classes=num_classes, average=\"recall\")\n \n val_metrics = {\n \"IoU\": IoU(cm_metric),\n@@ -241,16 +241,17 @@\n event_name=Events.ITERATION_COMPLETED(once=len(val_loader) // 2),\n )\n \n- # Log confusion matrix to Trains:\n- if exp_tracking.has_trains:\n- from trains import Task\n+ # Log confusion matrix to Trains:\n+ if exp_tracking.has_trains:\n+\n+ @trainer.on(Events.COMPLETED)\n+ def compute_and_log_cm():\n+ cm = cm_metric.compute().cpu().numpy()\n \n- trains_logger = Task.current_task().get_logger()\n+ if idist.get_rank() == 0:\n+ from trains import Task\n \n- @trainer.on(Events.COMPLETED)\n- def log_cm():\n- cm = cm_metric.compute().numpy()\n- cm = cm / (cm.sum(axis=1)[:, None] + 1e-15)\n+ trains_logger = Task.current_task().get_logger()\n trains_logger.report_confusion_matrix(\n title=\"Final Confusion Matrix\",\n series=\"cm-preds-gt\",\n", "issue": "Pascal training bug with CM\n## \ud83d\udc1b Bug description\r\n\r\n```\r\n File \"./code/scripts/training.py\", line 252, in log_cm\r\ncm = cm_metric.compute().numpy()\r\nTypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.\r\n2020-07-14 03:01:54,416 ignite.distributed.launcher.Parallel INFO: Finalized processing group with backend: 'nccl'\r\n2020-07-14 03:01:54,417|training|ERROR|\r\n```\r\n\r\n## Environment\r\n\r\n - PyTorch Version (e.g., 1.4):\r\n - Ignite Version (e.g., 0.3.0):\r\n - OS (e.g., Linux):\r\n - How you installed Ignite (`conda`, `pip`, source):\r\n - Python version:\r\n - Any other relevant information:\r\n\n", "before_files": [{"content": "# This a training script launched with py_config_runner\n# It should obligatory contain `run(config, **kwargs)` method\n\nfrom pathlib import Path\nfrom collections.abc import Mapping\n\nimport torch\n\nfrom apex import amp\n\nimport ignite\nimport ignite.distributed as idist\nfrom ignite.contrib.engines import common\nfrom ignite.engine import Engine, Events, create_supervised_evaluator\nfrom ignite.handlers import DiskSaver\nfrom ignite.metrics import ConfusionMatrix, IoU, mIoU\nfrom ignite.utils import setup_logger\n\nfrom py_config_runner.utils import set_seed\nfrom py_config_runner.config_utils import get_params, TRAINVAL_CONFIG, assert_config\n\nimport sys\n\n# Adds \"code\" folder to python path\nsys.path.insert(0, Path(__file__).parent.parent.as_posix())\n\nfrom utils.handlers import predictions_gt_images_handler\nfrom utils import exp_tracking\nfrom dataflow.datasets import VOCSegmentationOpencv\n\n\ndef initialize(config):\n\n model = config.model.to(config.device)\n optimizer = config.optimizer\n # Setup Nvidia/Apex AMP\n model, optimizer = amp.initialize(model, optimizer, opt_level=getattr(config, \"fp16_opt_level\", \"O2\"), num_losses=1)\n\n # Adapt model to dist conf\n model = idist.auto_model(model)\n\n criterion = config.criterion.to(config.device)\n\n return model, optimizer, criterion\n\n\ndef get_save_handler(config):\n if exp_tracking.has_trains:\n from ignite.contrib.handlers.trains_logger 
import TrainsSaver\n\n return TrainsSaver(dirname=config.output_path.as_posix())\n\n return DiskSaver(config.output_path.as_posix())\n\n\ndef create_trainer(model, optimizer, criterion, train_sampler, config, logger):\n prepare_batch = config.prepare_batch\n device = config.device\n\n # Setup trainer\n accumulation_steps = getattr(config, \"accumulation_steps\", 1)\n model_output_transform = getattr(config, \"model_output_transform\", lambda x: x)\n\n def train_update_function(engine, batch):\n\n model.train()\n\n x, y = prepare_batch(batch, device=device, non_blocking=True)\n y_pred = model(x)\n y_pred = model_output_transform(y_pred)\n loss = criterion(y_pred, y)\n\n if isinstance(loss, Mapping):\n assert \"supervised batch loss\" in loss\n loss_dict = loss\n output = {k: v.item() for k, v in loss_dict.items()}\n loss = loss_dict[\"supervised batch loss\"] / accumulation_steps\n else:\n output = {\"supervised batch loss\": loss.item()}\n\n with amp.scale_loss(loss, optimizer, loss_id=0) as scaled_loss:\n scaled_loss.backward()\n\n if engine.state.iteration % accumulation_steps == 0:\n optimizer.step()\n optimizer.zero_grad()\n\n return output\n\n output_names = getattr(config, \"output_names\", [\"supervised batch loss\",])\n lr_scheduler = config.lr_scheduler\n\n trainer = Engine(train_update_function)\n trainer.logger = logger\n\n to_save = {\"model\": model, \"optimizer\": optimizer, \"lr_scheduler\": lr_scheduler, \"trainer\": trainer, \"amp\": amp}\n\n save_every_iters = getattr(config, \"save_every_iters\", 1000)\n\n common.setup_common_training_handlers(\n trainer,\n train_sampler,\n to_save=to_save,\n save_every_iters=save_every_iters,\n save_handler=get_save_handler(config),\n lr_scheduler=lr_scheduler,\n with_gpu_stats=exp_tracking.has_mlflow,\n output_names=output_names,\n with_pbars=False,\n )\n\n if idist.get_rank() == 0:\n common.ProgressBar(persist=False).attach(trainer, metric_names=\"all\")\n\n return trainer\n\n\ndef create_evaluators(model, metrics, config):\n model_output_transform = getattr(config, \"model_output_transform\", lambda x: x)\n\n evaluator_args = dict(\n model=model,\n metrics=metrics,\n device=config.device,\n non_blocking=True,\n prepare_batch=config.prepare_batch,\n output_transform=lambda x, y, y_pred: (model_output_transform(y_pred), y,),\n )\n train_evaluator = create_supervised_evaluator(**evaluator_args)\n evaluator = create_supervised_evaluator(**evaluator_args)\n\n if idist.get_rank() == 0:\n common.ProgressBar(desc=\"Evaluation (train)\", persist=False).attach(train_evaluator)\n common.ProgressBar(desc=\"Evaluation (val)\", persist=False).attach(evaluator)\n\n return evaluator, train_evaluator\n\n\ndef log_metrics(logger, epoch, elapsed, tag, metrics):\n logger.info(\n \"\\nEpoch {} - Evaluation time (seconds): {} - {} metrics:\\n {}\".format(\n epoch, int(elapsed), tag, \"\\n\".join([\"\\t{}: {}\".format(k, v) for k, v in metrics.items()])\n )\n )\n\n\ndef log_basic_info(logger, config):\n\n msg = \"\\n- PyTorch version: {}\".format(torch.__version__)\n msg += \"\\n- Ignite version: {}\".format(ignite.__version__)\n msg += \"\\n- Cuda device name: {}\".format(torch.cuda.get_device_name(idist.get_local_rank()))\n\n logger.info(msg)\n\n if idist.get_world_size() > 1:\n msg = \"\\nDistributed setting:\"\n msg += \"\\tbackend: {}\".format(idist.backend())\n msg += \"\\trank: {}\".format(idist.get_rank())\n msg += \"\\tworld size: {}\".format(idist.get_world_size())\n logger.info(msg)\n\n\ndef training(local_rank, config, logger=None):\n\n if not 
getattr(config, \"use_fp16\", True):\n raise RuntimeError(\"This training script uses by default fp16 AMP\")\n\n torch.backends.cudnn.benchmark = True\n\n set_seed(config.seed + local_rank)\n\n train_loader, val_loader, train_eval_loader = config.train_loader, config.val_loader, config.train_eval_loader\n\n # Setup model, optimizer, criterion\n model, optimizer, criterion = initialize(config)\n\n # Setup trainer for this specific task\n trainer = create_trainer(model, optimizer, criterion, train_loader.sampler, config, logger)\n\n # Setup evaluators\n num_classes = config.num_classes\n cm_metric = ConfusionMatrix(num_classes=num_classes)\n\n val_metrics = {\n \"IoU\": IoU(cm_metric),\n \"mIoU_bg\": mIoU(cm_metric),\n }\n\n if hasattr(config, \"val_metrics\") and isinstance(config.val_metrics, dict):\n val_metrics.update(config.val_metrics)\n\n evaluator, train_evaluator = create_evaluators(model, val_metrics, config)\n\n @trainer.on(Events.EPOCH_COMPLETED(every=getattr(config, \"val_interval\", 1)) | Events.COMPLETED)\n def run_validation():\n epoch = trainer.state.epoch\n state = train_evaluator.run(train_eval_loader)\n log_metrics(logger, epoch, state.times[\"COMPLETED\"], \"Train\", state.metrics)\n state = evaluator.run(val_loader)\n log_metrics(logger, epoch, state.times[\"COMPLETED\"], \"Test\", state.metrics)\n\n if getattr(config, \"start_by_validation\", False):\n trainer.add_event_handler(Events.STARTED, run_validation)\n\n score_metric_name = \"mIoU_bg\"\n\n if hasattr(config, \"es_patience\"):\n common.add_early_stopping_by_val_score(config.es_patience, evaluator, trainer, metric_name=score_metric_name)\n\n # Store 3 best models by validation accuracy:\n common.gen_save_best_models_by_val_score(\n save_handler=get_save_handler(config),\n evaluator=evaluator,\n models=model,\n metric_name=score_metric_name,\n n_saved=3,\n trainer=trainer,\n tag=\"val\",\n )\n\n if idist.get_rank() == 0:\n\n tb_logger = common.setup_tb_logging(\n config.output_path.as_posix(),\n trainer,\n optimizer,\n evaluators={\"training\": train_evaluator, \"validation\": evaluator},\n )\n\n if not exp_tracking.has_trains:\n exp_tracking_logger = exp_tracking.setup_logging(\n trainer, optimizer, evaluators={\"training\": train_evaluator, \"validation\": evaluator}\n )\n\n # Log val predictions:\n tb_logger.attach(\n evaluator,\n log_handler=predictions_gt_images_handler(\n img_denormalize_fn=config.img_denormalize, n_images=15, another_engine=trainer, prefix_tag=\"validation\"\n ),\n event_name=Events.ITERATION_COMPLETED(once=len(val_loader) // 2),\n )\n\n # Log confusion matrix to Trains:\n if exp_tracking.has_trains:\n from trains import Task\n\n trains_logger = Task.current_task().get_logger()\n\n @trainer.on(Events.COMPLETED)\n def log_cm():\n cm = cm_metric.compute().numpy()\n cm = cm / (cm.sum(axis=1)[:, None] + 1e-15)\n trains_logger.report_confusion_matrix(\n title=\"Final Confusion Matrix\",\n series=\"cm-preds-gt\",\n matrix=cm,\n iteration=trainer.state.iteration,\n xlabels=VOCSegmentationOpencv.target_names,\n ylabels=VOCSegmentationOpencv.target_names,\n )\n\n trainer.run(train_loader, max_epochs=config.num_epochs)\n\n if idist.get_rank() == 0:\n tb_logger.close()\n if not exp_tracking.has_trains:\n exp_tracking_logger.close()\n\n\ndef run(config, **kwargs):\n \"\"\"This is the main method to run the training. 
As this training script is launched with `py_config_runner`\n it should obligatory contain `run(config, **kwargs)` method.\n\n \"\"\"\n\n assert torch.cuda.is_available(), torch.cuda.is_available()\n assert torch.backends.cudnn.enabled, \"Nvidia/Amp requires cudnn backend to be enabled.\"\n\n with idist.Parallel(backend=\"nccl\") as parallel:\n\n logger = setup_logger(name=\"Pascal-VOC12 Training\", distributed_rank=idist.get_rank())\n\n assert_config(config, TRAINVAL_CONFIG)\n # The following attributes are automatically added by py_config_runner\n assert hasattr(config, \"config_filepath\") and isinstance(config.config_filepath, Path)\n assert hasattr(config, \"script_filepath\") and isinstance(config.script_filepath, Path)\n\n if idist.get_rank() == 0 and exp_tracking.has_trains:\n from trains import Task\n\n task = Task.init(\"Pascal-VOC12 Training\", config.config_filepath.stem)\n task.connect_configuration(config.config_filepath.as_posix())\n\n log_basic_info(logger, config)\n\n config.output_path = Path(exp_tracking.get_output_path())\n # dump python files to reproduce the run\n exp_tracking.log_artifact(config.config_filepath.as_posix())\n exp_tracking.log_artifact(config.script_filepath.as_posix())\n exp_tracking.log_params(get_params(config, TRAINVAL_CONFIG))\n\n try:\n parallel.run(training, config, logger=logger)\n except KeyboardInterrupt:\n logger.info(\"Catched KeyboardInterrupt -> exit\")\n except Exception as e: # noqa\n logger.exception(\"\")\n raise e\n", "path": "examples/references/segmentation/pascal_voc2012/code/scripts/training.py"}]} | 3,991 | 396 |
gh_patches_debug_29298 | rasdani/github-patches | git_diff | qutebrowser__qutebrowser-5502 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Have option to disable Mouse 4 and Mouse 5 from jumping forward and back through tab history.
When using Mouse 4 or Mouse 5 as a global hotkey for another application, for example a voice chat program such as Discord or Mumble using either button as push to talk, qutebrowser still receives the button press and goes forward and backwards through history while focused. Some way to disable Mouse 4 and Mouse 5 from being used by qutebrowser would be cool.
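One possible shape for this, as a rough sketch only (assuming a boolean setting; the name `input.mouse.back_forward_buttons` is borrowed from the patch further below, not from a released config):

```
from PyQt5.QtCore import Qt


def should_handle_backforward(button, back_forward_enabled):
    # Hypothetical helper: return False when the extra mouse buttons are
    # disabled, so the press falls through to whatever application owns
    # the global hotkey instead of navigating tab history.
    if button in (Qt.XButton1, Qt.XButton2) and not back_forward_enabled:
        return False
    return True
```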
</issue>
<code>
[start of qutebrowser/browser/eventfilter.py]
1 # vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:
2
3 # Copyright 2016-2020 Florian Bruhin (The Compiler) <[email protected]>
4 #
5 # This file is part of qutebrowser.
6 #
7 # qutebrowser is free software: you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation, either version 3 of the License, or
10 # (at your option) any later version.
11 #
12 # qutebrowser is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with qutebrowser. If not, see <http://www.gnu.org/licenses/>.
19
20 """Event handling for a browser tab."""
21
22 from PyQt5.QtCore import QObject, QEvent, Qt, QTimer
23
24 from qutebrowser.config import config
25 from qutebrowser.utils import message, log, usertypes, qtutils, objreg
26 from qutebrowser.misc import objects
27 from qutebrowser.keyinput import modeman
28
29
30 class ChildEventFilter(QObject):
31
32 """An event filter re-adding TabEventFilter on ChildEvent.
33
34 This is needed because QtWebEngine likes to randomly change its
35 focusProxy...
36
37 FIXME:qtwebengine Add a test for this happening
38
39 Attributes:
40 _filter: The event filter to install.
41 _widget: The widget expected to send out childEvents.
42 """
43
44 def __init__(self, eventfilter, widget, win_id, parent=None):
45 super().__init__(parent)
46 self._filter = eventfilter
47 assert widget is not None
48 self._widget = widget
49 self._win_id = win_id
50
51 def eventFilter(self, obj, event):
52 """Act on ChildAdded events."""
53 if event.type() == QEvent.ChildAdded:
54 child = event.child()
55 log.misc.debug("{} got new child {}, installing filter".format(
56 obj, child))
57 assert obj is self._widget
58 child.installEventFilter(self._filter)
59
60 if qtutils.version_check('5.11', compiled=False, exact=True):
61 # WORKAROUND for https://bugreports.qt.io/browse/QTBUG-68076
62 pass_modes = [usertypes.KeyMode.command,
63 usertypes.KeyMode.prompt,
64 usertypes.KeyMode.yesno]
65 if modeman.instance(self._win_id).mode not in pass_modes:
66 tabbed_browser = objreg.get('tabbed-browser',
67 scope='window',
68 window=self._win_id)
69 current_index = tabbed_browser.widget.currentIndex()
70 try:
71 widget_index = tabbed_browser.widget.indexOf(
72 self._widget.parent())
73 except RuntimeError:
74 widget_index = -1
75 if current_index == widget_index:
76 QTimer.singleShot(0, self._widget.setFocus)
77
78 elif event.type() == QEvent.ChildRemoved:
79 child = event.child()
80 log.misc.debug("{}: removed child {}".format(obj, child))
81
82 return False
83
84
85 class TabEventFilter(QObject):
86
87 """Handle mouse/keyboard events on a tab.
88
89 Attributes:
90 _tab: The browsertab object this filter is installed on.
91 _handlers: A dict of handler functions for the handled events.
92 _ignore_wheel_event: Whether to ignore the next wheelEvent.
93 _check_insertmode_on_release: Whether an insertmode check should be
94 done when the mouse is released.
95 """
96
97 def __init__(self, tab, *, parent=None):
98 super().__init__(parent)
99 self._tab = tab
100 self._handlers = {
101 QEvent.MouseButtonPress: self._handle_mouse_press,
102 QEvent.MouseButtonRelease: self._handle_mouse_release,
103 QEvent.Wheel: self._handle_wheel,
104 QEvent.ContextMenu: self._handle_context_menu,
105 QEvent.KeyRelease: self._handle_key_release,
106 }
107 self._ignore_wheel_event = False
108 self._check_insertmode_on_release = False
109
110 def _handle_mouse_press(self, e):
111 """Handle pressing of a mouse button.
112
113 Args:
114 e: The QMouseEvent.
115
116 Return:
117 True if the event should be filtered, False otherwise.
118 """
119 is_rocker_gesture = (config.val.input.rocker_gestures and
120 e.buttons() == Qt.LeftButton | Qt.RightButton)
121
122 if e.button() in [Qt.XButton1, Qt.XButton2] or is_rocker_gesture:
123 self._mousepress_backforward(e)
124 return True
125
126 self._ignore_wheel_event = True
127
128 pos = e.pos()
129 if pos.x() < 0 or pos.y() < 0:
130 log.mouse.warning("Ignoring invalid click at {}".format(pos))
131 return False
132
133 if e.button() != Qt.NoButton:
134 self._tab.elements.find_at_pos(pos, self._mousepress_insertmode_cb)
135
136 return False
137
138 def _handle_mouse_release(self, _e):
139 """Handle releasing of a mouse button.
140
141 Args:
142 e: The QMouseEvent.
143
144 Return:
145 True if the event should be filtered, False otherwise.
146 """
147 # We want to make sure we check the focus element after the WebView is
148 # updated completely.
149 QTimer.singleShot(0, self._mouserelease_insertmode)
150 return False
151
152 def _handle_wheel(self, e):
153 """Zoom on Ctrl-Mousewheel.
154
155 Args:
156 e: The QWheelEvent.
157
158 Return:
159 True if the event should be filtered, False otherwise.
160 """
161 if self._ignore_wheel_event:
162 # See https://github.com/qutebrowser/qutebrowser/issues/395
163 self._ignore_wheel_event = False
164 return True
165
166 # Don't allow scrolling while hinting
167 mode = modeman.instance(self._tab.win_id).mode
168 if mode == usertypes.KeyMode.hint:
169 return True
170
171 elif e.modifiers() & Qt.ControlModifier:
172 if mode == usertypes.KeyMode.passthrough:
173 return False
174
175 divider = config.val.zoom.mouse_divider
176 if divider == 0:
177 # Disable mouse zooming
178 return True
179
180 factor = self._tab.zoom.factor() + (e.angleDelta().y() / divider)
181 if factor < 0:
182 return True
183
184 perc = int(100 * factor)
185 message.info("Zoom level: {}%".format(perc), replace=True)
186 self._tab.zoom.set_factor(factor)
187 return True
188 elif (e.modifiers() & Qt.ShiftModifier and
189 not qtutils.version_check('5.9', compiled=False)):
190 if e.angleDelta().y() > 0:
191 self._tab.scroller.left()
192 else:
193 self._tab.scroller.right()
194 return True
195
196 return False
197
198 def _handle_context_menu(self, _e):
199 """Suppress context menus if rocker gestures are turned on.
200
201 Args:
202 e: The QContextMenuEvent.
203
204 Return:
205 True if the event should be filtered, False otherwise.
206 """
207 return config.val.input.rocker_gestures
208
209 def _handle_key_release(self, e):
210 """Ignore repeated key release events going to the website.
211
212 WORKAROUND for https://bugreports.qt.io/browse/QTBUG-77208
213
214 Args:
215 e: The QKeyEvent.
216
217 Return:
218 True if the event should be filtered, False otherwise.
219 """
220 return (e.isAutoRepeat() and
221 qtutils.version_check('5.10', compiled=False) and
222 not qtutils.version_check('5.14', compiled=False) and
223 objects.backend == usertypes.Backend.QtWebEngine)
224
225 def _mousepress_insertmode_cb(self, elem):
226 """Check if the clicked element is editable."""
227 if elem is None:
228 # Something didn't work out, let's find the focus element after
229 # a mouse release.
230 log.mouse.debug("Got None element, scheduling check on "
231 "mouse release")
232 self._check_insertmode_on_release = True
233 return
234
235 if elem.is_editable():
236 log.mouse.debug("Clicked editable element!")
237 if config.val.input.insert_mode.auto_enter:
238 modeman.enter(self._tab.win_id, usertypes.KeyMode.insert,
239 'click', only_if_normal=True)
240 else:
241 log.mouse.debug("Clicked non-editable element!")
242 if config.val.input.insert_mode.auto_leave:
243 modeman.leave(self._tab.win_id, usertypes.KeyMode.insert,
244 'click', maybe=True)
245
246 def _mouserelease_insertmode(self):
247 """If we have an insertmode check scheduled, handle it."""
248 if not self._check_insertmode_on_release:
249 return
250 self._check_insertmode_on_release = False
251
252 def mouserelease_insertmode_cb(elem):
253 """Callback which gets called from JS."""
254 if elem is None:
255 log.mouse.debug("Element vanished!")
256 return
257
258 if elem.is_editable():
259 log.mouse.debug("Clicked editable element (delayed)!")
260 modeman.enter(self._tab.win_id, usertypes.KeyMode.insert,
261 'click-delayed', only_if_normal=True)
262 else:
263 log.mouse.debug("Clicked non-editable element (delayed)!")
264 if config.val.input.insert_mode.auto_leave:
265 modeman.leave(self._tab.win_id, usertypes.KeyMode.insert,
266 'click-delayed', maybe=True)
267
268 self._tab.elements.find_focused(mouserelease_insertmode_cb)
269
270 def _mousepress_backforward(self, e):
271 """Handle back/forward mouse button presses.
272
273 Args:
274 e: The QMouseEvent.
275
276 Return:
277 True if the event should be filtered, False otherwise.
278 """
279 if e.button() in [Qt.XButton1, Qt.LeftButton]:
280 # Back button on mice which have it, or rocker gesture
281 if self._tab.history.can_go_back():
282 self._tab.history.back()
283 else:
284 message.error("At beginning of history.")
285 elif e.button() in [Qt.XButton2, Qt.RightButton]:
286 # Forward button on mice which have it, or rocker gesture
287 if self._tab.history.can_go_forward():
288 self._tab.history.forward()
289 else:
290 message.error("At end of history.")
291
292 def eventFilter(self, obj, event):
293 """Filter events going to a QWeb(Engine)View.
294
295 Return:
296 True if the event should be filtered, False otherwise.
297 """
298 evtype = event.type()
299 if evtype not in self._handlers:
300 return False
301 if obj is not self._tab.private_api.event_target():
302 log.mouse.debug("Ignoring {} to {}".format(
303 event.__class__.__name__, obj))
304 return False
305 return self._handlers[evtype](event)
306
[end of qutebrowser/browser/eventfilter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutebrowser/browser/eventfilter.py b/qutebrowser/browser/eventfilter.py
--- a/qutebrowser/browser/eventfilter.py
+++ b/qutebrowser/browser/eventfilter.py
@@ -116,7 +116,7 @@
Return:
True if the event should be filtered, False otherwise.
"""
- is_rocker_gesture = (config.val.input.rocker_gestures and
+ is_rocker_gesture = (config.val.input.mouse.rocker_gestures and
e.buttons() == Qt.LeftButton | Qt.RightButton)
if e.button() in [Qt.XButton1, Qt.XButton2] or is_rocker_gesture:
@@ -204,7 +204,7 @@
Return:
True if the event should be filtered, False otherwise.
"""
- return config.val.input.rocker_gestures
+ return config.val.input.mouse.rocker_gestures
def _handle_key_release(self, e):
"""Ignore repeated key release events going to the website.
@@ -276,6 +276,11 @@
Return:
True if the event should be filtered, False otherwise.
"""
+ if (not config.val.input.mouse.back_forward_buttons and
+ e.button() in [Qt.XButton1, Qt.XButton2]):
+ # Back and forward on mice are disabled
+ return
+
if e.button() in [Qt.XButton1, Qt.LeftButton]:
# Back button on mice which have it, or rocker gesture
if self._tab.history.can_go_back():
| {"golden_diff": "diff --git a/qutebrowser/browser/eventfilter.py b/qutebrowser/browser/eventfilter.py\n--- a/qutebrowser/browser/eventfilter.py\n+++ b/qutebrowser/browser/eventfilter.py\n@@ -116,7 +116,7 @@\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n- is_rocker_gesture = (config.val.input.rocker_gestures and\n+ is_rocker_gesture = (config.val.input.mouse.rocker_gestures and\n e.buttons() == Qt.LeftButton | Qt.RightButton)\n \n if e.button() in [Qt.XButton1, Qt.XButton2] or is_rocker_gesture:\n@@ -204,7 +204,7 @@\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n- return config.val.input.rocker_gestures\n+ return config.val.input.mouse.rocker_gestures\n \n def _handle_key_release(self, e):\n \"\"\"Ignore repeated key release events going to the website.\n@@ -276,6 +276,11 @@\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n+ if (not config.val.input.mouse.back_forward_buttons and\n+ e.button() in [Qt.XButton1, Qt.XButton2]):\n+ # Back and forward on mice are disabled\n+ return\n+\n if e.button() in [Qt.XButton1, Qt.LeftButton]:\n # Back button on mice which have it, or rocker gesture\n if self._tab.history.can_go_back():\n", "issue": "Have option to disable Mouse 4 and Mouse 5 from jumping forward and back through tab history.\nWhen using Mouse 4 or Mouse 5 as a global hotkey for another application, for example a voice chat program such as Discord or Mumble using either button as push to talk, qutebrowser still receives the button press and goes forward and backwards through history while focused. Some way to disable Mouse 4 and Mouse 5 from being used by qutebrowser would be cool.\n", "before_files": [{"content": "# vim: ft=python fileencoding=utf-8 sts=4 sw=4 et:\n\n# Copyright 2016-2020 Florian Bruhin (The Compiler) <[email protected]>\n#\n# This file is part of qutebrowser.\n#\n# qutebrowser is free software: you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation, either version 3 of the License, or\n# (at your option) any later version.\n#\n# qutebrowser is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with qutebrowser. 
If not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Event handling for a browser tab.\"\"\"\n\nfrom PyQt5.QtCore import QObject, QEvent, Qt, QTimer\n\nfrom qutebrowser.config import config\nfrom qutebrowser.utils import message, log, usertypes, qtutils, objreg\nfrom qutebrowser.misc import objects\nfrom qutebrowser.keyinput import modeman\n\n\nclass ChildEventFilter(QObject):\n\n \"\"\"An event filter re-adding TabEventFilter on ChildEvent.\n\n This is needed because QtWebEngine likes to randomly change its\n focusProxy...\n\n FIXME:qtwebengine Add a test for this happening\n\n Attributes:\n _filter: The event filter to install.\n _widget: The widget expected to send out childEvents.\n \"\"\"\n\n def __init__(self, eventfilter, widget, win_id, parent=None):\n super().__init__(parent)\n self._filter = eventfilter\n assert widget is not None\n self._widget = widget\n self._win_id = win_id\n\n def eventFilter(self, obj, event):\n \"\"\"Act on ChildAdded events.\"\"\"\n if event.type() == QEvent.ChildAdded:\n child = event.child()\n log.misc.debug(\"{} got new child {}, installing filter\".format(\n obj, child))\n assert obj is self._widget\n child.installEventFilter(self._filter)\n\n if qtutils.version_check('5.11', compiled=False, exact=True):\n # WORKAROUND for https://bugreports.qt.io/browse/QTBUG-68076\n pass_modes = [usertypes.KeyMode.command,\n usertypes.KeyMode.prompt,\n usertypes.KeyMode.yesno]\n if modeman.instance(self._win_id).mode not in pass_modes:\n tabbed_browser = objreg.get('tabbed-browser',\n scope='window',\n window=self._win_id)\n current_index = tabbed_browser.widget.currentIndex()\n try:\n widget_index = tabbed_browser.widget.indexOf(\n self._widget.parent())\n except RuntimeError:\n widget_index = -1\n if current_index == widget_index:\n QTimer.singleShot(0, self._widget.setFocus)\n\n elif event.type() == QEvent.ChildRemoved:\n child = event.child()\n log.misc.debug(\"{}: removed child {}\".format(obj, child))\n\n return False\n\n\nclass TabEventFilter(QObject):\n\n \"\"\"Handle mouse/keyboard events on a tab.\n\n Attributes:\n _tab: The browsertab object this filter is installed on.\n _handlers: A dict of handler functions for the handled events.\n _ignore_wheel_event: Whether to ignore the next wheelEvent.\n _check_insertmode_on_release: Whether an insertmode check should be\n done when the mouse is released.\n \"\"\"\n\n def __init__(self, tab, *, parent=None):\n super().__init__(parent)\n self._tab = tab\n self._handlers = {\n QEvent.MouseButtonPress: self._handle_mouse_press,\n QEvent.MouseButtonRelease: self._handle_mouse_release,\n QEvent.Wheel: self._handle_wheel,\n QEvent.ContextMenu: self._handle_context_menu,\n QEvent.KeyRelease: self._handle_key_release,\n }\n self._ignore_wheel_event = False\n self._check_insertmode_on_release = False\n\n def _handle_mouse_press(self, e):\n \"\"\"Handle pressing of a mouse button.\n\n Args:\n e: The QMouseEvent.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n is_rocker_gesture = (config.val.input.rocker_gestures and\n e.buttons() == Qt.LeftButton | Qt.RightButton)\n\n if e.button() in [Qt.XButton1, Qt.XButton2] or is_rocker_gesture:\n self._mousepress_backforward(e)\n return True\n\n self._ignore_wheel_event = True\n\n pos = e.pos()\n if pos.x() < 0 or pos.y() < 0:\n log.mouse.warning(\"Ignoring invalid click at {}\".format(pos))\n return False\n\n if e.button() != Qt.NoButton:\n self._tab.elements.find_at_pos(pos, self._mousepress_insertmode_cb)\n\n return False\n\n def 
_handle_mouse_release(self, _e):\n \"\"\"Handle releasing of a mouse button.\n\n Args:\n e: The QMouseEvent.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n # We want to make sure we check the focus element after the WebView is\n # updated completely.\n QTimer.singleShot(0, self._mouserelease_insertmode)\n return False\n\n def _handle_wheel(self, e):\n \"\"\"Zoom on Ctrl-Mousewheel.\n\n Args:\n e: The QWheelEvent.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n if self._ignore_wheel_event:\n # See https://github.com/qutebrowser/qutebrowser/issues/395\n self._ignore_wheel_event = False\n return True\n\n # Don't allow scrolling while hinting\n mode = modeman.instance(self._tab.win_id).mode\n if mode == usertypes.KeyMode.hint:\n return True\n\n elif e.modifiers() & Qt.ControlModifier:\n if mode == usertypes.KeyMode.passthrough:\n return False\n\n divider = config.val.zoom.mouse_divider\n if divider == 0:\n # Disable mouse zooming\n return True\n\n factor = self._tab.zoom.factor() + (e.angleDelta().y() / divider)\n if factor < 0:\n return True\n\n perc = int(100 * factor)\n message.info(\"Zoom level: {}%\".format(perc), replace=True)\n self._tab.zoom.set_factor(factor)\n return True\n elif (e.modifiers() & Qt.ShiftModifier and\n not qtutils.version_check('5.9', compiled=False)):\n if e.angleDelta().y() > 0:\n self._tab.scroller.left()\n else:\n self._tab.scroller.right()\n return True\n\n return False\n\n def _handle_context_menu(self, _e):\n \"\"\"Suppress context menus if rocker gestures are turned on.\n\n Args:\n e: The QContextMenuEvent.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n return config.val.input.rocker_gestures\n\n def _handle_key_release(self, e):\n \"\"\"Ignore repeated key release events going to the website.\n\n WORKAROUND for https://bugreports.qt.io/browse/QTBUG-77208\n\n Args:\n e: The QKeyEvent.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n return (e.isAutoRepeat() and\n qtutils.version_check('5.10', compiled=False) and\n not qtutils.version_check('5.14', compiled=False) and\n objects.backend == usertypes.Backend.QtWebEngine)\n\n def _mousepress_insertmode_cb(self, elem):\n \"\"\"Check if the clicked element is editable.\"\"\"\n if elem is None:\n # Something didn't work out, let's find the focus element after\n # a mouse release.\n log.mouse.debug(\"Got None element, scheduling check on \"\n \"mouse release\")\n self._check_insertmode_on_release = True\n return\n\n if elem.is_editable():\n log.mouse.debug(\"Clicked editable element!\")\n if config.val.input.insert_mode.auto_enter:\n modeman.enter(self._tab.win_id, usertypes.KeyMode.insert,\n 'click', only_if_normal=True)\n else:\n log.mouse.debug(\"Clicked non-editable element!\")\n if config.val.input.insert_mode.auto_leave:\n modeman.leave(self._tab.win_id, usertypes.KeyMode.insert,\n 'click', maybe=True)\n\n def _mouserelease_insertmode(self):\n \"\"\"If we have an insertmode check scheduled, handle it.\"\"\"\n if not self._check_insertmode_on_release:\n return\n self._check_insertmode_on_release = False\n\n def mouserelease_insertmode_cb(elem):\n \"\"\"Callback which gets called from JS.\"\"\"\n if elem is None:\n log.mouse.debug(\"Element vanished!\")\n return\n\n if elem.is_editable():\n log.mouse.debug(\"Clicked editable element (delayed)!\")\n modeman.enter(self._tab.win_id, usertypes.KeyMode.insert,\n 'click-delayed', only_if_normal=True)\n else:\n log.mouse.debug(\"Clicked 
non-editable element (delayed)!\")\n if config.val.input.insert_mode.auto_leave:\n modeman.leave(self._tab.win_id, usertypes.KeyMode.insert,\n 'click-delayed', maybe=True)\n\n self._tab.elements.find_focused(mouserelease_insertmode_cb)\n\n def _mousepress_backforward(self, e):\n \"\"\"Handle back/forward mouse button presses.\n\n Args:\n e: The QMouseEvent.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n if e.button() in [Qt.XButton1, Qt.LeftButton]:\n # Back button on mice which have it, or rocker gesture\n if self._tab.history.can_go_back():\n self._tab.history.back()\n else:\n message.error(\"At beginning of history.\")\n elif e.button() in [Qt.XButton2, Qt.RightButton]:\n # Forward button on mice which have it, or rocker gesture\n if self._tab.history.can_go_forward():\n self._tab.history.forward()\n else:\n message.error(\"At end of history.\")\n\n def eventFilter(self, obj, event):\n \"\"\"Filter events going to a QWeb(Engine)View.\n\n Return:\n True if the event should be filtered, False otherwise.\n \"\"\"\n evtype = event.type()\n if evtype not in self._handlers:\n return False\n if obj is not self._tab.private_api.event_target():\n log.mouse.debug(\"Ignoring {} to {}\".format(\n event.__class__.__name__, obj))\n return False\n return self._handlers[evtype](event)\n", "path": "qutebrowser/browser/eventfilter.py"}]} | 3,850 | 345 |
gh_patches_debug_25513 | rasdani/github-patches | git_diff | pypa__setuptools-3805 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[FR] Cache supported tags in Wheel.is_compatible
### What's the problem this feature will solve?
Calling `is_compatible` on an instance of the `Wheel` class takes mere milliseconds. But when calling this on thousands of wheels, this quickly adds up.
I think this method is called when setuptools or pip reads an index page, for example https://pypi.org/simple/setuptools/, and for each link checks if the wheel is compatible with the current interpreter and platform. (Not completely sure if this is how it works.)
My own use case is with Buildout. If this downloads a distribution, it saves it in a directory. Buildout uses this directory as an extra find-link. So the next time you call buildout, these distributions are available. This can help for the case where you have no internet, or someone has removed a distribution from PyPI (which happens a lot less these days, I am glad to say.) With thousands of wheels in there, and Buildout/setuptools calling `is_compatible` on each wheel, this takes too much time.
I created an [issue in Buildout](https://github.com/buildout/buildout/issues/626) to track this, so some more details are there. There it seems it is worse with the combination of the very latest setuptools (67.0.0) and pip (23.0.0), and extra worse on Python 3.8 compared to 3.11. But this is a bit unclear.
### Describe the solution you'd like
This is fixable in the `is_compatible` method in [`setuptools/wheel.py`](https://github.com/pypa/setuptools/blob/v67.0.0/setuptools/wheel.py#L85-L89) by calculating the supported tags once, outside of the class.
When I checked on my system, this gives a set of 1700 supported tags. With 1000 wheels, we would calculate 1.7 million tags. A tad much. ;-)
The assumption is that calling `sys_tags` from the vendored packaging returns the same result each time.
I am preparing a PR.
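As a rough sketch of that idea (a module-level cache, assuming `sys_tags()` from the vendored packaging is stable for the lifetime of the process; names may differ in the actual PR):

```
import functools

from setuptools.extern.packaging.tags import sys_tags


@functools.lru_cache(maxsize=None)
def _get_supported_tags():
    # Computed once per process instead of once per Wheel.is_compatible() call.
    return {(t.interpreter, t.abi, t.platform) for t in sys_tags()}
```

`is_compatible` would then only need to test `self.tags()` against this cached set.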
### Alternative Solutions
I suppose Buildout could add another patch to its existing [patches](https://github.com/buildout/buildout/blob/master/src/zc/buildout/patches.py), which already include `setuptools.package_index.PackageIndex`, which is involved here.
But I think pip would benefit from a faster method as well.
### Additional context
Here is a test file to get some timings on your own system. Prerequisite: a directory with some wheel, the more the better.
```
from setuptools.wheel import Wheel
from time import time
import os
DIR = "/Users/maurits/cached-downloads/dist"
print(f"Looking for compatible wheels in {DIR}...")
wheels = 0
compatible = 0
start = time()
for filename in os.listdir(DIR):
if not filename.endswith(".whl"):
continue
wheel = Wheel(os.path.join(DIR, filename))
wheels += 1
if wheel.is_compatible():
compatible += 1
stop = time()
print(f"""
Processed {wheels} wheels.
There were {compatible} compatible wheels.
Time taken: {stop - start} seconds.
""")
```
Save this as `test.py`.
With a clone of the pip repo and using the main branch, this is the result:
```
$ .tox/python/bin/python test.py
Looking for compatible wheels in /Users/maurits/cached-downloads/dist...
Processed 2284 wheels.
There were 1776 compatible wheels.
Time taken: 7.127894639968872 seconds.
```
With my branch I get the same numbers, except the time:
```
Time taken: 0.04627823829650879 seconds.
```
That is about 150 times faster.
### Code of Conduct
- [X] I agree to follow the PSF Code of Conduct
</issue>
<code>
[start of setuptools/wheel.py]
1 """Wheels support."""
2
3 import email
4 import itertools
5 import os
6 import posixpath
7 import re
8 import zipfile
9 import contextlib
10
11 from distutils.util import get_platform
12
13 import setuptools
14 from setuptools.extern.packaging.version import Version as parse_version
15 from setuptools.extern.packaging.tags import sys_tags
16 from setuptools.extern.packaging.utils import canonicalize_name
17 from setuptools.command.egg_info import write_requirements, _egg_basename
18 from setuptools.archive_util import _unpack_zipfile_obj
19
20
21 WHEEL_NAME = re.compile(
22 r"""^(?P<project_name>.+?)-(?P<version>\d.*?)
23 ((-(?P<build>\d.*?))?-(?P<py_version>.+?)-(?P<abi>.+?)-(?P<platform>.+?)
24 )\.whl$""",
25 re.VERBOSE).match
26
27 NAMESPACE_PACKAGE_INIT = \
28 "__import__('pkg_resources').declare_namespace(__name__)\n"
29
30
31 def unpack(src_dir, dst_dir):
32 '''Move everything under `src_dir` to `dst_dir`, and delete the former.'''
33 for dirpath, dirnames, filenames in os.walk(src_dir):
34 subdir = os.path.relpath(dirpath, src_dir)
35 for f in filenames:
36 src = os.path.join(dirpath, f)
37 dst = os.path.join(dst_dir, subdir, f)
38 os.renames(src, dst)
39 for n, d in reversed(list(enumerate(dirnames))):
40 src = os.path.join(dirpath, d)
41 dst = os.path.join(dst_dir, subdir, d)
42 if not os.path.exists(dst):
43 # Directory does not exist in destination,
44 # rename it and prune it from os.walk list.
45 os.renames(src, dst)
46 del dirnames[n]
47 # Cleanup.
48 for dirpath, dirnames, filenames in os.walk(src_dir, topdown=True):
49 assert not filenames
50 os.rmdir(dirpath)
51
52
53 @contextlib.contextmanager
54 def disable_info_traces():
55 """
56 Temporarily disable info traces.
57 """
58 from distutils import log
59 saved = log.set_threshold(log.WARN)
60 try:
61 yield
62 finally:
63 log.set_threshold(saved)
64
65
66 class Wheel:
67
68 def __init__(self, filename):
69 match = WHEEL_NAME(os.path.basename(filename))
70 if match is None:
71 raise ValueError('invalid wheel name: %r' % filename)
72 self.filename = filename
73 for k, v in match.groupdict().items():
74 setattr(self, k, v)
75
76 def tags(self):
77 '''List tags (py_version, abi, platform) supported by this wheel.'''
78 return itertools.product(
79 self.py_version.split('.'),
80 self.abi.split('.'),
81 self.platform.split('.'),
82 )
83
84 def is_compatible(self):
85 '''Is the wheel is compatible with the current platform?'''
86 supported_tags = set(
87 (t.interpreter, t.abi, t.platform) for t in sys_tags())
88 return next((True for t in self.tags() if t in supported_tags), False)
89
90 def egg_name(self):
91 return _egg_basename(
92 self.project_name,
93 self.version,
94 platform=(None if self.platform == 'any' else get_platform()),
95 ) + ".egg"
96
97 def get_dist_info(self, zf):
98 # find the correct name of the .dist-info dir in the wheel file
99 for member in zf.namelist():
100 dirname = posixpath.dirname(member)
101 if (dirname.endswith('.dist-info') and
102 canonicalize_name(dirname).startswith(
103 canonicalize_name(self.project_name))):
104 return dirname
105 raise ValueError("unsupported wheel format. .dist-info not found")
106
107 def install_as_egg(self, destination_eggdir):
108 '''Install wheel as an egg directory.'''
109 with zipfile.ZipFile(self.filename) as zf:
110 self._install_as_egg(destination_eggdir, zf)
111
112 def _install_as_egg(self, destination_eggdir, zf):
113 dist_basename = '%s-%s' % (self.project_name, self.version)
114 dist_info = self.get_dist_info(zf)
115 dist_data = '%s.data' % dist_basename
116 egg_info = os.path.join(destination_eggdir, 'EGG-INFO')
117
118 self._convert_metadata(zf, destination_eggdir, dist_info, egg_info)
119 self._move_data_entries(destination_eggdir, dist_data)
120 self._fix_namespace_packages(egg_info, destination_eggdir)
121
122 @staticmethod
123 def _convert_metadata(zf, destination_eggdir, dist_info, egg_info):
124 import pkg_resources
125
126 def get_metadata(name):
127 with zf.open(posixpath.join(dist_info, name)) as fp:
128 value = fp.read().decode('utf-8')
129 return email.parser.Parser().parsestr(value)
130
131 wheel_metadata = get_metadata('WHEEL')
132 # Check wheel format version is supported.
133 wheel_version = parse_version(wheel_metadata.get('Wheel-Version'))
134 wheel_v1 = (
135 parse_version('1.0') <= wheel_version < parse_version('2.0dev0')
136 )
137 if not wheel_v1:
138 raise ValueError(
139 'unsupported wheel format version: %s' % wheel_version)
140 # Extract to target directory.
141 _unpack_zipfile_obj(zf, destination_eggdir)
142 # Convert metadata.
143 dist_info = os.path.join(destination_eggdir, dist_info)
144 dist = pkg_resources.Distribution.from_location(
145 destination_eggdir, dist_info,
146 metadata=pkg_resources.PathMetadata(destination_eggdir, dist_info),
147 )
148
149 # Note: Evaluate and strip markers now,
150 # as it's difficult to convert back from the syntax:
151 # foobar; "linux" in sys_platform and extra == 'test'
152 def raw_req(req):
153 req.marker = None
154 return str(req)
155 install_requires = list(map(raw_req, dist.requires()))
156 extras_require = {
157 extra: [
158 req
159 for req in map(raw_req, dist.requires((extra,)))
160 if req not in install_requires
161 ]
162 for extra in dist.extras
163 }
164 os.rename(dist_info, egg_info)
165 os.rename(
166 os.path.join(egg_info, 'METADATA'),
167 os.path.join(egg_info, 'PKG-INFO'),
168 )
169 setup_dist = setuptools.Distribution(
170 attrs=dict(
171 install_requires=install_requires,
172 extras_require=extras_require,
173 ),
174 )
175 with disable_info_traces():
176 write_requirements(
177 setup_dist.get_command_obj('egg_info'),
178 None,
179 os.path.join(egg_info, 'requires.txt'),
180 )
181
182 @staticmethod
183 def _move_data_entries(destination_eggdir, dist_data):
184 """Move data entries to their correct location."""
185 dist_data = os.path.join(destination_eggdir, dist_data)
186 dist_data_scripts = os.path.join(dist_data, 'scripts')
187 if os.path.exists(dist_data_scripts):
188 egg_info_scripts = os.path.join(
189 destination_eggdir, 'EGG-INFO', 'scripts')
190 os.mkdir(egg_info_scripts)
191 for entry in os.listdir(dist_data_scripts):
192 # Remove bytecode, as it's not properly handled
193 # during easy_install scripts install phase.
194 if entry.endswith('.pyc'):
195 os.unlink(os.path.join(dist_data_scripts, entry))
196 else:
197 os.rename(
198 os.path.join(dist_data_scripts, entry),
199 os.path.join(egg_info_scripts, entry),
200 )
201 os.rmdir(dist_data_scripts)
202 for subdir in filter(os.path.exists, (
203 os.path.join(dist_data, d)
204 for d in ('data', 'headers', 'purelib', 'platlib')
205 )):
206 unpack(subdir, destination_eggdir)
207 if os.path.exists(dist_data):
208 os.rmdir(dist_data)
209
210 @staticmethod
211 def _fix_namespace_packages(egg_info, destination_eggdir):
212 namespace_packages = os.path.join(
213 egg_info, 'namespace_packages.txt')
214 if os.path.exists(namespace_packages):
215 with open(namespace_packages) as fp:
216 namespace_packages = fp.read().split()
217 for mod in namespace_packages:
218 mod_dir = os.path.join(destination_eggdir, *mod.split('.'))
219 mod_init = os.path.join(mod_dir, '__init__.py')
220 if not os.path.exists(mod_dir):
221 os.mkdir(mod_dir)
222 if not os.path.exists(mod_init):
223 with open(mod_init, 'w') as fp:
224 fp.write(NAMESPACE_PACKAGE_INIT)
225
[end of setuptools/wheel.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/wheel.py b/setuptools/wheel.py
--- a/setuptools/wheel.py
+++ b/setuptools/wheel.py
@@ -2,6 +2,7 @@
import email
import itertools
+import functools
import os
import posixpath
import re
@@ -28,6 +29,14 @@
"__import__('pkg_resources').declare_namespace(__name__)\n"
[email protected]_cache(maxsize=None)
+def _get_supported_tags():
+ # We calculate the supported tags only once, otherwise calling
+ # this method on thousands of wheels takes seconds instead of
+ # milliseconds.
+ return {(t.interpreter, t.abi, t.platform) for t in sys_tags()}
+
+
def unpack(src_dir, dst_dir):
'''Move everything under `src_dir` to `dst_dir`, and delete the former.'''
for dirpath, dirnames, filenames in os.walk(src_dir):
@@ -82,10 +91,8 @@
)
def is_compatible(self):
- '''Is the wheel is compatible with the current platform?'''
- supported_tags = set(
- (t.interpreter, t.abi, t.platform) for t in sys_tags())
- return next((True for t in self.tags() if t in supported_tags), False)
+ '''Is the wheel compatible with the current platform?'''
+ return next((True for t in self.tags() if t in _get_supported_tags()), False)
def egg_name(self):
return _egg_basename(
| {"golden_diff": "diff --git a/setuptools/wheel.py b/setuptools/wheel.py\n--- a/setuptools/wheel.py\n+++ b/setuptools/wheel.py\n@@ -2,6 +2,7 @@\n \n import email\n import itertools\n+import functools\n import os\n import posixpath\n import re\n@@ -28,6 +29,14 @@\n \"__import__('pkg_resources').declare_namespace(__name__)\\n\"\n \n \[email protected]_cache(maxsize=None)\n+def _get_supported_tags():\n+ # We calculate the supported tags only once, otherwise calling\n+ # this method on thousands of wheels takes seconds instead of\n+ # milliseconds.\n+ return {(t.interpreter, t.abi, t.platform) for t in sys_tags()}\n+\n+\n def unpack(src_dir, dst_dir):\n '''Move everything under `src_dir` to `dst_dir`, and delete the former.'''\n for dirpath, dirnames, filenames in os.walk(src_dir):\n@@ -82,10 +91,8 @@\n )\n \n def is_compatible(self):\n- '''Is the wheel is compatible with the current platform?'''\n- supported_tags = set(\n- (t.interpreter, t.abi, t.platform) for t in sys_tags())\n- return next((True for t in self.tags() if t in supported_tags), False)\n+ '''Is the wheel compatible with the current platform?'''\n+ return next((True for t in self.tags() if t in _get_supported_tags()), False)\n \n def egg_name(self):\n return _egg_basename(\n", "issue": "[FR] Cache supported tags in Wheel.is_compatible\n### What's the problem this feature will solve?\n\nCalling `is_compatible` on an instance of the `Wheel` class takes mere milliseconds. But when calling this on thousands of wheels, this quickly adds up.\r\n\r\nI think this method is called when setuptools or pip reads an index page, for example https://pypi.org/simple/setuptools/, and for each link checks if the wheel is compatible with the current interpreter and platform. (Not completely sure if this is how it works.)\r\n\r\nMy own use case is with Buildout. If this downloads a distribution, it saves it in a directory. Buildout uses this directory as an extra find-link. So the next time you call buildout, these distributions are available. This can help for the case where you have no internet, or someone has removed a distribution from PyPI (which happens a lot less these days, I am glad to say.) With thousands of wheels in there, and Buildout/setuptools calling `is_compatible` on each wheel, this takes too much time.\r\n\r\nI created an [issue in Buildout](https://github.com/buildout/buildout/issues/626) to track this, so some more details are there. There it seems it is worse with the combination of the very latest setuptools (67.0.0) and pip (23.0.0), and extra worse on Python 3.8 compared to 3.11. But this is a bit unclear.\n\n### Describe the solution you'd like\n\nThis is fixable in the `_is_compatible` method in [`setuptools/wheel.py`](https://github.com/pypa/setuptools/blob/v67.0.0/setuptools/wheel.py#L85-L89) by calculating the supported tags once, outside of the class.\r\nWhen I checked on my system, this gives a set of 1700 supported tags. With 1000 wheels, we would calculate 1.7 million tags. A tad much. 
;-)\r\n\r\nThe assumption is that calling `sys_tags` from the vendored packaging returns the same result each time.\r\n\r\nI am preparing a PR.\r\n\n\n### Alternative Solutions\n\nI suppose Buildout could add another to its existing [patches](https://github.com/buildout/buildout/blob/master/src/zc/buildout/patches.py), which already includes `setuptools.package_index.PackageIndex` which is involved here.\r\nBut I think pip would benefit from a faster method as well.\n\n### Additional context\n\nHere is a test file to get some timings on your own system. Prerequisite: a directory with some wheel, the more the better.\r\n\r\n```\r\nfrom setuptools.wheel import Wheel\r\nfrom time import time\r\n\r\nimport os\r\n\r\nDIR = \"/Users/maurits/cached-downloads/dist\"\r\nprint(f\"Looking for compatible wheels in {DIR}...\")\r\nwheels = 0\r\ncompatible = 0\r\nstart = time()\r\nfor filename in os.listdir(DIR):\r\n if not filename.endswith(\".whl\"):\r\n continue\r\n wheel = Wheel(os.path.join(DIR, filename))\r\n wheels += 1\r\n if wheel.is_compatible():\r\n compatible += 1\r\nstop = time()\r\nprint(f\"\"\"\r\nProcessed {wheels} wheels.\r\nThere were {compatible} compatible wheels.\r\nTime taken: {stop - start} seconds.\r\n\"\"\")\r\n```\r\n\r\nSave this as `test.py`.\r\nWith a clone of the pip repo and using the main branch, this is the result:\r\n\r\n```\r\n$ .tox/python/bin/python test.py \r\nLooking for compatible wheels in /Users/maurits/cached-downloads/dist...\r\n\r\nProcessed 2284 wheels.\r\nThere were 1776 compatible wheels.\r\nTime taken: 7.127894639968872 seconds.\r\n```\r\n\r\nWith my branch I get the same numbers, except the time:\r\n\r\n```\r\nTime taken: 0.04627823829650879 seconds.\r\n```\r\n\r\nThat is about 150 times faster.\n\n### Code of Conduct\n\n- [X] I agree to follow the PSF Code of Conduct\n", "before_files": [{"content": "\"\"\"Wheels support.\"\"\"\n\nimport email\nimport itertools\nimport os\nimport posixpath\nimport re\nimport zipfile\nimport contextlib\n\nfrom distutils.util import get_platform\n\nimport setuptools\nfrom setuptools.extern.packaging.version import Version as parse_version\nfrom setuptools.extern.packaging.tags import sys_tags\nfrom setuptools.extern.packaging.utils import canonicalize_name\nfrom setuptools.command.egg_info import write_requirements, _egg_basename\nfrom setuptools.archive_util import _unpack_zipfile_obj\n\n\nWHEEL_NAME = re.compile(\n r\"\"\"^(?P<project_name>.+?)-(?P<version>\\d.*?)\n ((-(?P<build>\\d.*?))?-(?P<py_version>.+?)-(?P<abi>.+?)-(?P<platform>.+?)\n )\\.whl$\"\"\",\n re.VERBOSE).match\n\nNAMESPACE_PACKAGE_INIT = \\\n \"__import__('pkg_resources').declare_namespace(__name__)\\n\"\n\n\ndef unpack(src_dir, dst_dir):\n '''Move everything under `src_dir` to `dst_dir`, and delete the former.'''\n for dirpath, dirnames, filenames in os.walk(src_dir):\n subdir = os.path.relpath(dirpath, src_dir)\n for f in filenames:\n src = os.path.join(dirpath, f)\n dst = os.path.join(dst_dir, subdir, f)\n os.renames(src, dst)\n for n, d in reversed(list(enumerate(dirnames))):\n src = os.path.join(dirpath, d)\n dst = os.path.join(dst_dir, subdir, d)\n if not os.path.exists(dst):\n # Directory does not exist in destination,\n # rename it and prune it from os.walk list.\n os.renames(src, dst)\n del dirnames[n]\n # Cleanup.\n for dirpath, dirnames, filenames in os.walk(src_dir, topdown=True):\n assert not filenames\n os.rmdir(dirpath)\n\n\[email protected]\ndef disable_info_traces():\n \"\"\"\n Temporarily disable info traces.\n \"\"\"\n from distutils 
import log\n saved = log.set_threshold(log.WARN)\n try:\n yield\n finally:\n log.set_threshold(saved)\n\n\nclass Wheel:\n\n def __init__(self, filename):\n match = WHEEL_NAME(os.path.basename(filename))\n if match is None:\n raise ValueError('invalid wheel name: %r' % filename)\n self.filename = filename\n for k, v in match.groupdict().items():\n setattr(self, k, v)\n\n def tags(self):\n '''List tags (py_version, abi, platform) supported by this wheel.'''\n return itertools.product(\n self.py_version.split('.'),\n self.abi.split('.'),\n self.platform.split('.'),\n )\n\n def is_compatible(self):\n '''Is the wheel is compatible with the current platform?'''\n supported_tags = set(\n (t.interpreter, t.abi, t.platform) for t in sys_tags())\n return next((True for t in self.tags() if t in supported_tags), False)\n\n def egg_name(self):\n return _egg_basename(\n self.project_name,\n self.version,\n platform=(None if self.platform == 'any' else get_platform()),\n ) + \".egg\"\n\n def get_dist_info(self, zf):\n # find the correct name of the .dist-info dir in the wheel file\n for member in zf.namelist():\n dirname = posixpath.dirname(member)\n if (dirname.endswith('.dist-info') and\n canonicalize_name(dirname).startswith(\n canonicalize_name(self.project_name))):\n return dirname\n raise ValueError(\"unsupported wheel format. .dist-info not found\")\n\n def install_as_egg(self, destination_eggdir):\n '''Install wheel as an egg directory.'''\n with zipfile.ZipFile(self.filename) as zf:\n self._install_as_egg(destination_eggdir, zf)\n\n def _install_as_egg(self, destination_eggdir, zf):\n dist_basename = '%s-%s' % (self.project_name, self.version)\n dist_info = self.get_dist_info(zf)\n dist_data = '%s.data' % dist_basename\n egg_info = os.path.join(destination_eggdir, 'EGG-INFO')\n\n self._convert_metadata(zf, destination_eggdir, dist_info, egg_info)\n self._move_data_entries(destination_eggdir, dist_data)\n self._fix_namespace_packages(egg_info, destination_eggdir)\n\n @staticmethod\n def _convert_metadata(zf, destination_eggdir, dist_info, egg_info):\n import pkg_resources\n\n def get_metadata(name):\n with zf.open(posixpath.join(dist_info, name)) as fp:\n value = fp.read().decode('utf-8')\n return email.parser.Parser().parsestr(value)\n\n wheel_metadata = get_metadata('WHEEL')\n # Check wheel format version is supported.\n wheel_version = parse_version(wheel_metadata.get('Wheel-Version'))\n wheel_v1 = (\n parse_version('1.0') <= wheel_version < parse_version('2.0dev0')\n )\n if not wheel_v1:\n raise ValueError(\n 'unsupported wheel format version: %s' % wheel_version)\n # Extract to target directory.\n _unpack_zipfile_obj(zf, destination_eggdir)\n # Convert metadata.\n dist_info = os.path.join(destination_eggdir, dist_info)\n dist = pkg_resources.Distribution.from_location(\n destination_eggdir, dist_info,\n metadata=pkg_resources.PathMetadata(destination_eggdir, dist_info),\n )\n\n # Note: Evaluate and strip markers now,\n # as it's difficult to convert back from the syntax:\n # foobar; \"linux\" in sys_platform and extra == 'test'\n def raw_req(req):\n req.marker = None\n return str(req)\n install_requires = list(map(raw_req, dist.requires()))\n extras_require = {\n extra: [\n req\n for req in map(raw_req, dist.requires((extra,)))\n if req not in install_requires\n ]\n for extra in dist.extras\n }\n os.rename(dist_info, egg_info)\n os.rename(\n os.path.join(egg_info, 'METADATA'),\n os.path.join(egg_info, 'PKG-INFO'),\n )\n setup_dist = setuptools.Distribution(\n attrs=dict(\n 
install_requires=install_requires,\n extras_require=extras_require,\n ),\n )\n with disable_info_traces():\n write_requirements(\n setup_dist.get_command_obj('egg_info'),\n None,\n os.path.join(egg_info, 'requires.txt'),\n )\n\n @staticmethod\n def _move_data_entries(destination_eggdir, dist_data):\n \"\"\"Move data entries to their correct location.\"\"\"\n dist_data = os.path.join(destination_eggdir, dist_data)\n dist_data_scripts = os.path.join(dist_data, 'scripts')\n if os.path.exists(dist_data_scripts):\n egg_info_scripts = os.path.join(\n destination_eggdir, 'EGG-INFO', 'scripts')\n os.mkdir(egg_info_scripts)\n for entry in os.listdir(dist_data_scripts):\n # Remove bytecode, as it's not properly handled\n # during easy_install scripts install phase.\n if entry.endswith('.pyc'):\n os.unlink(os.path.join(dist_data_scripts, entry))\n else:\n os.rename(\n os.path.join(dist_data_scripts, entry),\n os.path.join(egg_info_scripts, entry),\n )\n os.rmdir(dist_data_scripts)\n for subdir in filter(os.path.exists, (\n os.path.join(dist_data, d)\n for d in ('data', 'headers', 'purelib', 'platlib')\n )):\n unpack(subdir, destination_eggdir)\n if os.path.exists(dist_data):\n os.rmdir(dist_data)\n\n @staticmethod\n def _fix_namespace_packages(egg_info, destination_eggdir):\n namespace_packages = os.path.join(\n egg_info, 'namespace_packages.txt')\n if os.path.exists(namespace_packages):\n with open(namespace_packages) as fp:\n namespace_packages = fp.read().split()\n for mod in namespace_packages:\n mod_dir = os.path.join(destination_eggdir, *mod.split('.'))\n mod_init = os.path.join(mod_dir, '__init__.py')\n if not os.path.exists(mod_dir):\n os.mkdir(mod_dir)\n if not os.path.exists(mod_init):\n with open(mod_init, 'w') as fp:\n fp.write(NAMESPACE_PACKAGE_INIT)\n", "path": "setuptools/wheel.py"}]} | 3,790 | 341 |
gh_patches_debug_40222 | rasdani/github-patches | git_diff | yt-dlp__yt-dlp-2589 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
apt doesn't keep track of latest version
### Checklist
- [X] I'm asking a question and **not** reporting a bug/feature request
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones
### Question
Yesterday I came online and the first thing I did was to run `sudo apt upgrade yt-dlp`. At the time I had a version 2021-12-27 or something similar. Apt then told me I had the latest version and didn't need to upgrade.
I run Linux Mint 20.3 and as it happened the gui Update Manager ran at the same time, and it told me I could upgrade to version 2022.01.21, which I did using the Upgrade Manager.
I have never before experienced the Upgrade Manager telling me about an available upgrade for yt-dlp. Perhaps this is a new interaction. In any case I appreciate it since that's the most dead-sure way of catching new upgrades and being up-to-date.
### Verbose log
_No response_
</issue>
<code>
[start of yt_dlp/extractor/globo.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import base64
5 import hashlib
6 import json
7 import random
8 import re
9
10 from .common import InfoExtractor
11 from ..compat import (
12 compat_str,
13 )
14 from ..utils import (
15 ExtractorError,
16 float_or_none,
17 orderedSet,
18 str_or_none,
19 try_get,
20 )
21
22
23 class GloboIE(InfoExtractor):
24 _VALID_URL = r'(?:globo:|https?://.+?\.globo\.com/(?:[^/]+/)*(?:v/(?:[^/]+/)?|videos/))(?P<id>\d{7,})'
25 _NETRC_MACHINE = 'globo'
26 _TESTS = [{
27 'url': 'http://g1.globo.com/carros/autoesporte/videos/t/exclusivos-do-g1/v/mercedes-benz-gla-passa-por-teste-de-colisao-na-europa/3607726/',
28 'info_dict': {
29 'id': '3607726',
30 'ext': 'mp4',
31 'title': 'Mercedes-Benz GLA passa por teste de colisão na Europa',
32 'duration': 103.204,
33 'uploader': 'G1',
34 'uploader_id': '2015',
35 },
36 'params': {
37 'skip_download': True,
38 },
39 }, {
40 'url': 'http://globoplay.globo.com/v/4581987/',
41 'info_dict': {
42 'id': '4581987',
43 'ext': 'mp4',
44 'title': 'Acidentes de trânsito estão entre as maiores causas de queda de energia em SP',
45 'duration': 137.973,
46 'uploader': 'Rede Globo',
47 'uploader_id': '196',
48 },
49 'params': {
50 'skip_download': True,
51 },
52 }, {
53 'url': 'http://canalbrasil.globo.com/programas/sangue-latino/videos/3928201.html',
54 'only_matching': True,
55 }, {
56 'url': 'http://globosatplay.globo.com/globonews/v/4472924/',
57 'only_matching': True,
58 }, {
59 'url': 'http://globotv.globo.com/t/programa/v/clipe-sexo-e-as-negas-adeus/3836166/',
60 'only_matching': True,
61 }, {
62 'url': 'http://globotv.globo.com/canal-brasil/sangue-latino/t/todos-os-videos/v/ator-e-diretor-argentino-ricado-darin-fala-sobre-utopias-e-suas-perdas/3928201/',
63 'only_matching': True,
64 }, {
65 'url': 'http://canaloff.globo.com/programas/desejar-profundo/videos/4518560.html',
66 'only_matching': True,
67 }, {
68 'url': 'globo:3607726',
69 'only_matching': True,
70 }]
71
72 def _real_extract(self, url):
73 video_id = self._match_id(url)
74
75 video = self._download_json(
76 'http://api.globovideos.com/videos/%s/playlist' % video_id,
77 video_id)['videos'][0]
78 if not self.get_param('allow_unplayable_formats') and video.get('encrypted') is True:
79 self.report_drm(video_id)
80
81 title = video['title']
82
83 formats = []
84 security = self._download_json(
85 'https://playback.video.globo.com/v1/video-session', video_id, 'Downloading security hash for %s' % video_id,
86 headers={'content-type': 'application/json'}, data=json.dumps({
87 "player_type": "desktop",
88 "video_id": video_id,
89 "quality": "max",
90 "content_protection": "widevine",
91 "vsid": "581b986b-4c40-71f0-5a58-803e579d5fa2",
92 "tz": "-3.0:00"
93 }).encode())
94
95 security_hash = security['source']['token']
96 if not security_hash:
97 message = security.get('message')
98 if message:
99 raise ExtractorError(
100 '%s returned error: %s' % (self.IE_NAME, message), expected=True)
101
102 hash_code = security_hash[:2]
103 padding = '%010d' % random.randint(1, 10000000000)
104 if hash_code in ('04', '14'):
105 received_time = security_hash[3:13]
106 received_md5 = security_hash[24:]
107 hash_prefix = security_hash[:23]
108 elif hash_code in ('02', '12', '03', '13'):
109 received_time = security_hash[2:12]
110 received_md5 = security_hash[22:]
111 padding += '1'
112 hash_prefix = '05' + security_hash[:22]
113
114 padded_sign_time = compat_str(int(received_time) + 86400) + padding
115 md5_data = (received_md5 + padded_sign_time + '0xAC10FD').encode()
116 signed_md5 = base64.urlsafe_b64encode(hashlib.md5(md5_data).digest()).decode().strip('=')
117 signed_hash = hash_prefix + padded_sign_time + signed_md5
118 source = security['source']['url_parts']
119 resource_url = source['scheme'] + '://' + source['domain'] + source['path']
120 signed_url = '%s?h=%s&k=html5&a=%s' % (resource_url, signed_hash, 'F' if video.get('subscriber_only') else 'A')
121
122 formats.extend(self._extract_m3u8_formats(
123 signed_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False))
124 self._sort_formats(formats)
125
126 subtitles = {}
127 for resource in video['resources']:
128 if resource.get('type') == 'subtitle':
129 subtitles.setdefault(resource.get('language') or 'por', []).append({
130 'url': resource.get('url'),
131 })
132 subs = try_get(security, lambda x: x['source']['subtitles'], expected_type=dict) or {}
133 for sub_lang, sub_url in subs.items():
134 if sub_url:
135 subtitles.setdefault(sub_lang or 'por', []).append({
136 'url': sub_url,
137 })
138 subs = try_get(security, lambda x: x['source']['subtitles_webvtt'], expected_type=dict) or {}
139 for sub_lang, sub_url in subs.items():
140 if sub_url:
141 subtitles.setdefault(sub_lang or 'por', []).append({
142 'url': sub_url,
143 })
144
145 duration = float_or_none(video.get('duration'), 1000)
146 uploader = video.get('channel')
147 uploader_id = str_or_none(video.get('channel_id'))
148
149 return {
150 'id': video_id,
151 'title': title,
152 'duration': duration,
153 'uploader': uploader,
154 'uploader_id': uploader_id,
155 'formats': formats,
156 'subtitles': subtitles,
157 }
158
159
160 class GloboArticleIE(InfoExtractor):
161 _VALID_URL = r'https?://.+?\.globo\.com/(?:[^/]+/)*(?P<id>[^/.]+)(?:\.html)?'
162
163 _VIDEOID_REGEXES = [
164 r'\bdata-video-id=["\'](\d{7,})',
165 r'\bdata-player-videosids=["\'](\d{7,})',
166 r'\bvideosIDs\s*:\s*["\']?(\d{7,})',
167 r'\bdata-id=["\'](\d{7,})',
168 r'<div[^>]+\bid=["\'](\d{7,})',
169 ]
170
171 _TESTS = [{
172 'url': 'http://g1.globo.com/jornal-nacional/noticia/2014/09/novidade-na-fiscalizacao-de-bagagem-pela-receita-provoca-discussoes.html',
173 'info_dict': {
174 'id': 'novidade-na-fiscalizacao-de-bagagem-pela-receita-provoca-discussoes',
175 'title': 'Novidade na fiscalização de bagagem pela Receita provoca discussões',
176 'description': 'md5:c3c4b4d4c30c32fce460040b1ac46b12',
177 },
178 'playlist_count': 1,
179 }, {
180 'url': 'http://g1.globo.com/pr/parana/noticia/2016/09/mpf-denuncia-lula-marisa-e-mais-seis-na-operacao-lava-jato.html',
181 'info_dict': {
182 'id': 'mpf-denuncia-lula-marisa-e-mais-seis-na-operacao-lava-jato',
183 'title': "Lula era o 'comandante máximo' do esquema da Lava Jato, diz MPF",
184 'description': 'md5:8aa7cc8beda4dc71cc8553e00b77c54c',
185 },
186 'playlist_count': 6,
187 }, {
188 'url': 'http://gq.globo.com/Prazeres/Poder/noticia/2015/10/all-o-desafio-assista-ao-segundo-capitulo-da-serie.html',
189 'only_matching': True,
190 }, {
191 'url': 'http://gshow.globo.com/programas/tv-xuxa/O-Programa/noticia/2014/01/xuxa-e-junno-namoram-muuuito-em-luau-de-zeze-di-camargo-e-luciano.html',
192 'only_matching': True,
193 }, {
194 'url': 'http://oglobo.globo.com/rio/a-amizade-entre-um-entregador-de-farmacia-um-piano-19946271',
195 'only_matching': True,
196 }]
197
198 @classmethod
199 def suitable(cls, url):
200 return False if GloboIE.suitable(url) else super(GloboArticleIE, cls).suitable(url)
201
202 def _real_extract(self, url):
203 display_id = self._match_id(url)
204 webpage = self._download_webpage(url, display_id)
205 video_ids = []
206 for video_regex in self._VIDEOID_REGEXES:
207 video_ids.extend(re.findall(video_regex, webpage))
208 entries = [
209 self.url_result('globo:%s' % video_id, GloboIE.ie_key())
210 for video_id in orderedSet(video_ids)]
211 title = self._og_search_title(webpage, fatal=False)
212 description = self._html_search_meta('description', webpage)
213 return self.playlist_result(entries, display_id, title, description)
214
[end of yt_dlp/extractor/globo.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yt_dlp/extractor/globo.py b/yt_dlp/extractor/globo.py
--- a/yt_dlp/extractor/globo.py
+++ b/yt_dlp/extractor/globo.py
@@ -12,6 +12,7 @@
compat_str,
)
from ..utils import (
+ HEADRequest,
ExtractorError,
float_or_none,
orderedSet,
@@ -67,11 +68,28 @@
}, {
'url': 'globo:3607726',
'only_matching': True,
+ }, {
+ 'url': 'https://globoplay.globo.com/v/10248083/',
+ 'info_dict': {
+ 'id': '10248083',
+ 'ext': 'mp4',
+ 'title': 'Melhores momentos: Equador 1 x 1 Brasil pelas Eliminatórias da Copa do Mundo 2022',
+ 'duration': 530.964,
+ 'uploader': 'SporTV',
+ 'uploader_id': '698',
+ },
+ 'params': {
+ 'skip_download': True,
+ },
}]
def _real_extract(self, url):
video_id = self._match_id(url)
+ self._request_webpage(
+ HEADRequest('https://globo-ab.globo.com/v2/selected-alternatives?experiments=player-isolated-experiment-02&skipImpressions=true'),
+ video_id, 'Getting cookies')
+
video = self._download_json(
'http://api.globovideos.com/videos/%s/playlist' % video_id,
video_id)['videos'][0]
@@ -82,7 +100,7 @@
formats = []
security = self._download_json(
- 'https://playback.video.globo.com/v1/video-session', video_id, 'Downloading security hash for %s' % video_id,
+ 'https://playback.video.globo.com/v2/video-session', video_id, 'Downloading security hash for %s' % video_id,
headers={'content-type': 'application/json'}, data=json.dumps({
"player_type": "desktop",
"video_id": video_id,
@@ -92,7 +110,9 @@
"tz": "-3.0:00"
}).encode())
- security_hash = security['source']['token']
+ self._request_webpage(HEADRequest(security['sources'][0]['url_template']), video_id, 'Getting locksession cookie')
+
+ security_hash = security['sources'][0]['token']
if not security_hash:
message = security.get('message')
if message:
@@ -115,7 +135,7 @@
md5_data = (received_md5 + padded_sign_time + '0xAC10FD').encode()
signed_md5 = base64.urlsafe_b64encode(hashlib.md5(md5_data).digest()).decode().strip('=')
signed_hash = hash_prefix + padded_sign_time + signed_md5
- source = security['source']['url_parts']
+ source = security['sources'][0]['url_parts']
resource_url = source['scheme'] + '://' + source['domain'] + source['path']
signed_url = '%s?h=%s&k=html5&a=%s' % (resource_url, signed_hash, 'F' if video.get('subscriber_only') else 'A')
| {"golden_diff": "diff --git a/yt_dlp/extractor/globo.py b/yt_dlp/extractor/globo.py\n--- a/yt_dlp/extractor/globo.py\n+++ b/yt_dlp/extractor/globo.py\n@@ -12,6 +12,7 @@\n compat_str,\n )\n from ..utils import (\n+ HEADRequest,\n ExtractorError,\n float_or_none,\n orderedSet,\n@@ -67,11 +68,28 @@\n }, {\n 'url': 'globo:3607726',\n 'only_matching': True,\n+ }, {\n+ 'url': 'https://globoplay.globo.com/v/10248083/',\n+ 'info_dict': {\n+ 'id': '10248083',\n+ 'ext': 'mp4',\n+ 'title': 'Melhores momentos: Equador 1 x 1 Brasil pelas Eliminat\u00f3rias da Copa do Mundo 2022',\n+ 'duration': 530.964,\n+ 'uploader': 'SporTV',\n+ 'uploader_id': '698',\n+ },\n+ 'params': {\n+ 'skip_download': True,\n+ },\n }]\n \n def _real_extract(self, url):\n video_id = self._match_id(url)\n \n+ self._request_webpage(\n+ HEADRequest('https://globo-ab.globo.com/v2/selected-alternatives?experiments=player-isolated-experiment-02&skipImpressions=true'),\n+ video_id, 'Getting cookies')\n+\n video = self._download_json(\n 'http://api.globovideos.com/videos/%s/playlist' % video_id,\n video_id)['videos'][0]\n@@ -82,7 +100,7 @@\n \n formats = []\n security = self._download_json(\n- 'https://playback.video.globo.com/v1/video-session', video_id, 'Downloading security hash for %s' % video_id,\n+ 'https://playback.video.globo.com/v2/video-session', video_id, 'Downloading security hash for %s' % video_id,\n headers={'content-type': 'application/json'}, data=json.dumps({\n \"player_type\": \"desktop\",\n \"video_id\": video_id,\n@@ -92,7 +110,9 @@\n \"tz\": \"-3.0:00\"\n }).encode())\n \n- security_hash = security['source']['token']\n+ self._request_webpage(HEADRequest(security['sources'][0]['url_template']), video_id, 'Getting locksession cookie')\n+\n+ security_hash = security['sources'][0]['token']\n if not security_hash:\n message = security.get('message')\n if message:\n@@ -115,7 +135,7 @@\n md5_data = (received_md5 + padded_sign_time + '0xAC10FD').encode()\n signed_md5 = base64.urlsafe_b64encode(hashlib.md5(md5_data).digest()).decode().strip('=')\n signed_hash = hash_prefix + padded_sign_time + signed_md5\n- source = security['source']['url_parts']\n+ source = security['sources'][0]['url_parts']\n resource_url = source['scheme'] + '://' + source['domain'] + source['path']\n signed_url = '%s?h=%s&k=html5&a=%s' % (resource_url, signed_hash, 'F' if video.get('subscriber_only') else 'A')\n", "issue": "apt doesn't keep track of latest version\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug/feature request\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions including closed ones\r\n\r\n### Question\r\n\r\nYesterday I came online and the first thing I did was to run `sudo apt upgrade yt-dlp`. At the time I had a version 2021-12-27 or something similar. Apt then told me I had the latest version and didn't need to upgrade,\r\n\r\nI run Linux Mint 20.3 and as it happened the gui Update Manager ran at the same time, and it told me I could upgrade to version 2022.01.21, which I did using the Upgrade Manager.\r\n\r\nI have never before experienced the Upgrade Manager telling me about an available upgrade for yt-dlp. Perhaps this is a new interaction. 
In any case I appreciate it since that's the most dead-sure way of catching new upgrades and being up-to-date.\r\n\r\n### Verbose log\r\n\r\n_No response_\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport base64\nimport hashlib\nimport json\nimport random\nimport re\n\nfrom .common import InfoExtractor\nfrom ..compat import (\n compat_str,\n)\nfrom ..utils import (\n ExtractorError,\n float_or_none,\n orderedSet,\n str_or_none,\n try_get,\n)\n\n\nclass GloboIE(InfoExtractor):\n _VALID_URL = r'(?:globo:|https?://.+?\\.globo\\.com/(?:[^/]+/)*(?:v/(?:[^/]+/)?|videos/))(?P<id>\\d{7,})'\n _NETRC_MACHINE = 'globo'\n _TESTS = [{\n 'url': 'http://g1.globo.com/carros/autoesporte/videos/t/exclusivos-do-g1/v/mercedes-benz-gla-passa-por-teste-de-colisao-na-europa/3607726/',\n 'info_dict': {\n 'id': '3607726',\n 'ext': 'mp4',\n 'title': 'Mercedes-Benz GLA passa por teste de colis\u00e3o na Europa',\n 'duration': 103.204,\n 'uploader': 'G1',\n 'uploader_id': '2015',\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'http://globoplay.globo.com/v/4581987/',\n 'info_dict': {\n 'id': '4581987',\n 'ext': 'mp4',\n 'title': 'Acidentes de tr\u00e2nsito est\u00e3o entre as maiores causas de queda de energia em SP',\n 'duration': 137.973,\n 'uploader': 'Rede Globo',\n 'uploader_id': '196',\n },\n 'params': {\n 'skip_download': True,\n },\n }, {\n 'url': 'http://canalbrasil.globo.com/programas/sangue-latino/videos/3928201.html',\n 'only_matching': True,\n }, {\n 'url': 'http://globosatplay.globo.com/globonews/v/4472924/',\n 'only_matching': True,\n }, {\n 'url': 'http://globotv.globo.com/t/programa/v/clipe-sexo-e-as-negas-adeus/3836166/',\n 'only_matching': True,\n }, {\n 'url': 'http://globotv.globo.com/canal-brasil/sangue-latino/t/todos-os-videos/v/ator-e-diretor-argentino-ricado-darin-fala-sobre-utopias-e-suas-perdas/3928201/',\n 'only_matching': True,\n }, {\n 'url': 'http://canaloff.globo.com/programas/desejar-profundo/videos/4518560.html',\n 'only_matching': True,\n }, {\n 'url': 'globo:3607726',\n 'only_matching': True,\n }]\n\n def _real_extract(self, url):\n video_id = self._match_id(url)\n\n video = self._download_json(\n 'http://api.globovideos.com/videos/%s/playlist' % video_id,\n video_id)['videos'][0]\n if not self.get_param('allow_unplayable_formats') and video.get('encrypted') is True:\n self.report_drm(video_id)\n\n title = video['title']\n\n formats = []\n security = self._download_json(\n 'https://playback.video.globo.com/v1/video-session', video_id, 'Downloading security hash for %s' % video_id,\n headers={'content-type': 'application/json'}, data=json.dumps({\n \"player_type\": \"desktop\",\n \"video_id\": video_id,\n \"quality\": \"max\",\n \"content_protection\": \"widevine\",\n \"vsid\": \"581b986b-4c40-71f0-5a58-803e579d5fa2\",\n \"tz\": \"-3.0:00\"\n }).encode())\n\n security_hash = security['source']['token']\n if not security_hash:\n message = security.get('message')\n if message:\n raise ExtractorError(\n '%s returned error: %s' % (self.IE_NAME, message), expected=True)\n\n hash_code = security_hash[:2]\n padding = '%010d' % random.randint(1, 10000000000)\n if hash_code in ('04', '14'):\n received_time = security_hash[3:13]\n received_md5 = security_hash[24:]\n hash_prefix = security_hash[:23]\n elif hash_code in ('02', '12', '03', '13'):\n received_time = security_hash[2:12]\n received_md5 = security_hash[22:]\n padding += '1'\n hash_prefix = '05' + security_hash[:22]\n\n padded_sign_time = compat_str(int(received_time) 
+ 86400) + padding\n md5_data = (received_md5 + padded_sign_time + '0xAC10FD').encode()\n signed_md5 = base64.urlsafe_b64encode(hashlib.md5(md5_data).digest()).decode().strip('=')\n signed_hash = hash_prefix + padded_sign_time + signed_md5\n source = security['source']['url_parts']\n resource_url = source['scheme'] + '://' + source['domain'] + source['path']\n signed_url = '%s?h=%s&k=html5&a=%s' % (resource_url, signed_hash, 'F' if video.get('subscriber_only') else 'A')\n\n formats.extend(self._extract_m3u8_formats(\n signed_url, video_id, 'mp4', entry_protocol='m3u8_native', m3u8_id='hls', fatal=False))\n self._sort_formats(formats)\n\n subtitles = {}\n for resource in video['resources']:\n if resource.get('type') == 'subtitle':\n subtitles.setdefault(resource.get('language') or 'por', []).append({\n 'url': resource.get('url'),\n })\n subs = try_get(security, lambda x: x['source']['subtitles'], expected_type=dict) or {}\n for sub_lang, sub_url in subs.items():\n if sub_url:\n subtitles.setdefault(sub_lang or 'por', []).append({\n 'url': sub_url,\n })\n subs = try_get(security, lambda x: x['source']['subtitles_webvtt'], expected_type=dict) or {}\n for sub_lang, sub_url in subs.items():\n if sub_url:\n subtitles.setdefault(sub_lang or 'por', []).append({\n 'url': sub_url,\n })\n\n duration = float_or_none(video.get('duration'), 1000)\n uploader = video.get('channel')\n uploader_id = str_or_none(video.get('channel_id'))\n\n return {\n 'id': video_id,\n 'title': title,\n 'duration': duration,\n 'uploader': uploader,\n 'uploader_id': uploader_id,\n 'formats': formats,\n 'subtitles': subtitles,\n }\n\n\nclass GloboArticleIE(InfoExtractor):\n _VALID_URL = r'https?://.+?\\.globo\\.com/(?:[^/]+/)*(?P<id>[^/.]+)(?:\\.html)?'\n\n _VIDEOID_REGEXES = [\n r'\\bdata-video-id=[\"\\'](\\d{7,})',\n r'\\bdata-player-videosids=[\"\\'](\\d{7,})',\n r'\\bvideosIDs\\s*:\\s*[\"\\']?(\\d{7,})',\n r'\\bdata-id=[\"\\'](\\d{7,})',\n r'<div[^>]+\\bid=[\"\\'](\\d{7,})',\n ]\n\n _TESTS = [{\n 'url': 'http://g1.globo.com/jornal-nacional/noticia/2014/09/novidade-na-fiscalizacao-de-bagagem-pela-receita-provoca-discussoes.html',\n 'info_dict': {\n 'id': 'novidade-na-fiscalizacao-de-bagagem-pela-receita-provoca-discussoes',\n 'title': 'Novidade na fiscaliza\u00e7\u00e3o de bagagem pela Receita provoca discuss\u00f5es',\n 'description': 'md5:c3c4b4d4c30c32fce460040b1ac46b12',\n },\n 'playlist_count': 1,\n }, {\n 'url': 'http://g1.globo.com/pr/parana/noticia/2016/09/mpf-denuncia-lula-marisa-e-mais-seis-na-operacao-lava-jato.html',\n 'info_dict': {\n 'id': 'mpf-denuncia-lula-marisa-e-mais-seis-na-operacao-lava-jato',\n 'title': \"Lula era o 'comandante m\u00e1ximo' do esquema da Lava Jato, diz MPF\",\n 'description': 'md5:8aa7cc8beda4dc71cc8553e00b77c54c',\n },\n 'playlist_count': 6,\n }, {\n 'url': 'http://gq.globo.com/Prazeres/Poder/noticia/2015/10/all-o-desafio-assista-ao-segundo-capitulo-da-serie.html',\n 'only_matching': True,\n }, {\n 'url': 'http://gshow.globo.com/programas/tv-xuxa/O-Programa/noticia/2014/01/xuxa-e-junno-namoram-muuuito-em-luau-de-zeze-di-camargo-e-luciano.html',\n 'only_matching': True,\n }, {\n 'url': 'http://oglobo.globo.com/rio/a-amizade-entre-um-entregador-de-farmacia-um-piano-19946271',\n 'only_matching': True,\n }]\n\n @classmethod\n def suitable(cls, url):\n return False if GloboIE.suitable(url) else super(GloboArticleIE, cls).suitable(url)\n\n def _real_extract(self, url):\n display_id = self._match_id(url)\n webpage = self._download_webpage(url, display_id)\n video_ids = []\n for video_regex 
in self._VIDEOID_REGEXES:\n video_ids.extend(re.findall(video_regex, webpage))\n entries = [\n self.url_result('globo:%s' % video_id, GloboIE.ie_key())\n for video_id in orderedSet(video_ids)]\n title = self._og_search_title(webpage, fatal=False)\n description = self._html_search_meta('description', webpage)\n return self.playlist_result(entries, display_id, title, description)\n", "path": "yt_dlp/extractor/globo.py"}]} | 3,881 | 794 |
gh_patches_debug_113 | rasdani/github-patches | git_diff | elastic__apm-agent-python-1494 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[META 576] Sanitize `*auth*` instead of `authorization`
[elastic/apm#576](https://github.com/elastic/apm/issues/576)
[elastic/apm#577](https://github.com/elastic/apm/issues/577)
Sanitize `*auth*` instead of `authorization`
</issue>
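For context, the entries in `BASE_SANITIZE_FIELD_NAMES_UNPROCESSED` below are compiled into regexes by `_starmatch_to_regex`, so switching `authorization` to `*auth*` widens the match to any field name containing "auth". The following is only a rough sketch of that behaviour using a trimmed-down copy of the helper; the sample field names are illustrative and not taken from the issue.

```python
import re

def starmatch_to_regex(pattern):
    # Simplified copy of _starmatch_to_regex: "*" becomes ".*", every other
    # character is escaped, and matching is case-insensitive.
    parts = [".*" if c == "*" else re.escape(c) for c in pattern]
    return re.compile(r"(?:%s)\Z" % "".join(parts), re.IGNORECASE | re.DOTALL)

matcher = starmatch_to_regex("*auth*")
for field in ("Authorization", "proxy-authorization", "X-Auth-Token", "content-type"):
    print(field, bool(matcher.match(field)))
# Expected output: only "content-type" fails to match.
```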
<code>
[start of elasticapm/conf/constants.py]
1 # BSD 3-Clause License
2 #
3 # Copyright (c) 2019, Elasticsearch BV
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are met:
8 #
9 # * Redistributions of source code must retain the above copyright notice, this
10 # list of conditions and the following disclaimer.
11 #
12 # * Redistributions in binary form must reproduce the above copyright notice,
13 # this list of conditions and the following disclaimer in the documentation
14 # and/or other materials provided with the distribution.
15 #
16 # * Neither the name of the copyright holder nor the names of its
17 # contributors may be used to endorse or promote products derived from
18 # this software without specific prior written permission.
19 #
20 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
21 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
22 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
23 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
24 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
25 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
26 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
27 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
28 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
29 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
30
31 import decimal
32 import re
33 from collections import namedtuple
34
35
36 def _starmatch_to_regex(pattern):
37 """
38 This is a duplicate of starmatch_to_regex() in utils/__init__.py
39
40 Duplication to avoid circular imports
41 """
42 options = re.DOTALL
43 # check if we are case sensitive
44 if pattern.startswith("(?-i)"):
45 pattern = pattern[5:]
46 else:
47 options |= re.IGNORECASE
48 i, n = 0, len(pattern)
49 res = []
50 while i < n:
51 c = pattern[i]
52 i = i + 1
53 if c == "*":
54 res.append(".*")
55 else:
56 res.append(re.escape(c))
57 return re.compile(r"(?:%s)\Z" % "".join(res), options)
58
59
60 EVENTS_API_PATH = "intake/v2/events"
61 AGENT_CONFIG_PATH = "config/v1/agents"
62 SERVER_INFO_PATH = ""
63
64 TRACE_CONTEXT_VERSION = 0
65 TRACEPARENT_HEADER_NAME = "traceparent"
66 TRACEPARENT_LEGACY_HEADER_NAME = "elastic-apm-traceparent"
67 TRACESTATE_HEADER_NAME = "tracestate"
68
69 TIMESTAMP_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"
70
71 KEYWORD_MAX_LENGTH = 1024
72
73 HTTP_WITH_BODY = {"POST", "PUT", "PATCH", "DELETE"}
74
75 MASK = "[REDACTED]"
76
77 EXCEPTION_CHAIN_MAX_DEPTH = 50
78
79 ERROR = "error"
80 TRANSACTION = "transaction"
81 SPAN = "span"
82 METRICSET = "metricset"
83
84 LABEL_RE = re.compile('[.*"]')
85
86 HARDCODED_PROCESSORS = ["elasticapm.processors.add_context_lines_to_frames"]
87
88 BASE_SANITIZE_FIELD_NAMES_UNPROCESSED = [
89 "password",
90 "passwd",
91 "pwd",
92 "secret",
93 "*key",
94 "*token*",
95 "*session*",
96 "*credit*",
97 "*card*",
98 "authorization",
99 "set-cookie",
100 ]
101
102 BASE_SANITIZE_FIELD_NAMES = [_starmatch_to_regex(x) for x in BASE_SANITIZE_FIELD_NAMES_UNPROCESSED]
103
104 OUTCOME = namedtuple("OUTCOME", ["SUCCESS", "FAILURE", "UNKNOWN"])(
105 SUCCESS="success", FAILURE="failure", UNKNOWN="unknown"
106 )
107
108 try:
109 # Python 2
110 LABEL_TYPES = (bool, int, long, float, decimal.Decimal)
111 except NameError:
112 # Python 3
113 LABEL_TYPES = (bool, int, float, decimal.Decimal)
114
115 TRACESTATE = namedtuple("TRACESTATE", ["SAMPLE_RATE"])(SAMPLE_RATE="s")
116
[end of elasticapm/conf/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/elasticapm/conf/constants.py b/elasticapm/conf/constants.py
--- a/elasticapm/conf/constants.py
+++ b/elasticapm/conf/constants.py
@@ -95,7 +95,7 @@
"*session*",
"*credit*",
"*card*",
- "authorization",
+ "*auth*",
"set-cookie",
]
| {"golden_diff": "diff --git a/elasticapm/conf/constants.py b/elasticapm/conf/constants.py\n--- a/elasticapm/conf/constants.py\n+++ b/elasticapm/conf/constants.py\n@@ -95,7 +95,7 @@\n \"*session*\",\n \"*credit*\",\n \"*card*\",\n- \"authorization\",\n+ \"*auth*\",\n \"set-cookie\",\n ]\n", "issue": "[META 576] Sanitize `*auth*` instead of `authorization`\n[](https://github.com/elastic/apm/issues/576)\n\n[](https://github.com/elastic/apm/issues/577)\n\nSanitize `*auth*` instead of `authorization`\n", "before_files": [{"content": "# BSD 3-Clause License\n#\n# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\nimport decimal\nimport re\nfrom collections import namedtuple\n\n\ndef _starmatch_to_regex(pattern):\n \"\"\"\n This is a duplicate of starmatch_to_regex() in utils/__init__.py\n\n Duplication to avoid circular imports\n \"\"\"\n options = re.DOTALL\n # check if we are case sensitive\n if pattern.startswith(\"(?-i)\"):\n pattern = pattern[5:]\n else:\n options |= re.IGNORECASE\n i, n = 0, len(pattern)\n res = []\n while i < n:\n c = pattern[i]\n i = i + 1\n if c == \"*\":\n res.append(\".*\")\n else:\n res.append(re.escape(c))\n return re.compile(r\"(?:%s)\\Z\" % \"\".join(res), options)\n\n\nEVENTS_API_PATH = \"intake/v2/events\"\nAGENT_CONFIG_PATH = \"config/v1/agents\"\nSERVER_INFO_PATH = \"\"\n\nTRACE_CONTEXT_VERSION = 0\nTRACEPARENT_HEADER_NAME = \"traceparent\"\nTRACEPARENT_LEGACY_HEADER_NAME = \"elastic-apm-traceparent\"\nTRACESTATE_HEADER_NAME = \"tracestate\"\n\nTIMESTAMP_FORMAT = \"%Y-%m-%dT%H:%M:%S.%fZ\"\n\nKEYWORD_MAX_LENGTH = 1024\n\nHTTP_WITH_BODY = {\"POST\", \"PUT\", \"PATCH\", \"DELETE\"}\n\nMASK = \"[REDACTED]\"\n\nEXCEPTION_CHAIN_MAX_DEPTH = 50\n\nERROR = \"error\"\nTRANSACTION = \"transaction\"\nSPAN = \"span\"\nMETRICSET = \"metricset\"\n\nLABEL_RE = re.compile('[.*\"]')\n\nHARDCODED_PROCESSORS = [\"elasticapm.processors.add_context_lines_to_frames\"]\n\nBASE_SANITIZE_FIELD_NAMES_UNPROCESSED = [\n \"password\",\n \"passwd\",\n \"pwd\",\n \"secret\",\n \"*key\",\n \"*token*\",\n 
\"*session*\",\n \"*credit*\",\n \"*card*\",\n \"authorization\",\n \"set-cookie\",\n]\n\nBASE_SANITIZE_FIELD_NAMES = [_starmatch_to_regex(x) for x in BASE_SANITIZE_FIELD_NAMES_UNPROCESSED]\n\nOUTCOME = namedtuple(\"OUTCOME\", [\"SUCCESS\", \"FAILURE\", \"UNKNOWN\"])(\n SUCCESS=\"success\", FAILURE=\"failure\", UNKNOWN=\"unknown\"\n)\n\ntry:\n # Python 2\n LABEL_TYPES = (bool, int, long, float, decimal.Decimal)\nexcept NameError:\n # Python 3\n LABEL_TYPES = (bool, int, float, decimal.Decimal)\n\nTRACESTATE = namedtuple(\"TRACESTATE\", [\"SAMPLE_RATE\"])(SAMPLE_RATE=\"s\")\n", "path": "elasticapm/conf/constants.py"}]} | 1,861 | 84 |
gh_patches_debug_39602 | rasdani/github-patches | git_diff | microsoft__playwright-python-222 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running playwright raises PermissionError on Linux
### Reproducing the error
- Created a virtual environment
- Installed playwright with `python -m pip install playwright`
Tried to run some code
```python
from playwright import sync_playwright
with sync_playwright() as p:
for browser_type in [p.chromium, p.firefox, p.webkit]:
browser = browser_type.launch()
page = browser.newPage()
page.goto('http://whatsmyuseragent.org/')
page.screenshot(path=f'example-{browser_type.name}.png')
browser.close()
```
Then it raised this error
```python
PermissionError: [Errno 13] Permission denied: '/home/leno/Desktop/open-source/pwright/env/lib/python3.8/site-packages/playwright/drivers/driver-linux'
```
I think this is not normal behavior, since running Python under sudo is a terrible idea.
**OS**: Ubuntu 20.04
**Python Version**: 3.8.2
**Playwright Version**: 0.142.3
</issue>
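Until the packaging itself is fixed, one possible stop-gap (an assumption on my part, not an official workaround) is to restore the missing execute bit on the bundled driver from inside the virtualenv, which is essentially what the installer's `main()` below also tries to do. The path is the one from the traceback and needs adjusting for other layouts.

```python
import stat
from pathlib import Path

# Path taken from the traceback above; adjust it to your own virtualenv.
driver = Path(
    "/home/leno/Desktop/open-source/pwright/env/lib/python3.8/"
    "site-packages/playwright/drivers/driver-linux"
)
# Equivalent of `chmod u+x`: add back the owner execute bit.
driver.chmod(driver.stat().st_mode | stat.S_IEXEC)
```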
<code>
[start of playwright/path_utils.py]
1 import inspect
2 from pathlib import Path
3
4
5 def get_file_dirname() -> Path:
6 """Returns the callee (`__file__`) directory name"""
7 frame = inspect.stack()[1]
8 module = inspect.getmodule(frame[0])
9 assert module
10 return Path(module.__file__).parent.absolute()
11
[end of playwright/path_utils.py]
[start of playwright/main.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import io
17 import os
18 import stat
19 import subprocess
20 import sys
21 from pathlib import Path
22 from typing import Any
23
24 from greenlet import greenlet
25
26 from playwright.async_api import Playwright as AsyncPlaywright
27 from playwright.connection import Connection
28 from playwright.helper import Error
29 from playwright.object_factory import create_remote_object
30 from playwright.path_utils import get_file_dirname
31 from playwright.playwright import Playwright
32 from playwright.sync_api import Playwright as SyncPlaywright
33 from playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber
34
35
36 def compute_driver_executable() -> Path:
37 package_path = get_file_dirname()
38 platform = sys.platform
39 if platform == "darwin":
40 return package_path / "drivers" / "driver-darwin"
41 elif platform == "linux":
42 return package_path / "drivers" / "driver-linux"
43 elif platform == "win32":
44 result = package_path / "drivers" / "driver-win32-amd64.exe"
45 if result.exists():
46 return result
47 return package_path / "drivers" / "driver-win32.exe"
48 return package_path / "drivers" / "driver-linux"
49
50
51 async def run_driver_async() -> Connection:
52 driver_executable = compute_driver_executable()
53
54 # Sourced from: https://github.com/pytest-dev/pytest/blob/49827adcb9256c9c9c06a25729421dcc3c385edc/src/_pytest/faulthandler.py#L73-L80
55 def _get_stderr_fileno() -> int:
56 try:
57 return sys.stderr.fileno()
58 except io.UnsupportedOperation:
59 # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
60 # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
61 # This is potentially dangerous, but the best we can do.
62 return sys.__stderr__.fileno()
63
64 proc = await asyncio.create_subprocess_exec(
65 str(driver_executable),
66 stdin=asyncio.subprocess.PIPE,
67 stdout=asyncio.subprocess.PIPE,
68 stderr=_get_stderr_fileno(),
69 limit=32768,
70 )
71 assert proc.stdout
72 assert proc.stdin
73 connection = Connection(
74 proc.stdout, proc.stdin, create_remote_object, asyncio.get_event_loop()
75 )
76 return connection
77
78
79 def run_driver() -> Connection:
80 loop = asyncio.get_event_loop()
81 if loop.is_running():
82 raise Error("Can only run one Playwright at a time.")
83 return loop.run_until_complete(run_driver_async())
84
85
86 class SyncPlaywrightContextManager:
87 def __init__(self) -> None:
88 self._connection = run_driver()
89 self._playwright: SyncPlaywright
90
91 def __enter__(self) -> SyncPlaywright:
92 g_self = greenlet.getcurrent()
93
94 def callback_wrapper(playwright_impl: Playwright) -> None:
95 self._playwright = SyncPlaywright(playwright_impl)
96 g_self.switch()
97
98 self._connection.call_on_object_with_known_name("Playwright", callback_wrapper)
99 set_dispatcher_fiber(greenlet(lambda: self._connection.run_sync()))
100 dispatcher_fiber().switch()
101 playwright = self._playwright
102 playwright.stop = self.__exit__ # type: ignore
103 return playwright
104
105 def start(self) -> SyncPlaywright:
106 return self.__enter__()
107
108 def __exit__(self, *args: Any) -> None:
109 self._connection.stop_sync()
110
111
112 class AsyncPlaywrightContextManager:
113 def __init__(self) -> None:
114 self._connection: Connection
115
116 async def __aenter__(self) -> AsyncPlaywright:
117 self._connection = await run_driver_async()
118 self._connection.run_async()
119 playwright = AsyncPlaywright(
120 await self._connection.wait_for_object_with_known_name("Playwright")
121 )
122 playwright.stop = self.__aexit__ # type: ignore
123 return playwright
124
125 async def start(self) -> AsyncPlaywright:
126 return await self.__aenter__()
127
128 async def __aexit__(self, *args: Any) -> None:
129 self._connection.stop_async()
130
131
132 if sys.platform == "win32":
133 # Use ProactorEventLoop in 3.7, which is default in 3.8
134 loop = asyncio.ProactorEventLoop()
135 asyncio.set_event_loop(loop)
136
137
138 def main() -> None:
139 if "install" not in sys.argv:
140 print('Run "python -m playwright install" to complete installation')
141 return
142 driver_executable = compute_driver_executable()
143 # Fix the executable bit during the installation.
144 if not sys.platform == "win32":
145 st = os.stat(driver_executable)
146 if st.st_mode & stat.S_IEXEC == 0:
147 os.chmod(driver_executable, st.st_mode | stat.S_IEXEC)
148 print("Installing the browsers...")
149 subprocess.check_call([str(driver_executable), "install"])
150
151 print("Playwright is now ready for use")
152
[end of playwright/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/playwright/main.py b/playwright/main.py
--- a/playwright/main.py
+++ b/playwright/main.py
@@ -14,8 +14,6 @@
import asyncio
import io
-import os
-import stat
import subprocess
import sys
from pathlib import Path
@@ -27,7 +25,7 @@
from playwright.connection import Connection
from playwright.helper import Error
from playwright.object_factory import create_remote_object
-from playwright.path_utils import get_file_dirname
+from playwright.path_utils import get_file_dirname, make_file_executable
from playwright.playwright import Playwright
from playwright.sync_api import Playwright as SyncPlaywright
from playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber
@@ -37,15 +35,19 @@
package_path = get_file_dirname()
platform = sys.platform
if platform == "darwin":
- return package_path / "drivers" / "driver-darwin"
+ path = package_path / "drivers" / "driver-darwin"
+ return make_file_executable(path)
elif platform == "linux":
- return package_path / "drivers" / "driver-linux"
+ path = package_path / "drivers" / "driver-linux"
+ return make_file_executable(path)
elif platform == "win32":
result = package_path / "drivers" / "driver-win32-amd64.exe"
if result.exists():
return result
return package_path / "drivers" / "driver-win32.exe"
- return package_path / "drivers" / "driver-linux"
+
+ path = package_path / "drivers" / "driver-linux"
+ return make_file_executable(path)
async def run_driver_async() -> Connection:
@@ -140,11 +142,7 @@
print('Run "python -m playwright install" to complete installation')
return
driver_executable = compute_driver_executable()
- # Fix the executable bit during the installation.
- if not sys.platform == "win32":
- st = os.stat(driver_executable)
- if st.st_mode & stat.S_IEXEC == 0:
- os.chmod(driver_executable, st.st_mode | stat.S_IEXEC)
+
print("Installing the browsers...")
subprocess.check_call([str(driver_executable), "install"])
diff --git a/playwright/path_utils.py b/playwright/path_utils.py
--- a/playwright/path_utils.py
+++ b/playwright/path_utils.py
@@ -1,4 +1,5 @@
import inspect
+import stat
from pathlib import Path
@@ -8,3 +9,9 @@
module = inspect.getmodule(frame[0])
assert module
return Path(module.__file__).parent.absolute()
+
+
+def make_file_executable(file_path: Path) -> Path:
+ """Makes a file executable."""
+ file_path.chmod(file_path.stat().st_mode | stat.S_IEXEC)
+ return file_path
| {"golden_diff": "diff --git a/playwright/main.py b/playwright/main.py\n--- a/playwright/main.py\n+++ b/playwright/main.py\n@@ -14,8 +14,6 @@\n \n import asyncio\n import io\n-import os\n-import stat\n import subprocess\n import sys\n from pathlib import Path\n@@ -27,7 +25,7 @@\n from playwright.connection import Connection\n from playwright.helper import Error\n from playwright.object_factory import create_remote_object\n-from playwright.path_utils import get_file_dirname\n+from playwright.path_utils import get_file_dirname, make_file_executable\n from playwright.playwright import Playwright\n from playwright.sync_api import Playwright as SyncPlaywright\n from playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber\n@@ -37,15 +35,19 @@\n package_path = get_file_dirname()\n platform = sys.platform\n if platform == \"darwin\":\n- return package_path / \"drivers\" / \"driver-darwin\"\n+ path = package_path / \"drivers\" / \"driver-darwin\"\n+ return make_file_executable(path)\n elif platform == \"linux\":\n- return package_path / \"drivers\" / \"driver-linux\"\n+ path = package_path / \"drivers\" / \"driver-linux\"\n+ return make_file_executable(path)\n elif platform == \"win32\":\n result = package_path / \"drivers\" / \"driver-win32-amd64.exe\"\n if result.exists():\n return result\n return package_path / \"drivers\" / \"driver-win32.exe\"\n- return package_path / \"drivers\" / \"driver-linux\"\n+\n+ path = package_path / \"drivers\" / \"driver-linux\"\n+ return make_file_executable(path)\n \n \n async def run_driver_async() -> Connection:\n@@ -140,11 +142,7 @@\n print('Run \"python -m playwright install\" to complete installation')\n return\n driver_executable = compute_driver_executable()\n- # Fix the executable bit during the installation.\n- if not sys.platform == \"win32\":\n- st = os.stat(driver_executable)\n- if st.st_mode & stat.S_IEXEC == 0:\n- os.chmod(driver_executable, st.st_mode | stat.S_IEXEC)\n+\n print(\"Installing the browsers...\")\n subprocess.check_call([str(driver_executable), \"install\"])\n \ndiff --git a/playwright/path_utils.py b/playwright/path_utils.py\n--- a/playwright/path_utils.py\n+++ b/playwright/path_utils.py\n@@ -1,4 +1,5 @@\n import inspect\n+import stat\n from pathlib import Path\n \n \n@@ -8,3 +9,9 @@\n module = inspect.getmodule(frame[0])\n assert module\n return Path(module.__file__).parent.absolute()\n+\n+\n+def make_file_executable(file_path: Path) -> Path:\n+ \"\"\"Makes a file executable.\"\"\"\n+ file_path.chmod(file_path.stat().st_mode | stat.S_IEXEC)\n+ return file_path\n", "issue": "Running playwright raises PermissionError on Linux\n### Reproducing the error\r\n\r\n- Created a virtual environment \r\n- Installed playwright with `python -m pip install playwright`\r\n\r\nTried to run some code\r\n\r\n```python\r\nfrom playwright import sync_playwright\r\n\r\nwith sync_playwright() as p:\r\n for browser_type in [p.chromium, p.firefox, p.webkit]:\r\n browser = browser_type.launch()\r\n page = browser.newPage()\r\n page.goto('http://whatsmyuseragent.org/')\r\n page.screenshot(path=f'example-{browser_type.name}.png')\r\n browser.close()\r\n```\r\n\r\nThen it raised this error\r\n\r\n```python\r\nPermissionError: [Errno 13] Permission denied: '/home/leno/Desktop/open-source/pwright/env/lib/python3.8/site-packages/playwright/drivers/driver-linux\r\n```\r\n\r\nI think this is not a normal behavior since running Python under sudo is a terrible idea.\r\n\r\n**OS**: Ubuntu 20.04\r\n**Python Version**: 3.8.2\r\n**Playwright Version**: 
0.142.3\r\n\r\n\n", "before_files": [{"content": "import inspect\nfrom pathlib import Path\n\n\ndef get_file_dirname() -> Path:\n \"\"\"Returns the callee (`__file__`) directory name\"\"\"\n frame = inspect.stack()[1]\n module = inspect.getmodule(frame[0])\n assert module\n return Path(module.__file__).parent.absolute()\n", "path": "playwright/path_utils.py"}, {"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport io\nimport os\nimport stat\nimport subprocess\nimport sys\nfrom pathlib import Path\nfrom typing import Any\n\nfrom greenlet import greenlet\n\nfrom playwright.async_api import Playwright as AsyncPlaywright\nfrom playwright.connection import Connection\nfrom playwright.helper import Error\nfrom playwright.object_factory import create_remote_object\nfrom playwright.path_utils import get_file_dirname\nfrom playwright.playwright import Playwright\nfrom playwright.sync_api import Playwright as SyncPlaywright\nfrom playwright.sync_base import dispatcher_fiber, set_dispatcher_fiber\n\n\ndef compute_driver_executable() -> Path:\n package_path = get_file_dirname()\n platform = sys.platform\n if platform == \"darwin\":\n return package_path / \"drivers\" / \"driver-darwin\"\n elif platform == \"linux\":\n return package_path / \"drivers\" / \"driver-linux\"\n elif platform == \"win32\":\n result = package_path / \"drivers\" / \"driver-win32-amd64.exe\"\n if result.exists():\n return result\n return package_path / \"drivers\" / \"driver-win32.exe\"\n return package_path / \"drivers\" / \"driver-linux\"\n\n\nasync def run_driver_async() -> Connection:\n driver_executable = compute_driver_executable()\n\n # Sourced from: https://github.com/pytest-dev/pytest/blob/49827adcb9256c9c9c06a25729421dcc3c385edc/src/_pytest/faulthandler.py#L73-L80\n def _get_stderr_fileno() -> int:\n try:\n return sys.stderr.fileno()\n except io.UnsupportedOperation:\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the best we can do.\n return sys.__stderr__.fileno()\n\n proc = await asyncio.create_subprocess_exec(\n str(driver_executable),\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n connection = Connection(\n proc.stdout, proc.stdin, create_remote_object, asyncio.get_event_loop()\n )\n return connection\n\n\ndef run_driver() -> Connection:\n loop = asyncio.get_event_loop()\n if loop.is_running():\n raise Error(\"Can only run one Playwright at a time.\")\n return loop.run_until_complete(run_driver_async())\n\n\nclass SyncPlaywrightContextManager:\n def __init__(self) -> None:\n self._connection = run_driver()\n self._playwright: SyncPlaywright\n\n def __enter__(self) -> SyncPlaywright:\n g_self = greenlet.getcurrent()\n\n def callback_wrapper(playwright_impl: Playwright) -> 
None:\n self._playwright = SyncPlaywright(playwright_impl)\n g_self.switch()\n\n self._connection.call_on_object_with_known_name(\"Playwright\", callback_wrapper)\n set_dispatcher_fiber(greenlet(lambda: self._connection.run_sync()))\n dispatcher_fiber().switch()\n playwright = self._playwright\n playwright.stop = self.__exit__ # type: ignore\n return playwright\n\n def start(self) -> SyncPlaywright:\n return self.__enter__()\n\n def __exit__(self, *args: Any) -> None:\n self._connection.stop_sync()\n\n\nclass AsyncPlaywrightContextManager:\n def __init__(self) -> None:\n self._connection: Connection\n\n async def __aenter__(self) -> AsyncPlaywright:\n self._connection = await run_driver_async()\n self._connection.run_async()\n playwright = AsyncPlaywright(\n await self._connection.wait_for_object_with_known_name(\"Playwright\")\n )\n playwright.stop = self.__aexit__ # type: ignore\n return playwright\n\n async def start(self) -> AsyncPlaywright:\n return await self.__aenter__()\n\n async def __aexit__(self, *args: Any) -> None:\n self._connection.stop_async()\n\n\nif sys.platform == \"win32\":\n # Use ProactorEventLoop in 3.7, which is default in 3.8\n loop = asyncio.ProactorEventLoop()\n asyncio.set_event_loop(loop)\n\n\ndef main() -> None:\n if \"install\" not in sys.argv:\n print('Run \"python -m playwright install\" to complete installation')\n return\n driver_executable = compute_driver_executable()\n # Fix the executable bit during the installation.\n if not sys.platform == \"win32\":\n st = os.stat(driver_executable)\n if st.st_mode & stat.S_IEXEC == 0:\n os.chmod(driver_executable, st.st_mode | stat.S_IEXEC)\n print(\"Installing the browsers...\")\n subprocess.check_call([str(driver_executable), \"install\"])\n\n print(\"Playwright is now ready for use\")\n", "path": "playwright/main.py"}]} | 2,431 | 652 |
gh_patches_debug_12070 | rasdani/github-patches | git_diff | Kinto__kinto-2011 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DELETE /v1/accounts raises a 500
```
$ http DELETE https://natim.alwaysdata.net/v1/accounts --auth admin:admin
HTTP/1.1 500 Internal Server Error
Access-Control-Expose-Headers: Retry-After, Alert, Backoff, Content-Length
Content-Length: 177
Content-Type: application/json
Date: Mon, 28 Jan 2019 20:45:56 GMT
Via: 1.1 alproxy
X-Content-Type-Options: nosniff
```
```
File "/home/natim/kinto/kinto/kinto/plugins/accounts/views.py", line 221, in on_account_changed
username = request.matchdict["id"]
KeyError: 'id'
```
</issue>
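The traceback points at the subscriber reading `request.matchdict["id"]`, which only exists on the object endpoint (`/v1/accounts/<id>`); a plural `DELETE /v1/accounts` has no `id` in the matchdict. A hedged sketch of one way out, taking the usernames from the event payload instead of the URL, could look like the following (the event attributes used here match the ones in the accompanying patch):

```python
from kinto.core import utils
from kinto.plugins.accounts.utils import ACCOUNT_CACHE_KEY

def on_account_changed(event):
    request = event.request
    cache = request.registry.cache
    hmac_secret = request.registry.settings["userid_hmac_secret"]
    # A plural DELETE can remove several accounts at once, so derive the
    # usernames from the impacted objects rather than from the URL.
    for impacted in event.impacted_objects:
        username = impacted["old"]["id"]
        cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))
        cache.delete(cache_key)
```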
<code>
[start of kinto/plugins/accounts/views.py]
1 import colander
2 from pyramid import httpexceptions
3 from pyramid.decorator import reify
4 from pyramid.security import Authenticated, Everyone
5 from pyramid.settings import aslist
6 from pyramid.events import subscriber
7
8 from kinto.views import NameGenerator
9 from kinto.core import resource, utils
10 from kinto.core.errors import raise_invalid, http_error
11 from kinto.core.events import ResourceChanged, ACTIONS
12
13 from .utils import hash_password, ACCOUNT_CACHE_KEY, ACCOUNT_POLICY_NAME
14
15
16 def _extract_posted_body_id(request):
17 try:
18 # Anonymous creation with POST.
19 return request.json["data"]["id"]
20 except (ValueError, KeyError):
21 # Bad POST data.
22 if request.method.lower() == "post":
23 error_details = {"name": "data.id", "description": "data.id in body: Required"}
24 raise_invalid(request, **error_details)
25 # Anonymous GET
26 error_msg = "Cannot read accounts."
27 raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)
28
29
30 class AccountIdGenerator(NameGenerator):
31 """Allow @ signs in account IDs."""
32
33 regexp = r"^[a-zA-Z0-9][.@a-zA-Z0-9_-]*$"
34
35
36 class AccountSchema(resource.ResourceSchema):
37 password = colander.SchemaNode(colander.String())
38
39
40 @resource.register()
41 class Account(resource.Resource):
42
43 schema = AccountSchema
44
45 def __init__(self, request, context):
46 # Store if current user is administrator (before accessing get_parent_id())
47 allowed_from_settings = request.registry.settings.get("account_write_principals", [])
48 context.is_administrator = (
49 len(set(aslist(allowed_from_settings)) & set(request.prefixed_principals)) > 0
50 )
51 # Shortcut to check if current is anonymous (before get_parent_id()).
52 context.is_anonymous = Authenticated not in request.effective_principals
53
54 super().__init__(request, context)
55
56 # Overwrite the current principal set by Resource.
57 if self.model.current_principal == Everyone or context.is_administrator:
58 # Creation is anonymous, but author with write perm is this:
59 self.model.current_principal = f"{ACCOUNT_POLICY_NAME}:{self.model.parent_id}"
60
61 @reify
62 def id_generator(self):
63 # This generator is used for ID validation.
64 return AccountIdGenerator()
65
66 def get_parent_id(self, request):
67 # The whole challenge here is that we want to isolate what
68 # authenticated users can list, but give access to everything to
69 # administrators.
70 # Plus when anonymous create accounts, we have to set their parent id
71 # to the same value they would obtain when authenticated.
72 if self.context.is_administrator:
73 if self.context.on_plural_endpoint:
74 # Accounts created by admin should have userid as parent.
75 if request.method.lower() == "post":
76 return _extract_posted_body_id(request)
77 else:
78 # Admin see all accounts.
79 return "*"
80 else:
81 # No pattern matching for admin on single record.
82 return request.matchdict["id"]
83
84 if not self.context.is_anonymous:
85 # Authenticated users see their own account only.
86 return request.selected_userid
87
88 # Anonymous creation with PUT.
89 if "id" in request.matchdict:
90 return request.matchdict["id"]
91
92 return _extract_posted_body_id(request)
93
94 def plural_post(self):
95 result = super(Account, self).plural_post()
96 if self.context.is_anonymous and self.request.response.status_code == 200:
97 error_details = {"message": "Account ID %r already exists" % result["data"]["id"]}
98 raise http_error(httpexceptions.HTTPForbidden(), **error_details)
99 return result
100
101 def process_object(self, new, old=None):
102 new = super(Account, self).process_object(new, old)
103
104 new["password"] = hash_password(new["password"])
105
106 # Administrators can reach other accounts and anonymous have no
107 # selected_userid. So do not try to enforce.
108 if self.context.is_administrator or self.context.is_anonymous:
109 return new
110
111 # Do not let accounts be created without usernames.
112 if self.model.id_field not in new:
113 error_details = {"name": "data.id", "description": "Accounts must have an ID."}
114 raise_invalid(self.request, **error_details)
115
116 # Otherwise, we force the id to match the authenticated username.
117 if new[self.model.id_field] != self.request.selected_userid:
118 error_details = {
119 "name": "data.id",
120 "description": "Username and account ID do not match.",
121 }
122 raise_invalid(self.request, **error_details)
123
124 return new
125
126
127 # Clear cache on account change
128 @subscriber(
129 ResourceChanged, for_resources=("account",), for_actions=(ACTIONS.UPDATE, ACTIONS.DELETE)
130 )
131 def on_account_changed(event):
132 request = event.request
133 cache = request.registry.cache
134 settings = request.registry.settings
135 # Extract username and password from current user
136 username = request.matchdict["id"]
137 hmac_secret = settings["userid_hmac_secret"]
138 cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))
139 # Delete cache
140 cache.delete(cache_key)
141
[end of kinto/plugins/accounts/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/plugins/accounts/views.py b/kinto/plugins/accounts/views.py
--- a/kinto/plugins/accounts/views.py
+++ b/kinto/plugins/accounts/views.py
@@ -132,9 +132,11 @@
request = event.request
cache = request.registry.cache
settings = request.registry.settings
- # Extract username and password from current user
- username = request.matchdict["id"]
hmac_secret = settings["userid_hmac_secret"]
- cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))
- # Delete cache
- cache.delete(cache_key)
+
+ for obj in event.impacted_objects:
+ # Extract username and password from current user
+ username = obj["old"]["id"]
+ cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))
+ # Delete cache
+ cache.delete(cache_key)
| {"golden_diff": "diff --git a/kinto/plugins/accounts/views.py b/kinto/plugins/accounts/views.py\n--- a/kinto/plugins/accounts/views.py\n+++ b/kinto/plugins/accounts/views.py\n@@ -132,9 +132,11 @@\n request = event.request\n cache = request.registry.cache\n settings = request.registry.settings\n- # Extract username and password from current user\n- username = request.matchdict[\"id\"]\n hmac_secret = settings[\"userid_hmac_secret\"]\n- cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))\n- # Delete cache\n- cache.delete(cache_key)\n+\n+ for obj in event.impacted_objects:\n+ # Extract username and password from current user\n+ username = obj[\"old\"][\"id\"]\n+ cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))\n+ # Delete cache\n+ cache.delete(cache_key)\n", "issue": "DELETE /v1/accounts raises a 500\n```\r\n$ http DELETE https://natim.alwaysdata.net/v1/accounts --auth admin:admin\r\nHTTP/1.1 500 Internal Server Error\r\nAccess-Control-Expose-Headers: Retry-After, Alert, Backoff, Content-Length\r\nContent-Length: 177\r\nContent-Type: application/json\r\nDate: Mon, 28 Jan 2019 20:45:56 GMT\r\nVia: 1.1 alproxy\r\nX-Content-Type-Options: nosniff\r\n```\r\n\r\n```\r\n File \"/home/natim/kinto/kinto/kinto/plugins/accounts/views.py\", line 221, in on_account_changed\r\n username = request.matchdict[\"id\"]\r\nKeyError: 'id'\r\n```\nDELETE /v1/accounts raises a 500\n```\r\n$ http DELETE https://natim.alwaysdata.net/v1/accounts --auth admin:admin\r\nHTTP/1.1 500 Internal Server Error\r\nAccess-Control-Expose-Headers: Retry-After, Alert, Backoff, Content-Length\r\nContent-Length: 177\r\nContent-Type: application/json\r\nDate: Mon, 28 Jan 2019 20:45:56 GMT\r\nVia: 1.1 alproxy\r\nX-Content-Type-Options: nosniff\r\n```\r\n\r\n```\r\n File \"/home/natim/kinto/kinto/kinto/plugins/accounts/views.py\", line 221, in on_account_changed\r\n username = request.matchdict[\"id\"]\r\nKeyError: 'id'\r\n```\n", "before_files": [{"content": "import colander\nfrom pyramid import httpexceptions\nfrom pyramid.decorator import reify\nfrom pyramid.security import Authenticated, Everyone\nfrom pyramid.settings import aslist\nfrom pyramid.events import subscriber\n\nfrom kinto.views import NameGenerator\nfrom kinto.core import resource, utils\nfrom kinto.core.errors import raise_invalid, http_error\nfrom kinto.core.events import ResourceChanged, ACTIONS\n\nfrom .utils import hash_password, ACCOUNT_CACHE_KEY, ACCOUNT_POLICY_NAME\n\n\ndef _extract_posted_body_id(request):\n try:\n # Anonymous creation with POST.\n return request.json[\"data\"][\"id\"]\n except (ValueError, KeyError):\n # Bad POST data.\n if request.method.lower() == \"post\":\n error_details = {\"name\": \"data.id\", \"description\": \"data.id in body: Required\"}\n raise_invalid(request, **error_details)\n # Anonymous GET\n error_msg = \"Cannot read accounts.\"\n raise http_error(httpexceptions.HTTPUnauthorized(), error=error_msg)\n\n\nclass AccountIdGenerator(NameGenerator):\n \"\"\"Allow @ signs in account IDs.\"\"\"\n\n regexp = r\"^[a-zA-Z0-9][.@a-zA-Z0-9_-]*$\"\n\n\nclass AccountSchema(resource.ResourceSchema):\n password = colander.SchemaNode(colander.String())\n\n\[email protected]()\nclass Account(resource.Resource):\n\n schema = AccountSchema\n\n def __init__(self, request, context):\n # Store if current user is administrator (before accessing get_parent_id())\n allowed_from_settings = request.registry.settings.get(\"account_write_principals\", [])\n context.is_administrator = (\n 
len(set(aslist(allowed_from_settings)) & set(request.prefixed_principals)) > 0\n )\n # Shortcut to check if current is anonymous (before get_parent_id()).\n context.is_anonymous = Authenticated not in request.effective_principals\n\n super().__init__(request, context)\n\n # Overwrite the current principal set by Resource.\n if self.model.current_principal == Everyone or context.is_administrator:\n # Creation is anonymous, but author with write perm is this:\n self.model.current_principal = f\"{ACCOUNT_POLICY_NAME}:{self.model.parent_id}\"\n\n @reify\n def id_generator(self):\n # This generator is used for ID validation.\n return AccountIdGenerator()\n\n def get_parent_id(self, request):\n # The whole challenge here is that we want to isolate what\n # authenticated users can list, but give access to everything to\n # administrators.\n # Plus when anonymous create accounts, we have to set their parent id\n # to the same value they would obtain when authenticated.\n if self.context.is_administrator:\n if self.context.on_plural_endpoint:\n # Accounts created by admin should have userid as parent.\n if request.method.lower() == \"post\":\n return _extract_posted_body_id(request)\n else:\n # Admin see all accounts.\n return \"*\"\n else:\n # No pattern matching for admin on single record.\n return request.matchdict[\"id\"]\n\n if not self.context.is_anonymous:\n # Authenticated users see their own account only.\n return request.selected_userid\n\n # Anonymous creation with PUT.\n if \"id\" in request.matchdict:\n return request.matchdict[\"id\"]\n\n return _extract_posted_body_id(request)\n\n def plural_post(self):\n result = super(Account, self).plural_post()\n if self.context.is_anonymous and self.request.response.status_code == 200:\n error_details = {\"message\": \"Account ID %r already exists\" % result[\"data\"][\"id\"]}\n raise http_error(httpexceptions.HTTPForbidden(), **error_details)\n return result\n\n def process_object(self, new, old=None):\n new = super(Account, self).process_object(new, old)\n\n new[\"password\"] = hash_password(new[\"password\"])\n\n # Administrators can reach other accounts and anonymous have no\n # selected_userid. So do not try to enforce.\n if self.context.is_administrator or self.context.is_anonymous:\n return new\n\n # Do not let accounts be created without usernames.\n if self.model.id_field not in new:\n error_details = {\"name\": \"data.id\", \"description\": \"Accounts must have an ID.\"}\n raise_invalid(self.request, **error_details)\n\n # Otherwise, we force the id to match the authenticated username.\n if new[self.model.id_field] != self.request.selected_userid:\n error_details = {\n \"name\": \"data.id\",\n \"description\": \"Username and account ID do not match.\",\n }\n raise_invalid(self.request, **error_details)\n\n return new\n\n\n# Clear cache on account change\n@subscriber(\n ResourceChanged, for_resources=(\"account\",), for_actions=(ACTIONS.UPDATE, ACTIONS.DELETE)\n)\ndef on_account_changed(event):\n request = event.request\n cache = request.registry.cache\n settings = request.registry.settings\n # Extract username and password from current user\n username = request.matchdict[\"id\"]\n hmac_secret = settings[\"userid_hmac_secret\"]\n cache_key = utils.hmac_digest(hmac_secret, ACCOUNT_CACHE_KEY.format(username))\n # Delete cache\n cache.delete(cache_key)\n", "path": "kinto/plugins/accounts/views.py"}]} | 2,314 | 199 |
gh_patches_debug_39351 | rasdani/github-patches | git_diff | pyodide__pyodide-1457 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Version selection for packages available both on PyPi and in Pyodide
For packages not built in pyodide, version selection works as expected. For instance,
```py
>>> import micropip
>>> micropip.install('idna==2.9') # version before last on PyPi, package not in pyodide
Installed idna
>>> import idna
>>> idna.__version__
2.9
```
However, when one specifies the version for a package available in the pyodide distribution, the specifier is ignored and the version from pyodide is installed regardless of whether PyPi includes the requested version,
```py
>>> import micropip
>>> micropip.install('pytz==2020.1')
Installed pytz
>>> import pytz
>>> pytz.__version__
2019.3
```
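
A minimal sketch of the check that gets skipped (assuming `distlib` is available, which micropip itself depends on): the pin would reject the bundled 2019.3, but as the plugin code further down shows, `add_requirement()` returns as soon as it sees a built-in package — before any version matcher is ever built.
```py
from distlib import util, version

scheme = version.get_scheme("normalized")
req = util.parse_requirement("pytz==2020.1")
matcher = scheme.matcher(req.requirement)

print(matcher.match("2019.3"))  # False -- the pin would reject pyodide's bundled 2019.3...
# ...but add_requirement() returns early for built-in packages, before this check runs.
```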
</issue>
<code>
[start of packages/micropip/micropip/micropip.py]
1 import asyncio
2 import hashlib
3 import importlib
4 import io
5 import json
6 from pathlib import Path
7 import zipfile
8 from typing import Dict, Any, Union, List, Tuple
9
10 from distlib import markers, util, version
11
12 # Provide stubs for testing in native python
13 try:
14 import pyodide_js
15
16 IN_BROWSER = True
17 except ImportError:
18 IN_BROWSER = False
19
20 if IN_BROWSER:
21 # In practice, this is the `site-packages` directory.
22 WHEEL_BASE = Path(__file__).parent
23 else:
24 WHEEL_BASE = Path(".") / "wheels"
25
26 if IN_BROWSER:
27 from js import fetch
28
29 async def _get_url(url):
30 resp = await fetch(url)
31 if not resp.ok:
32 raise OSError(
33 f"Request for {url} failed with status {resp.status}: {resp.statusText}"
34 )
35 return io.BytesIO(await resp.arrayBuffer())
36
37
38 else:
39 from urllib.request import urlopen
40
41 async def _get_url(url):
42 with urlopen(url) as fd:
43 content = fd.read()
44 return io.BytesIO(content)
45
46
47 if IN_BROWSER:
48 from asyncio import gather
49 else:
50 # asyncio.gather will schedule any coroutines to run on the event loop but
51 # we want to avoid using the event loop at all. Instead just run the
52 # coroutines in sequence.
53 async def gather(*coroutines): # type: ignore
54 result = []
55 for coroutine in coroutines:
56 result.append(await coroutine)
57 return result
58
59
60 async def _get_pypi_json(pkgname):
61 url = f"https://pypi.org/pypi/{pkgname}/json"
62 fd = await _get_url(url)
63 return json.load(fd)
64
65
66 def _parse_wheel_url(url: str) -> Tuple[str, Dict[str, Any], str]:
67 """Parse wheels URL and extract available metadata
68
69 See https://www.python.org/dev/peps/pep-0427/#file-name-convention
70 """
71 file_name = Path(url).name
72 # also strip '.whl' extension.
73 wheel_name = Path(url).stem
74 tokens = wheel_name.split("-")
75 # TODO: support optional build tags in the filename (cf PEP 427)
76 if len(tokens) < 5:
77 raise ValueError(f"{file_name} is not a valid wheel file name.")
78 version, python_tag, abi_tag, platform = tokens[-4:]
79 name = "-".join(tokens[:-4])
80 wheel = {
81 "digests": None, # checksums not available
82 "filename": file_name,
83 "packagetype": "bdist_wheel",
84 "python_version": python_tag,
85 "abi_tag": abi_tag,
86 "platform": platform,
87 "url": url,
88 }
89
90 return name, wheel, version
91
92
93 def _extract_wheel(fd):
94 with zipfile.ZipFile(fd) as zf:
95 zf.extractall(WHEEL_BASE)
96
97
98 def _validate_wheel(data, fileinfo):
99 if fileinfo.get("digests") is None:
100 # No checksums available, e.g. because installing
101 # from a different location than PyPi.
102 return
103 sha256 = fileinfo["digests"]["sha256"]
104 m = hashlib.sha256()
105 m.update(data.getvalue())
106 if m.hexdigest() != sha256:
107 raise ValueError("Contents don't match hash")
108
109
110 async def _install_wheel(name, fileinfo):
111 url = fileinfo["url"]
112 wheel = await _get_url(url)
113 _validate_wheel(wheel, fileinfo)
114 _extract_wheel(wheel)
115
116
117 class _PackageManager:
118 version_scheme = version.get_scheme("normalized")
119
120 def __init__(self):
121 if IN_BROWSER:
122 self.builtin_packages = pyodide_js._module.packages.dependencies.to_py()
123 else:
124 self.builtin_packages = {}
125 self.installed_packages = {}
126
127 async def install(self, requirements: Union[str, List[str]], ctx=None):
128 if ctx is None:
129 ctx = {"extra": None}
130
131 complete_ctx = dict(markers.DEFAULT_CONTEXT)
132 complete_ctx.update(ctx)
133
134 if isinstance(requirements, str):
135 requirements = [requirements]
136
137 transaction: Dict[str, Any] = {
138 "wheels": [],
139 "pyodide_packages": set(),
140 "locked": dict(self.installed_packages),
141 }
142 requirement_promises = []
143 for requirement in requirements:
144 requirement_promises.append(
145 self.add_requirement(requirement, complete_ctx, transaction)
146 )
147
148 await gather(*requirement_promises)
149
150 wheel_promises = []
151
152 # Install built-in packages
153 pyodide_packages = transaction["pyodide_packages"]
154 if len(pyodide_packages):
155 # Note: branch never happens in out-of-browser testing because we
156 # report that all dependencies are empty.
157 self.installed_packages.update(dict((k, None) for k in pyodide_packages))
158 wheel_promises.append(pyodide_js.loadPackage(list(pyodide_packages)))
159
160 # Now install PyPI packages
161 for name, wheel, ver in transaction["wheels"]:
162 wheel_promises.append(_install_wheel(name, wheel))
163 self.installed_packages[name] = ver
164 await gather(*wheel_promises)
165 return f'Installed {", ".join(self.installed_packages.keys())}'
166
167 async def add_requirement(self, requirement: str, ctx, transaction):
168 if requirement.endswith(".whl"):
169 # custom download location
170 name, wheel, version = _parse_wheel_url(requirement)
171 transaction["wheels"].append((name, wheel, version))
172 return
173
174 req = util.parse_requirement(requirement)
175
176 # If it's a Pyodide package, use that instead of the one on PyPI
177 if req.name in self.builtin_packages:
178 transaction["pyodide_packages"].add(req.name)
179 return
180
181 if req.marker:
182 if not markers.evaluator.evaluate(req.marker, ctx):
183 return
184
185 matcher = self.version_scheme.matcher(req.requirement)
186
187 # If we already have something that will work, don't
188 # fetch again
189 for name, ver in transaction["locked"].items():
190 if name == req.name:
191 if matcher.match(ver):
192 break
193 else:
194 raise ValueError(
195 f"Requested '{requirement}', "
196 f"but {name}=={ver} is already installed"
197 )
198 else:
199 metadata = await _get_pypi_json(req.name)
200 wheel, ver = self.find_wheel(metadata, req)
201 transaction["locked"][req.name] = ver
202
203 recurs_reqs = metadata.get("info", {}).get("requires_dist") or []
204 for recurs_req in recurs_reqs:
205 await self.add_requirement(recurs_req, ctx, transaction)
206
207 transaction["wheels"].append((req.name, wheel, ver))
208
209 def find_wheel(self, metadata, req):
210 releases = []
211 for ver, files in metadata.get("releases", {}).items():
212 ver = self.version_scheme.suggest(ver)
213 if ver is not None:
214 releases.append((ver, files))
215
216 def version_number(release):
217 return version.NormalizedVersion(release[0])
218
219 releases = sorted(releases, key=version_number, reverse=True)
220 matcher = self.version_scheme.matcher(req.requirement)
221 for ver, meta in releases:
222 if matcher.match(ver):
223 for fileinfo in meta:
224 if fileinfo["filename"].endswith("py3-none-any.whl"):
225 return fileinfo, ver
226
227 raise ValueError(f"Couldn't find a pure Python 3 wheel for '{req.requirement}'")
228
229
230 # Make PACKAGE_MANAGER singleton
231 PACKAGE_MANAGER = _PackageManager()
232 del _PackageManager
233
234
235 def install(requirements: Union[str, List[str]]):
236 """Install the given package and all of its dependencies.
237
238 See :ref:`loading packages <loading_packages>` for more information.
239
240 This only works for packages that are either pure Python or for packages
241 with C extensions that are built in Pyodide. If a pure Python package is not
242 found in the Pyodide repository it will be loaded from PyPi.
243
244 Parameters
245 ----------
246 requirements : ``str | List[str]``
247
248 A requirement or list of requirements to install. Each requirement is a string, which should be either
249 a package name or URL to a wheel:
250
251 - If the requirement ends in ``.whl`` it will be interpreted as a URL.
252 The file must be a wheel named in compliance with the
253 `PEP 427 naming convention <https://www.python.org/dev/peps/pep-0427/#file-format>`_.
254
255 - If the requirement does not end in ``.whl``, it will interpreted as the
256 name of a package. A package by this name must either be present in the
257 Pyodide repository at `indexURL <globalThis.loadPyodide>` or on PyPi
258
259 Returns
260 -------
261 ``Future``
262
263 A ``Future`` that resolves to ``None`` when all packages have
264 been downloaded and installed.
265 """
266 importlib.invalidate_caches()
267 return asyncio.ensure_future(PACKAGE_MANAGER.install(requirements))
268
269
270 __all__ = ["install"]
271
272
273 if __name__ == "__main__":
274 install("snowballstemmer")
275
[end of packages/micropip/micropip/micropip.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py
--- a/packages/micropip/micropip/micropip.py
+++ b/packages/micropip/micropip/micropip.py
@@ -57,6 +57,14 @@
return result
+if IN_BROWSER:
+ from pyodide_js import loadedPackages
+else:
+
+ class loadedPackages: # type: ignore
+ pass
+
+
async def _get_pypi_json(pkgname):
url = f"https://pypi.org/pypi/{pkgname}/json"
fd = await _get_url(url)
@@ -112,6 +120,7 @@
wheel = await _get_url(url)
_validate_wheel(wheel, fileinfo)
_extract_wheel(wheel)
+ setattr(loadedPackages, name, url)
class _PackageManager:
@@ -155,14 +164,15 @@
# Note: branch never happens in out-of-browser testing because we
# report that all dependencies are empty.
self.installed_packages.update(dict((k, None) for k in pyodide_packages))
- wheel_promises.append(pyodide_js.loadPackage(list(pyodide_packages)))
+ wheel_promises.append(
+ asyncio.ensure_future(pyodide_js.loadPackage(list(pyodide_packages)))
+ )
# Now install PyPI packages
for name, wheel, ver in transaction["wheels"]:
wheel_promises.append(_install_wheel(name, wheel))
self.installed_packages[name] = ver
await gather(*wheel_promises)
- return f'Installed {", ".join(self.installed_packages.keys())}'
async def add_requirement(self, requirement: str, ctx, transaction):
if requirement.endswith(".whl"):
@@ -245,8 +255,8 @@
----------
requirements : ``str | List[str]``
- A requirement or list of requirements to install. Each requirement is a string, which should be either
- a package name or URL to a wheel:
+ A requirement or list of requirements to install. Each requirement is a
+ string, which should be either a package name or URL to a wheel:
- If the requirement ends in ``.whl`` it will be interpreted as a URL.
The file must be a wheel named in compliance with the
@@ -260,8 +270,8 @@
-------
``Future``
- A ``Future`` that resolves to ``None`` when all packages have
- been downloaded and installed.
+ A ``Future`` that resolves to ``None`` when all packages have been
+ downloaded and installed.
"""
importlib.invalidate_caches()
return asyncio.ensure_future(PACKAGE_MANAGER.install(requirements))
| {"golden_diff": "diff --git a/packages/micropip/micropip/micropip.py b/packages/micropip/micropip/micropip.py\n--- a/packages/micropip/micropip/micropip.py\n+++ b/packages/micropip/micropip/micropip.py\n@@ -57,6 +57,14 @@\n return result\n \n \n+if IN_BROWSER:\n+ from pyodide_js import loadedPackages\n+else:\n+\n+ class loadedPackages: # type: ignore\n+ pass\n+\n+\n async def _get_pypi_json(pkgname):\n url = f\"https://pypi.org/pypi/{pkgname}/json\"\n fd = await _get_url(url)\n@@ -112,6 +120,7 @@\n wheel = await _get_url(url)\n _validate_wheel(wheel, fileinfo)\n _extract_wheel(wheel)\n+ setattr(loadedPackages, name, url)\n \n \n class _PackageManager:\n@@ -155,14 +164,15 @@\n # Note: branch never happens in out-of-browser testing because we\n # report that all dependencies are empty.\n self.installed_packages.update(dict((k, None) for k in pyodide_packages))\n- wheel_promises.append(pyodide_js.loadPackage(list(pyodide_packages)))\n+ wheel_promises.append(\n+ asyncio.ensure_future(pyodide_js.loadPackage(list(pyodide_packages)))\n+ )\n \n # Now install PyPI packages\n for name, wheel, ver in transaction[\"wheels\"]:\n wheel_promises.append(_install_wheel(name, wheel))\n self.installed_packages[name] = ver\n await gather(*wheel_promises)\n- return f'Installed {\", \".join(self.installed_packages.keys())}'\n \n async def add_requirement(self, requirement: str, ctx, transaction):\n if requirement.endswith(\".whl\"):\n@@ -245,8 +255,8 @@\n ----------\n requirements : ``str | List[str]``\n \n- A requirement or list of requirements to install. Each requirement is a string, which should be either\n- a package name or URL to a wheel:\n+ A requirement or list of requirements to install. Each requirement is a\n+ string, which should be either a package name or URL to a wheel:\n \n - If the requirement ends in ``.whl`` it will be interpreted as a URL.\n The file must be a wheel named in compliance with the\n@@ -260,8 +270,8 @@\n -------\n ``Future``\n \n- A ``Future`` that resolves to ``None`` when all packages have\n- been downloaded and installed.\n+ A ``Future`` that resolves to ``None`` when all packages have been\n+ downloaded and installed.\n \"\"\"\n importlib.invalidate_caches()\n return asyncio.ensure_future(PACKAGE_MANAGER.install(requirements))\n", "issue": "Version selection for packages availble both on PyPi and in Pyodide\nFor packages not built in pyodide, version selection works as expected. 
For instance,\r\n```py\r\n>>> import micropip\r\n>>> micropip.install('idna==2.9') # version before last on PyPi, package not in pyodide\r\nInstalled idna\r\n>>> import idna\r\n>>> idna.__version__\r\n2.9\r\n```\r\n\r\nHowever, when one specifies the version for a package available in the pyodide distribution, it is ignored and the version from pyodide is installed regardless if PyPi includes the requested version,\r\n```py\r\n>>> import micropip\r\n>>> micropip.install('pytz==2020.1')\r\nInstalled pytz\r\n>>> import pytz\r\n>>> pytz.__version__\r\n2019.3\r\n```\n", "before_files": [{"content": "import asyncio\nimport hashlib\nimport importlib\nimport io\nimport json\nfrom pathlib import Path\nimport zipfile\nfrom typing import Dict, Any, Union, List, Tuple\n\nfrom distlib import markers, util, version\n\n# Provide stubs for testing in native python\ntry:\n import pyodide_js\n\n IN_BROWSER = True\nexcept ImportError:\n IN_BROWSER = False\n\nif IN_BROWSER:\n # In practice, this is the `site-packages` directory.\n WHEEL_BASE = Path(__file__).parent\nelse:\n WHEEL_BASE = Path(\".\") / \"wheels\"\n\nif IN_BROWSER:\n from js import fetch\n\n async def _get_url(url):\n resp = await fetch(url)\n if not resp.ok:\n raise OSError(\n f\"Request for {url} failed with status {resp.status}: {resp.statusText}\"\n )\n return io.BytesIO(await resp.arrayBuffer())\n\n\nelse:\n from urllib.request import urlopen\n\n async def _get_url(url):\n with urlopen(url) as fd:\n content = fd.read()\n return io.BytesIO(content)\n\n\nif IN_BROWSER:\n from asyncio import gather\nelse:\n # asyncio.gather will schedule any coroutines to run on the event loop but\n # we want to avoid using the event loop at all. Instead just run the\n # coroutines in sequence.\n async def gather(*coroutines): # type: ignore\n result = []\n for coroutine in coroutines:\n result.append(await coroutine)\n return result\n\n\nasync def _get_pypi_json(pkgname):\n url = f\"https://pypi.org/pypi/{pkgname}/json\"\n fd = await _get_url(url)\n return json.load(fd)\n\n\ndef _parse_wheel_url(url: str) -> Tuple[str, Dict[str, Any], str]:\n \"\"\"Parse wheels URL and extract available metadata\n\n See https://www.python.org/dev/peps/pep-0427/#file-name-convention\n \"\"\"\n file_name = Path(url).name\n # also strip '.whl' extension.\n wheel_name = Path(url).stem\n tokens = wheel_name.split(\"-\")\n # TODO: support optional build tags in the filename (cf PEP 427)\n if len(tokens) < 5:\n raise ValueError(f\"{file_name} is not a valid wheel file name.\")\n version, python_tag, abi_tag, platform = tokens[-4:]\n name = \"-\".join(tokens[:-4])\n wheel = {\n \"digests\": None, # checksums not available\n \"filename\": file_name,\n \"packagetype\": \"bdist_wheel\",\n \"python_version\": python_tag,\n \"abi_tag\": abi_tag,\n \"platform\": platform,\n \"url\": url,\n }\n\n return name, wheel, version\n\n\ndef _extract_wheel(fd):\n with zipfile.ZipFile(fd) as zf:\n zf.extractall(WHEEL_BASE)\n\n\ndef _validate_wheel(data, fileinfo):\n if fileinfo.get(\"digests\") is None:\n # No checksums available, e.g. 
because installing\n # from a different location than PyPi.\n return\n sha256 = fileinfo[\"digests\"][\"sha256\"]\n m = hashlib.sha256()\n m.update(data.getvalue())\n if m.hexdigest() != sha256:\n raise ValueError(\"Contents don't match hash\")\n\n\nasync def _install_wheel(name, fileinfo):\n url = fileinfo[\"url\"]\n wheel = await _get_url(url)\n _validate_wheel(wheel, fileinfo)\n _extract_wheel(wheel)\n\n\nclass _PackageManager:\n version_scheme = version.get_scheme(\"normalized\")\n\n def __init__(self):\n if IN_BROWSER:\n self.builtin_packages = pyodide_js._module.packages.dependencies.to_py()\n else:\n self.builtin_packages = {}\n self.installed_packages = {}\n\n async def install(self, requirements: Union[str, List[str]], ctx=None):\n if ctx is None:\n ctx = {\"extra\": None}\n\n complete_ctx = dict(markers.DEFAULT_CONTEXT)\n complete_ctx.update(ctx)\n\n if isinstance(requirements, str):\n requirements = [requirements]\n\n transaction: Dict[str, Any] = {\n \"wheels\": [],\n \"pyodide_packages\": set(),\n \"locked\": dict(self.installed_packages),\n }\n requirement_promises = []\n for requirement in requirements:\n requirement_promises.append(\n self.add_requirement(requirement, complete_ctx, transaction)\n )\n\n await gather(*requirement_promises)\n\n wheel_promises = []\n\n # Install built-in packages\n pyodide_packages = transaction[\"pyodide_packages\"]\n if len(pyodide_packages):\n # Note: branch never happens in out-of-browser testing because we\n # report that all dependencies are empty.\n self.installed_packages.update(dict((k, None) for k in pyodide_packages))\n wheel_promises.append(pyodide_js.loadPackage(list(pyodide_packages)))\n\n # Now install PyPI packages\n for name, wheel, ver in transaction[\"wheels\"]:\n wheel_promises.append(_install_wheel(name, wheel))\n self.installed_packages[name] = ver\n await gather(*wheel_promises)\n return f'Installed {\", \".join(self.installed_packages.keys())}'\n\n async def add_requirement(self, requirement: str, ctx, transaction):\n if requirement.endswith(\".whl\"):\n # custom download location\n name, wheel, version = _parse_wheel_url(requirement)\n transaction[\"wheels\"].append((name, wheel, version))\n return\n\n req = util.parse_requirement(requirement)\n\n # If it's a Pyodide package, use that instead of the one on PyPI\n if req.name in self.builtin_packages:\n transaction[\"pyodide_packages\"].add(req.name)\n return\n\n if req.marker:\n if not markers.evaluator.evaluate(req.marker, ctx):\n return\n\n matcher = self.version_scheme.matcher(req.requirement)\n\n # If we already have something that will work, don't\n # fetch again\n for name, ver in transaction[\"locked\"].items():\n if name == req.name:\n if matcher.match(ver):\n break\n else:\n raise ValueError(\n f\"Requested '{requirement}', \"\n f\"but {name}=={ver} is already installed\"\n )\n else:\n metadata = await _get_pypi_json(req.name)\n wheel, ver = self.find_wheel(metadata, req)\n transaction[\"locked\"][req.name] = ver\n\n recurs_reqs = metadata.get(\"info\", {}).get(\"requires_dist\") or []\n for recurs_req in recurs_reqs:\n await self.add_requirement(recurs_req, ctx, transaction)\n\n transaction[\"wheels\"].append((req.name, wheel, ver))\n\n def find_wheel(self, metadata, req):\n releases = []\n for ver, files in metadata.get(\"releases\", {}).items():\n ver = self.version_scheme.suggest(ver)\n if ver is not None:\n releases.append((ver, files))\n\n def version_number(release):\n return version.NormalizedVersion(release[0])\n\n releases = sorted(releases, 
key=version_number, reverse=True)\n matcher = self.version_scheme.matcher(req.requirement)\n for ver, meta in releases:\n if matcher.match(ver):\n for fileinfo in meta:\n if fileinfo[\"filename\"].endswith(\"py3-none-any.whl\"):\n return fileinfo, ver\n\n raise ValueError(f\"Couldn't find a pure Python 3 wheel for '{req.requirement}'\")\n\n\n# Make PACKAGE_MANAGER singleton\nPACKAGE_MANAGER = _PackageManager()\ndel _PackageManager\n\n\ndef install(requirements: Union[str, List[str]]):\n \"\"\"Install the given package and all of its dependencies.\n\n See :ref:`loading packages <loading_packages>` for more information.\n\n This only works for packages that are either pure Python or for packages\n with C extensions that are built in Pyodide. If a pure Python package is not\n found in the Pyodide repository it will be loaded from PyPi.\n\n Parameters\n ----------\n requirements : ``str | List[str]``\n\n A requirement or list of requirements to install. Each requirement is a string, which should be either\n a package name or URL to a wheel:\n\n - If the requirement ends in ``.whl`` it will be interpreted as a URL.\n The file must be a wheel named in compliance with the\n `PEP 427 naming convention <https://www.python.org/dev/peps/pep-0427/#file-format>`_.\n\n - If the requirement does not end in ``.whl``, it will interpreted as the\n name of a package. A package by this name must either be present in the\n Pyodide repository at `indexURL <globalThis.loadPyodide>` or on PyPi\n\n Returns\n -------\n ``Future``\n\n A ``Future`` that resolves to ``None`` when all packages have\n been downloaded and installed.\n \"\"\"\n importlib.invalidate_caches()\n return asyncio.ensure_future(PACKAGE_MANAGER.install(requirements))\n\n\n__all__ = [\"install\"]\n\n\nif __name__ == \"__main__\":\n install(\"snowballstemmer\")\n", "path": "packages/micropip/micropip/micropip.py"}]} | 3,485 | 633 |
gh_patches_debug_8411 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-2964 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
SyntaxWarnings
While running an `apt upgrade` I noticed:
```
/usr/lib/python3/dist-packages/pyqtgraph/examples/SpinBox.py:38: SyntaxWarning: invalid escape sequence '\$'
regex='\$?(?P<number>(-?\d+(\.\d+)?)|(-?\.\d+))$')),
```
For the last few Python releases, `\$` has had to be written `\\$` or `r'\$'` (the same goes for every backslash escape that has no meaning). I don't have time to search for other occurrences, but running the tests with `PYTHONDEVMODE=1` should help spot them :)
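
As an illustration of the suggested fix (a sketch, not the exact change applied to the example file), the raw-string form of the pattern quoted in the warning compiles without complaint:
```python
import re

# The raw string keeps the backslash for the regex engine, so Python no longer
# flags '\$' as an invalid escape sequence at compile time.
pattern = re.compile(r'\$?(?P<number>(-?\d+(\.\d+)?)|(-?\.\d+))$')

print(bool(pattern.match('$23.07')))  # True
```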
</issue>
<code>
[start of pyqtgraph/examples/SpinBox.py]
1 """
2 This example demonstrates the SpinBox widget, which is an extension of
3 QDoubleSpinBox providing some advanced features:
4
5 * SI-prefixed units
6 * Non-linear stepping modes
7 * Bounded/unbounded values
8
9 """
10
11 import ast
12
13 import pyqtgraph as pg
14 from pyqtgraph.Qt import QtWidgets
15
16 app = pg.mkQApp("SpinBox Example")
17
18
19 spins = [
20 ("Floating-point spin box, min=0, no maximum.<br>Non-finite values (nan, inf) are permitted.",
21 pg.SpinBox(value=5.0, bounds=[0, None], finite=False)),
22 ("Integer spin box, dec stepping<br>(1-9, 10-90, 100-900, etc), decimals=4",
23 pg.SpinBox(value=10, int=True, dec=True, minStep=1, step=1, decimals=4)),
24 ("Float with SI-prefixed units<br>(n, u, m, k, M, etc)",
25 pg.SpinBox(value=0.9, suffix='V', siPrefix=True)),
26 ("Float with SI-prefixed units,<br>dec step=0.1, minStep=0.1",
27 pg.SpinBox(value=1.0, suffix='PSI', siPrefix=True, dec=True, step=0.1, minStep=0.1)),
28 ("Float with SI-prefixed units,<br>dec step=0.5, minStep=0.01",
29 pg.SpinBox(value=1.0, suffix='V', siPrefix=True, dec=True, step=0.5, minStep=0.01)),
30 ("Float with SI-prefixed units,<br>dec step=1.0, minStep=0.001",
31 pg.SpinBox(value=1.0, suffix='V', siPrefix=True, dec=True, step=1.0, minStep=0.001)),
32 ("Float with SI-prefixed units,<br>scaleAtZero=1e-6, step=1e-9",
33 pg.SpinBox(value=0, suffix='V', siPrefix=True, scaleAtZero=1e-6, step=1e-9)),
34 ("Float with SI prefix but no suffix",
35 pg.SpinBox(value=1e9, siPrefix=True)),
36 ("Float with custom formatting",
37 pg.SpinBox(value=23.07, format='${value:0.02f}',
38 regex='\$?(?P<number>(-?\d+(\.\d+)?)|(-?\.\d+))$')),
39 ("Int with suffix",
40 pg.SpinBox(value=999, step=1, int=True, suffix="V")),
41 ("Int with custom formatting",
42 pg.SpinBox(value=4567, step=1, int=True, bounds=[0,None], format='0x{value:X}',
43 regex='(0x)?(?P<number>[0-9a-fA-F]+)$',
44 evalFunc=lambda s: ast.literal_eval('0x'+s))),
45 ("Integer with bounds=[10, 20] and wrapping",
46 pg.SpinBox(value=10, bounds=[10, 20], int=True, minStep=1, step=1, wrapping=True)),
47 ]
48
49
50 win = QtWidgets.QMainWindow()
51 win.setWindowTitle('pyqtgraph example: SpinBox')
52 cw = QtWidgets.QWidget()
53 layout = QtWidgets.QGridLayout()
54 cw.setLayout(layout)
55 win.setCentralWidget(cw)
56 win.show()
57 #win.resize(300, 600)
58 changingLabel = QtWidgets.QLabel() ## updated immediately
59 changedLabel = QtWidgets.QLabel() ## updated only when editing is finished or mouse wheel has stopped for 0.3sec
60 changingLabel.setMinimumWidth(200)
61 font = changingLabel.font()
62 font.setBold(True)
63 font.setPointSize(14)
64 changingLabel.setFont(font)
65 changedLabel.setFont(font)
66 labels = []
67
68
69 def valueChanged(sb):
70 changedLabel.setText("Final value: %s" % str(sb.value()))
71
72 def valueChanging(sb, value):
73 changingLabel.setText("Value changing: %s" % str(sb.value()))
74
75
76 for text, spin in spins:
77 label = QtWidgets.QLabel(text)
78 labels.append(label)
79 layout.addWidget(label)
80 layout.addWidget(spin)
81 spin.sigValueChanged.connect(valueChanged)
82 spin.sigValueChanging.connect(valueChanging)
83
84 layout.addWidget(changingLabel, 0, 1)
85 layout.addWidget(changedLabel, 2, 1)
86
87
88 #def mkWin():
89 #win = QtWidgets.QMainWindow()
90 #g = QtWidgets.QFormLayout()
91 #w = QtWidgets.QWidget()
92 #w.setLayout(g)
93 #win.setCentralWidget(w)
94 #s1 = SpinBox(value=5, step=0.1, bounds=[-1.5, None], suffix='units')
95 #t1 = QtWidgets.QLineEdit()
96 #g.addRow(s1, t1)
97 #s2 = SpinBox(value=10e-6, dec=True, step=0.1, minStep=1e-6, suffix='A', siPrefix=True)
98 #t2 = QtWidgets.QLineEdit()
99 #g.addRow(s2, t2)
100 #s3 = SpinBox(value=1000, dec=True, step=0.5, minStep=1e-6, bounds=[1, 1e9], suffix='Hz', siPrefix=True)
101 #t3 = QtWidgets.QLineEdit()
102 #g.addRow(s3, t3)
103 #s4 = SpinBox(int=True, dec=True, step=1, minStep=1, bounds=[-10, 1000])
104 #t4 = QtWidgets.QLineEdit()
105 #g.addRow(s4, t4)
106
107 #win.show()
108
109 #import sys
110 #for sb in [s1, s2, s3,s4]:
111
112 ##QtCore.QObject.connect(sb, QtCore.SIGNAL('valueChanged(double)'), lambda v: sys.stdout.write(str(sb) + " valueChanged\n"))
113 ##QtCore.QObject.connect(sb, QtCore.SIGNAL('editingFinished()'), lambda: sys.stdout.write(str(sb) + " editingFinished\n"))
114 #sb.sigValueChanged.connect(valueChanged)
115 #sb.sigValueChanging.connect(valueChanging)
116 #sb.editingFinished.connect(lambda: sys.stdout.write(str(sb) + " editingFinished\n"))
117 #return win, w, [s1, s2, s3, s4]
118 #a = mkWin()
119
120
121 #def test(n=100):
122 #for i in range(n):
123 #win, w, sb = mkWin()
124 #for s in sb:
125 #w.setParent(None)
126 #s.setParent(None)
127 #s.valueChanged.disconnect()
128 #s.editingFinished.disconnect()
129
130
131 if __name__ == '__main__':
132 pg.exec()
133
[end of pyqtgraph/examples/SpinBox.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyqtgraph/examples/SpinBox.py b/pyqtgraph/examples/SpinBox.py
--- a/pyqtgraph/examples/SpinBox.py
+++ b/pyqtgraph/examples/SpinBox.py
@@ -35,7 +35,7 @@
pg.SpinBox(value=1e9, siPrefix=True)),
("Float with custom formatting",
pg.SpinBox(value=23.07, format='${value:0.02f}',
- regex='\$?(?P<number>(-?\d+(\.\d+)?)|(-?\.\d+))$')),
+ regex = r'\$?(?P<number>(-?\d+(\.\d+)?)|(-?\.\d+))$')),
("Int with suffix",
pg.SpinBox(value=999, step=1, int=True, suffix="V")),
("Int with custom formatting",
| {"golden_diff": "diff --git a/pyqtgraph/examples/SpinBox.py b/pyqtgraph/examples/SpinBox.py\n--- a/pyqtgraph/examples/SpinBox.py\n+++ b/pyqtgraph/examples/SpinBox.py\n@@ -35,7 +35,7 @@\n pg.SpinBox(value=1e9, siPrefix=True)),\n (\"Float with custom formatting\", \n pg.SpinBox(value=23.07, format='${value:0.02f}',\n- regex='\\$?(?P<number>(-?\\d+(\\.\\d+)?)|(-?\\.\\d+))$')),\n+ regex = r'\\$?(?P<number>(-?\\d+(\\.\\d+)?)|(-?\\.\\d+))$')),\n (\"Int with suffix\",\n pg.SpinBox(value=999, step=1, int=True, suffix=\"V\")),\n (\"Int with custom formatting\",\n", "issue": "SyntaxWarnings\nWhile running an `apt upgrade` I noticed:\r\n\r\n```\r\n/usr/lib/python3/dist-packages/pyqtgraph/examples/SpinBox.py:38: SyntaxWarning: invalid escape sequence '\\$'\r\n regex='\\$?(?P<number>(-?\\d+(\\.\\d+)?)|(-?\\.\\d+))$')),\r\n```\r\n\r\nThe `\\$` should be written `\\\\$` or `r'\\$'` since a few Python releases (same for all backslash escape that have no meanings). I don't have the time to search for other occurrences of this fact, but running the tests with `PYTHONDEVMODE=1` should help spotting them :)\n", "before_files": [{"content": "\"\"\"\nThis example demonstrates the SpinBox widget, which is an extension of \nQDoubleSpinBox providing some advanced features:\n\n * SI-prefixed units\n * Non-linear stepping modes\n * Bounded/unbounded values\n\n\"\"\"\n\nimport ast\n\nimport pyqtgraph as pg\nfrom pyqtgraph.Qt import QtWidgets\n\napp = pg.mkQApp(\"SpinBox Example\")\n\n\nspins = [\n (\"Floating-point spin box, min=0, no maximum.<br>Non-finite values (nan, inf) are permitted.\",\n pg.SpinBox(value=5.0, bounds=[0, None], finite=False)),\n (\"Integer spin box, dec stepping<br>(1-9, 10-90, 100-900, etc), decimals=4\", \n pg.SpinBox(value=10, int=True, dec=True, minStep=1, step=1, decimals=4)),\n (\"Float with SI-prefixed units<br>(n, u, m, k, M, etc)\", \n pg.SpinBox(value=0.9, suffix='V', siPrefix=True)),\n (\"Float with SI-prefixed units,<br>dec step=0.1, minStep=0.1\", \n pg.SpinBox(value=1.0, suffix='PSI', siPrefix=True, dec=True, step=0.1, minStep=0.1)),\n (\"Float with SI-prefixed units,<br>dec step=0.5, minStep=0.01\", \n pg.SpinBox(value=1.0, suffix='V', siPrefix=True, dec=True, step=0.5, minStep=0.01)),\n (\"Float with SI-prefixed units,<br>dec step=1.0, minStep=0.001\", \n pg.SpinBox(value=1.0, suffix='V', siPrefix=True, dec=True, step=1.0, minStep=0.001)),\n (\"Float with SI-prefixed units,<br>scaleAtZero=1e-6, step=1e-9\",\n pg.SpinBox(value=0, suffix='V', siPrefix=True, scaleAtZero=1e-6, step=1e-9)),\n (\"Float with SI prefix but no suffix\",\n pg.SpinBox(value=1e9, siPrefix=True)),\n (\"Float with custom formatting\", \n pg.SpinBox(value=23.07, format='${value:0.02f}',\n regex='\\$?(?P<number>(-?\\d+(\\.\\d+)?)|(-?\\.\\d+))$')),\n (\"Int with suffix\",\n pg.SpinBox(value=999, step=1, int=True, suffix=\"V\")),\n (\"Int with custom formatting\", \n pg.SpinBox(value=4567, step=1, int=True, bounds=[0,None], format='0x{value:X}', \n regex='(0x)?(?P<number>[0-9a-fA-F]+)$',\n evalFunc=lambda s: ast.literal_eval('0x'+s))),\n (\"Integer with bounds=[10, 20] and wrapping\",\n pg.SpinBox(value=10, bounds=[10, 20], int=True, minStep=1, step=1, wrapping=True)),\n]\n\n\nwin = QtWidgets.QMainWindow()\nwin.setWindowTitle('pyqtgraph example: SpinBox')\ncw = QtWidgets.QWidget()\nlayout = QtWidgets.QGridLayout()\ncw.setLayout(layout)\nwin.setCentralWidget(cw)\nwin.show()\n#win.resize(300, 600)\nchangingLabel = QtWidgets.QLabel() ## updated immediately\nchangedLabel = QtWidgets.QLabel() ## updated only when 
editing is finished or mouse wheel has stopped for 0.3sec\nchangingLabel.setMinimumWidth(200)\nfont = changingLabel.font()\nfont.setBold(True)\nfont.setPointSize(14)\nchangingLabel.setFont(font)\nchangedLabel.setFont(font)\nlabels = []\n\n\ndef valueChanged(sb):\n changedLabel.setText(\"Final value: %s\" % str(sb.value()))\n\ndef valueChanging(sb, value):\n changingLabel.setText(\"Value changing: %s\" % str(sb.value()))\n\n \nfor text, spin in spins:\n label = QtWidgets.QLabel(text)\n labels.append(label)\n layout.addWidget(label)\n layout.addWidget(spin)\n spin.sigValueChanged.connect(valueChanged)\n spin.sigValueChanging.connect(valueChanging)\n\nlayout.addWidget(changingLabel, 0, 1)\nlayout.addWidget(changedLabel, 2, 1)\n\n\n#def mkWin():\n #win = QtWidgets.QMainWindow()\n #g = QtWidgets.QFormLayout()\n #w = QtWidgets.QWidget()\n #w.setLayout(g)\n #win.setCentralWidget(w)\n #s1 = SpinBox(value=5, step=0.1, bounds=[-1.5, None], suffix='units')\n #t1 = QtWidgets.QLineEdit()\n #g.addRow(s1, t1)\n #s2 = SpinBox(value=10e-6, dec=True, step=0.1, minStep=1e-6, suffix='A', siPrefix=True)\n #t2 = QtWidgets.QLineEdit()\n #g.addRow(s2, t2)\n #s3 = SpinBox(value=1000, dec=True, step=0.5, minStep=1e-6, bounds=[1, 1e9], suffix='Hz', siPrefix=True)\n #t3 = QtWidgets.QLineEdit()\n #g.addRow(s3, t3)\n #s4 = SpinBox(int=True, dec=True, step=1, minStep=1, bounds=[-10, 1000])\n #t4 = QtWidgets.QLineEdit()\n #g.addRow(s4, t4)\n\n #win.show()\n\n #import sys\n #for sb in [s1, s2, s3,s4]:\n\n ##QtCore.QObject.connect(sb, QtCore.SIGNAL('valueChanged(double)'), lambda v: sys.stdout.write(str(sb) + \" valueChanged\\n\"))\n ##QtCore.QObject.connect(sb, QtCore.SIGNAL('editingFinished()'), lambda: sys.stdout.write(str(sb) + \" editingFinished\\n\"))\n #sb.sigValueChanged.connect(valueChanged)\n #sb.sigValueChanging.connect(valueChanging)\n #sb.editingFinished.connect(lambda: sys.stdout.write(str(sb) + \" editingFinished\\n\"))\n #return win, w, [s1, s2, s3, s4]\n#a = mkWin()\n\n\n#def test(n=100):\n #for i in range(n):\n #win, w, sb = mkWin()\n #for s in sb:\n #w.setParent(None)\n #s.setParent(None)\n #s.valueChanged.disconnect()\n #s.editingFinished.disconnect()\n\n\nif __name__ == '__main__':\n pg.exec()\n", "path": "pyqtgraph/examples/SpinBox.py"}]} | 2,467 | 189 |
gh_patches_debug_27192 | rasdani/github-patches | git_diff | nipy__nipype-3154 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Test on Python 3.8
### Summary
Python 3.8 has been out for a month or so, and upstream libraries are mostly providing wheels. We should start 3.8 tests when it's feasible.
Necessary steps:
* [ ] Update `.travis.yml` to use Python 3.8
* [ ] Note any failures, identify whether they exist on our end or in a dependency
* [ ] Find blocking upstream issues and link to them so we can track
* [ ] When tests pass and urgent `FutureWarning`/`DeprecationWarning`s are dealt with, update the classifiers in `nipype/info.py` to indicate we support Python 3.8.
I marked this [](https://github.com/nipy/nipype/labels/good-first-issue) but it's less for a new developer than someone looking to get more involved in project maintenance.
</issue>
<code>
[start of nipype/info.py]
1 """ This file contains defines parameters for nipy that we use to fill
2 settings in setup.py, the nipy top-level docstring, and for building the
3 docs. In setup.py in particular, we exec this file, so it cannot import nipy
4 """
5
6 # nipype version information
7 # Remove -dev for release
8 __version__ = "1.5.0-rc1.post-dev"
9
10
11 def get_nipype_gitversion():
12 """Nipype version as reported by the last commit in git
13
14 Returns
15 -------
16 None or str
17 Version of Nipype according to git.
18 """
19 import os
20 import subprocess
21
22 try:
23 import nipype
24
25 gitpath = os.path.realpath(
26 os.path.join(os.path.dirname(nipype.__file__), os.path.pardir)
27 )
28 except:
29 gitpath = os.getcwd()
30 gitpathgit = os.path.join(gitpath, ".git")
31 if not os.path.exists(gitpathgit):
32 return None
33 ver = None
34 try:
35 o, _ = subprocess.Popen(
36 "git describe", shell=True, cwd=gitpath, stdout=subprocess.PIPE
37 ).communicate()
38 except Exception:
39 pass
40 else:
41 ver = o.decode().strip().split("-")[-1]
42 return ver
43
44
45 if __version__.endswith("-dev"):
46 gitversion = get_nipype_gitversion()
47 if gitversion:
48 __version__ = "{}+{}".format(__version__, gitversion)
49
50 CLASSIFIERS = [
51 "Development Status :: 5 - Production/Stable",
52 "Environment :: Console",
53 "Intended Audience :: Science/Research",
54 "License :: OSI Approved :: Apache Software License",
55 "Operating System :: MacOS :: MacOS X",
56 "Operating System :: POSIX :: Linux",
57 "Programming Language :: Python :: 3.6",
58 "Programming Language :: Python :: 3.7",
59 "Topic :: Scientific/Engineering",
60 ]
61 PYTHON_REQUIRES = ">= 3.6"
62
63 description = "Neuroimaging in Python: Pipelines and Interfaces"
64
65 # Note: this long_description is actually a copy/paste from the top-level
66 # README.txt, so that it shows up nicely on PyPI. So please remember to edit
67 # it only in one place and sync it correctly.
68 long_description = """========================================================
69 NIPYPE: Neuroimaging in Python: Pipelines and Interfaces
70 ========================================================
71
72 Current neuroimaging software offer users an incredible opportunity to
73 analyze data using a variety of different algorithms. However, this has
74 resulted in a heterogeneous collection of specialized applications
75 without transparent interoperability or a uniform operating interface.
76
77 *Nipype*, an open-source, community-developed initiative under the
78 umbrella of `NiPy <http://nipy.org>`_, is a Python project that provides a
79 uniform interface to existing neuroimaging software and facilitates interaction
80 between these packages within a single workflow. Nipype provides an environment
81 that encourages interactive exploration of algorithms from different
82 packages (e.g., AFNI, ANTS, BRAINS, BrainSuite, Camino, FreeSurfer, FSL, MNE,
83 MRtrix, MNE, Nipy, Slicer, SPM), eases the design of workflows within and
84 between packages, and reduces the learning curve necessary to use different \
85 packages. Nipype is creating a collaborative platform for neuroimaging \
86 software development in a high-level language and addressing limitations of \
87 existing pipeline systems.
88
89 *Nipype* allows you to:
90
91 * easily interact with tools from different software packages
92 * combine processing steps from different software packages
93 * develop new workflows faster by reusing common steps from old ones
94 * process data faster by running it in parallel on many cores/machines
95 * make your research easily reproducible
96 * share your processing workflows with the community
97 """
98
99 # versions
100 NIBABEL_MIN_VERSION = "2.1.0"
101 NETWORKX_MIN_VERSION = "1.9"
102 NUMPY_MIN_VERSION = "1.13"
103 # Numpy bug in python 3.7:
104 # https://www.opensourceanswers.com/blog/you-shouldnt-use-python-37-for-data-science-right-now.html
105 NUMPY_MIN_VERSION_37 = "1.15.3"
106 SCIPY_MIN_VERSION = "0.14"
107 TRAITS_MIN_VERSION = "4.6"
108 DATEUTIL_MIN_VERSION = "2.2"
109 FUTURE_MIN_VERSION = "0.16.0"
110 SIMPLEJSON_MIN_VERSION = "3.8.0"
111 PROV_VERSION = "1.5.2"
112 CLICK_MIN_VERSION = "6.6.0"
113 PYDOT_MIN_VERSION = "1.2.3"
114
115 NAME = "nipype"
116 MAINTAINER = "nipype developers"
117 MAINTAINER_EMAIL = "[email protected]"
118 DESCRIPTION = description
119 LONG_DESCRIPTION = long_description
120 URL = "http://nipy.org/nipype"
121 DOWNLOAD_URL = "http://github.com/nipy/nipype/archives/master"
122 LICENSE = "Apache License, 2.0"
123 AUTHOR = "nipype developers"
124 AUTHOR_EMAIL = "[email protected]"
125 PLATFORMS = "OS Independent"
126 MAJOR = __version__.split(".")[0]
127 MINOR = __version__.split(".")[1]
128 MICRO = __version__.replace("-", ".").split(".")[2]
129 ISRELEASE = (
130 len(__version__.replace("-", ".").split(".")) == 3
131 or "post" in __version__.replace("-", ".").split(".")[-1]
132 )
133 VERSION = __version__
134 PROVIDES = ["nipype"]
135 REQUIRES = [
136 "click>=%s" % CLICK_MIN_VERSION,
137 "networkx>=%s" % NETWORKX_MIN_VERSION,
138 "nibabel>=%s" % NIBABEL_MIN_VERSION,
139 'numpy>=%s ; python_version < "3.7"' % NUMPY_MIN_VERSION,
140 'numpy>=%s ; python_version >= "3.7"' % NUMPY_MIN_VERSION_37,
141 "packaging",
142 "prov>=%s" % PROV_VERSION,
143 "pydot>=%s" % PYDOT_MIN_VERSION,
144 "pydotplus",
145 "python-dateutil>=%s" % DATEUTIL_MIN_VERSION,
146 "scipy>=%s" % SCIPY_MIN_VERSION,
147 "simplejson>=%s" % SIMPLEJSON_MIN_VERSION,
148 "traits>=%s,!=5.0" % TRAITS_MIN_VERSION,
149 "filelock>=3.0.0",
150 "etelemetry>=0.2.0",
151 ]
152
153 # neurdflib has to come after prov
154 # https://github.com/nipy/nipype/pull/2961#issuecomment-512035484
155 REQUIRES += ["neurdflib"]
156
157 TESTS_REQUIRES = [
158 "codecov",
159 "coverage<5",
160 "pytest",
161 "pytest-cov",
162 "pytest-env",
163 "pytest-timeout",
164 ]
165
166 EXTRA_REQUIRES = {
167 "data": ["datalad"],
168 "doc": [
169 "dipy",
170 "ipython",
171 "matplotlib",
172 "nbsphinx",
173 "sphinx-argparse",
174 "sphinx>=2.1.2",
175 "sphinxcontrib-apidoc",
176 "sphinxcontrib-napoleon",
177 ],
178 "duecredit": ["duecredit"],
179 "nipy": ["nitime", "nilearn<0.5.0", "dipy", "nipy", "matplotlib"],
180 "profiler": ["psutil>=5.0"],
181 "pybids": ["pybids>=0.7.0"],
182 "specs": ["black"],
183 "ssh": ["paramiko"],
184 "tests": TESTS_REQUIRES,
185 "xvfbwrapper": ["xvfbwrapper"],
186 # 'mesh': ['mayavi'] # Enable when it works
187 }
188
189
190 def _list_union(iterable):
191 return list(set(sum(iterable, [])))
192
193
194 # Enable a handle to install all extra dependencies at once
195 EXTRA_REQUIRES["all"] = _list_union(EXTRA_REQUIRES.values())
196 # dev = doc + tests + specs
197 EXTRA_REQUIRES["dev"] = _list_union(
198 val for key, val in EXTRA_REQUIRES.items() if key in ("doc", "tests", "specs")
199 )
200
201 STATUS = "stable"
202
[end of nipype/info.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nipype/info.py b/nipype/info.py
--- a/nipype/info.py
+++ b/nipype/info.py
@@ -56,6 +56,7 @@
"Operating System :: POSIX :: Linux",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
+ "Programming Language :: Python :: 3.8",
"Topic :: Scientific/Engineering",
]
PYTHON_REQUIRES = ">= 3.6"
@@ -109,6 +110,7 @@
FUTURE_MIN_VERSION = "0.16.0"
SIMPLEJSON_MIN_VERSION = "3.8.0"
PROV_VERSION = "1.5.2"
+RDFLIB_MIN_VERSION = "5.0.0"
CLICK_MIN_VERSION = "6.6.0"
PYDOT_MIN_VERSION = "1.2.3"
@@ -143,6 +145,7 @@
"pydot>=%s" % PYDOT_MIN_VERSION,
"pydotplus",
"python-dateutil>=%s" % DATEUTIL_MIN_VERSION,
+ "rdflib>=%s" % RDFLIB_MIN_VERSION,
"scipy>=%s" % SCIPY_MIN_VERSION,
"simplejson>=%s" % SIMPLEJSON_MIN_VERSION,
"traits>=%s,!=5.0" % TRAITS_MIN_VERSION,
@@ -150,10 +153,6 @@
"etelemetry>=0.2.0",
]
-# neurdflib has to come after prov
-# https://github.com/nipy/nipype/pull/2961#issuecomment-512035484
-REQUIRES += ["neurdflib"]
-
TESTS_REQUIRES = [
"codecov",
"coverage<5",
| {"golden_diff": "diff --git a/nipype/info.py b/nipype/info.py\n--- a/nipype/info.py\n+++ b/nipype/info.py\n@@ -56,6 +56,7 @@\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n+ \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering\",\n ]\n PYTHON_REQUIRES = \">= 3.6\"\n@@ -109,6 +110,7 @@\n FUTURE_MIN_VERSION = \"0.16.0\"\n SIMPLEJSON_MIN_VERSION = \"3.8.0\"\n PROV_VERSION = \"1.5.2\"\n+RDFLIB_MIN_VERSION = \"5.0.0\"\n CLICK_MIN_VERSION = \"6.6.0\"\n PYDOT_MIN_VERSION = \"1.2.3\"\n \n@@ -143,6 +145,7 @@\n \"pydot>=%s\" % PYDOT_MIN_VERSION,\n \"pydotplus\",\n \"python-dateutil>=%s\" % DATEUTIL_MIN_VERSION,\n+ \"rdflib>=%s\" % RDFLIB_MIN_VERSION,\n \"scipy>=%s\" % SCIPY_MIN_VERSION,\n \"simplejson>=%s\" % SIMPLEJSON_MIN_VERSION,\n \"traits>=%s,!=5.0\" % TRAITS_MIN_VERSION,\n@@ -150,10 +153,6 @@\n \"etelemetry>=0.2.0\",\n ]\n \n-# neurdflib has to come after prov\n-# https://github.com/nipy/nipype/pull/2961#issuecomment-512035484\n-REQUIRES += [\"neurdflib\"]\n-\n TESTS_REQUIRES = [\n \"codecov\",\n \"coverage<5\",\n", "issue": "Test on Python 3.8\n### Summary\r\n\r\nPython 3.8 has been out for a month or so, and upstream libraries are mostly providing wheels. We should start 3.8 tests when it's feasible.\r\n\r\nNecessary steps:\r\n\r\n* [ ] Update `.travis.yml` to use Python 3.8\r\n* [ ] Note any failures, identify whether they exist on our end or in a dependency\r\n* [ ] Find blocking upstream issues and link to them so we can track\r\n* [ ] When tests pass and urgent `FutureWarning`/`DeprecationWarning`s are dealt with, update the classifiers in `nipype/info.py` to indicate we support Python 3.8.\r\n\r\nI marked this [](https://github.com/nipy/nipype/labels/good-first-issue) but it's less for a new developer than someone looking to get more involved in project maintenance.\n", "before_files": [{"content": "\"\"\" This file contains defines parameters for nipy that we use to fill\nsettings in setup.py, the nipy top-level docstring, and for building the\ndocs. 
In setup.py in particular, we exec this file, so it cannot import nipy\n\"\"\"\n\n# nipype version information\n# Remove -dev for release\n__version__ = \"1.5.0-rc1.post-dev\"\n\n\ndef get_nipype_gitversion():\n \"\"\"Nipype version as reported by the last commit in git\n\n Returns\n -------\n None or str\n Version of Nipype according to git.\n \"\"\"\n import os\n import subprocess\n\n try:\n import nipype\n\n gitpath = os.path.realpath(\n os.path.join(os.path.dirname(nipype.__file__), os.path.pardir)\n )\n except:\n gitpath = os.getcwd()\n gitpathgit = os.path.join(gitpath, \".git\")\n if not os.path.exists(gitpathgit):\n return None\n ver = None\n try:\n o, _ = subprocess.Popen(\n \"git describe\", shell=True, cwd=gitpath, stdout=subprocess.PIPE\n ).communicate()\n except Exception:\n pass\n else:\n ver = o.decode().strip().split(\"-\")[-1]\n return ver\n\n\nif __version__.endswith(\"-dev\"):\n gitversion = get_nipype_gitversion()\n if gitversion:\n __version__ = \"{}+{}\".format(__version__, gitversion)\n\nCLASSIFIERS = [\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering\",\n]\nPYTHON_REQUIRES = \">= 3.6\"\n\ndescription = \"Neuroimaging in Python: Pipelines and Interfaces\"\n\n# Note: this long_description is actually a copy/paste from the top-level\n# README.txt, so that it shows up nicely on PyPI. So please remember to edit\n# it only in one place and sync it correctly.\nlong_description = \"\"\"========================================================\nNIPYPE: Neuroimaging in Python: Pipelines and Interfaces\n========================================================\n\nCurrent neuroimaging software offer users an incredible opportunity to\nanalyze data using a variety of different algorithms. However, this has\nresulted in a heterogeneous collection of specialized applications\nwithout transparent interoperability or a uniform operating interface.\n\n*Nipype*, an open-source, community-developed initiative under the\numbrella of `NiPy <http://nipy.org>`_, is a Python project that provides a\nuniform interface to existing neuroimaging software and facilitates interaction\nbetween these packages within a single workflow. Nipype provides an environment\nthat encourages interactive exploration of algorithms from different\npackages (e.g., AFNI, ANTS, BRAINS, BrainSuite, Camino, FreeSurfer, FSL, MNE,\nMRtrix, MNE, Nipy, Slicer, SPM), eases the design of workflows within and\nbetween packages, and reduces the learning curve necessary to use different \\\npackages. 
Nipype is creating a collaborative platform for neuroimaging \\\nsoftware development in a high-level language and addressing limitations of \\\nexisting pipeline systems.\n\n*Nipype* allows you to:\n\n* easily interact with tools from different software packages\n* combine processing steps from different software packages\n* develop new workflows faster by reusing common steps from old ones\n* process data faster by running it in parallel on many cores/machines\n* make your research easily reproducible\n* share your processing workflows with the community\n\"\"\"\n\n# versions\nNIBABEL_MIN_VERSION = \"2.1.0\"\nNETWORKX_MIN_VERSION = \"1.9\"\nNUMPY_MIN_VERSION = \"1.13\"\n# Numpy bug in python 3.7:\n# https://www.opensourceanswers.com/blog/you-shouldnt-use-python-37-for-data-science-right-now.html\nNUMPY_MIN_VERSION_37 = \"1.15.3\"\nSCIPY_MIN_VERSION = \"0.14\"\nTRAITS_MIN_VERSION = \"4.6\"\nDATEUTIL_MIN_VERSION = \"2.2\"\nFUTURE_MIN_VERSION = \"0.16.0\"\nSIMPLEJSON_MIN_VERSION = \"3.8.0\"\nPROV_VERSION = \"1.5.2\"\nCLICK_MIN_VERSION = \"6.6.0\"\nPYDOT_MIN_VERSION = \"1.2.3\"\n\nNAME = \"nipype\"\nMAINTAINER = \"nipype developers\"\nMAINTAINER_EMAIL = \"[email protected]\"\nDESCRIPTION = description\nLONG_DESCRIPTION = long_description\nURL = \"http://nipy.org/nipype\"\nDOWNLOAD_URL = \"http://github.com/nipy/nipype/archives/master\"\nLICENSE = \"Apache License, 2.0\"\nAUTHOR = \"nipype developers\"\nAUTHOR_EMAIL = \"[email protected]\"\nPLATFORMS = \"OS Independent\"\nMAJOR = __version__.split(\".\")[0]\nMINOR = __version__.split(\".\")[1]\nMICRO = __version__.replace(\"-\", \".\").split(\".\")[2]\nISRELEASE = (\n len(__version__.replace(\"-\", \".\").split(\".\")) == 3\n or \"post\" in __version__.replace(\"-\", \".\").split(\".\")[-1]\n)\nVERSION = __version__\nPROVIDES = [\"nipype\"]\nREQUIRES = [\n \"click>=%s\" % CLICK_MIN_VERSION,\n \"networkx>=%s\" % NETWORKX_MIN_VERSION,\n \"nibabel>=%s\" % NIBABEL_MIN_VERSION,\n 'numpy>=%s ; python_version < \"3.7\"' % NUMPY_MIN_VERSION,\n 'numpy>=%s ; python_version >= \"3.7\"' % NUMPY_MIN_VERSION_37,\n \"packaging\",\n \"prov>=%s\" % PROV_VERSION,\n \"pydot>=%s\" % PYDOT_MIN_VERSION,\n \"pydotplus\",\n \"python-dateutil>=%s\" % DATEUTIL_MIN_VERSION,\n \"scipy>=%s\" % SCIPY_MIN_VERSION,\n \"simplejson>=%s\" % SIMPLEJSON_MIN_VERSION,\n \"traits>=%s,!=5.0\" % TRAITS_MIN_VERSION,\n \"filelock>=3.0.0\",\n \"etelemetry>=0.2.0\",\n]\n\n# neurdflib has to come after prov\n# https://github.com/nipy/nipype/pull/2961#issuecomment-512035484\nREQUIRES += [\"neurdflib\"]\n\nTESTS_REQUIRES = [\n \"codecov\",\n \"coverage<5\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-env\",\n \"pytest-timeout\",\n]\n\nEXTRA_REQUIRES = {\n \"data\": [\"datalad\"],\n \"doc\": [\n \"dipy\",\n \"ipython\",\n \"matplotlib\",\n \"nbsphinx\",\n \"sphinx-argparse\",\n \"sphinx>=2.1.2\",\n \"sphinxcontrib-apidoc\",\n \"sphinxcontrib-napoleon\",\n ],\n \"duecredit\": [\"duecredit\"],\n \"nipy\": [\"nitime\", \"nilearn<0.5.0\", \"dipy\", \"nipy\", \"matplotlib\"],\n \"profiler\": [\"psutil>=5.0\"],\n \"pybids\": [\"pybids>=0.7.0\"],\n \"specs\": [\"black\"],\n \"ssh\": [\"paramiko\"],\n \"tests\": TESTS_REQUIRES,\n \"xvfbwrapper\": [\"xvfbwrapper\"],\n # 'mesh': ['mayavi'] # Enable when it works\n}\n\n\ndef _list_union(iterable):\n return list(set(sum(iterable, [])))\n\n\n# Enable a handle to install all extra dependencies at once\nEXTRA_REQUIRES[\"all\"] = _list_union(EXTRA_REQUIRES.values())\n# dev = doc + tests + specs\nEXTRA_REQUIRES[\"dev\"] = _list_union(\n val for key, val in 
EXTRA_REQUIRES.items() if key in (\"doc\", \"tests\", \"specs\")\n)\n\nSTATUS = \"stable\"\n", "path": "nipype/info.py"}]} | 3,043 | 410 |
gh_patches_debug_26194 | rasdani/github-patches | git_diff | streamlink__streamlink-95 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Connectcast stream fails with "invalid url"
Attempting to load an active connectcast stream via `streamlink connectcast.tv/streamname` results in an error:
`error: Unable to open URL: (Invalid URL '': No schema supplied. Perhaps you mean http://?)`
Similarly, using `http://connectcast.tv/streamname` for the url also fails.
Running on Windows, built with python 3.5.0rc2
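
For context, the message comes from the underlying `requests` library when it is handed an empty URL; a plausible reading (an assumption, not confirmed in the report) is that the page's `data-playback` attribute was empty, so an empty manifest URL reached the HDS fetch in the plugin below:
```python
import requests

try:
    requests.get("")  # an empty URL, which is what the plugin ends up requesting here
except requests.exceptions.MissingSchema as exc:
    print(exc)  # Invalid URL '': No schema supplied. ...
```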
</issue>
<code>
[start of src/streamlink/plugins/connectcast.py]
1 import re
2 import json
3
4 from streamlink.plugin import Plugin
5 from streamlink.plugin.api import http, validate
6 from streamlink.stream import HDSStream
7
8 SWF_URL = "https://www.connectcast.tv/jwplayer/jwplayer.flash.swf"
9
10 _url_re = re.compile("http(s)?://(\w+\.)?connectcast.tv/")
11 _manifest_re = re.compile(".*data-playback=\"([^\"]*)\".*")
12
13
14 class ConnectCast(Plugin):
15 @classmethod
16 def can_handle_url(self, url):
17 return _url_re.match(url)
18
19 def _get_streams(self):
20 res = http.get(self.url)
21 match = _manifest_re.search(res.text)
22 manifest = match.group(1)
23 streams = {}
24 streams.update(
25 HDSStream.parse_manifest(self.session, manifest, pvswf=SWF_URL)
26 )
27
28 return streams
29
30 __plugin__ = ConnectCast
31
[end of src/streamlink/plugins/connectcast.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/connectcast.py b/src/streamlink/plugins/connectcast.py
--- a/src/streamlink/plugins/connectcast.py
+++ b/src/streamlink/plugins/connectcast.py
@@ -3,13 +3,11 @@
from streamlink.plugin import Plugin
from streamlink.plugin.api import http, validate
-from streamlink.stream import HDSStream
-
-SWF_URL = "https://www.connectcast.tv/jwplayer/jwplayer.flash.swf"
-
-_url_re = re.compile("http(s)?://(\w+\.)?connectcast.tv/")
-_manifest_re = re.compile(".*data-playback=\"([^\"]*)\".*")
+from streamlink.stream import RTMPStream
+_url_re = re.compile(r"http(?:s)?://connectcast.tv/(\w+)?")
+_stream_re = re.compile(r'<video src="mp4:(.*?)"')
+_stream_url = "http://connectcast.tv/channel/stream/{channel}"
class ConnectCast(Plugin):
@classmethod
@@ -17,14 +15,15 @@
return _url_re.match(url)
def _get_streams(self):
- res = http.get(self.url)
- match = _manifest_re.search(res.text)
- manifest = match.group(1)
- streams = {}
- streams.update(
- HDSStream.parse_manifest(self.session, manifest, pvswf=SWF_URL)
- )
-
- return streams
+ url_match = _url_re.match(self.url)
+ stream_url = _stream_url.format(channel=url_match.group(1))
+ res = self.session.http.get(stream_url)
+ match = _stream_re.search(res.content)
+ if match:
+ params = dict(rtmp="rtmp://stream.connectcast.tv/live",
+ playpath=match.group(1),
+ live=True)
+
+ return dict(live=RTMPStream(self.session, params))
__plugin__ = ConnectCast
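For reference, the URL regex and endpoint template introduced by this patch can be exercised on their own. The sketch below only reuses those two constants from the diff; it is not a substitute for the plugin, which still has to fetch the channel page and build the RTMP parameters.

```python
import re

# Constants taken from the patch above.
_url_re = re.compile(r"http(?:s)?://connectcast.tv/(\w+)?")
_stream_url = "http://connectcast.tv/channel/stream/{channel}"

def stream_endpoint(url):
    match = _url_re.match(url)
    if not match or not match.group(1):
        return None  # no channel name in the URL
    return _stream_url.format(channel=match.group(1))

print(stream_endpoint("http://connectcast.tv/streamname"))
# -> http://connectcast.tv/channel/stream/streamname
```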
| {"golden_diff": "diff --git a/src/streamlink/plugins/connectcast.py b/src/streamlink/plugins/connectcast.py\n--- a/src/streamlink/plugins/connectcast.py\n+++ b/src/streamlink/plugins/connectcast.py\n@@ -3,13 +3,11 @@\n \n from streamlink.plugin import Plugin\n from streamlink.plugin.api import http, validate\n-from streamlink.stream import HDSStream\n-\n-SWF_URL = \"https://www.connectcast.tv/jwplayer/jwplayer.flash.swf\"\n-\n-_url_re = re.compile(\"http(s)?://(\\w+\\.)?connectcast.tv/\")\n-_manifest_re = re.compile(\".*data-playback=\\\"([^\\\"]*)\\\".*\")\n+from streamlink.stream import RTMPStream\n \n+_url_re = re.compile(r\"http(?:s)?://connectcast.tv/(\\w+)?\")\n+_stream_re = re.compile(r'<video src=\"mp4:(.*?)\"')\n+_stream_url = \"http://connectcast.tv/channel/stream/{channel}\"\n \n class ConnectCast(Plugin):\n @classmethod\n@@ -17,14 +15,15 @@\n return _url_re.match(url)\n \n def _get_streams(self):\n- res = http.get(self.url)\n- match = _manifest_re.search(res.text)\n- manifest = match.group(1)\n- streams = {}\n- streams.update(\n- HDSStream.parse_manifest(self.session, manifest, pvswf=SWF_URL)\n- )\n- \n- return streams\n+ url_match = _url_re.match(self.url)\n+ stream_url = _stream_url.format(channel=url_match.group(1))\n+ res = self.session.http.get(stream_url)\n+ match = _stream_re.search(res.content)\n+ if match:\n+ params = dict(rtmp=\"rtmp://stream.connectcast.tv/live\",\n+ playpath=match.group(1),\n+ live=True)\n+\n+ return dict(live=RTMPStream(self.session, params))\n \n __plugin__ = ConnectCast\n", "issue": "Connectcast stream fails with \"invalid url\"\nAttempting to load an active connectcast stream via `streamlink connectcast.tv/streamname` results in an error:\n`error: Unable to open URL: (Invalid URL '': No schema supplied. Perhaps you mean http://?)`\n\nSimilarly, using `http://connectcast.tv/streamname` for the url also fails.\n\nRunning on Windows, built with python 3.5.0rc2\n\n", "before_files": [{"content": "import re\nimport json\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate\nfrom streamlink.stream import HDSStream\n\nSWF_URL = \"https://www.connectcast.tv/jwplayer/jwplayer.flash.swf\"\n\n_url_re = re.compile(\"http(s)?://(\\w+\\.)?connectcast.tv/\")\n_manifest_re = re.compile(\".*data-playback=\\\"([^\\\"]*)\\\".*\")\n\n\nclass ConnectCast(Plugin):\n @classmethod\n def can_handle_url(self, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n res = http.get(self.url)\n match = _manifest_re.search(res.text)\n manifest = match.group(1)\n streams = {}\n streams.update(\n HDSStream.parse_manifest(self.session, manifest, pvswf=SWF_URL)\n )\n \n return streams\n\n__plugin__ = ConnectCast\n", "path": "src/streamlink/plugins/connectcast.py"}]} | 883 | 426 |
gh_patches_debug_25428 | rasdani/github-patches | git_diff | scikit-image__scikit-image-3210 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`watershed_demo` fails on mouse click
## Description
```
[egor@host scikit-image]$ python viewer_examples/plugins/watershed_demo.py
Watershed plugin
----------------
Use mouse to paint each region with a different label.
Press OK to display segmented image.
Traceback (most recent call last):
File "/home/egor/.local/lib/python3.6/site-packages/matplotlib/cbook/__init__.py", line 388, in process
proxy(*args, **kwargs)
File "/home/egor/.local/lib/python3.6/site-packages/matplotlib/cbook/__init__.py", line 228, in __call__
return mtd(*args, **kwargs)
File "/home/egor/Workspace/_contrib/scikit-image/skimage/viewer/utils/canvas.py", line 75, in on_mouse_press
self.active_tool.on_mouse_press(event)
File "/home/egor/Workspace/_contrib/scikit-image/skimage/viewer/canvastools/painttool.py", line 149, in on_mouse_press
self.update_overlay(event.xdata, event.ydata)
File "/home/egor/Workspace/_contrib/scikit-image/skimage/viewer/canvastools/painttool.py", line 172, in update_overlay
overlay[self.window.at(y, x)] = self.label
TypeError: slice indices must be integers or None or have an __index__ method
```
## Way to reproduce
[If reporting a bug, please include the following important information:]
- [x] Code example
- [x] Relevant images (if any)
- [x] Operating system and version: `Linux 4.14.49-1-lts #1 SMP Tue Jun 12 16:32:50 CEST 2018 x86_64 GNU/Linux`
- [x] Python version: 3.6.5
- [x] scikit-image version (run `skimage.__version__`): https://github.com/scikit-image/scikit-image/commit/18f97d864a9468555851aac08c731b6813db2091
- [x] matplotlib: 2.2.2
</issue>
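The traceback comes down to NumPy refusing float slice bounds: the mouse coordinates arrive as `np.float64` (the patch for this issue notes this explicitly) and flow unrounded into the paint window slices. A standalone reproduction using only NumPy, with made-up coordinates:

```python
import numpy as np

overlay = np.zeros((8, 8), dtype='uint8')
row, col, r = np.float64(3.6), np.float64(4.2), 2  # event coordinates are floats

try:
    overlay[slice(row - r, row + r + 1), slice(col - r, col + r + 1)] = 1
except TypeError as err:
    print("float slices fail:", err)

# Rounding to plain Python ints first gives a valid paint window.
row_i, col_i = int(round(row)), int(round(col))
overlay[max(0, row_i - r):row_i + r + 1, max(0, col_i - r):col_i + r + 1] = 1
print(int(overlay.sum()), "pixels painted")
```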
<code>
[start of viewer_examples/plugins/watershed_demo.py]
1 import matplotlib.pyplot as plt
2
3 from skimage import data
4 from skimage import filters
5 from skimage import morphology
6 from skimage.viewer import ImageViewer
7 from skimage.viewer.widgets import history
8 from skimage.viewer.plugins.labelplugin import LabelPainter
9
10
11 class OKCancelButtons(history.OKCancelButtons):
12
13 def update_original_image(self):
14 # OKCancelButtons updates the original image with the filtered image
15 # by default. Override this method to update the overlay.
16 self.plugin._show_watershed()
17 self.plugin.close()
18
19
20 class WatershedPlugin(LabelPainter):
21
22 def help(self):
23 helpstr = ("Watershed plugin",
24 "----------------",
25 "Use mouse to paint each region with a different label.",
26 "Press OK to display segmented image.")
27 return '\n'.join(helpstr)
28
29 def _show_watershed(self):
30 viewer = self.image_viewer
31 edge_image = filter.sobel(viewer.image)
32 labels = morphology.watershed(edge_image, self.paint_tool.overlay)
33 viewer.ax.imshow(labels, cmap=plt.cm.jet, alpha=0.5)
34 viewer.redraw()
35
36
37 image = data.coins()
38 plugin = WatershedPlugin()
39 plugin += OKCancelButtons()
40
41 viewer = ImageViewer(image)
42 viewer += plugin
43 viewer.show()
44
[end of viewer_examples/plugins/watershed_demo.py]
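One more thing to note in the demo script above, beyond the traceback in the report: `_show_watershed` calls `filter.sobel` although the module imports `filters`, so once painting works, pressing OK would still fail with an `AttributeError` on the built-in `filter`. A minimal check of the corrected call, assuming scikit-image is installed:

```python
from skimage import data, filters

edges = filters.sobel(data.coins())
print(edges.shape, edges.dtype)
```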
[start of skimage/viewer/canvastools/painttool.py]
1 import numpy as np
2 import matplotlib.pyplot as plt
3 import matplotlib.colors as mcolors
4 LABELS_CMAP = mcolors.ListedColormap(['white', 'red', 'dodgerblue', 'gold',
5 'greenyellow', 'blueviolet'])
6 from ...viewer.canvastools.base import CanvasToolBase
7
8
9 __all__ = ['PaintTool']
10
11
12 class PaintTool(CanvasToolBase):
13 """Widget for painting on top of a plot.
14
15 Parameters
16 ----------
17 manager : Viewer or PlotPlugin.
18 Skimage viewer or plot plugin object.
19 overlay_shape : shape tuple
20 2D shape tuple used to initialize overlay image.
21 alpha : float (between [0, 1])
22 Opacity of overlay
23 on_move : function
24 Function called whenever a control handle is moved.
25 This function must accept the end points of line as the only argument.
26 on_release : function
27 Function called whenever the control handle is released.
28 on_enter : function
29 Function called whenever the "enter" key is pressed.
30 rect_props : dict
31 Properties for :class:`matplotlib.patches.Rectangle`. This class
32 redefines defaults in :class:`matplotlib.widgets.RectangleSelector`.
33
34 Attributes
35 ----------
36 overlay : array
37 Overlay of painted labels displayed on top of image.
38 label : int
39 Current paint color.
40
41 Examples
42 ----------
43 >>> from skimage.data import camera
44 >>> import matplotlib.pyplot as plt
45 >>> from skimage.viewer.canvastools import PaintTool
46 >>> import numpy as np
47
48 >>> img = camera() #doctest: +SKIP
49
50 >>> ax = plt.subplot(111) #doctest: +SKIP
51 >>> plt.imshow(img, cmap=plt.cm.gray) #doctest: +SKIP
52 >>> p = PaintTool(ax,np.shape(img[:-1]),10,0.2) #doctest: +SKIP
53 >>> plt.show() #doctest: +SKIP
54
55 >>> mask = p.overlay #doctest: +SKIP
56 >>> plt.imshow(mask,cmap=plt.cm.gray) #doctest: +SKIP
57 >>> plt.show() #doctest: +SKIP
58 """
59 def __init__(self, manager, overlay_shape, radius=5, alpha=0.3,
60 on_move=None, on_release=None, on_enter=None,
61 rect_props=None):
62 super(PaintTool, self).__init__(manager, on_move=on_move,
63 on_enter=on_enter,
64 on_release=on_release)
65
66 props = dict(edgecolor='r', facecolor='0.7', alpha=0.5, animated=True)
67 props.update(rect_props if rect_props is not None else {})
68
69 self.alpha = alpha
70 self.cmap = LABELS_CMAP
71 self._overlay_plot = None
72 self.shape = overlay_shape
73
74 self._cursor = plt.Rectangle((0, 0), 0, 0, **props)
75 self._cursor.set_visible(False)
76 self.ax.add_patch(self._cursor)
77
78 # `label` and `radius` can only be set after initializing `_cursor`
79 self.label = 1
80 self.radius = radius
81
82 # Note that the order is important: Redraw cursor *after* overlay
83 self.artists = [self._overlay_plot, self._cursor]
84 self.manager.add_tool(self)
85
86 @property
87 def label(self):
88 return self._label
89
90 @label.setter
91 def label(self, value):
92 if value >= self.cmap.N:
93 raise ValueError('Maximum label value = %s' % len(self.cmap - 1))
94 self._label = value
95 self._cursor.set_edgecolor(self.cmap(value))
96
97 @property
98 def radius(self):
99 return self._radius
100
101 @radius.setter
102 def radius(self, r):
103 self._radius = r
104 self._width = 2 * r + 1
105 self._cursor.set_width(self._width)
106 self._cursor.set_height(self._width)
107 self.window = CenteredWindow(r, self._shape)
108
109 @property
110 def overlay(self):
111 return self._overlay
112
113 @overlay.setter
114 def overlay(self, image):
115 self._overlay = image
116 if image is None:
117 self.ax.images.remove(self._overlay_plot)
118 self._overlay_plot = None
119 elif self._overlay_plot is None:
120 props = dict(cmap=self.cmap, alpha=self.alpha,
121 norm=mcolors.NoNorm(), animated=True)
122 self._overlay_plot = self.ax.imshow(image, **props)
123 else:
124 self._overlay_plot.set_data(image)
125 self.redraw()
126
127 @property
128 def shape(self):
129 return self._shape
130
131 @shape.setter
132 def shape(self, shape):
133 self._shape = shape
134 if not self._overlay_plot is None:
135 self._overlay_plot.set_extent((-0.5, shape[1] + 0.5,
136 shape[0] + 0.5, -0.5))
137 self.radius = self._radius
138 self.overlay = np.zeros(shape, dtype='uint8')
139
140 def on_key_press(self, event):
141 if event.key == 'enter':
142 self.callback_on_enter(self.geometry)
143 self.redraw()
144
145 def on_mouse_press(self, event):
146 if event.button != 1 or not self.ax.in_axes(event):
147 return
148 self.update_cursor(event.xdata, event.ydata)
149 self.update_overlay(event.xdata, event.ydata)
150
151 def on_mouse_release(self, event):
152 if event.button != 1:
153 return
154 self.callback_on_release(self.geometry)
155
156 def on_move(self, event):
157 if not self.ax.in_axes(event):
158 self._cursor.set_visible(False)
159 self.redraw() # make sure cursor is not visible
160 return
161 self._cursor.set_visible(True)
162
163 self.update_cursor(event.xdata, event.ydata)
164 if event.button != 1:
165 self.redraw() # update cursor position
166 return
167 self.update_overlay(event.xdata, event.ydata)
168 self.callback_on_move(self.geometry)
169
170 def update_overlay(self, x, y):
171 overlay = self.overlay
172 overlay[self.window.at(y, x)] = self.label
173 # Note that overlay calls `redraw`
174 self.overlay = overlay
175
176 def update_cursor(self, x, y):
177 x = x - self.radius - 1
178 y = y - self.radius - 1
179 self._cursor.set_xy((x, y))
180
181 @property
182 def geometry(self):
183 return self.overlay
184
185
186 class CenteredWindow(object):
187 """Window that create slices numpy arrays over 2D windows.
188
189 Examples
190 --------
191 >>> a = np.arange(16).reshape(4, 4)
192 >>> w = CenteredWindow(1, a.shape)
193 >>> a[w.at(1, 1)]
194 array([[ 0, 1, 2],
195 [ 4, 5, 6],
196 [ 8, 9, 10]])
197 >>> a[w.at(0, 0)]
198 array([[0, 1],
199 [4, 5]])
200 >>> a[w.at(4, 3)]
201 array([[14, 15]])
202 """
203 def __init__(self, radius, array_shape):
204 self.radius = radius
205 self.array_shape = array_shape
206
207 def at(self, row, col):
208 h, w = self.array_shape
209 r = self.radius
210 xmin = max(0, col - r)
211 xmax = min(w, col + r + 1)
212 ymin = max(0, row - r)
213 ymax = min(h, row + r + 1)
214 return [slice(ymin, ymax), slice(xmin, xmax)]
215
216
217 if __name__ == '__main__': # pragma: no cover
218 np.testing.rundocs()
219 from ... import data
220 from ...viewer import ImageViewer
221
222 image = data.camera()
223
224 viewer = ImageViewer(image)
225 paint_tool = PaintTool(viewer, image.shape)
226 viewer.show()
227
[end of skimage/viewer/canvastools/painttool.py]
</code>
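A side note on `CenteredWindow.at` above: besides needing integer bounds, newer NumPy releases expect a tuple rather than a list when indexing with multiple slices (non-tuple sequence indexing was deprecated), and the patch for this issue further below also switches the return value to a tuple. A minimal illustration:

```python
import numpy as np

a = np.arange(16).reshape(4, 4)

window_tuple = (slice(1, 3), slice(0, 2))
print(a[window_tuple])  # the supported multi-dimensional form

window_list = [slice(1, 3), slice(0, 2)]
try:
    # Depending on the NumPy version this warns (FutureWarning) or raises.
    print(a[window_list])
except Exception as err:
    print("list indexing rejected:", err)
```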
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/viewer/canvastools/painttool.py b/skimage/viewer/canvastools/painttool.py
--- a/skimage/viewer/canvastools/painttool.py
+++ b/skimage/viewer/canvastools/painttool.py
@@ -206,12 +206,21 @@
def at(self, row, col):
h, w = self.array_shape
- r = self.radius
+ r = round(self.radius)
+ # Note: the int() cast is necessary because row and col are np.float64,
+ # which does not get cast by round(), unlike a normal Python float:
+ # >>> round(4.5)
+ # 4
+ # >>> round(np.float64(4.5))
+ # 4.0
+ # >>> int(round(np.float64(4.5)))
+ # 4
+ row, col = int(round(row)), int(round(col))
xmin = max(0, col - r)
xmax = min(w, col + r + 1)
ymin = max(0, row - r)
ymax = min(h, row + r + 1)
- return [slice(ymin, ymax), slice(xmin, xmax)]
+ return (slice(ymin, ymax), slice(xmin, xmax))
if __name__ == '__main__': # pragma: no cover
diff --git a/viewer_examples/plugins/watershed_demo.py b/viewer_examples/plugins/watershed_demo.py
--- a/viewer_examples/plugins/watershed_demo.py
+++ b/viewer_examples/plugins/watershed_demo.py
@@ -28,7 +28,7 @@
def _show_watershed(self):
viewer = self.image_viewer
- edge_image = filter.sobel(viewer.image)
+ edge_image = filters.sobel(viewer.image)
labels = morphology.watershed(edge_image, self.paint_tool.overlay)
viewer.ax.imshow(labels, cmap=plt.cm.jet, alpha=0.5)
viewer.redraw()
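The comment added in this diff is worth checking in isolation, because the behaviour is easy to miss: with the NumPy releases in use when the issue was reported, `round()` on a `numpy.float64` still returned a float, hence the extra `int()` before slicing.

```python
import numpy as np

for value in (4.5, np.float64(4.5)):
    rounded = round(value)
    print(type(value).__name__, '->', repr(rounded), type(rounded).__name__)
# Per the comment in the patch, the np.float64 case came back as 4.0 (still a
# float) at the time, which is why CenteredWindow.at wraps round() in int().
```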
| {"golden_diff": "diff --git a/skimage/viewer/canvastools/painttool.py b/skimage/viewer/canvastools/painttool.py\n--- a/skimage/viewer/canvastools/painttool.py\n+++ b/skimage/viewer/canvastools/painttool.py\n@@ -206,12 +206,21 @@\n \n def at(self, row, col):\n h, w = self.array_shape\n- r = self.radius\n+ r = round(self.radius)\n+ # Note: the int() cast is necessary because row and col are np.float64,\n+ # which does not get cast by round(), unlike a normal Python float:\n+ # >>> round(4.5)\n+ # 4\n+ # >>> round(np.float64(4.5))\n+ # 4.0\n+ # >>> int(round(np.float64(4.5)))\n+ # 4\n+ row, col = int(round(row)), int(round(col))\n xmin = max(0, col - r)\n xmax = min(w, col + r + 1)\n ymin = max(0, row - r)\n ymax = min(h, row + r + 1)\n- return [slice(ymin, ymax), slice(xmin, xmax)]\n+ return (slice(ymin, ymax), slice(xmin, xmax))\n \n \n if __name__ == '__main__': # pragma: no cover\ndiff --git a/viewer_examples/plugins/watershed_demo.py b/viewer_examples/plugins/watershed_demo.py\n--- a/viewer_examples/plugins/watershed_demo.py\n+++ b/viewer_examples/plugins/watershed_demo.py\n@@ -28,7 +28,7 @@\n \n def _show_watershed(self):\n viewer = self.image_viewer\n- edge_image = filter.sobel(viewer.image)\n+ edge_image = filters.sobel(viewer.image)\n labels = morphology.watershed(edge_image, self.paint_tool.overlay)\n viewer.ax.imshow(labels, cmap=plt.cm.jet, alpha=0.5)\n viewer.redraw()\n", "issue": "`watershed_demo` fails on mouse click\n## Description\r\n```\r\n[egor@host scikit-image]$ python viewer_examples/plugins/watershed_demo.py \r\nWatershed plugin\r\n----------------\r\nUse mouse to paint each region with a different label.\r\nPress OK to display segmented image.\r\nTraceback (most recent call last):\r\n File \"/home/egor/.local/lib/python3.6/site-packages/matplotlib/cbook/__init__.py\", line 388, in process\r\n proxy(*args, **kwargs)\r\n File \"/home/egor/.local/lib/python3.6/site-packages/matplotlib/cbook/__init__.py\", line 228, in __call__\r\n return mtd(*args, **kwargs)\r\n File \"/home/egor/Workspace/_contrib/scikit-image/skimage/viewer/utils/canvas.py\", line 75, in on_mouse_press\r\n self.active_tool.on_mouse_press(event)\r\n File \"/home/egor/Workspace/_contrib/scikit-image/skimage/viewer/canvastools/painttool.py\", line 149, in on_mouse_press\r\n self.update_overlay(event.xdata, event.ydata)\r\n File \"/home/egor/Workspace/_contrib/scikit-image/skimage/viewer/canvastools/painttool.py\", line 172, in update_overlay\r\n overlay[self.window.at(y, x)] = self.label\r\nTypeError: slice indices must be integers or None or have an __index__ method\r\n```\r\n\r\n## Way to reproduce\r\n[If reporting a bug, please include the following important information:]\r\n- [x] Code example\r\n- [x] Relevant images (if any)\r\n- [x] Operating system and version: `Linux 4.14.49-1-lts #1 SMP Tue Jun 12 16:32:50 CEST 2018 x86_64 GNU/Linux`\r\n- [x] Python version: 3.6.5\r\n- [x] scikit-image version (run `skimage.__version__`): https://github.com/scikit-image/scikit-image/commit/18f97d864a9468555851aac08c731b6813db2091\r\n- [x] matplotlib: 2.2.2\r\n\r\n\n", "before_files": [{"content": "import matplotlib.pyplot as plt\n\nfrom skimage import data\nfrom skimage import filters\nfrom skimage import morphology\nfrom skimage.viewer import ImageViewer\nfrom skimage.viewer.widgets import history\nfrom skimage.viewer.plugins.labelplugin import LabelPainter\n\n\nclass OKCancelButtons(history.OKCancelButtons):\n\n def update_original_image(self):\n # OKCancelButtons updates the original image with the filtered 
image\n # by default. Override this method to update the overlay.\n self.plugin._show_watershed()\n self.plugin.close()\n\n\nclass WatershedPlugin(LabelPainter):\n\n def help(self):\n helpstr = (\"Watershed plugin\",\n \"----------------\",\n \"Use mouse to paint each region with a different label.\",\n \"Press OK to display segmented image.\")\n return '\\n'.join(helpstr)\n\n def _show_watershed(self):\n viewer = self.image_viewer\n edge_image = filter.sobel(viewer.image)\n labels = morphology.watershed(edge_image, self.paint_tool.overlay)\n viewer.ax.imshow(labels, cmap=plt.cm.jet, alpha=0.5)\n viewer.redraw()\n\n\nimage = data.coins()\nplugin = WatershedPlugin()\nplugin += OKCancelButtons()\n\nviewer = ImageViewer(image)\nviewer += plugin\nviewer.show()\n", "path": "viewer_examples/plugins/watershed_demo.py"}, {"content": "import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.colors as mcolors\nLABELS_CMAP = mcolors.ListedColormap(['white', 'red', 'dodgerblue', 'gold',\n 'greenyellow', 'blueviolet'])\nfrom ...viewer.canvastools.base import CanvasToolBase\n\n\n__all__ = ['PaintTool']\n\n\nclass PaintTool(CanvasToolBase):\n \"\"\"Widget for painting on top of a plot.\n\n Parameters\n ----------\n manager : Viewer or PlotPlugin.\n Skimage viewer or plot plugin object.\n overlay_shape : shape tuple\n 2D shape tuple used to initialize overlay image.\n alpha : float (between [0, 1])\n Opacity of overlay\n on_move : function\n Function called whenever a control handle is moved.\n This function must accept the end points of line as the only argument.\n on_release : function\n Function called whenever the control handle is released.\n on_enter : function\n Function called whenever the \"enter\" key is pressed.\n rect_props : dict\n Properties for :class:`matplotlib.patches.Rectangle`. 
This class\n redefines defaults in :class:`matplotlib.widgets.RectangleSelector`.\n\n Attributes\n ----------\n overlay : array\n Overlay of painted labels displayed on top of image.\n label : int\n Current paint color.\n\n Examples\n ----------\n >>> from skimage.data import camera\n >>> import matplotlib.pyplot as plt\n >>> from skimage.viewer.canvastools import PaintTool\n >>> import numpy as np\n\n >>> img = camera() #doctest: +SKIP\n\n >>> ax = plt.subplot(111) #doctest: +SKIP \n >>> plt.imshow(img, cmap=plt.cm.gray) #doctest: +SKIP\n >>> p = PaintTool(ax,np.shape(img[:-1]),10,0.2) #doctest: +SKIP\n >>> plt.show() #doctest: +SKIP\n\n >>> mask = p.overlay #doctest: +SKIP\n >>> plt.imshow(mask,cmap=plt.cm.gray) #doctest: +SKIP\n >>> plt.show() #doctest: +SKIP\n \"\"\"\n def __init__(self, manager, overlay_shape, radius=5, alpha=0.3,\n on_move=None, on_release=None, on_enter=None,\n rect_props=None):\n super(PaintTool, self).__init__(manager, on_move=on_move,\n on_enter=on_enter,\n on_release=on_release)\n\n props = dict(edgecolor='r', facecolor='0.7', alpha=0.5, animated=True)\n props.update(rect_props if rect_props is not None else {})\n\n self.alpha = alpha\n self.cmap = LABELS_CMAP\n self._overlay_plot = None\n self.shape = overlay_shape\n\n self._cursor = plt.Rectangle((0, 0), 0, 0, **props)\n self._cursor.set_visible(False)\n self.ax.add_patch(self._cursor)\n\n # `label` and `radius` can only be set after initializing `_cursor`\n self.label = 1\n self.radius = radius\n\n # Note that the order is important: Redraw cursor *after* overlay\n self.artists = [self._overlay_plot, self._cursor]\n self.manager.add_tool(self)\n\n @property\n def label(self):\n return self._label\n\n @label.setter\n def label(self, value):\n if value >= self.cmap.N:\n raise ValueError('Maximum label value = %s' % len(self.cmap - 1))\n self._label = value\n self._cursor.set_edgecolor(self.cmap(value))\n\n @property\n def radius(self):\n return self._radius\n\n @radius.setter\n def radius(self, r):\n self._radius = r\n self._width = 2 * r + 1\n self._cursor.set_width(self._width)\n self._cursor.set_height(self._width)\n self.window = CenteredWindow(r, self._shape)\n\n @property\n def overlay(self):\n return self._overlay\n\n @overlay.setter\n def overlay(self, image):\n self._overlay = image\n if image is None:\n self.ax.images.remove(self._overlay_plot)\n self._overlay_plot = None\n elif self._overlay_plot is None:\n props = dict(cmap=self.cmap, alpha=self.alpha,\n norm=mcolors.NoNorm(), animated=True)\n self._overlay_plot = self.ax.imshow(image, **props)\n else:\n self._overlay_plot.set_data(image)\n self.redraw()\n\n @property\n def shape(self):\n return self._shape\n\n @shape.setter\n def shape(self, shape):\n self._shape = shape\n if not self._overlay_plot is None:\n self._overlay_plot.set_extent((-0.5, shape[1] + 0.5,\n shape[0] + 0.5, -0.5))\n self.radius = self._radius\n self.overlay = np.zeros(shape, dtype='uint8')\n\n def on_key_press(self, event):\n if event.key == 'enter':\n self.callback_on_enter(self.geometry)\n self.redraw()\n\n def on_mouse_press(self, event):\n if event.button != 1 or not self.ax.in_axes(event):\n return\n self.update_cursor(event.xdata, event.ydata)\n self.update_overlay(event.xdata, event.ydata)\n\n def on_mouse_release(self, event):\n if event.button != 1:\n return\n self.callback_on_release(self.geometry)\n\n def on_move(self, event):\n if not self.ax.in_axes(event):\n self._cursor.set_visible(False)\n self.redraw() # make sure cursor is not visible\n return\n 
self._cursor.set_visible(True)\n\n self.update_cursor(event.xdata, event.ydata)\n if event.button != 1:\n self.redraw() # update cursor position\n return\n self.update_overlay(event.xdata, event.ydata)\n self.callback_on_move(self.geometry)\n\n def update_overlay(self, x, y):\n overlay = self.overlay\n overlay[self.window.at(y, x)] = self.label\n # Note that overlay calls `redraw`\n self.overlay = overlay\n\n def update_cursor(self, x, y):\n x = x - self.radius - 1\n y = y - self.radius - 1\n self._cursor.set_xy((x, y))\n\n @property\n def geometry(self):\n return self.overlay\n\n\nclass CenteredWindow(object):\n \"\"\"Window that create slices numpy arrays over 2D windows.\n\n Examples\n --------\n >>> a = np.arange(16).reshape(4, 4)\n >>> w = CenteredWindow(1, a.shape)\n >>> a[w.at(1, 1)]\n array([[ 0, 1, 2],\n [ 4, 5, 6],\n [ 8, 9, 10]])\n >>> a[w.at(0, 0)]\n array([[0, 1],\n [4, 5]])\n >>> a[w.at(4, 3)]\n array([[14, 15]])\n \"\"\"\n def __init__(self, radius, array_shape):\n self.radius = radius\n self.array_shape = array_shape\n\n def at(self, row, col):\n h, w = self.array_shape\n r = self.radius\n xmin = max(0, col - r)\n xmax = min(w, col + r + 1)\n ymin = max(0, row - r)\n ymax = min(h, row + r + 1)\n return [slice(ymin, ymax), slice(xmin, xmax)]\n\n\nif __name__ == '__main__': # pragma: no cover\n np.testing.rundocs()\n from ... import data\n from ...viewer import ImageViewer\n\n image = data.camera()\n\n viewer = ImageViewer(image)\n paint_tool = PaintTool(viewer, image.shape)\n viewer.show()\n", "path": "skimage/viewer/canvastools/painttool.py"}]} | 3,769 | 464 |
gh_patches_debug_24662 | rasdani/github-patches | git_diff | hi-primus__optimus-1012 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip install not working
**Describe the bug**
I am unable to install optimuspyspark version 2.2.29 using pip.
**To Reproduce**
Steps to reproduce the behavior:
pip install fails with the message "No such file or directory: 'requirements.txt'"
**Expected behavior**
pip install should not fail
</issue>
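The message points at `setup.py`, which opens `requirements.txt` at install time; if that file is not shipped inside the source distribution, `pip install` stops with exactly this `FileNotFoundError`. As a sketch of a defensive pattern only (not the project's actual fix), the reader could fall back to a pinned list when the file is absent; the fallback pin below is invented for the example:

```python
import os

def read_requirements(path, fallback=()):
    """Return requirements from *path* if it was shipped, else a fallback list."""
    if os.path.exists(path):
        with open(path) as f:
            return [line.strip() for line in f
                    if line.strip() and not line.startswith('#')]
    return list(fallback)

# 'pyspark>=2.4.0' is a made-up placeholder, not the project's real pin.
required = read_requirements('requirements.txt', fallback=['pyspark>=2.4.0'])
print(required)
```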
<code>
[start of optimus/version.py]
1 def _safe_int(string):
2 try:
3 return int(string)
4 except ValueError:
5 return string
6
7
8 __version__ = '2.2.31'
9 VERSION = tuple(_safe_int(x) for x in __version__.split('.'))
10
[end of optimus/version.py]
[start of setup.py]
1 import os
2 import re
3 import sys
4
5 from setuptools import setup, find_packages
6
7
8 # from optimus.version import __version__
9
10 # Get version without importing, which avoids dependency issues
11 def get_version():
12 with open('optimus/version.py') as version_file:
13 return re.search(r"""__version__\s+=\s+(['"])(?P<version>.+?)\1""",
14 version_file.read()).group('version')
15
16
17 # Requirements
18 try:
19 import google.colab
20
21 IN_COLAB = True
22 except ImportError:
23 IN_COLAB = False
24
25 if "DATABRICKS_RUNTIME_VERSION" in os.environ:
26 with open('requirements-databricks.txt') as f:
27 required = f.read().splitlines()
28 elif IN_COLAB:
29 with open('requirements-google-colab.txt') as f:
30 required = f.read().splitlines()
31 else:
32 with open('requirements.txt') as f:
33 required = f.read().splitlines()
34
35 if sys.version_info < (3, 6):
36 raise RuntimeError('This version requires Python 3.6+') # pragma: no cover
37
38
39 def readme():
40 with open('README.md') as f:
41 return f.read()
42
43
44 lint_requires = [
45 'pep8',
46 'pyflakes'
47 ]
48
49 tests_require = ['pytest', 'mock', 'nose']
50
51 dependency_links = []
52 setup_requires = ['pytest-runner']
53 if 'nosetests' in sys.argv[1:]:
54 setup_requires.append('nose')
55
56 setup(
57 name='optimuspyspark',
58 version=get_version(),
59 author='Favio Vazquez and Argenis Leon',
60 author_email='[email protected]',
61 url='https://github.com/ironmussa/Optimus/',
62 download_url='https://github.com/ironmussa/Optimus/archive/2.2.31.tar.gz',
63 description=('Optimus is the missing framework for cleaning and pre-processing data in a distributed fashion with '
64 'pyspark.'),
65 long_description=readme(),
66 long_description_content_type='text/markdown',
67 license='APACHE',
68 packages=find_packages(),
69 install_requires=required,
70 tests_require=tests_require,
71 setup_requires=setup_requires,
72 extras_require={
73 'test': tests_require,
74 'all': required + tests_require,
75 'docs': ['sphinx'] + tests_require,
76 'lint': lint_requires
77 },
78 dependency_links=dependency_links,
79 test_suite='nose.collector',
80 include_package_data=True,
81 classifiers=[
82 'Development Status :: 5 - Production/Stable',
83 'Intended Audience :: Developers',
84 'Topic :: Scientific/Engineering :: Artificial Intelligence',
85 'License :: OSI Approved :: Apache Software License',
86 'Programming Language :: Python :: 3.5',
87 'Programming Language :: Python :: 3.6',
88 ],
89 keywords=['datacleaner', 'apachespark', 'spark', 'pyspark', 'data-wrangling', 'data-cleansing', 'data-profiling'],
90 )
91
[end of setup.py]
[start of docs/source/conf.py]
1 #!/usr/bin/env python3
2 # -*- coding: utf-8 -*-
3 #
4 # Optimus documentation build configuration file, created by
5 # sphinx-quickstart on Wed Oct 11 19:21:00 2017.
6 #
7 # This file is execfile()d with the current directory set to its
8 # containing dir.
9 #
10 # Note that not all possible configuration values are present in this
11 # autogenerated file.
12 #
13 # All configuration values have a default; values that are commented out
14 # serve to show the default.
15
16 # If extensions (or modules to document with autodoc) are in another directory,
17 # add these directories to sys.path here. If the directory is relative to the
18 # documentation root, use os.path.abspath to make it absolute, like shown here.
19 #
20 # import os
21 # import sys
22 # sys.path.insert(0, os.path.abspath('.'))
23
24
25 # -- General configuration ------------------------------------------------
26
27 # If your documentation needs a minimal Sphinx version, state it here.
28 #
29 # needs_sphinx = '1.0'
30
31 # Add any Sphinx extension module names here, as strings. They can be
32 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
33 # ones.
34 extensions = ['sphinx.ext.autodoc',
35 'sphinx.ext.doctest',
36 'sphinx.ext.intersphinx',
37 'sphinx.ext.mathjax']
38
39 # Add any paths that contain templates here, relative to this directory.
40 templates_path = ['_templates']
41
42 # The suffix(es) of source filenames.
43 # You can specify multiple suffix as a list of string:
44 #
45 # source_suffix = ['.rst', '.md']
46 source_suffix = '.rst'
47
48 # The master toctree document.
49 master_doc = 'index'
50
51 # General information about the project.
52 project = 'Optimus'
53 copyright = '2017, Iron Mussa'
54 author = 'Argenis León and Favio Vázquez'
55
56 # The version info for the project you're documenting, acts as replacement for
57 # |version| and |release|, also used in various other places throughout the
58 # built documents.
59 #
60 # The short X.Y version.
61 version = '2.2'
62 # The full version, including alpha/beta/rc tags.
63 release = "2.2.31"
64
65 # The language for content autogenerated by Sphinx. Refer to documentation
66 # for a list of supported languages.
67 #
68 # This is also used if you do content translation via gettext catalogs.
69 # Usually you set "language" from the command line for these cases.
70 language = None
71
72 # List of patterns, relative to source directory, that match files and
73 # directories to ignore when looking for source files.
74 # This patterns also effect to html_static_path and html_extra_path
75 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
76
77 # The name of the Pygments (syntax highlighting) style to use.
78 pygments_style = 'sphinx'
79
80 # If true, `todo` and `todoList` produce output, else they produce nothing.
81 todo_include_todos = False
82
83
84 # -- Options for HTML output ----------------------------------------------
85
86 # The theme to use for HTML and HTML Help pages. See the documentation for
87 # a list of builtin themes.
88 #
89 html_theme = "sphinx_rtd_theme"
90
91 # Theme options are theme-specific and customize the look and feel of a theme
92 # further. For a list of options available for each theme, see the
93 # documentation.
94 #
95 # html_theme_options = {}
96
97 # Add any paths that contain custom static files (such as style sheets) here,
98 # relative to this directory. They are copied after the builtin static files,
99 # so a file named "default.css" will overwrite the builtin "default.css".
100 html_static_path = ['_static']
101
102
103 # -- Options for HTMLHelp output ------------------------------------------
104
105 # Output file base name for HTML help builder.
106 htmlhelp_basename = 'Optimusdoc'
107
108
109 # -- Options for LaTeX output ---------------------------------------------
110
111 latex_elements = {
112 # The paper size ('letterpaper' or 'a4paper').
113 #
114 # 'papersize': 'letterpaper',
115
116 # The font size ('10pt', '11pt' or '12pt').
117 #
118 # 'pointsize': '10pt',
119
120 # Additional stuff for the LaTeX preamble.
121 #
122 # 'preamble': '',
123
124 # Latex figure (float) alignment
125 #
126 # 'figure_align': 'htbp',
127 }
128
129 # Grouping the document tree into LaTeX files. List of tuples
130 # (source start file, target name, title,
131 # author, documentclass [howto, manual, or own class]).
132 latex_documents = [
133 (master_doc, 'Optimus.tex', 'Optimus Documentation',
134 'Favio Vazquez', 'manual'),
135 ]
136
137
138 # -- Options for manual page output ---------------------------------------
139
140 # One entry per manual page. List of tuples
141 # (source start file, name, description, authors, manual section).
142 man_pages = [
143 (master_doc, 'optimus', 'Optimus Documentation',
144 [author], 1)
145 ]
146
147
148 # -- Options for Texinfo output -------------------------------------------
149
150 # Grouping the document tree into Texinfo files. List of tuples
151 # (source start file, target name, title, author,
152 # dir menu entry, description, category)
153 texinfo_documents = [
154 (master_doc, 'Optimus', 'Optimus Documentation',
155 author, 'Optimus', 'One line description of project.',
156 'Miscellaneous'),
157 ]
158
159 # Example configuration for intersphinx: refer to the Python standard library.
160 intersphinx_mapping = {'https://docs.python.org/': None}
161
[end of docs/source/conf.py]
</code>
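One detail of `setup.py` above that is easy to gloss over: the version is read back out of `optimus/version.py` with a regex rather than by importing the package, which avoids pulling in heavy dependencies at install time. The same regex can be checked in isolation against a sample string:

```python
import re

sample = "__version__ = '2.2.31'\n"
version = re.search(r"""__version__\s+=\s+(['"])(?P<version>.+?)\1""",
                    sample).group('version')
print(version)  # -> 2.2.31
```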
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -60,7 +60,7 @@
# The short X.Y version.
version = '2.2'
# The full version, including alpha/beta/rc tags.
-release = "2.2.31"
+release = "2.2.32"
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
diff --git a/optimus/version.py b/optimus/version.py
--- a/optimus/version.py
+++ b/optimus/version.py
@@ -5,5 +5,5 @@
return string
-__version__ = '2.2.31'
+__version__ = '2.2.32'
VERSION = tuple(_safe_int(x) for x in __version__.split('.'))
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -59,7 +59,7 @@
author='Favio Vazquez and Argenis Leon',
author_email='[email protected]',
url='https://github.com/ironmussa/Optimus/',
- download_url='https://github.com/ironmussa/Optimus/archive/2.2.31.tar.gz',
+ download_url='https://github.com/ironmussa/Optimus/archive/2.2.32.tar.gz',
description=('Optimus is the missing framework for cleaning and pre-processing data in a distributed fashion with '
'pyspark.'),
long_description=readme(),
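The release string shows up in all three files touched by this patch: `optimus/version.py`, `docs/source/conf.py`, and the `download_url` in `setup.py`. A small, illustrative consistency check (not part of the project) can catch a missed bump before tagging:

```python
import re

PATTERNS = {
    'optimus/version.py': r"__version__\s*=\s*'([^']+)'",
    'docs/source/conf.py': r'release\s*=\s*"([^"]+)"',
    'setup.py': r'archive/([0-9][0-9.]*)\.tar\.gz',
}

def collect_versions():
    found = {}
    for path, pattern in PATTERNS.items():
        try:
            with open(path) as f:
                text = f.read()
        except OSError:
            continue  # harmless when run outside the repository
        match = re.search(pattern, text)
        if match:
            found[path] = match.group(1)
    return found

versions = collect_versions()
print('consistent' if len(set(versions.values())) <= 1 else 'mismatch: %r' % versions)
```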
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -60,7 +60,7 @@\n # The short X.Y version.\n version = '2.2'\n # The full version, including alpha/beta/rc tags.\n-release = \"2.2.31\"\n+release = \"2.2.32\"\n \n # The language for content autogenerated by Sphinx. Refer to documentation\n # for a list of supported languages.\ndiff --git a/optimus/version.py b/optimus/version.py\n--- a/optimus/version.py\n+++ b/optimus/version.py\n@@ -5,5 +5,5 @@\n return string\n \n \n-__version__ = '2.2.31'\n+__version__ = '2.2.32'\n VERSION = tuple(_safe_int(x) for x in __version__.split('.'))\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -59,7 +59,7 @@\n author='Favio Vazquez and Argenis Leon',\n author_email='[email protected]',\n url='https://github.com/ironmussa/Optimus/',\n- download_url='https://github.com/ironmussa/Optimus/archive/2.2.31.tar.gz',\n+ download_url='https://github.com/ironmussa/Optimus/archive/2.2.32.tar.gz',\n description=('Optimus is the missing framework for cleaning and pre-processing data in a distributed fashion with '\n 'pyspark.'),\n long_description=readme(),\n", "issue": "pip install not working\n**Describe the bug**\r\nI am unable to install the optimuspyspark using pip for version 2.2.29\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\npip install error with message \" No such file or directory requirement.txt\"\r\n\r\n**Expected behavior**\r\npip install should not fail\n", "before_files": [{"content": "def _safe_int(string):\n try:\n return int(string)\n except ValueError:\n return string\n\n\n__version__ = '2.2.31'\nVERSION = tuple(_safe_int(x) for x in __version__.split('.'))\n", "path": "optimus/version.py"}, {"content": "import os\nimport re\nimport sys\n\nfrom setuptools import setup, find_packages\n\n\n# from optimus.version import __version__\n\n# Get version without importing, which avoids dependency issues\ndef get_version():\n with open('optimus/version.py') as version_file:\n return re.search(r\"\"\"__version__\\s+=\\s+(['\"])(?P<version>.+?)\\1\"\"\",\n version_file.read()).group('version')\n\n\n# Requirements\ntry:\n import google.colab\n\n IN_COLAB = True\nexcept ImportError:\n IN_COLAB = False\n\nif \"DATABRICKS_RUNTIME_VERSION\" in os.environ:\n with open('requirements-databricks.txt') as f:\n required = f.read().splitlines()\nelif IN_COLAB:\n with open('requirements-google-colab.txt') as f:\n required = f.read().splitlines()\nelse:\n with open('requirements.txt') as f:\n required = f.read().splitlines()\n\nif sys.version_info < (3, 6):\n raise RuntimeError('This version requires Python 3.6+') # pragma: no cover\n\n\ndef readme():\n with open('README.md') as f:\n return f.read()\n\n\nlint_requires = [\n 'pep8',\n 'pyflakes'\n]\n\ntests_require = ['pytest', 'mock', 'nose']\n\ndependency_links = []\nsetup_requires = ['pytest-runner']\nif 'nosetests' in sys.argv[1:]:\n setup_requires.append('nose')\n\nsetup(\n name='optimuspyspark',\n version=get_version(),\n author='Favio Vazquez and Argenis Leon',\n author_email='[email protected]',\n url='https://github.com/ironmussa/Optimus/',\n download_url='https://github.com/ironmussa/Optimus/archive/2.2.31.tar.gz',\n description=('Optimus is the missing framework for cleaning and pre-processing data in a distributed fashion with '\n 'pyspark.'),\n long_description=readme(),\n long_description_content_type='text/markdown',\n license='APACHE',\n packages=find_packages(),\n 
install_requires=required,\n tests_require=tests_require,\n setup_requires=setup_requires,\n extras_require={\n 'test': tests_require,\n 'all': required + tests_require,\n 'docs': ['sphinx'] + tests_require,\n 'lint': lint_requires\n },\n dependency_links=dependency_links,\n test_suite='nose.collector',\n include_package_data=True,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n keywords=['datacleaner', 'apachespark', 'spark', 'pyspark', 'data-wrangling', 'data-cleansing', 'data-profiling'],\n)\n", "path": "setup.py"}, {"content": "#!/usr/bin/env python3\n# -*- coding: utf-8 -*-\n#\n# Optimus documentation build configuration file, created by\n# sphinx-quickstart on Wed Oct 11 19:21:00 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.autodoc',\n 'sphinx.ext.doctest',\n 'sphinx.ext.intersphinx',\n 'sphinx.ext.mathjax']\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Optimus'\ncopyright = '2017, Iron Mussa'\nauthor = 'Argenis Le\u00f3n and Favio V\u00e1zquez'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = '2.2'\n# The full version, including alpha/beta/rc tags.\nrelease = \"2.2.31\"\n\n# The language for content autogenerated by Sphinx. 
Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = \"sphinx_rtd_theme\"\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Optimusdoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'Optimus.tex', 'Optimus Documentation',\n 'Favio Vazquez', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'optimus', 'Optimus Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Optimus', 'Optimus Documentation',\n author, 'Optimus', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n# Example configuration for intersphinx: refer to the Python standard library.\nintersphinx_mapping = {'https://docs.python.org/': None}\n", "path": "docs/source/conf.py"}]} | 3,105 | 361 |
gh_patches_debug_6398 | rasdani/github-patches | git_diff | yt-dlp__yt-dlp-1858 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Jamendo: 'JamendoIE' object has no attribute '_VALID_URL_RE'
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2021.12.01**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
wordwide
### Description
```
yt-dlp https://www.jamendo.com/track/1885651/to-aurora
ERROR: 'JamendoIE' object has no attribute '_VALID_URL_RE'
yt-dlp https://www.jamendo.com/track/1848421/badly
ERROR: 'JamendoIE' object has no attribute '_VALID_URL_RE'
```
## Relevant lines
https://github.com/yt-dlp/yt-dlp/blob/1117579b9457f8fbf7a4d7433a92b67ac802bdea/yt_dlp/extractor/jamendo.py#L17
https://github.com/yt-dlp/yt-dlp/blob/1117579b9457f8fbf7a4d7433a92b67ac802bdea/yt_dlp/extractor/jamendo.py#L62
https://github.com/yt-dlp/yt-dlp/blob/ee8dd27a7351841e1de8cebf8311b69fbef09eab/yt_dlp/extractor/common.py#L463-L470
### Verbose log
```shell
yt-dlp -v https://www.jamendo.com/track/1848421/badly[debug] Command-line config: ['-v', 'https://www.jamendo.com/track/1848421/badly']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8
[debug] yt-dlp version 2021.12.01 [91f071af6]
[debug] Python version 3.9.7 (CPython 64bit) - Linux-5.10.79-1-MANJARO-x86_64-with-glibc2.33
[debug] exe versions: ffmpeg 4.4.1 (setts), ffprobe 4.4.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets
[debug] Proxy map: {}
[debug] [Jamendo] Extracting URL: https://www.jamendo.com/track/1848421/badly
ERROR: 'JamendoIE' object has no attribute '_VALID_URL_RE'
Traceback (most recent call last):
File "/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 1329, in wrapper
return func(self, *args, **kwargs)
File "/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py", line 1398, in __extract_info
ie_result = ie.extract(url)
File "/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/extractor/common.py", line 597, in extract
ie_result = self._real_extract(url)
File "/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/extractor/jamendo.py", line 62, in _real_extract
track_id, display_id = self._VALID_URL_RE.match(url).groups()
AttributeError: 'JamendoIE' object has no attribute '_VALID_URL_RE'
```
</issue>
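For context, the `_VALID_URL` pattern quoted below can be exercised without yt-dlp at all; the sketch only re-creates the URL matching the extractor relies on, while the attribute error itself concerns how the compiled regex is accessed (see the patch at the end of this entry).

```python
import re

# Pattern copied from JamendoIE._VALID_URL below; everything else is a stand-in.
_VALID_URL = r'''(?x)
                https?://
                    (?:
                        licensing\.jamendo\.com/[^/]+|
                        (?:www\.)?jamendo\.com
                    )
                    /track/(?P<id>[0-9]+)(?:/(?P<display_id>[^/?#&]+))?
                '''

def match_track(url):
    m = re.match(_VALID_URL, url)
    return m.group('id', 'display_id') if m else None

print(match_track('https://www.jamendo.com/track/1848421/badly'))
print(match_track('https://licensing.jamendo.com/en/track/1496667/energetic-rock'))
```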
<code>
[start of yt_dlp/extractor/jamendo.py]
1 # coding: utf-8
2 from __future__ import unicode_literals
3
4 import hashlib
5 import random
6
7 from ..compat import compat_str
8 from .common import InfoExtractor
9 from ..utils import (
10 clean_html,
11 int_or_none,
12 try_get,
13 )
14
15
16 class JamendoIE(InfoExtractor):
17 _VALID_URL = r'''(?x)
18 https?://
19 (?:
20 licensing\.jamendo\.com/[^/]+|
21 (?:www\.)?jamendo\.com
22 )
23 /track/(?P<id>[0-9]+)(?:/(?P<display_id>[^/?#&]+))?
24 '''
25 _TESTS = [{
26 'url': 'https://www.jamendo.com/track/196219/stories-from-emona-i',
27 'md5': '6e9e82ed6db98678f171c25a8ed09ffd',
28 'info_dict': {
29 'id': '196219',
30 'display_id': 'stories-from-emona-i',
31 'ext': 'flac',
32 # 'title': 'Maya Filipič - Stories from Emona I',
33 'title': 'Stories from Emona I',
34 # 'artist': 'Maya Filipič',
35 'track': 'Stories from Emona I',
36 'duration': 210,
37 'thumbnail': r're:^https?://.*\.jpg',
38 'timestamp': 1217438117,
39 'upload_date': '20080730',
40 'license': 'by-nc-nd',
41 'view_count': int,
42 'like_count': int,
43 'average_rating': int,
44 'tags': ['piano', 'peaceful', 'newage', 'strings', 'upbeat'],
45 }
46 }, {
47 'url': 'https://licensing.jamendo.com/en/track/1496667/energetic-rock',
48 'only_matching': True,
49 }]
50
51 def _call_api(self, resource, resource_id):
52 path = '/api/%ss' % resource
53 rand = compat_str(random.random())
54 return self._download_json(
55 'https://www.jamendo.com' + path, resource_id, query={
56 'id[]': resource_id,
57 }, headers={
58 'X-Jam-Call': '$%s*%s~' % (hashlib.sha1((path + rand).encode()).hexdigest(), rand)
59 })[0]
60
61 def _real_extract(self, url):
62 track_id, display_id = self._VALID_URL_RE.match(url).groups()
63 # webpage = self._download_webpage(
64 # 'https://www.jamendo.com/track/' + track_id, track_id)
65 # models = self._parse_json(self._html_search_regex(
66 # r"data-bundled-models='([^']+)",
67 # webpage, 'bundled models'), track_id)
68 # track = models['track']['models'][0]
69 track = self._call_api('track', track_id)
70 title = track_name = track['name']
71 # get_model = lambda x: try_get(models, lambda y: y[x]['models'][0], dict) or {}
72 # artist = get_model('artist')
73 # artist_name = artist.get('name')
74 # if artist_name:
75 # title = '%s - %s' % (artist_name, title)
76 # album = get_model('album')
77
78 formats = [{
79 'url': 'https://%s.jamendo.com/?trackid=%s&format=%s&from=app-97dab294'
80 % (sub_domain, track_id, format_id),
81 'format_id': format_id,
82 'ext': ext,
83 'quality': quality,
84 } for quality, (format_id, sub_domain, ext) in enumerate((
85 ('mp31', 'mp3l', 'mp3'),
86 ('mp32', 'mp3d', 'mp3'),
87 ('ogg1', 'ogg', 'ogg'),
88 ('flac', 'flac', 'flac'),
89 ))]
90 self._sort_formats(formats)
91
92 urls = []
93 thumbnails = []
94 for covers in (track.get('cover') or {}).values():
95 for cover_id, cover_url in covers.items():
96 if not cover_url or cover_url in urls:
97 continue
98 urls.append(cover_url)
99 size = int_or_none(cover_id.lstrip('size'))
100 thumbnails.append({
101 'id': cover_id,
102 'url': cover_url,
103 'width': size,
104 'height': size,
105 })
106
107 tags = []
108 for tag in (track.get('tags') or []):
109 tag_name = tag.get('name')
110 if not tag_name:
111 continue
112 tags.append(tag_name)
113
114 stats = track.get('stats') or {}
115 license = track.get('licenseCC') or []
116
117 return {
118 'id': track_id,
119 'display_id': display_id,
120 'thumbnails': thumbnails,
121 'title': title,
122 'description': track.get('description'),
123 'duration': int_or_none(track.get('duration')),
124 # 'artist': artist_name,
125 'track': track_name,
126 # 'album': album.get('name'),
127 'formats': formats,
128 'license': '-'.join(license) if license else None,
129 'timestamp': int_or_none(track.get('dateCreated')),
130 'view_count': int_or_none(stats.get('listenedAll')),
131 'like_count': int_or_none(stats.get('favorited')),
132 'average_rating': int_or_none(stats.get('averageNote')),
133 'tags': tags,
134 }
135
136
137 class JamendoAlbumIE(JamendoIE):
138 _VALID_URL = r'https?://(?:www\.)?jamendo\.com/album/(?P<id>[0-9]+)'
139 _TESTS = [{
140 'url': 'https://www.jamendo.com/album/121486/duck-on-cover',
141 'info_dict': {
142 'id': '121486',
143 'title': 'Duck On Cover',
144 'description': 'md5:c2920eaeef07d7af5b96d7c64daf1239',
145 },
146 'playlist': [{
147 'md5': 'e1a2fcb42bda30dfac990212924149a8',
148 'info_dict': {
149 'id': '1032333',
150 'ext': 'flac',
151 'title': 'Shearer - Warmachine',
152 'artist': 'Shearer',
153 'track': 'Warmachine',
154 'timestamp': 1368089771,
155 'upload_date': '20130509',
156 }
157 }, {
158 'md5': '1f358d7b2f98edfe90fd55dac0799d50',
159 'info_dict': {
160 'id': '1032330',
161 'ext': 'flac',
162 'title': 'Shearer - Without Your Ghost',
163 'artist': 'Shearer',
164 'track': 'Without Your Ghost',
165 'timestamp': 1368089771,
166 'upload_date': '20130509',
167 }
168 }],
169 'params': {
170 'playlistend': 2
171 }
172 }]
173
174 def _real_extract(self, url):
175 album_id = self._match_id(url)
176 album = self._call_api('album', album_id)
177 album_name = album.get('name')
178
179 entries = []
180 for track in (album.get('tracks') or []):
181 track_id = track.get('id')
182 if not track_id:
183 continue
184 track_id = compat_str(track_id)
185 entries.append({
186 '_type': 'url_transparent',
187 'url': 'https://www.jamendo.com/track/' + track_id,
188 'ie_key': JamendoIE.ie_key(),
189 'id': track_id,
190 'album': album_name,
191 })
192
193 return self.playlist_result(
194 entries, album_id, album_name,
195 clean_html(try_get(album, lambda x: x['description']['en'], compat_str)))
196
[end of yt_dlp/extractor/jamendo.py]
</code>
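The missing attribute suggests the base extractor no longer pre-compiles `_VALID_URL_RE` under that name; current yt-dlp exposes a `_match_valid_url()` helper instead, which is what the patch below switches to. As a rough, simplified stand-in for such a helper (not yt-dlp's actual implementation), lazy per-class compilation looks roughly like this:

```python
import re

class BaseInfoExtractor:
    """Toy stand-in; real extractors derive from yt-dlp's InfoExtractor."""
    _VALID_URL = None

    @classmethod
    def _match_valid_url(cls, url):
        # Compile lazily and cache on the class itself, so a subclass with its
        # own _VALID_URL (such as an album extractor) gets its own regex.
        if '_VALID_URL_RE' not in cls.__dict__:
            cls._VALID_URL_RE = re.compile(cls._VALID_URL)
        return cls._VALID_URL_RE.match(url)

class TrackExtractor(BaseInfoExtractor):
    _VALID_URL = r'https?://(?:www\.)?jamendo\.com/track/(?P<id>[0-9]+)(?:/(?P<display_id>[^/?#&]+))?'

print(TrackExtractor._match_valid_url(
    'https://www.jamendo.com/track/196219/stories-from-emona-i').groups())
```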
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/yt_dlp/extractor/jamendo.py b/yt_dlp/extractor/jamendo.py
--- a/yt_dlp/extractor/jamendo.py
+++ b/yt_dlp/extractor/jamendo.py
@@ -59,7 +59,7 @@
})[0]
def _real_extract(self, url):
- track_id, display_id = self._VALID_URL_RE.match(url).groups()
+ track_id, display_id = self._match_valid_url(url).groups()
# webpage = self._download_webpage(
# 'https://www.jamendo.com/track/' + track_id, track_id)
# models = self._parse_json(self._html_search_regex(
| {"golden_diff": "diff --git a/yt_dlp/extractor/jamendo.py b/yt_dlp/extractor/jamendo.py\n--- a/yt_dlp/extractor/jamendo.py\n+++ b/yt_dlp/extractor/jamendo.py\n@@ -59,7 +59,7 @@\n })[0]\n \n def _real_extract(self, url):\n- track_id, display_id = self._VALID_URL_RE.match(url).groups()\n+ track_id, display_id = self._match_valid_url(url).groups()\n # webpage = self._download_webpage(\n # 'https://www.jamendo.com/track/' + track_id, track_id)\n # models = self._parse_json(self._html_search_regex(\n", "issue": "Jamando: 'JamendoIE' object has no attribute '_VALID_URL_RE'\n### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2021.12.01**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nwordwide\n\n### Description\n\n```\r\nyt-dlp https://www.jamendo.com/track/1885651/to-aurora\r\nERROR: 'JamendoIE' object has no attribute '_VALID_URL_RE'\r\n\r\nyt-dlp https://www.jamendo.com/track/1848421/badly\r\nERROR: 'JamendoIE' object has no attribute '_VALID_URL_RE'\r\n```\r\n\r\n## Relevant lines\r\nhttps://github.com/yt-dlp/yt-dlp/blob/1117579b9457f8fbf7a4d7433a92b67ac802bdea/yt_dlp/extractor/jamendo.py#L17\r\n\r\nhttps://github.com/yt-dlp/yt-dlp/blob/1117579b9457f8fbf7a4d7433a92b67ac802bdea/yt_dlp/extractor/jamendo.py#L62\r\n\r\nhttps://github.com/yt-dlp/yt-dlp/blob/ee8dd27a7351841e1de8cebf8311b69fbef09eab/yt_dlp/extractor/common.py#L463-L470\n\n### Verbose log\n\n```shell\nyt-dlp -v https://www.jamendo.com/track/1848421/badly[debug] Command-line config: ['-v', 'https://www.jamendo.com/track/1848421/badly']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.12.01 [91f071af6]\r\n[debug] Python version 3.9.7 (CPython 64bit) - Linux-5.10.79-1-MANJARO-x86_64-with-glibc2.33\r\n[debug] exe versions: ffmpeg 4.4.1 (setts), ffprobe 4.4.1, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] [Jamendo] Extracting URL: https://www.jamendo.com/track/1848421/badly\r\nERROR: 'JamendoIE' object has no attribute '_VALID_URL_RE'\r\nTraceback (most recent call last):\r\n File \"/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py\", line 1329, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/YoutubeDL.py\", line 1398, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/extractor/common.py\", line 597, in extract\r\n ie_result = self._real_extract(url)\r\n File 
\"/home/jaller94/.local/lib/python3.9/site-packages/yt_dlp/extractor/jamendo.py\", line 62, in _real_extract\r\n track_id, display_id = self._VALID_URL_RE.match(url).groups()\r\nAttributeError: 'JamendoIE' object has no attribute '_VALID_URL_RE'\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import unicode_literals\n\nimport hashlib\nimport random\n\nfrom ..compat import compat_str\nfrom .common import InfoExtractor\nfrom ..utils import (\n clean_html,\n int_or_none,\n try_get,\n)\n\n\nclass JamendoIE(InfoExtractor):\n _VALID_URL = r'''(?x)\n https?://\n (?:\n licensing\\.jamendo\\.com/[^/]+|\n (?:www\\.)?jamendo\\.com\n )\n /track/(?P<id>[0-9]+)(?:/(?P<display_id>[^/?#&]+))?\n '''\n _TESTS = [{\n 'url': 'https://www.jamendo.com/track/196219/stories-from-emona-i',\n 'md5': '6e9e82ed6db98678f171c25a8ed09ffd',\n 'info_dict': {\n 'id': '196219',\n 'display_id': 'stories-from-emona-i',\n 'ext': 'flac',\n # 'title': 'Maya Filipi\u010d - Stories from Emona I',\n 'title': 'Stories from Emona I',\n # 'artist': 'Maya Filipi\u010d',\n 'track': 'Stories from Emona I',\n 'duration': 210,\n 'thumbnail': r're:^https?://.*\\.jpg',\n 'timestamp': 1217438117,\n 'upload_date': '20080730',\n 'license': 'by-nc-nd',\n 'view_count': int,\n 'like_count': int,\n 'average_rating': int,\n 'tags': ['piano', 'peaceful', 'newage', 'strings', 'upbeat'],\n }\n }, {\n 'url': 'https://licensing.jamendo.com/en/track/1496667/energetic-rock',\n 'only_matching': True,\n }]\n\n def _call_api(self, resource, resource_id):\n path = '/api/%ss' % resource\n rand = compat_str(random.random())\n return self._download_json(\n 'https://www.jamendo.com' + path, resource_id, query={\n 'id[]': resource_id,\n }, headers={\n 'X-Jam-Call': '$%s*%s~' % (hashlib.sha1((path + rand).encode()).hexdigest(), rand)\n })[0]\n\n def _real_extract(self, url):\n track_id, display_id = self._VALID_URL_RE.match(url).groups()\n # webpage = self._download_webpage(\n # 'https://www.jamendo.com/track/' + track_id, track_id)\n # models = self._parse_json(self._html_search_regex(\n # r\"data-bundled-models='([^']+)\",\n # webpage, 'bundled models'), track_id)\n # track = models['track']['models'][0]\n track = self._call_api('track', track_id)\n title = track_name = track['name']\n # get_model = lambda x: try_get(models, lambda y: y[x]['models'][0], dict) or {}\n # artist = get_model('artist')\n # artist_name = artist.get('name')\n # if artist_name:\n # title = '%s - %s' % (artist_name, title)\n # album = get_model('album')\n\n formats = [{\n 'url': 'https://%s.jamendo.com/?trackid=%s&format=%s&from=app-97dab294'\n % (sub_domain, track_id, format_id),\n 'format_id': format_id,\n 'ext': ext,\n 'quality': quality,\n } for quality, (format_id, sub_domain, ext) in enumerate((\n ('mp31', 'mp3l', 'mp3'),\n ('mp32', 'mp3d', 'mp3'),\n ('ogg1', 'ogg', 'ogg'),\n ('flac', 'flac', 'flac'),\n ))]\n self._sort_formats(formats)\n\n urls = []\n thumbnails = []\n for covers in (track.get('cover') or {}).values():\n for cover_id, cover_url in covers.items():\n if not cover_url or cover_url in urls:\n continue\n urls.append(cover_url)\n size = int_or_none(cover_id.lstrip('size'))\n thumbnails.append({\n 'id': cover_id,\n 'url': cover_url,\n 'width': size,\n 'height': size,\n })\n\n tags = []\n for tag in (track.get('tags') or []):\n tag_name = tag.get('name')\n if not tag_name:\n continue\n tags.append(tag_name)\n\n stats = track.get('stats') or {}\n license = track.get('licenseCC') or []\n\n return {\n 'id': track_id,\n 'display_id': display_id,\n 
'thumbnails': thumbnails,\n 'title': title,\n 'description': track.get('description'),\n 'duration': int_or_none(track.get('duration')),\n # 'artist': artist_name,\n 'track': track_name,\n # 'album': album.get('name'),\n 'formats': formats,\n 'license': '-'.join(license) if license else None,\n 'timestamp': int_or_none(track.get('dateCreated')),\n 'view_count': int_or_none(stats.get('listenedAll')),\n 'like_count': int_or_none(stats.get('favorited')),\n 'average_rating': int_or_none(stats.get('averageNote')),\n 'tags': tags,\n }\n\n\nclass JamendoAlbumIE(JamendoIE):\n _VALID_URL = r'https?://(?:www\\.)?jamendo\\.com/album/(?P<id>[0-9]+)'\n _TESTS = [{\n 'url': 'https://www.jamendo.com/album/121486/duck-on-cover',\n 'info_dict': {\n 'id': '121486',\n 'title': 'Duck On Cover',\n 'description': 'md5:c2920eaeef07d7af5b96d7c64daf1239',\n },\n 'playlist': [{\n 'md5': 'e1a2fcb42bda30dfac990212924149a8',\n 'info_dict': {\n 'id': '1032333',\n 'ext': 'flac',\n 'title': 'Shearer - Warmachine',\n 'artist': 'Shearer',\n 'track': 'Warmachine',\n 'timestamp': 1368089771,\n 'upload_date': '20130509',\n }\n }, {\n 'md5': '1f358d7b2f98edfe90fd55dac0799d50',\n 'info_dict': {\n 'id': '1032330',\n 'ext': 'flac',\n 'title': 'Shearer - Without Your Ghost',\n 'artist': 'Shearer',\n 'track': 'Without Your Ghost',\n 'timestamp': 1368089771,\n 'upload_date': '20130509',\n }\n }],\n 'params': {\n 'playlistend': 2\n }\n }]\n\n def _real_extract(self, url):\n album_id = self._match_id(url)\n album = self._call_api('album', album_id)\n album_name = album.get('name')\n\n entries = []\n for track in (album.get('tracks') or []):\n track_id = track.get('id')\n if not track_id:\n continue\n track_id = compat_str(track_id)\n entries.append({\n '_type': 'url_transparent',\n 'url': 'https://www.jamendo.com/track/' + track_id,\n 'ie_key': JamendoIE.ie_key(),\n 'id': track_id,\n 'album': album_name,\n })\n\n return self.playlist_result(\n entries, album_id, album_name,\n clean_html(try_get(album, lambda x: x['description']['en'], compat_str)))\n", "path": "yt_dlp/extractor/jamendo.py"}]} | 3,978 | 163 |
gh_patches_debug_33701 | rasdani/github-patches | git_diff | piskvorky__gensim-1217 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong calculation for max_iter_dump
In
https://github.com/RaRe-Technologies/gensim/blob/develop/gensim/models/wrappers/wordrank.py#L144
Shouldn't this line `max_iter_dump = iter / dump_period * dump_period - 1` just be
`max_iter_dump = iter - dump_period`?
To reproduce, try these parameters:
`model = Wordrank.train(wr_path, data, out_dir, iter=100, dump_period=5)`
It will error out with -
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ajkale/anaconda2/envs/wordrank/lib/python2.7/site-packages/gensim/models/wrappers/wordrank.py", line 146, in train
copyfile('model_word_%d.txt' % max_iter_dump, 'wordrank.words')
File "/home/ajkale/anaconda2/envs/wordrank/lib/python2.7/shutil.py", line 82, in copyfile
with open(src, 'rb') as fsrc:
IOError: [Errno 2] No such file or directory: 'model_word_99.txt'
```
Mainly because `max_iter_dump = iter / dump_period * dump_period - 1` calculates max_iter_dump=99 instead of 95.
</issue>
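As a quick sanity check of the arithmetic described in the issue above, here is a minimal sketch (illustrative only; it just plugs in the reporter's values `iter=100`, `dump_period=5`, and the variable names are mine, not the wrapper's):

```python
# Illustrative check only -- not part of the gensim wrapper itself.
# wordrank.py does "from __future__ import division", so "/" is true division.
iters, dump_period = 100, 5

current = iters / dump_period * dump_period - 1   # 100 / 5 * 5 - 1 -> 99.0
proposed = iters - dump_period                    # 100 - 5        -> 95

print(current, proposed)  # 99.0 95 -- the existing formula looks for model_word_99.txt
```

The patch recorded further down in this entry settles on `iter - iter % dump_period` instead, bumping `iter` by one when it divides evenly by `dump_period`.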
<code>
[start of gensim/models/wrappers/wordrank.py]
1 # Copyright (C) 2017 Parul Sethi <[email protected]>
2 # Copyright (C) 2017 Radim Rehurek <[email protected]>
3 # Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html
4
5 """
6 Python wrapper around word representation learning from Wordrank.
7 The wrapped model can NOT be updated with new documents for online training -- use gensim's
8 `Word2Vec` for that.
9
10 Example:
11 >>> model = gensim.models.wrappers.Wordrank('/Users/dummy/wordrank', corpus_file='text8', out_path='wr_model')
12 >>> print model[word] # prints vector for given words
13
14 .. [1] https://bitbucket.org/shihaoji/wordrank/
15 .. [2] https://arxiv.org/pdf/1506.02761v3.pdf
16 """
17
18 from __future__ import division
19
20 import logging
21 import os
22 import sys
23 import copy
24 import multiprocessing
25
26 import numpy as np
27
28 from gensim import utils
29 from gensim.models.keyedvectors import KeyedVectors
30 from gensim.scripts.glove2word2vec import glove2word2vec
31
32 from six import string_types
33 from smart_open import smart_open
34 from shutil import copyfile, rmtree
35
36
37 logger = logging.getLogger(__name__)
38
39
40 class Wordrank(KeyedVectors):
41 """
42 Class for word vector training using Wordrank. Communication between Wordrank and Python
43 takes place by working with data files on disk and calling the Wordrank binary and glove's
44 helper binaries (for preparing training data) with subprocess module.
45 """
46
47 @classmethod
48 def train(cls, wr_path, corpus_file, out_path, size=100, window=15, symmetric=1, min_count=5, max_vocab_size=0,
49 sgd_num=100, lrate=0.001, period=10, iter=91, epsilon=0.75, dump_period=10, reg=0, alpha=100,
50 beta=99, loss='hinge', memory=4.0, cleanup_files=True, sorted_vocab=1, ensemble=0):
51 """
52 `wr_path` is the path to the Wordrank directory.
53 `corpus_file` is the filename of the text file to be used for training the Wordrank model.
54 Expects file to contain space-separated tokens in a single line
55 `out_path` is the path to directory which will be created to save embeddings and training data.
56 `size` is the dimensionality of the feature vectors.
57 `window` is the number of context words to the left (and to the right, if symmetric = 1).
58 `symmetric` if 0, only use left context words, else use left and right both.
59 `min_count` = ignore all words with total frequency lower than this.
60 `max_vocab_size` upper bound on vocabulary size, i.e. keep the <int> most frequent words. Default is 0 for no limit.
61 `sgd_num` number of SGD taken for each data point.
62 `lrate` is the learning rate (too high diverges, give Nan).
63 `period` is the period of xi variable updates
64 `iter` = number of iterations (epochs) over the corpus.
65 `epsilon` is the power scaling value for weighting function.
66 `dump_period` is the period after which embeddings should be dumped.
67 `reg` is the value of regularization parameter.
68 `alpha` is the alpha parameter of gamma distribution.
69 `beta` is the beta parameter of gamma distribution.
70 `loss` = name of the loss (logistic, hinge).
71 `memory` = soft limit for memory consumption, in GB.
72 `cleanup_files` if True, delete directory and files used by this wrapper, setting to False can be useful for debugging
73 `sorted_vocab` = if 1 (default), sort the vocabulary by descending frequency before assigning word indexes.
74 `ensemble` = 0 (default), use ensemble of word and context vectors
75 """
76
77 meta_data_path = 'matrix.meta'
78 vocab_file = 'vocab.txt'
79 temp_vocab_file = 'tempvocab.txt'
80 cooccurrence_file = 'cooccurrence'
81 cooccurrence_shuf_file = 'wiki.toy'
82 meta_file = 'meta'
83
84 # prepare training data (cooccurrence matrix and vocab)
85 model_dir = os.path.join(wr_path, out_path)
86 meta_dir = os.path.join(model_dir, 'meta')
87 os.makedirs(meta_dir)
88 logger.info("Dumped data will be stored in '%s'", model_dir)
89 copyfile(corpus_file, os.path.join(meta_dir, corpus_file.split('/')[-1]))
90 os.chdir(meta_dir)
91
92 cmd_vocab_count = ['../../glove/vocab_count', '-min-count', str(min_count), '-max-vocab', str(max_vocab_size)]
93 cmd_cooccurence_count = ['../../glove/cooccur', '-memory', str(memory), '-vocab-file', temp_vocab_file, '-window-size', str(window), '-symmetric', str(symmetric)]
94 cmd_shuffle_cooccurences = ['../../glove/shuffle', '-memory', str(memory)]
95 cmd_del_vocab_freq = ['cut', '-d', " ", '-f', '1', temp_vocab_file]
96
97 commands = [cmd_vocab_count, cmd_cooccurence_count, cmd_shuffle_cooccurences]
98 logger.info("Prepare training data using glove code '%s'", commands)
99 input_fnames = [corpus_file.split('/')[-1], corpus_file.split('/')[-1], cooccurrence_file]
100 output_fnames = [temp_vocab_file, cooccurrence_file, cooccurrence_shuf_file]
101
102 for command, input_fname, output_fname in zip(commands, input_fnames, output_fnames):
103 with smart_open(input_fname, 'rb') as r:
104 with smart_open(output_fname, 'wb') as w:
105 utils.check_output(w, args=command, stdin=r)
106 with smart_open(vocab_file, 'wb') as w:
107 utils.check_output(w, args=cmd_del_vocab_freq)
108
109 with smart_open(vocab_file, 'rb') as f:
110 numwords = sum(1 for line in f)
111 with smart_open(cooccurrence_shuf_file, 'rb') as f:
112 numlines = sum(1 for line in f)
113 with smart_open(meta_file, 'wb') as f:
114 meta_info = "{0} {1}\n{2} {3}\n{4} {5}".format(numwords, numwords, numlines, cooccurrence_shuf_file, numwords, vocab_file)
115 f.write(meta_info.encode('utf-8'))
116
117 wr_args = {
118 'path': 'meta',
119 'nthread': multiprocessing.cpu_count(),
120 'sgd_num': sgd_num,
121 'lrate': lrate,
122 'period': period,
123 'iter': iter,
124 'epsilon': epsilon,
125 'dump_prefix': 'model',
126 'dump_period': dump_period,
127 'dim': size,
128 'reg': reg,
129 'alpha': alpha,
130 'beta': beta,
131 'loss': loss
132 }
133
134 os.chdir('..')
135 # run wordrank executable with wr_args
136 cmd = ['mpirun', '-np', '1', '../wordrank']
137 for option, value in wr_args.items():
138 cmd.append("--%s" % option)
139 cmd.append(str(value))
140 logger.info("Running wordrank binary '%s'", cmd)
141 output = utils.check_output(args=cmd)
142
143 # use embeddings from max. iteration's dump
144 max_iter_dump = iter / dump_period * dump_period - 1
145 copyfile('model_word_%d.txt' % max_iter_dump, 'wordrank.words')
146 copyfile('model_context_%d.txt' % max_iter_dump, 'wordrank.contexts')
147 model = cls.load_wordrank_model('wordrank.words', os.path.join('meta', vocab_file), 'wordrank.contexts', sorted_vocab, ensemble)
148 os.chdir('../..')
149
150 if cleanup_files:
151 rmtree(model_dir)
152 return model
153
154 @classmethod
155 def load_wordrank_model(cls, model_file, vocab_file=None, context_file=None, sorted_vocab=1, ensemble=1):
156 glove2word2vec(model_file, model_file+'.w2vformat')
157 model = cls.load_word2vec_format('%s.w2vformat' % model_file)
158 if ensemble and context_file:
159 model.ensemble_embedding(model_file, context_file)
160 if sorted_vocab and vocab_file:
161 model.sort_embeddings(vocab_file)
162 return model
163
164 def sort_embeddings(self, vocab_file):
165 """Sort embeddings according to word frequency."""
166 counts = {}
167 vocab_size = len(self.vocab)
168 prev_syn0 = copy.deepcopy(self.syn0)
169 prev_vocab = copy.deepcopy(self.vocab)
170 self.index2word = []
171
172 # sort embeddings using frequency sorted vocab file in wordrank
173 with utils.smart_open(vocab_file) as fin:
174 for index, line in enumerate(fin):
175 word, count = utils.to_unicode(line).strip(), vocab_size - index
176 # store word with it's count in a dict
177 counts[word] = int(count)
178 # build new index2word with frequency sorted words
179 self.index2word.append(word)
180 assert len(self.index2word) == vocab_size, 'mismatch between vocab sizes'
181
182 for word_id, word in enumerate(self.index2word):
183 self.syn0[word_id] = prev_syn0[prev_vocab[word].index]
184 self.vocab[word].index = word_id
185 self.vocab[word].count = counts[word]
186
187 def ensemble_embedding(self, word_embedding, context_embedding):
188 """Replace syn0 with the sum of context and word embeddings."""
189 glove2word2vec(context_embedding, context_embedding+'.w2vformat')
190 w_emb = KeyedVectors.load_word2vec_format('%s.w2vformat' % word_embedding)
191 c_emb = KeyedVectors.load_word2vec_format('%s.w2vformat' % context_embedding)
192 # compare vocab words using keys of dict vocab
193 assert set(w_emb.vocab) == set(c_emb.vocab), 'Vocabs are not same for both embeddings'
194
195 # sort context embedding to have words in same order as word embedding
196 prev_c_emb = copy.deepcopy(c_emb.syn0)
197 for word_id, word in enumerate(w_emb.index2word):
198 c_emb.syn0[word_id] = prev_c_emb[c_emb.vocab[word].index]
199 # add vectors of the two embeddings
200 new_emb = w_emb.syn0 + c_emb.syn0
201 self.syn0 = new_emb
202 return new_emb
203
204
[end of gensim/models/wrappers/wordrank.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/gensim/models/wrappers/wordrank.py b/gensim/models/wrappers/wordrank.py
--- a/gensim/models/wrappers/wordrank.py
+++ b/gensim/models/wrappers/wordrank.py
@@ -46,7 +46,7 @@
@classmethod
def train(cls, wr_path, corpus_file, out_path, size=100, window=15, symmetric=1, min_count=5, max_vocab_size=0,
- sgd_num=100, lrate=0.001, period=10, iter=91, epsilon=0.75, dump_period=10, reg=0, alpha=100,
+ sgd_num=100, lrate=0.001, period=10, iter=90, epsilon=0.75, dump_period=10, reg=0, alpha=100,
beta=99, loss='hinge', memory=4.0, cleanup_files=True, sorted_vocab=1, ensemble=0):
"""
`wr_path` is the path to the Wordrank directory.
@@ -113,6 +113,11 @@
with smart_open(meta_file, 'wb') as f:
meta_info = "{0} {1}\n{2} {3}\n{4} {5}".format(numwords, numwords, numlines, cooccurrence_shuf_file, numwords, vocab_file)
f.write(meta_info.encode('utf-8'))
+
+ if iter % dump_period == 0:
+ iter += 1
+ else:
+ logger.warning('Resultant embedding would be from %d iteration', iter - iter % dump_period)
wr_args = {
'path': 'meta',
@@ -141,7 +146,7 @@
output = utils.check_output(args=cmd)
# use embeddings from max. iteration's dump
- max_iter_dump = iter / dump_period * dump_period - 1
+ max_iter_dump = iter - iter % dump_period
copyfile('model_word_%d.txt' % max_iter_dump, 'wordrank.words')
copyfile('model_context_%d.txt' % max_iter_dump, 'wordrank.contexts')
model = cls.load_wordrank_model('wordrank.words', os.path.join('meta', vocab_file), 'wordrank.contexts', sorted_vocab, ensemble)
| {"golden_diff": "diff --git a/gensim/models/wrappers/wordrank.py b/gensim/models/wrappers/wordrank.py\n--- a/gensim/models/wrappers/wordrank.py\n+++ b/gensim/models/wrappers/wordrank.py\n@@ -46,7 +46,7 @@\n \n @classmethod\n def train(cls, wr_path, corpus_file, out_path, size=100, window=15, symmetric=1, min_count=5, max_vocab_size=0,\n- sgd_num=100, lrate=0.001, period=10, iter=91, epsilon=0.75, dump_period=10, reg=0, alpha=100,\n+ sgd_num=100, lrate=0.001, period=10, iter=90, epsilon=0.75, dump_period=10, reg=0, alpha=100,\n beta=99, loss='hinge', memory=4.0, cleanup_files=True, sorted_vocab=1, ensemble=0):\n \"\"\"\n `wr_path` is the path to the Wordrank directory.\n@@ -113,6 +113,11 @@\n with smart_open(meta_file, 'wb') as f:\n meta_info = \"{0} {1}\\n{2} {3}\\n{4} {5}\".format(numwords, numwords, numlines, cooccurrence_shuf_file, numwords, vocab_file)\n f.write(meta_info.encode('utf-8'))\n+ \n+ if iter % dump_period == 0:\n+ iter += 1\n+ else:\n+ logger.warning('Resultant embedding would be from %d iteration', iter - iter % dump_period)\n \n wr_args = {\n 'path': 'meta',\n@@ -141,7 +146,7 @@\n output = utils.check_output(args=cmd)\n \n # use embeddings from max. iteration's dump\n- max_iter_dump = iter / dump_period * dump_period - 1\n+ max_iter_dump = iter - iter % dump_period\n copyfile('model_word_%d.txt' % max_iter_dump, 'wordrank.words')\n copyfile('model_context_%d.txt' % max_iter_dump, 'wordrank.contexts')\n model = cls.load_wordrank_model('wordrank.words', os.path.join('meta', vocab_file), 'wordrank.contexts', sorted_vocab, ensemble)\n", "issue": "Wrong calculation for max_iter_dump\nIn\r\nhttps://github.com/RaRe-Technologies/gensim/blob/develop/gensim/models/wrappers/wordrank.py#L144\r\nShouldnt this line `max_iter_dump = iter / dump_period * dump_period - 1` just be \r\n`max_iter_dump = iter - dump_period` ?\r\n\r\nTo reproduce try these parameters:\r\n`model = Wordrank.train(wr_path, data, out_dir, iter=100, dump_period=5)`\r\nIt will error out with -\r\n```\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/ajkale/anaconda2/envs/wordrank/lib/python2.7/site-packages/gensim/models/wrappers/wordrank.py\", line 146, in train\r\n copyfile('model_word_%d.txt' % max_iter_dump, 'wordrank.words')\r\n File \"/home/ajkale/anaconda2/envs/wordrank/lib/python2.7/shutil.py\", line 82, in copyfile\r\n with open(src, 'rb') as fsrc:\r\nIOError: [Errno 2] No such file or directory: 'model_word_99.txt'\r\n```\r\n\r\nMainly because `max_iter_dump = iter / dump_period * dump_period - 1` calculates max_iter_dump=99 instead of 95.\n", "before_files": [{"content": "# Copyright (C) 2017 Parul Sethi <[email protected]>\n# Copyright (C) 2017 Radim Rehurek <[email protected]>\n# Licensed under the GNU LGPL v2.1 - http://www.gnu.org/licenses/lgpl.html\n\n\"\"\"\nPython wrapper around word representation learning from Wordrank.\nThe wrapped model can NOT be updated with new documents for online training -- use gensim's\n`Word2Vec` for that.\n\nExample:\n>>> model = gensim.models.wrappers.Wordrank('/Users/dummy/wordrank', corpus_file='text8', out_path='wr_model')\n>>> print model[word] # prints vector for given words\n\n.. [1] https://bitbucket.org/shihaoji/wordrank/\n.. 
[2] https://arxiv.org/pdf/1506.02761v3.pdf\n\"\"\"\n\nfrom __future__ import division\n\nimport logging\nimport os\nimport sys\nimport copy\nimport multiprocessing\n\nimport numpy as np\n\nfrom gensim import utils\nfrom gensim.models.keyedvectors import KeyedVectors\nfrom gensim.scripts.glove2word2vec import glove2word2vec\n\nfrom six import string_types\nfrom smart_open import smart_open\nfrom shutil import copyfile, rmtree\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Wordrank(KeyedVectors):\n \"\"\"\n Class for word vector training using Wordrank. Communication between Wordrank and Python\n takes place by working with data files on disk and calling the Wordrank binary and glove's\n helper binaries (for preparing training data) with subprocess module.\n \"\"\"\n \n @classmethod\n def train(cls, wr_path, corpus_file, out_path, size=100, window=15, symmetric=1, min_count=5, max_vocab_size=0,\n sgd_num=100, lrate=0.001, period=10, iter=91, epsilon=0.75, dump_period=10, reg=0, alpha=100,\n beta=99, loss='hinge', memory=4.0, cleanup_files=True, sorted_vocab=1, ensemble=0):\n \"\"\"\n `wr_path` is the path to the Wordrank directory.\n `corpus_file` is the filename of the text file to be used for training the Wordrank model.\n Expects file to contain space-separated tokens in a single line\n `out_path` is the path to directory which will be created to save embeddings and training data.\n `size` is the dimensionality of the feature vectors.\n `window` is the number of context words to the left (and to the right, if symmetric = 1).\n `symmetric` if 0, only use left context words, else use left and right both.\n `min_count` = ignore all words with total frequency lower than this.\n `max_vocab_size` upper bound on vocabulary size, i.e. keep the <int> most frequent words. 
Default is 0 for no limit.\n `sgd_num` number of SGD taken for each data point.\n `lrate` is the learning rate (too high diverges, give Nan).\n `period` is the period of xi variable updates\n `iter` = number of iterations (epochs) over the corpus.\n `epsilon` is the power scaling value for weighting function.\n `dump_period` is the period after which embeddings should be dumped.\n `reg` is the value of regularization parameter.\n `alpha` is the alpha parameter of gamma distribution.\n `beta` is the beta parameter of gamma distribution.\n `loss` = name of the loss (logistic, hinge).\n `memory` = soft limit for memory consumption, in GB.\n `cleanup_files` if True, delete directory and files used by this wrapper, setting to False can be useful for debugging\n `sorted_vocab` = if 1 (default), sort the vocabulary by descending frequency before assigning word indexes.\n `ensemble` = 0 (default), use ensemble of word and context vectors\n \"\"\"\n\n meta_data_path = 'matrix.meta'\n vocab_file = 'vocab.txt'\n temp_vocab_file = 'tempvocab.txt'\n cooccurrence_file = 'cooccurrence'\n cooccurrence_shuf_file = 'wiki.toy'\n meta_file = 'meta'\n\n # prepare training data (cooccurrence matrix and vocab)\n model_dir = os.path.join(wr_path, out_path)\n meta_dir = os.path.join(model_dir, 'meta')\n os.makedirs(meta_dir)\n logger.info(\"Dumped data will be stored in '%s'\", model_dir)\n copyfile(corpus_file, os.path.join(meta_dir, corpus_file.split('/')[-1]))\n os.chdir(meta_dir)\n\n cmd_vocab_count = ['../../glove/vocab_count', '-min-count', str(min_count), '-max-vocab', str(max_vocab_size)]\n cmd_cooccurence_count = ['../../glove/cooccur', '-memory', str(memory), '-vocab-file', temp_vocab_file, '-window-size', str(window), '-symmetric', str(symmetric)]\n cmd_shuffle_cooccurences = ['../../glove/shuffle', '-memory', str(memory)]\n cmd_del_vocab_freq = ['cut', '-d', \" \", '-f', '1', temp_vocab_file]\n\n commands = [cmd_vocab_count, cmd_cooccurence_count, cmd_shuffle_cooccurences]\n logger.info(\"Prepare training data using glove code '%s'\", commands)\n input_fnames = [corpus_file.split('/')[-1], corpus_file.split('/')[-1], cooccurrence_file]\n output_fnames = [temp_vocab_file, cooccurrence_file, cooccurrence_shuf_file]\n\n for command, input_fname, output_fname in zip(commands, input_fnames, output_fnames):\n with smart_open(input_fname, 'rb') as r:\n with smart_open(output_fname, 'wb') as w:\n utils.check_output(w, args=command, stdin=r)\n with smart_open(vocab_file, 'wb') as w:\n utils.check_output(w, args=cmd_del_vocab_freq)\n\n with smart_open(vocab_file, 'rb') as f:\n numwords = sum(1 for line in f)\n with smart_open(cooccurrence_shuf_file, 'rb') as f:\n numlines = sum(1 for line in f)\n with smart_open(meta_file, 'wb') as f:\n meta_info = \"{0} {1}\\n{2} {3}\\n{4} {5}\".format(numwords, numwords, numlines, cooccurrence_shuf_file, numwords, vocab_file)\n f.write(meta_info.encode('utf-8'))\n\n wr_args = {\n 'path': 'meta',\n 'nthread': multiprocessing.cpu_count(),\n 'sgd_num': sgd_num,\n 'lrate': lrate,\n 'period': period,\n 'iter': iter,\n 'epsilon': epsilon,\n 'dump_prefix': 'model',\n 'dump_period': dump_period,\n 'dim': size,\n 'reg': reg,\n 'alpha': alpha,\n 'beta': beta,\n 'loss': loss\n }\n\n os.chdir('..')\n # run wordrank executable with wr_args\n cmd = ['mpirun', '-np', '1', '../wordrank']\n for option, value in wr_args.items():\n cmd.append(\"--%s\" % option)\n cmd.append(str(value))\n logger.info(\"Running wordrank binary '%s'\", cmd)\n output = utils.check_output(args=cmd)\n\n # use 
embeddings from max. iteration's dump\n max_iter_dump = iter / dump_period * dump_period - 1\n copyfile('model_word_%d.txt' % max_iter_dump, 'wordrank.words')\n copyfile('model_context_%d.txt' % max_iter_dump, 'wordrank.contexts')\n model = cls.load_wordrank_model('wordrank.words', os.path.join('meta', vocab_file), 'wordrank.contexts', sorted_vocab, ensemble)\n os.chdir('../..')\n\n if cleanup_files:\n rmtree(model_dir)\n return model\n\n @classmethod\n def load_wordrank_model(cls, model_file, vocab_file=None, context_file=None, sorted_vocab=1, ensemble=1):\n glove2word2vec(model_file, model_file+'.w2vformat')\n model = cls.load_word2vec_format('%s.w2vformat' % model_file)\n if ensemble and context_file:\n model.ensemble_embedding(model_file, context_file)\n if sorted_vocab and vocab_file:\n model.sort_embeddings(vocab_file)\n return model\n\n def sort_embeddings(self, vocab_file):\n \"\"\"Sort embeddings according to word frequency.\"\"\"\n counts = {}\n vocab_size = len(self.vocab)\n prev_syn0 = copy.deepcopy(self.syn0)\n prev_vocab = copy.deepcopy(self.vocab)\n self.index2word = []\n\n # sort embeddings using frequency sorted vocab file in wordrank\n with utils.smart_open(vocab_file) as fin:\n for index, line in enumerate(fin):\n word, count = utils.to_unicode(line).strip(), vocab_size - index\n # store word with it's count in a dict\n counts[word] = int(count)\n # build new index2word with frequency sorted words\n self.index2word.append(word)\n assert len(self.index2word) == vocab_size, 'mismatch between vocab sizes'\n\n for word_id, word in enumerate(self.index2word):\n self.syn0[word_id] = prev_syn0[prev_vocab[word].index]\n self.vocab[word].index = word_id\n self.vocab[word].count = counts[word]\n\n def ensemble_embedding(self, word_embedding, context_embedding):\n \"\"\"Replace syn0 with the sum of context and word embeddings.\"\"\"\n glove2word2vec(context_embedding, context_embedding+'.w2vformat')\n w_emb = KeyedVectors.load_word2vec_format('%s.w2vformat' % word_embedding)\n c_emb = KeyedVectors.load_word2vec_format('%s.w2vformat' % context_embedding)\n # compare vocab words using keys of dict vocab\n assert set(w_emb.vocab) == set(c_emb.vocab), 'Vocabs are not same for both embeddings'\n\n # sort context embedding to have words in same order as word embedding\n prev_c_emb = copy.deepcopy(c_emb.syn0)\n for word_id, word in enumerate(w_emb.index2word):\n c_emb.syn0[word_id] = prev_c_emb[c_emb.vocab[word].index]\n # add vectors of the two embeddings\n new_emb = w_emb.syn0 + c_emb.syn0\n self.syn0 = new_emb\n return new_emb\n\n", "path": "gensim/models/wrappers/wordrank.py"}]} | 3,682 | 548 |
gh_patches_debug_17437 | rasdani/github-patches | git_diff | GoogleCloudPlatform__PerfKitBenchmarker-922 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
iperf benchmark has race condition in saving process id
It's possible for iperf to save the wrong server id if two copies of the benchmark are running on the same machine. Instead of using `pgrep -n`, use `$!` to get the process id.
</issue>
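A minimal, standalone sketch of the `$!` idea the issue suggests (illustrative only — it uses `sleep` in place of iperf and plain `subprocess` in place of the benchmark's `RemoteCommand`):

```python
# Illustrative sketch, not PerfKitBenchmarker code: capture the PID of the
# exact background process we just started instead of pattern-matching a
# process name with "pgrep -n" (which can pick up another iperf instance).
import subprocess

stdout = subprocess.check_output(
    "nohup sleep 30 >/dev/null 2>&1 & echo $!",  # $! expands to the PID of that job
    shell=True, executable="/bin/bash",
)
server_pid = stdout.decode().strip()
print("background server pid:", server_pid)
```

The patch recorded further down in this entry takes the same approach, echoing `$!` from the `nohup iperf ... &` command so the stdout returned by `RemoteCommand` already contains the server's PID.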
<code>
[start of perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py]
1 # Copyright 2014 PerfKitBenchmarker Authors. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Runs plain Iperf.
16
17 Docs:
18 http://iperf.fr/
19
20 Runs Iperf to collect network throughput.
21 """
22
23 import logging
24 import re
25
26 from perfkitbenchmarker import configs
27 from perfkitbenchmarker import flags
28 from perfkitbenchmarker import sample
29 from perfkitbenchmarker import vm_util
30
31 flags.DEFINE_integer('iperf_sending_thread_count', 1,
32 'Number of connections to make to the '
33 'server for sending traffic.',
34 lower_bound=1)
35 flags.DEFINE_integer('iperf_runtime_in_seconds', 60,
36 'Number of seconds to run iperf.',
37 lower_bound=1)
38
39 FLAGS = flags.FLAGS
40
41 BENCHMARK_NAME = 'iperf'
42 BENCHMARK_CONFIG = """
43 iperf:
44 description: Run iperf
45 vm_groups:
46 vm_1:
47 vm_spec: *default_single_core
48 vm_2:
49 vm_spec: *default_single_core
50 """
51
52 IPERF_PORT = 20000
53 IPERF_RETRIES = 5
54
55
56 def GetConfig(user_config):
57 return configs.LoadConfig(BENCHMARK_CONFIG, user_config, BENCHMARK_NAME)
58
59
60 def Prepare(benchmark_spec):
61 """Install iperf and start the server on all machines.
62
63 Args:
64 benchmark_spec: The benchmark specification. Contains all data that is
65 required to run the benchmark.
66 """
67 vms = benchmark_spec.vms
68 if len(vms) != 2:
69 raise ValueError(
70 'iperf benchmark requires exactly two machines, found {0}'.format(len(
71 vms)))
72
73 for vm in vms:
74 vm.Install('iperf')
75 if vm_util.ShouldRunOnExternalIpAddress():
76 vm.AllowPort(IPERF_PORT)
77 vm.RemoteCommand('nohup iperf --server --port %s &> /dev/null &' %
78 IPERF_PORT)
79 stdout, _ = vm.RemoteCommand('pgrep -n iperf')
80 # TODO store this in a better place once we have a better place
81 vm.iperf_server_pid = stdout.strip()
82
83
84 @vm_util.Retry(max_retries=IPERF_RETRIES)
85 def _RunIperf(sending_vm, receiving_vm, receiving_ip_address, ip_type):
86 """Run iperf using sending 'vm' to connect to 'ip_address'.
87
88 Args:
89 sending_vm: The VM sending traffic.
90 receiving_vm: The VM receiving traffic.
91 receiving_ip_address: The IP address of the iperf server (ie the receiver).
92 ip_type: The IP type of 'ip_address' (e.g. 'internal', 'external')
93 Returns:
94 A Sample.
95 """
96 iperf_cmd = ('iperf --client %s --port %s --format m --time %s -P %s' %
97 (receiving_ip_address, IPERF_PORT,
98 FLAGS.iperf_runtime_in_seconds,
99 FLAGS.iperf_sending_thread_count))
100 # the additional time on top of the iperf runtime is to account for the
101 # time it takes for the iperf process to start and exit
102 timeout_buffer = 30 + FLAGS.iperf_sending_thread_count
103 stdout, _ = sending_vm.RemoteCommand(iperf_cmd, should_log=True,
104 timeout=FLAGS.iperf_runtime_in_seconds +
105 timeout_buffer)
106
107 # Example output from iperf that needs to be parsed
108 # STDOUT: ------------------------------------------------------------
109 # Client connecting to 10.237.229.201, TCP port 5001
110 # TCP window size: 0.04 MByte (default)
111 # ------------------------------------------------------------
112 # [ 6] local 10.76.234.115 port 53527 connected with 10.237.229.201 port 5001
113 # [ 3] local 10.76.234.115 port 53524 connected with 10.237.229.201 port 5001
114 # [ 4] local 10.76.234.115 port 53525 connected with 10.237.229.201 port 5001
115 # [ 5] local 10.76.234.115 port 53526 connected with 10.237.229.201 port 5001
116 # [ ID] Interval Transfer Bandwidth
117 # [ 4] 0.0-60.0 sec 3730 MBytes 521.1 Mbits/sec
118 # [ 5] 0.0-60.0 sec 3499 MBytes 489 Mbits/sec
119 # [ 6] 0.0-60.0 sec 3044 MBytes 425 Mbits/sec
120 # [ 3] 0.0-60.0 sec 3738 MBytes 522 Mbits/sec
121 # [SUM] 0.0-60.0 sec 14010 MBytes 1957 Mbits/sec
122
123 thread_values = re.findall(r'\[SUM].*\s+(\d+\.?\d*).Mbits/sec', stdout)
124 if not thread_values:
125 # If there is no sum you have try and figure out an estimate
126 # which happens when threads start at different times. The code
127 # below will tend to overestimate a bit.
128 thread_values = re.findall('\[.*\d+\].*\s+(\d+\.?\d*).Mbits/sec', stdout)
129
130 if len(thread_values) != FLAGS.iperf_sending_thread_count:
131 raise ValueError('Only %s out of %s iperf threads reported a'
132 ' throughput value.' %
133 (len(thread_values), FLAGS.iperf_sending_thread_count))
134
135 total_throughput = 0.0
136 for value in thread_values:
137 total_throughput += float(value)
138
139 metadata = {
140 # The meta data defining the environment
141 'receiving_machine_type': receiving_vm.machine_type,
142 'receiving_zone': receiving_vm.zone,
143 'sending_machine_type': sending_vm.machine_type,
144 'sending_thread_count': FLAGS.iperf_sending_thread_count,
145 'sending_zone': sending_vm.zone,
146 'runtime_in_seconds': FLAGS.iperf_runtime_in_seconds,
147 'ip_type': ip_type
148 }
149 return sample.Sample('Throughput', total_throughput, 'Mbits/sec', metadata)
150
151
152 def Run(benchmark_spec):
153 """Run iperf on the target vm.
154
155 Args:
156 benchmark_spec: The benchmark specification. Contains all data that is
157 required to run the benchmark.
158
159 Returns:
160 A list of sample.Sample objects.
161 """
162 vms = benchmark_spec.vms
163 results = []
164
165 logging.info('Iperf Results:')
166
167 # Send traffic in both directions
168 for sending_vm, receiving_vm in vms, reversed(vms):
169 # Send using external IP addresses
170 if vm_util.ShouldRunOnExternalIpAddress():
171 results.append(_RunIperf(sending_vm,
172 receiving_vm,
173 receiving_vm.ip_address,
174 'external'))
175
176 # Send using internal IP addresses
177 if vm_util.ShouldRunOnInternalIpAddress(sending_vm,
178 receiving_vm):
179 results.append(_RunIperf(sending_vm,
180 receiving_vm,
181 receiving_vm.internal_ip,
182 'internal'))
183
184 return results
185
186
187 def Cleanup(benchmark_spec):
188 """Cleanup iperf on the target vm (by uninstalling).
189
190 Args:
191 benchmark_spec: The benchmark specification. Contains all data that is
192 required to run the benchmark.
193 """
194 vms = benchmark_spec.vms
195 for vm in vms:
196 vm.RemoteCommand('kill -9 ' + vm.iperf_server_pid)
197
[end of perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py b/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py
--- a/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py
+++ b/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py
@@ -74,9 +74,8 @@
vm.Install('iperf')
if vm_util.ShouldRunOnExternalIpAddress():
vm.AllowPort(IPERF_PORT)
- vm.RemoteCommand('nohup iperf --server --port %s &> /dev/null &' %
- IPERF_PORT)
- stdout, _ = vm.RemoteCommand('pgrep -n iperf')
+ stdout, _ = vm.RemoteCommand(('nohup iperf --server --port %s &> /dev/null'
+ '& echo $!') % IPERF_PORT)
# TODO store this in a better place once we have a better place
vm.iperf_server_pid = stdout.strip()
@@ -193,4 +192,4 @@
"""
vms = benchmark_spec.vms
for vm in vms:
- vm.RemoteCommand('kill -9 ' + vm.iperf_server_pid)
+ vm.RemoteCommand('kill -9 ' + vm.iperf_server_pid, ignore_failure=True)
| {"golden_diff": "diff --git a/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py b/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py\n--- a/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py\n+++ b/perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py\n@@ -74,9 +74,8 @@\n vm.Install('iperf')\n if vm_util.ShouldRunOnExternalIpAddress():\n vm.AllowPort(IPERF_PORT)\n- vm.RemoteCommand('nohup iperf --server --port %s &> /dev/null &' %\n- IPERF_PORT)\n- stdout, _ = vm.RemoteCommand('pgrep -n iperf')\n+ stdout, _ = vm.RemoteCommand(('nohup iperf --server --port %s &> /dev/null'\n+ '& echo $!') % IPERF_PORT)\n # TODO store this in a better place once we have a better place\n vm.iperf_server_pid = stdout.strip()\n \n@@ -193,4 +192,4 @@\n \"\"\"\n vms = benchmark_spec.vms\n for vm in vms:\n- vm.RemoteCommand('kill -9 ' + vm.iperf_server_pid)\n+ vm.RemoteCommand('kill -9 ' + vm.iperf_server_pid, ignore_failure=True)\n", "issue": "iperf benchmark has race condition in saving process id\nIt's possible for iperf to save the wrong server id if two copies of the benchmark are running on the same machine. Instead of using `pgrep -n`, use `$!` to get the process id.\n\n", "before_files": [{"content": "# Copyright 2014 PerfKitBenchmarker Authors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Runs plain Iperf.\n\nDocs:\nhttp://iperf.fr/\n\nRuns Iperf to collect network throughput.\n\"\"\"\n\nimport logging\nimport re\n\nfrom perfkitbenchmarker import configs\nfrom perfkitbenchmarker import flags\nfrom perfkitbenchmarker import sample\nfrom perfkitbenchmarker import vm_util\n\nflags.DEFINE_integer('iperf_sending_thread_count', 1,\n 'Number of connections to make to the '\n 'server for sending traffic.',\n lower_bound=1)\nflags.DEFINE_integer('iperf_runtime_in_seconds', 60,\n 'Number of seconds to run iperf.',\n lower_bound=1)\n\nFLAGS = flags.FLAGS\n\nBENCHMARK_NAME = 'iperf'\nBENCHMARK_CONFIG = \"\"\"\niperf:\n description: Run iperf\n vm_groups:\n vm_1:\n vm_spec: *default_single_core\n vm_2:\n vm_spec: *default_single_core\n\"\"\"\n\nIPERF_PORT = 20000\nIPERF_RETRIES = 5\n\n\ndef GetConfig(user_config):\n return configs.LoadConfig(BENCHMARK_CONFIG, user_config, BENCHMARK_NAME)\n\n\ndef Prepare(benchmark_spec):\n \"\"\"Install iperf and start the server on all machines.\n\n Args:\n benchmark_spec: The benchmark specification. 
Contains all data that is\n required to run the benchmark.\n \"\"\"\n vms = benchmark_spec.vms\n if len(vms) != 2:\n raise ValueError(\n 'iperf benchmark requires exactly two machines, found {0}'.format(len(\n vms)))\n\n for vm in vms:\n vm.Install('iperf')\n if vm_util.ShouldRunOnExternalIpAddress():\n vm.AllowPort(IPERF_PORT)\n vm.RemoteCommand('nohup iperf --server --port %s &> /dev/null &' %\n IPERF_PORT)\n stdout, _ = vm.RemoteCommand('pgrep -n iperf')\n # TODO store this in a better place once we have a better place\n vm.iperf_server_pid = stdout.strip()\n\n\n@vm_util.Retry(max_retries=IPERF_RETRIES)\ndef _RunIperf(sending_vm, receiving_vm, receiving_ip_address, ip_type):\n \"\"\"Run iperf using sending 'vm' to connect to 'ip_address'.\n\n Args:\n sending_vm: The VM sending traffic.\n receiving_vm: The VM receiving traffic.\n receiving_ip_address: The IP address of the iperf server (ie the receiver).\n ip_type: The IP type of 'ip_address' (e.g. 'internal', 'external')\n Returns:\n A Sample.\n \"\"\"\n iperf_cmd = ('iperf --client %s --port %s --format m --time %s -P %s' %\n (receiving_ip_address, IPERF_PORT,\n FLAGS.iperf_runtime_in_seconds,\n FLAGS.iperf_sending_thread_count))\n # the additional time on top of the iperf runtime is to account for the\n # time it takes for the iperf process to start and exit\n timeout_buffer = 30 + FLAGS.iperf_sending_thread_count\n stdout, _ = sending_vm.RemoteCommand(iperf_cmd, should_log=True,\n timeout=FLAGS.iperf_runtime_in_seconds +\n timeout_buffer)\n\n # Example output from iperf that needs to be parsed\n # STDOUT: ------------------------------------------------------------\n # Client connecting to 10.237.229.201, TCP port 5001\n # TCP window size: 0.04 MByte (default)\n # ------------------------------------------------------------\n # [ 6] local 10.76.234.115 port 53527 connected with 10.237.229.201 port 5001\n # [ 3] local 10.76.234.115 port 53524 connected with 10.237.229.201 port 5001\n # [ 4] local 10.76.234.115 port 53525 connected with 10.237.229.201 port 5001\n # [ 5] local 10.76.234.115 port 53526 connected with 10.237.229.201 port 5001\n # [ ID] Interval Transfer Bandwidth\n # [ 4] 0.0-60.0 sec 3730 MBytes 521.1 Mbits/sec\n # [ 5] 0.0-60.0 sec 3499 MBytes 489 Mbits/sec\n # [ 6] 0.0-60.0 sec 3044 MBytes 425 Mbits/sec\n # [ 3] 0.0-60.0 sec 3738 MBytes 522 Mbits/sec\n # [SUM] 0.0-60.0 sec 14010 MBytes 1957 Mbits/sec\n\n thread_values = re.findall(r'\\[SUM].*\\s+(\\d+\\.?\\d*).Mbits/sec', stdout)\n if not thread_values:\n # If there is no sum you have try and figure out an estimate\n # which happens when threads start at different times. The code\n # below will tend to overestimate a bit.\n thread_values = re.findall('\\[.*\\d+\\].*\\s+(\\d+\\.?\\d*).Mbits/sec', stdout)\n\n if len(thread_values) != FLAGS.iperf_sending_thread_count:\n raise ValueError('Only %s out of %s iperf threads reported a'\n ' throughput value.' 
%\n (len(thread_values), FLAGS.iperf_sending_thread_count))\n\n total_throughput = 0.0\n for value in thread_values:\n total_throughput += float(value)\n\n metadata = {\n # The meta data defining the environment\n 'receiving_machine_type': receiving_vm.machine_type,\n 'receiving_zone': receiving_vm.zone,\n 'sending_machine_type': sending_vm.machine_type,\n 'sending_thread_count': FLAGS.iperf_sending_thread_count,\n 'sending_zone': sending_vm.zone,\n 'runtime_in_seconds': FLAGS.iperf_runtime_in_seconds,\n 'ip_type': ip_type\n }\n return sample.Sample('Throughput', total_throughput, 'Mbits/sec', metadata)\n\n\ndef Run(benchmark_spec):\n \"\"\"Run iperf on the target vm.\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n\n Returns:\n A list of sample.Sample objects.\n \"\"\"\n vms = benchmark_spec.vms\n results = []\n\n logging.info('Iperf Results:')\n\n # Send traffic in both directions\n for sending_vm, receiving_vm in vms, reversed(vms):\n # Send using external IP addresses\n if vm_util.ShouldRunOnExternalIpAddress():\n results.append(_RunIperf(sending_vm,\n receiving_vm,\n receiving_vm.ip_address,\n 'external'))\n\n # Send using internal IP addresses\n if vm_util.ShouldRunOnInternalIpAddress(sending_vm,\n receiving_vm):\n results.append(_RunIperf(sending_vm,\n receiving_vm,\n receiving_vm.internal_ip,\n 'internal'))\n\n return results\n\n\ndef Cleanup(benchmark_spec):\n \"\"\"Cleanup iperf on the target vm (by uninstalling).\n\n Args:\n benchmark_spec: The benchmark specification. Contains all data that is\n required to run the benchmark.\n \"\"\"\n vms = benchmark_spec.vms\n for vm in vms:\n vm.RemoteCommand('kill -9 ' + vm.iperf_server_pid)\n", "path": "perfkitbenchmarker/linux_benchmarks/iperf_benchmark.py"}]} | 3,011 | 302 |
gh_patches_debug_19083 | rasdani/github-patches | git_diff | kubeflow__pipelines-1595 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create VolumeOp from k8s_resource error
**What happened:**
When I run
```
 dsl.VolumeOp(k8s_resource=my_vpc)
```
```
Raises:
ValueError: if k8s_resource is provided along with other arguments
```
I think the reason is:
```
if "k8s_resource" in kwargs:
if resource_name or size or storage_class or modes or annotations:
raise ValueError("You cannot provide k8s_resource along with "
"other arguments.")
```
```
def __init__(self,
resource_name: str = None,
size: str = None,
storage_class: str = None,
modes: List[str] = VOLUME_MODE_RWM,
annotations: Dict[str, str] = None,
data_source=None,
**kwargs):
```
but `modes` has a default value (see `sdk/python/kfp/dsl/_volume_op.py`), so the `or modes` test in the check above is always true.
**What did you expect to happen:**
I think I should only need to pass `k8s_resource` (and a name) to it.
**What steps did you take:**
[A clear and concise description of what the bug is.]
**Anything else you would like to add:**
[Miscellaneous information that will assist in solving the issue.]
</issue>
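A minimal sketch of the pattern behind this report (illustrative only; the function names below are placeholders, not the real `VolumeOp` class): a truthy default for `modes` makes the "no other arguments" guard fire even when the caller passed nothing but `k8s_resource`, while defaulting to `None` and applying `modes or VOLUME_MODE_RWM` afterwards — the approach taken in the patch recorded further down — avoids that.

```python
# Illustrative sketch only -- simplified stand-ins for the real VolumeOp check.
VOLUME_MODE_RWM = ["ReadWriteMany"]

def guard_with_truthy_default(k8s_resource=None, modes=VOLUME_MODE_RWM):
    if modes:  # always truthy because of the default -> spurious ValueError
        raise ValueError("You cannot provide k8s_resource along with other arguments.")

def guard_with_none_default(k8s_resource=None, modes=None):
    if modes:  # only truthy when the caller actually passed modes
        raise ValueError("You cannot provide k8s_resource along with other arguments.")
    return modes or VOLUME_MODE_RWM  # apply the default afterwards

try:
    guard_with_truthy_default(k8s_resource=object())
except ValueError as exc:
    print("buggy guard:", exc)

print("fixed guard ->", guard_with_none_default(k8s_resource=object()))
```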
<code>
[start of sdk/python/kfp/dsl/_volume_op.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 import re
17 from typing import List, Dict
18 from kubernetes.client.models import (
19 V1ObjectMeta, V1ResourceRequirements, V1PersistentVolumeClaimSpec,
20 V1PersistentVolumeClaim, V1TypedLocalObjectReference
21 )
22
23 from ._resource_op import ResourceOp
24 from ._pipeline_param import (
25 PipelineParam, match_serialized_pipelineparam, sanitize_k8s_name
26 )
27 from ._pipeline_volume import PipelineVolume
28
29
30 VOLUME_MODE_RWO = ["ReadWriteOnce"]
31 VOLUME_MODE_RWM = ["ReadWriteMany"]
32 VOLUME_MODE_ROM = ["ReadOnlyMany"]
33
34
35 class VolumeOp(ResourceOp):
36 """Represents an op which will be translated into a resource template
37 which will be creating a PVC.
38 """
39
40 def __init__(self,
41 resource_name: str = None,
42 size: str = None,
43 storage_class: str = None,
44 modes: List[str] = VOLUME_MODE_RWM,
45 annotations: Dict[str, str] = None,
46 data_source=None,
47 **kwargs):
48 """Create a new instance of VolumeOp.
49
50 Args:
51 resource_name: A desired name for the PVC which will be created
52 size: The size of the PVC which will be created
53 storage_class: The storage class to use for the dynamically created
54 PVC
55 modes: The access modes for the PVC
56 annotations: Annotations to be patched in the PVC
57 data_source: May be a V1TypedLocalObjectReference, and then it is
58 used in the data_source field of the PVC as is. Can also be a
59 string/PipelineParam, and in that case it will be used as a
60 VolumeSnapshot name (Alpha feature)
61 kwargs: See ResourceOp definition
62 Raises:
63 ValueError: if k8s_resource is provided along with other arguments
64 if k8s_resource is not a V1PersistentVolumeClaim
65 if size is None
66 if size is an invalid memory string (when not a
67 PipelineParam)
68 if data_source is not one of (str, PipelineParam,
69 V1TypedLocalObjectReference)
70 """
71 # Add size to attribute outputs
72 self.attribute_outputs = {"size": "{.status.capacity.storage}"}
73
74 if "k8s_resource" in kwargs:
75 if resource_name or size or storage_class or modes or annotations:
76 raise ValueError("You cannot provide k8s_resource along with "
77 "other arguments.")
78 if not isinstance(kwargs["k8s_resource"], V1PersistentVolumeClaim):
79 raise ValueError("k8s_resource in VolumeOp must be an instance"
80 " of V1PersistentVolumeClaim")
81 super().__init__(**kwargs)
82 self.volume = PipelineVolume(
83 name=sanitize_k8s_name(self.name),
84 pvc=self.outputs["name"]
85 )
86 return
87
88 if not size:
89 raise ValueError("Please provide size")
90 elif not match_serialized_pipelineparam(str(size)):
91 self._validate_memory_string(size)
92
93 if data_source and not isinstance(
94 data_source, (str, PipelineParam, V1TypedLocalObjectReference)):
95 raise ValueError("data_source can be one of (str, PipelineParam, "
96 "V1TypedLocalObjectReference).")
97 if data_source and isinstance(data_source, (str, PipelineParam)):
98 data_source = V1TypedLocalObjectReference(
99 api_group="snapshot.storage.k8s.io",
100 kind="VolumeSnapshot",
101 name=data_source
102 )
103
104 # Set the k8s_resource
105 if not match_serialized_pipelineparam(str(resource_name)):
106 resource_name = sanitize_k8s_name(resource_name)
107 pvc_metadata = V1ObjectMeta(
108 name="{{workflow.name}}-%s" % resource_name,
109 annotations=annotations
110 )
111 requested_resources = V1ResourceRequirements(
112 requests={"storage": size}
113 )
114 pvc_spec = V1PersistentVolumeClaimSpec(
115 access_modes=modes,
116 resources=requested_resources,
117 storage_class_name=storage_class,
118 data_source=data_source
119 )
120 k8s_resource = V1PersistentVolumeClaim(
121 api_version="v1",
122 kind="PersistentVolumeClaim",
123 metadata=pvc_metadata,
124 spec=pvc_spec
125 )
126
127 super().__init__(
128 k8s_resource=k8s_resource,
129 **kwargs,
130 )
131 self.volume = PipelineVolume(
132 name=sanitize_k8s_name(self.name),
133 pvc=self.outputs["name"]
134 )
135
136 def _validate_memory_string(self, memory_string):
137 """Validate a given string is valid for memory request or limit."""
138 if re.match(r'^[0-9]+(E|Ei|P|Pi|T|Ti|G|Gi|M|Mi|K|Ki){0,1}$',
139 memory_string) is None:
140 raise ValueError('Invalid memory string. Should be an integer, ' +
141 'or integer followed by one of ' +
142 '"E|Ei|P|Pi|T|Ti|G|Gi|M|Mi|K|Ki"')
143
[end of sdk/python/kfp/dsl/_volume_op.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sdk/python/kfp/dsl/_volume_op.py b/sdk/python/kfp/dsl/_volume_op.py
--- a/sdk/python/kfp/dsl/_volume_op.py
+++ b/sdk/python/kfp/dsl/_volume_op.py
@@ -41,7 +41,7 @@
resource_name: str = None,
size: str = None,
storage_class: str = None,
- modes: List[str] = VOLUME_MODE_RWM,
+ modes: List[str] = None,
annotations: Dict[str, str] = None,
data_source=None,
**kwargs):
@@ -112,7 +112,7 @@
requests={"storage": size}
)
pvc_spec = V1PersistentVolumeClaimSpec(
- access_modes=modes,
+ access_modes=modes or VOLUME_MODE_RWM,
resources=requested_resources,
storage_class_name=storage_class,
data_source=data_source
| {"golden_diff": "diff --git a/sdk/python/kfp/dsl/_volume_op.py b/sdk/python/kfp/dsl/_volume_op.py\n--- a/sdk/python/kfp/dsl/_volume_op.py\n+++ b/sdk/python/kfp/dsl/_volume_op.py\n@@ -41,7 +41,7 @@\n resource_name: str = None,\n size: str = None,\n storage_class: str = None,\n- modes: List[str] = VOLUME_MODE_RWM,\n+ modes: List[str] = None,\n annotations: Dict[str, str] = None,\n data_source=None,\n **kwargs):\n@@ -112,7 +112,7 @@\n requests={\"storage\": size}\n )\n pvc_spec = V1PersistentVolumeClaimSpec(\n- access_modes=modes,\n+ access_modes=modes or VOLUME_MODE_RWM,\n resources=requested_resources,\n storage_class_name=storage_class,\n data_source=data_source\n", "issue": "Crate VolumeOP from k8s_resource error\n**What happened:**\r\nwhen I run\r\n```\r\n dsl.VolumeOP(k8s_resource=my_vpc)\r\n```\r\n\r\n```\r\n Raises:\r\n ValueError: if k8s_resource is provided along with other arguments\r\n```\r\n\r\n\r\nI think the reason is :\r\n``` \r\n if \"k8s_resource\" in kwargs:\r\n if resource_name or size or storage_class or modes or annotations:\r\n raise ValueError(\"You cannot provide k8s_resource along with \"\r\n \"other arguments.\")\r\n```\r\n```\r\n def __init__(self,\r\n resource_name: str = None,\r\n size: str = None,\r\n storage_class: str = None,\r\n modes: List[str] = VOLUME_MODE_RWM,\r\n annotations: Dict[str, str] = None,\r\n data_source=None,\r\n **kwargs):\r\n```\r\n\r\nbut the mode has a default value\r\nsdk/python/kfp/dsl/_volume_op.py\r\n**What did you expect to happen:**\r\nI think, I should only put k8s_resource and name in it.\r\n**What steps did you take:**\r\n[A clear and concise description of what the bug is.]\r\n\r\n**Anything else you would like to add:**\r\n[Miscellaneous information that will assist in solving the issue.]\r\n\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nimport re\nfrom typing import List, Dict\nfrom kubernetes.client.models import (\n V1ObjectMeta, V1ResourceRequirements, V1PersistentVolumeClaimSpec,\n V1PersistentVolumeClaim, V1TypedLocalObjectReference\n)\n\nfrom ._resource_op import ResourceOp\nfrom ._pipeline_param import (\n PipelineParam, match_serialized_pipelineparam, sanitize_k8s_name\n)\nfrom ._pipeline_volume import PipelineVolume\n\n\nVOLUME_MODE_RWO = [\"ReadWriteOnce\"]\nVOLUME_MODE_RWM = [\"ReadWriteMany\"]\nVOLUME_MODE_ROM = [\"ReadOnlyMany\"]\n\n\nclass VolumeOp(ResourceOp):\n \"\"\"Represents an op which will be translated into a resource template\n which will be creating a PVC.\n \"\"\"\n\n def __init__(self,\n resource_name: str = None,\n size: str = None,\n storage_class: str = None,\n modes: List[str] = VOLUME_MODE_RWM,\n annotations: Dict[str, str] = None,\n data_source=None,\n **kwargs):\n \"\"\"Create a new instance of VolumeOp.\n\n Args:\n resource_name: A desired name for the PVC which will be created\n size: The size of the PVC which will be created\n storage_class: The storage class to use for the dynamically created\n PVC\n modes: The access 
modes for the PVC\n annotations: Annotations to be patched in the PVC\n data_source: May be a V1TypedLocalObjectReference, and then it is\n used in the data_source field of the PVC as is. Can also be a\n string/PipelineParam, and in that case it will be used as a\n VolumeSnapshot name (Alpha feature)\n kwargs: See ResourceOp definition\n Raises:\n ValueError: if k8s_resource is provided along with other arguments\n if k8s_resource is not a V1PersistentVolumeClaim\n if size is None\n if size is an invalid memory string (when not a\n PipelineParam)\n if data_source is not one of (str, PipelineParam,\n V1TypedLocalObjectReference)\n \"\"\"\n # Add size to attribute outputs\n self.attribute_outputs = {\"size\": \"{.status.capacity.storage}\"}\n\n if \"k8s_resource\" in kwargs:\n if resource_name or size or storage_class or modes or annotations:\n raise ValueError(\"You cannot provide k8s_resource along with \"\n \"other arguments.\")\n if not isinstance(kwargs[\"k8s_resource\"], V1PersistentVolumeClaim):\n raise ValueError(\"k8s_resource in VolumeOp must be an instance\"\n \" of V1PersistentVolumeClaim\")\n super().__init__(**kwargs)\n self.volume = PipelineVolume(\n name=sanitize_k8s_name(self.name),\n pvc=self.outputs[\"name\"]\n )\n return\n\n if not size:\n raise ValueError(\"Please provide size\")\n elif not match_serialized_pipelineparam(str(size)):\n self._validate_memory_string(size)\n\n if data_source and not isinstance(\n data_source, (str, PipelineParam, V1TypedLocalObjectReference)):\n raise ValueError(\"data_source can be one of (str, PipelineParam, \"\n \"V1TypedLocalObjectReference).\")\n if data_source and isinstance(data_source, (str, PipelineParam)):\n data_source = V1TypedLocalObjectReference(\n api_group=\"snapshot.storage.k8s.io\",\n kind=\"VolumeSnapshot\",\n name=data_source\n )\n\n # Set the k8s_resource\n if not match_serialized_pipelineparam(str(resource_name)):\n resource_name = sanitize_k8s_name(resource_name)\n pvc_metadata = V1ObjectMeta(\n name=\"{{workflow.name}}-%s\" % resource_name,\n annotations=annotations\n )\n requested_resources = V1ResourceRequirements(\n requests={\"storage\": size}\n )\n pvc_spec = V1PersistentVolumeClaimSpec(\n access_modes=modes,\n resources=requested_resources,\n storage_class_name=storage_class,\n data_source=data_source\n )\n k8s_resource = V1PersistentVolumeClaim(\n api_version=\"v1\",\n kind=\"PersistentVolumeClaim\",\n metadata=pvc_metadata,\n spec=pvc_spec\n )\n\n super().__init__(\n k8s_resource=k8s_resource,\n **kwargs,\n )\n self.volume = PipelineVolume(\n name=sanitize_k8s_name(self.name),\n pvc=self.outputs[\"name\"]\n )\n\n def _validate_memory_string(self, memory_string):\n \"\"\"Validate a given string is valid for memory request or limit.\"\"\"\n if re.match(r'^[0-9]+(E|Ei|P|Pi|T|Ti|G|Gi|M|Mi|K|Ki){0,1}$',\n memory_string) is None:\n raise ValueError('Invalid memory string. Should be an integer, ' +\n 'or integer followed by one of ' +\n '\"E|Ei|P|Pi|T|Ti|G|Gi|M|Mi|K|Ki\"')\n", "path": "sdk/python/kfp/dsl/_volume_op.py"}]} | 2,337 | 206 |
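
A note on the patch in the record above: the root cause is that `modes` defaulted to the module-level list `VOLUME_MODE_RWM`, so the constructor guard `if resource_name or size or storage_class or modes or annotations:` was always true and rejected `k8s_resource`-only construction. The sketch below isolates the None-default-plus-fallback idiom the golden diff applies; `resolve_modes` is a hypothetical helper used only for illustration, not part of the kfp API.

```python
VOLUME_MODE_RWM = ["ReadWriteMany"]  # module-level default, as in _volume_op.py


def resolve_modes(modes=None):
    # With None as the parameter default, an unset value stays falsy, so a
    # guard such as `if resource_name or size or modes:` no longer fires
    # when the caller only supplies k8s_resource.
    return modes or VOLUME_MODE_RWM


print(resolve_modes())                  # ['ReadWriteMany'], the old default
print(resolve_modes(["ReadOnlyMany"]))  # ['ReadOnlyMany']
```

The same reasoning explains the second hunk: once the parameter default becomes None, the fallback has to move to the point of use, hence `access_modes=modes or VOLUME_MODE_RWM`.
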
gh_patches_debug_27022 | rasdani/github-patches | git_diff | weecology__retriever-1577 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Clean up CLI to use the python interface
In the python interface, we use the function [def datasets(keywords=None, licenses=None):](https://github.com/weecology/retriever/blob/bb06180a030d34eafa2e1ea13b74a3719df827e1/retriever/lib/datasets.py#L5).
This should be able to return what we are doing in the main (CLI interface) at [line](https://github.com/weecology/retriever/blob/main/retriever/__main__.py#L105).
The results can then be printed in a way that looks good in the CLI (terminal).
_Originally posted by @henrykironde in https://github.com/weecology/retriever/issues/1570#issuecomment-810558636_
</issue>
<code>
[start of retriever/lib/get_opts.py]
1 import argparse
2
3 import argcomplete
4 from argcomplete.completers import ChoicesCompleter
5
6 from retriever.engines import engine_list
7 from retriever.lib.defaults import VERSION, RETRIEVER_REPOSITORY
8 from retriever.lib.scripts import SCRIPT_LIST, get_dataset_names_upstream
9
10 module_list = SCRIPT_LIST()
11 script_list = []
12 keywords_list = []
13 licenses_list = []
14
15 for module in module_list:
16 script_list.append(module.name)
17
18 if hasattr(module, "keywords"):
19 # Add list of keywords to keywords_list
20 if module.keywords:
21 keywords_list += module.keywords
22
23 if hasattr(module, "licenses"):
24 # Append string to list of licenses_list
25 if module.licenses:
26 for dict_items in module.licenses:
27 if dict_items['name']:
28 licenses_list.append(dict_items['name'])
29
30 script_list.extend(get_dataset_names_upstream(repo=RETRIEVER_REPOSITORY))
31 script_list.extend(get_dataset_names_upstream())
32 script_list = sorted(set(script_list))
33
34 # set of all possible licenses and keywords
35 licenses_options = set(licenses_list)
36 keywords_options = set(keywords_list)
37
38 parser = argparse.ArgumentParser(prog="retriever")
39 parser.add_argument('-v', '--version', action='version', version=VERSION)
40 parser.add_argument('-q',
41 '--quiet',
42 help='suppress command-line output',
43 action='store_true')
44
45 # ..............................................................
46 # subparsers
47 # ..............................................................
48
49 # retriever HELP
50 subparsers = parser.add_subparsers(help='sub-command help', dest='command')
51
52 # retriever download/install/update/new help
53 download_parser = subparsers.add_parser('download',
54 help='download raw data files for a dataset')
55 install_parser = subparsers.add_parser('install', help='download and install dataset')
56 default_parser = subparsers.add_parser('defaults', help='displays default options')
57 update_parser = subparsers.add_parser('update',
58 help='download updated versions of scripts')
59 new_parser = subparsers.add_parser('new', help='create a new sample retriever script')
60 autocreate_parser = subparsers.add_parser(
61 'autocreate', help='CLI to automatically create retriever scripts')
62 ls_parser = subparsers.add_parser('ls',
63 help='display a list all available dataset scripts')
64 citation_parser = subparsers.add_parser('citation', help='view citation')
65 license_parser = subparsers.add_parser('license', help='view dataset license')
66 reset_parser = subparsers.add_parser(
67 'reset',
68 help='reset retriever: removes configuration settings, scripts, and cached data')
69 help_parser = subparsers.add_parser('help', help='')
70 commit_parser = subparsers.add_parser('commit', help='commit a dataset')
71 commit_log_parser = subparsers.add_parser('log', help='see log of a committed dataset')
72
73 # ..............................................................
74 # subparsers with Arguments
75 # ..............................................................
76
77 citation_parser.add_argument('dataset',
78 help='dataset name',
79 nargs='?',
80 default=None,
81 choices=script_list + [None])
82 commit_parser.add_argument('dataset', help='dataset name', choices=script_list)
83 commit_parser.add_argument('-p',
84 '--path',
85 help='path to store committed file',
86 default=None,
87 required=False)
88 commit_parser.add_argument('-m',
89 '--message',
90 help='commit message',
91 default=None,
92 required=True,
93 type=str)
94 commit_log_parser.add_argument('dataset', help='dataset name', choices=script_list)
95 license_parser.add_argument('dataset',
96 help='dataset name',
97 nargs='?',
98 default=None,
99 choices=script_list + [None])
100 new_parser.add_argument('filename', help='new script filename')
101 reset_parser.add_argument('scope', help='things to reset: all, scripts or data').completer = \
102 ChoicesCompleter(script_list + ['all', 'scripts', 'data'])
103 install_parser.add_argument('--compile',
104 help='force re-compile of script before downloading',
105 action='store_true')
106 install_parser.add_argument('--debug', help='run in debug mode', action='store_true')
107 install_parser.add_argument('--not-cached',
108 help='overwrites local cache of raw data',
109 action='store_true')
110 download_parser.add_argument('--debug', help='run in debug mode', action='store_true')
111 download_parser.add_argument('--not-cached',
112 help='overwrites local cache of raw data',
113 action='store_true')
114 download_parser.add_argument('-b',
115 '--bbox',
116 nargs=4,
117 help='Set bounding box xmin, ymin, xmax, ymax',
118 required=False)
119
120 ls_parser.add_argument('-l', help='search datasets with specific license(s)',
121 nargs='+').completer = ChoicesCompleter(list(licenses_options))
122 ls_parser.add_argument('-k', help='search datasets with keyword(s)',
123 nargs='+').completer = ChoicesCompleter(list(keywords_options))
124 ls_parser.add_argument('-v',
125 help='verbose list of all datasets',
126 nargs='*',
127 default=False)
128
129 autocreate_parser.add_argument('path', help='path to the data file(s)')
130 autocreate_parser.add_argument('-dt',
131 help='datatype for files',
132 nargs='?',
133 default='tabular',
134 choices=['raster', 'vector', 'tabular'])
135 autocreate_parser.add_argument('-d',
136 help='turn a directory and subdirectories into scripts',
137 action='store_true')
138 autocreate_parser.add_argument('-e',
139 help='encoding of the source file',
140 nargs='?',
141 default='utf-8')
142 autocreate_parser.add_argument('-f', help='turn files into scripts', action='store_true')
143 autocreate_parser.add_argument('-o',
144 help='write scripts out to a designated directory',
145 nargs='?',
146 const='')
147 autocreate_parser.add_argument('--skip-lines',
148 help='skip a set number of lines before processing data',
149 nargs=1,
150 type=int)
151 # retriever Install {Engine} ..
152 # retriever download [options]
153 install_subparsers = install_parser.add_subparsers(help='engine-specific help',
154 dest='engine')
155
156 for engine in engine_list:
157 if engine.name == "Download Only":
158 # download engine follows, retriever download [dataset]
159 download_parser.add_argument(
160 'dataset', help='dataset name').completer = ChoicesCompleter(script_list)
161 else:
162 engine_parser = install_subparsers.add_parser(engine.abbreviation,
163 help=engine.name)
164 engine_parser.add_argument(
165 'dataset', help='dataset name').completer = ChoicesCompleter(script_list)
166 engine_parser.add_argument(
167 '--hash-value',
168 help='install dataset from provenance directory using hash value',
169 default=None,
170 required=False,
171 type=str)
172 engine_parser.add_argument('-b',
173 '--bbox',
174 nargs=4,
175 help='Set bounding box xmin, ymin, xmax, ymax',
176 required=False)
177 if engine.name == "JSON":
178 engine_parser.add_argument('-p',
179 '--pretty',
180 help='Add indentation to json file',
181 action='store_true',
182 required=False)
183
184 abbreviations = set('h')
185
186 for arg in engine.required_opts:
187 arg_name, help_msg, default = arg[:3]
188 potential_abbreviations = [char for char in arg_name if char not in abbreviations]
189 if potential_abbreviations:
190 abbreviation = potential_abbreviations[0]
191 abbreviations.add(abbreviation)
192 else:
193 abbreviation = '-%s' % arg_name
194
195 if engine.name == "Download Only" or abbreviation == "download":
196 # add attributes to Download only engine
197 download_parser.add_argument('--%s' % arg_name,
198 '-%s' % abbreviation,
199 help=help_msg,
200 nargs='?',
201 default=default)
202 else:
203 engine_parser.add_argument('--%s' % arg_name,
204 '-%s' % abbreviation,
205 help=help_msg,
206 nargs='?',
207 default=default)
208
209 argcomplete.autocomplete(parser)
210
[end of retriever/lib/get_opts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/retriever/lib/get_opts.py b/retriever/lib/get_opts.py
--- a/retriever/lib/get_opts.py
+++ b/retriever/lib/get_opts.py
@@ -31,9 +31,10 @@
script_list.extend(get_dataset_names_upstream())
script_list = sorted(set(script_list))
-# set of all possible licenses and keywords
+# set of all possible licenses, keywords and scripts
licenses_options = set(licenses_list)
keywords_options = set(keywords_list)
+scripts_options = script_list
parser = argparse.ArgumentParser(prog="retriever")
parser.add_argument('-v', '--version', action='version', version=VERSION)
@@ -121,10 +122,8 @@
nargs='+').completer = ChoicesCompleter(list(licenses_options))
ls_parser.add_argument('-k', help='search datasets with keyword(s)',
nargs='+').completer = ChoicesCompleter(list(keywords_options))
-ls_parser.add_argument('-v',
- help='verbose list of all datasets',
- nargs='*',
- default=False)
+ls_parser.add_argument('-v', help='verbose list of specified dataset(s)',
+ nargs='+').completer = ChoicesCompleter(list(scripts_options))
autocreate_parser.add_argument('path', help='path to the data file(s)')
autocreate_parser.add_argument('-dt',
| {"golden_diff": "diff --git a/retriever/lib/get_opts.py b/retriever/lib/get_opts.py\n--- a/retriever/lib/get_opts.py\n+++ b/retriever/lib/get_opts.py\n@@ -31,9 +31,10 @@\n script_list.extend(get_dataset_names_upstream())\n script_list = sorted(set(script_list))\n \n-# set of all possible licenses and keywords\n+# set of all possible licenses, keywords and scripts\n licenses_options = set(licenses_list)\n keywords_options = set(keywords_list)\n+scripts_options = script_list\n \n parser = argparse.ArgumentParser(prog=\"retriever\")\n parser.add_argument('-v', '--version', action='version', version=VERSION)\n@@ -121,10 +122,8 @@\n nargs='+').completer = ChoicesCompleter(list(licenses_options))\n ls_parser.add_argument('-k', help='search datasets with keyword(s)',\n nargs='+').completer = ChoicesCompleter(list(keywords_options))\n-ls_parser.add_argument('-v',\n- help='verbose list of all datasets',\n- nargs='*',\n- default=False)\n+ls_parser.add_argument('-v', help='verbose list of specified dataset(s)',\n+ nargs='+').completer = ChoicesCompleter(list(scripts_options))\n \n autocreate_parser.add_argument('path', help='path to the data file(s)')\n autocreate_parser.add_argument('-dt',\n", "issue": "Clean up CLI to use the python interface\n\r\nIn the python interface, we use the function [def datasets(keywords=None, licenses=None):](https://github.com/weecology/retriever/blob/bb06180a030d34eafa2e1ea13b74a3719df827e1/retriever/lib/datasets.py#L5).\r\nThis should be able to return what we are doing in the main (CLI interface) at [line](https://github.com/weecology/retriever/blob/main/retriever/__main__.py#L105).\r\n and the results can be printed in the away that looks good in CLI(terminal)\r\n\r\n_Originally posted by @henrykironde in https://github.com/weecology/retriever/issues/1570#issuecomment-810558636_\n", "before_files": [{"content": "import argparse\n\nimport argcomplete\nfrom argcomplete.completers import ChoicesCompleter\n\nfrom retriever.engines import engine_list\nfrom retriever.lib.defaults import VERSION, RETRIEVER_REPOSITORY\nfrom retriever.lib.scripts import SCRIPT_LIST, get_dataset_names_upstream\n\nmodule_list = SCRIPT_LIST()\nscript_list = []\nkeywords_list = []\nlicenses_list = []\n\nfor module in module_list:\n script_list.append(module.name)\n\n if hasattr(module, \"keywords\"):\n # Add list of keywords to keywords_list\n if module.keywords:\n keywords_list += module.keywords\n\n if hasattr(module, \"licenses\"):\n # Append string to list of licenses_list\n if module.licenses:\n for dict_items in module.licenses:\n if dict_items['name']:\n licenses_list.append(dict_items['name'])\n\nscript_list.extend(get_dataset_names_upstream(repo=RETRIEVER_REPOSITORY))\nscript_list.extend(get_dataset_names_upstream())\nscript_list = sorted(set(script_list))\n\n# set of all possible licenses and keywords\nlicenses_options = set(licenses_list)\nkeywords_options = set(keywords_list)\n\nparser = argparse.ArgumentParser(prog=\"retriever\")\nparser.add_argument('-v', '--version', action='version', version=VERSION)\nparser.add_argument('-q',\n '--quiet',\n help='suppress command-line output',\n action='store_true')\n\n# ..............................................................\n# subparsers\n# ..............................................................\n\n# retriever HELP\nsubparsers = parser.add_subparsers(help='sub-command help', dest='command')\n\n# retriever download/install/update/new help\ndownload_parser = subparsers.add_parser('download',\n help='download raw data files for a 
dataset')\ninstall_parser = subparsers.add_parser('install', help='download and install dataset')\ndefault_parser = subparsers.add_parser('defaults', help='displays default options')\nupdate_parser = subparsers.add_parser('update',\n help='download updated versions of scripts')\nnew_parser = subparsers.add_parser('new', help='create a new sample retriever script')\nautocreate_parser = subparsers.add_parser(\n 'autocreate', help='CLI to automatically create retriever scripts')\nls_parser = subparsers.add_parser('ls',\n help='display a list all available dataset scripts')\ncitation_parser = subparsers.add_parser('citation', help='view citation')\nlicense_parser = subparsers.add_parser('license', help='view dataset license')\nreset_parser = subparsers.add_parser(\n 'reset',\n help='reset retriever: removes configuration settings, scripts, and cached data')\nhelp_parser = subparsers.add_parser('help', help='')\ncommit_parser = subparsers.add_parser('commit', help='commit a dataset')\ncommit_log_parser = subparsers.add_parser('log', help='see log of a committed dataset')\n\n# ..............................................................\n# subparsers with Arguments\n# ..............................................................\n\ncitation_parser.add_argument('dataset',\n help='dataset name',\n nargs='?',\n default=None,\n choices=script_list + [None])\ncommit_parser.add_argument('dataset', help='dataset name', choices=script_list)\ncommit_parser.add_argument('-p',\n '--path',\n help='path to store committed file',\n default=None,\n required=False)\ncommit_parser.add_argument('-m',\n '--message',\n help='commit message',\n default=None,\n required=True,\n type=str)\ncommit_log_parser.add_argument('dataset', help='dataset name', choices=script_list)\nlicense_parser.add_argument('dataset',\n help='dataset name',\n nargs='?',\n default=None,\n choices=script_list + [None])\nnew_parser.add_argument('filename', help='new script filename')\nreset_parser.add_argument('scope', help='things to reset: all, scripts or data').completer = \\\n ChoicesCompleter(script_list + ['all', 'scripts', 'data'])\ninstall_parser.add_argument('--compile',\n help='force re-compile of script before downloading',\n action='store_true')\ninstall_parser.add_argument('--debug', help='run in debug mode', action='store_true')\ninstall_parser.add_argument('--not-cached',\n help='overwrites local cache of raw data',\n action='store_true')\ndownload_parser.add_argument('--debug', help='run in debug mode', action='store_true')\ndownload_parser.add_argument('--not-cached',\n help='overwrites local cache of raw data',\n action='store_true')\ndownload_parser.add_argument('-b',\n '--bbox',\n nargs=4,\n help='Set bounding box xmin, ymin, xmax, ymax',\n required=False)\n\nls_parser.add_argument('-l', help='search datasets with specific license(s)',\n nargs='+').completer = ChoicesCompleter(list(licenses_options))\nls_parser.add_argument('-k', help='search datasets with keyword(s)',\n nargs='+').completer = ChoicesCompleter(list(keywords_options))\nls_parser.add_argument('-v',\n help='verbose list of all datasets',\n nargs='*',\n default=False)\n\nautocreate_parser.add_argument('path', help='path to the data file(s)')\nautocreate_parser.add_argument('-dt',\n help='datatype for files',\n nargs='?',\n default='tabular',\n choices=['raster', 'vector', 'tabular'])\nautocreate_parser.add_argument('-d',\n help='turn a directory and subdirectories into scripts',\n action='store_true')\nautocreate_parser.add_argument('-e',\n help='encoding of 
the source file',\n nargs='?',\n default='utf-8')\nautocreate_parser.add_argument('-f', help='turn files into scripts', action='store_true')\nautocreate_parser.add_argument('-o',\n help='write scripts out to a designated directory',\n nargs='?',\n const='')\nautocreate_parser.add_argument('--skip-lines',\n help='skip a set number of lines before processing data',\n nargs=1,\n type=int)\n# retriever Install {Engine} ..\n# retriever download [options]\ninstall_subparsers = install_parser.add_subparsers(help='engine-specific help',\n dest='engine')\n\nfor engine in engine_list:\n if engine.name == \"Download Only\":\n # download engine follows, retriever download [dataset]\n download_parser.add_argument(\n 'dataset', help='dataset name').completer = ChoicesCompleter(script_list)\n else:\n engine_parser = install_subparsers.add_parser(engine.abbreviation,\n help=engine.name)\n engine_parser.add_argument(\n 'dataset', help='dataset name').completer = ChoicesCompleter(script_list)\n engine_parser.add_argument(\n '--hash-value',\n help='install dataset from provenance directory using hash value',\n default=None,\n required=False,\n type=str)\n engine_parser.add_argument('-b',\n '--bbox',\n nargs=4,\n help='Set bounding box xmin, ymin, xmax, ymax',\n required=False)\n if engine.name == \"JSON\":\n engine_parser.add_argument('-p',\n '--pretty',\n help='Add indentation to json file',\n action='store_true',\n required=False)\n\n abbreviations = set('h')\n\n for arg in engine.required_opts:\n arg_name, help_msg, default = arg[:3]\n potential_abbreviations = [char for char in arg_name if char not in abbreviations]\n if potential_abbreviations:\n abbreviation = potential_abbreviations[0]\n abbreviations.add(abbreviation)\n else:\n abbreviation = '-%s' % arg_name\n\n if engine.name == \"Download Only\" or abbreviation == \"download\":\n # add attributes to Download only engine\n download_parser.add_argument('--%s' % arg_name,\n '-%s' % abbreviation,\n help=help_msg,\n nargs='?',\n default=default)\n else:\n engine_parser.add_argument('--%s' % arg_name,\n '-%s' % abbreviation,\n help=help_msg,\n nargs='?',\n default=default)\n\nargcomplete.autocomplete(parser)\n", "path": "retriever/lib/get_opts.py"}]} | 2,920 | 293 |
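
A note on the patch above: the golden diff only touches argument parsing in `get_opts.py`, while the issue's broader request is that the listing logic in `__main__.py` delegate to the python interface. The sketch below shows one possible wiring. It assumes `datasets(keywords=None, licenses=None)` is importable from `retriever.lib.datasets`, as the issue's link suggests, and that the returned script objects expose a `name` attribute the way the modules iterated in `get_opts.py` do; both are assumptions, not the actual retriever API.

```python
# Hypothetical CLI handler, not code from the repository.
from retriever.lib.datasets import datasets  # assumed import path


def print_dataset_list(keywords=None, licenses=None):
    # Let the python interface do the filtering, then format for the terminal.
    for script in datasets(keywords=keywords, licenses=licenses):
        print(script.name)
```
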
gh_patches_debug_5240 | rasdani/github-patches | git_diff | pytorch__vision-2159 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NMS returns different results on CPU & CUDA
## 🐛 Bug
Hi, I noticed that the results of `torchvision.ops.nms` on CPU and CUDA have different values.
## To Reproduce
Steps to reproduce the behavior:
1. `docker run --runtime=nvidia -it pytorch/pytorch:1.5-cuda10.1-cudnn7-devel bash`
2. run this script:
```
import torch
import torchvision
import random
random.seed(0)
torch.manual_seed(0)
torch.backends.cudnn.deterministic = True
boxes_for_nms = torch.tensor([[0.3764, 0.0905, 0.6533, 0.4487],[0.3744, 0.0899, 0.6535, 0.4513],[0.3753, 0.0916, 0.6532, 0.4512]])
scores = torch.tensor([1., 1., 1.])
iou_threshold = 0.2
cpu_keep = torchvision.ops.nms(boxes_for_nms, scores, iou_threshold)
gpu_keep = torchvision.ops.nms(boxes_for_nms.to('cuda'), scores.to('cuda'), iou_threshold)
print(torch.__version__, torchvision.__version__)
print('cpu keep', cpu_keep)
print('gpu keep', gpu_keep)
print('cpu==gpu', int(cpu_keep)==int(gpu_keep))
```
3. output
```
1.5.0 0.6.0a0+82fd1c8
cpu keep tensor([0])
gpu keep tensor([2], device='cuda:0')
cpu==gpu False
```
## Expected behavior
`cpu==gpu True`
## Environment
```
PyTorch version: 1.5.0
Is debug build: No
CUDA used to build PyTorch: 10.1
OS: Ubuntu 18.04.3 LTS
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
CMake version: Could not collect
Python version: 3.7
Is CUDA available: Yes
CUDA runtime version: 10.1.243
GPU models and configuration: GPU 0: GeForce GTX 1060 with Max-Q Design
Nvidia driver version: 418.56
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5
Versions of relevant libraries:
[pip] numpy==1.18.1
[pip] torch==1.5.0
[pip] torchvision==0.6.0a0+82fd1c8
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.1.243 h6bb024c_0
[conda] mkl 2020.0 166
[conda] mkl-service 2.3.0 py37he904b0f_0
[conda] mkl_fft 1.0.15 py37ha843d7b_0
[conda] mkl_random 1.1.0 py37hd6b4f25_0
[conda] numpy 1.18.1 py37h4f9e942_0
[conda] numpy-base 1.18.1 py37hde5b4d6_1
[conda] pytorch 1.5.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch
[conda] torchvision 0.6.0 py37_cu101 pytorch
```
## Additional context
Testing on pytorch 1.4.0 and torchvision 0.5.0 also yields different results.
</issue>
<code>
[start of torchvision/ops/boxes.py]
1 import torch
2 from torch.jit.annotations import Tuple
3 from torch import Tensor
4 import torchvision
5
6
7 def nms(boxes, scores, iou_threshold):
8 # type: (Tensor, Tensor, float)
9 """
10 Performs non-maximum suppression (NMS) on the boxes according
11 to their intersection-over-union (IoU).
12
13 NMS iteratively removes lower scoring boxes which have an
14 IoU greater than iou_threshold with another (higher scoring)
15 box.
16
17 Parameters
18 ----------
19 boxes : Tensor[N, 4])
20 boxes to perform NMS on. They
21 are expected to be in (x1, y1, x2, y2) format
22 scores : Tensor[N]
23 scores for each one of the boxes
24 iou_threshold : float
25 discards all overlapping
26 boxes with IoU > iou_threshold
27
28 Returns
29 -------
30 keep : Tensor
31 int64 tensor with the indices
32 of the elements that have been kept
33 by NMS, sorted in decreasing order of scores
34 """
35 return torch.ops.torchvision.nms(boxes, scores, iou_threshold)
36
37
38 def batched_nms(boxes, scores, idxs, iou_threshold):
39 # type: (Tensor, Tensor, Tensor, float)
40 """
41 Performs non-maximum suppression in a batched fashion.
42
43 Each index value correspond to a category, and NMS
44 will not be applied between elements of different categories.
45
46 Parameters
47 ----------
48 boxes : Tensor[N, 4]
49 boxes where NMS will be performed. They
50 are expected to be in (x1, y1, x2, y2) format
51 scores : Tensor[N]
52 scores for each one of the boxes
53 idxs : Tensor[N]
54 indices of the categories for each one of the boxes.
55 iou_threshold : float
56 discards all overlapping boxes
57 with IoU > iou_threshold
58
59 Returns
60 -------
61 keep : Tensor
62 int64 tensor with the indices of
63 the elements that have been kept by NMS, sorted
64 in decreasing order of scores
65 """
66 if boxes.numel() == 0:
67 return torch.empty((0,), dtype=torch.int64, device=boxes.device)
68 # strategy: in order to perform NMS independently per class.
69 # we add an offset to all the boxes. The offset is dependent
70 # only on the class idx, and is large enough so that boxes
71 # from different classes do not overlap
72 max_coordinate = boxes.max()
73 offsets = idxs.to(boxes) * (max_coordinate + 1)
74 boxes_for_nms = boxes + offsets[:, None]
75 keep = nms(boxes_for_nms, scores, iou_threshold)
76 return keep
77
78
79 def remove_small_boxes(boxes, min_size):
80 # type: (Tensor, float)
81 """
82 Remove boxes which contains at least one side smaller than min_size.
83
84 Arguments:
85 boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format
86 min_size (float): minimum size
87
88 Returns:
89 keep (Tensor[K]): indices of the boxes that have both sides
90 larger than min_size
91 """
92 ws, hs = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]
93 keep = (ws >= min_size) & (hs >= min_size)
94 keep = keep.nonzero().squeeze(1)
95 return keep
96
97
98 def clip_boxes_to_image(boxes, size):
99 # type: (Tensor, Tuple[int, int])
100 """
101 Clip boxes so that they lie inside an image of size `size`.
102
103 Arguments:
104 boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format
105 size (Tuple[height, width]): size of the image
106
107 Returns:
108 clipped_boxes (Tensor[N, 4])
109 """
110 dim = boxes.dim()
111 boxes_x = boxes[..., 0::2]
112 boxes_y = boxes[..., 1::2]
113 height, width = size
114
115 if torchvision._is_tracing():
116 boxes_x = torch.max(boxes_x, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))
117 boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))
118 boxes_y = torch.max(boxes_y, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))
119 boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))
120 else:
121 boxes_x = boxes_x.clamp(min=0, max=width)
122 boxes_y = boxes_y.clamp(min=0, max=height)
123
124 clipped_boxes = torch.stack((boxes_x, boxes_y), dim=dim)
125 return clipped_boxes.reshape(boxes.shape)
126
127
128 def box_area(boxes):
129 """
130 Computes the area of a set of bounding boxes, which are specified by its
131 (x1, y1, x2, y2) coordinates.
132
133 Arguments:
134 boxes (Tensor[N, 4]): boxes for which the area will be computed. They
135 are expected to be in (x1, y1, x2, y2) format
136
137 Returns:
138 area (Tensor[N]): area for each box
139 """
140 return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
141
142
143 # implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py
144 # with slight modifications
145 def box_iou(boxes1, boxes2):
146 """
147 Return intersection-over-union (Jaccard index) of boxes.
148
149 Both sets of boxes are expected to be in (x1, y1, x2, y2) format.
150
151 Arguments:
152 boxes1 (Tensor[N, 4])
153 boxes2 (Tensor[M, 4])
154
155 Returns:
156 iou (Tensor[N, M]): the NxM matrix containing the pairwise
157 IoU values for every element in boxes1 and boxes2
158 """
159 area1 = box_area(boxes1)
160 area2 = box_area(boxes2)
161
162 lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]
163 rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]
164
165 wh = (rb - lt).clamp(min=0) # [N,M,2]
166 inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]
167
168 iou = inter / (area1[:, None] + area2 - inter)
169 return iou
170
[end of torchvision/ops/boxes.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/ops/boxes.py b/torchvision/ops/boxes.py
--- a/torchvision/ops/boxes.py
+++ b/torchvision/ops/boxes.py
@@ -14,6 +14,11 @@
IoU greater than iou_threshold with another (higher scoring)
box.
+ If multiple boxes have the exact same score and satisfy the IoU
+ criterion with respect to a reference box, the selected box is
+ not guaranteed to be the same between CPU and GPU. This is similar
+ to the behavior of argsort in PyTorch when repeated values are present.
+
Parameters
----------
boxes : Tensor[N, 4])
| {"golden_diff": "diff --git a/torchvision/ops/boxes.py b/torchvision/ops/boxes.py\n--- a/torchvision/ops/boxes.py\n+++ b/torchvision/ops/boxes.py\n@@ -14,6 +14,11 @@\n IoU greater than iou_threshold with another (higher scoring)\n box.\n \n+ If multiple boxes have the exact same score and satisfy the IoU \n+ criterion with respect to a reference box, the selected box is \n+ not guaranteed to be the same between CPU and GPU. This is similar \n+ to the behavior of argsort in PyTorch when repeated values are present.\n+\n Parameters\n ----------\n boxes : Tensor[N, 4])\n", "issue": "NMS returns different results on CPU & CUDA\n## \ud83d\udc1b Bug\r\n\r\nHi, I noticed that results of `torchvision.ops.nms` in CPU and CUDA have different value\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. `docker run --runtime=nvidia -it pytorch/pytorch:1.5-cuda10.1-cudnn7-devel bash`\r\n2. run this script :\r\n ```\r\nimport torch\r\nimport torchvision\r\nimport random\r\n\r\nrandom.seed(0)\r\ntorch.manual_seed(0)\r\ntorch.backends.cudnn.deterministic = True\r\n\r\nboxes_for_nms = torch.tensor([[0.3764, 0.0905, 0.6533, 0.4487],[0.3744, 0.0899, 0.6535, 0.4513],[0.3753, 0.0916, 0.6532, 0.4512]])\r\nscores = torch.tensor([1., 1., 1.])\r\niou_threshold = 0.2\r\n\r\ncpu_keep = torchvision.ops.nms(boxes_for_nms, scores, iou_threshold)\r\ngpu_keep = torchvision.ops.nms(boxes_for_nms.to('cuda'), scores.to('cuda'), iou_threshold)\r\n\r\nprint(torch.__version__, torchvision.__version__)\r\nprint('cpu keep', cpu_keep)\r\nprint('gpu keep', gpu_keep)\r\nprint('cpu==gpu', int(cpu_keep)==int(gpu_keep))\r\n ```\r\n3. output\r\n```\r\n1.5.0 0.6.0a0+82fd1c8\r\ncpu keep tensor([0])\r\ngpu keep tensor([2], device='cuda:0')\r\ncpu==gpu False\r\n```\r\n\r\n## Expected behavior\r\n\r\n`cpu==gpu True`\r\n\r\n## Environment\r\n\r\n```\r\nPyTorch version: 1.5.0\r\nIs debug build: No\r\nCUDA used to build PyTorch: 10.1\r\n\r\nOS: Ubuntu 18.04.3 LTS\r\nGCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0\r\nCMake version: Could not collect\r\n\r\nPython version: 3.7\r\nIs CUDA available: Yes\r\nCUDA runtime version: 10.1.243\r\nGPU models and configuration: GPU 0: GeForce GTX 1060 with Max-Q Design\r\nNvidia driver version: 418.56\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.5\r\n\r\nVersions of relevant libraries:\r\n[pip] numpy==1.18.1\r\n[pip] torch==1.5.0\r\n[pip] torchvision==0.6.0a0+82fd1c8\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 10.1.243 h6bb024c_0 \r\n[conda] mkl 2020.0 166 \r\n[conda] mkl-service 2.3.0 py37he904b0f_0 \r\n[conda] mkl_fft 1.0.15 py37ha843d7b_0 \r\n[conda] mkl_random 1.1.0 py37hd6b4f25_0 \r\n[conda] numpy 1.18.1 py37h4f9e942_0 \r\n[conda] numpy-base 1.18.1 py37hde5b4d6_1 \r\n[conda] pytorch 1.5.0 py3.7_cuda10.1.243_cudnn7.6.3_0 pytorch\r\n[conda] torchvision 0.6.0 py37_cu101 pytorch\r\n```\r\n\r\n## Additional context\r\n\r\ntesting on pytorch 1.4.0 and torchvision 0.5.0 also yields different result, \r\n\n", "before_files": [{"content": "import torch\nfrom torch.jit.annotations import Tuple\nfrom torch import Tensor\nimport torchvision\n\n\ndef nms(boxes, scores, iou_threshold):\n # type: (Tensor, Tensor, float)\n \"\"\"\n Performs non-maximum suppression (NMS) on the boxes according\n to their intersection-over-union (IoU).\n\n NMS iteratively removes lower scoring boxes which have an\n IoU greater than iou_threshold with another (higher scoring)\n box.\n\n Parameters\n ----------\n boxes : Tensor[N, 4])\n boxes to perform NMS on. 
They\n are expected to be in (x1, y1, x2, y2) format\n scores : Tensor[N]\n scores for each one of the boxes\n iou_threshold : float\n discards all overlapping\n boxes with IoU > iou_threshold\n\n Returns\n -------\n keep : Tensor\n int64 tensor with the indices\n of the elements that have been kept\n by NMS, sorted in decreasing order of scores\n \"\"\"\n return torch.ops.torchvision.nms(boxes, scores, iou_threshold)\n\n\ndef batched_nms(boxes, scores, idxs, iou_threshold):\n # type: (Tensor, Tensor, Tensor, float)\n \"\"\"\n Performs non-maximum suppression in a batched fashion.\n\n Each index value correspond to a category, and NMS\n will not be applied between elements of different categories.\n\n Parameters\n ----------\n boxes : Tensor[N, 4]\n boxes where NMS will be performed. They\n are expected to be in (x1, y1, x2, y2) format\n scores : Tensor[N]\n scores for each one of the boxes\n idxs : Tensor[N]\n indices of the categories for each one of the boxes.\n iou_threshold : float\n discards all overlapping boxes\n with IoU > iou_threshold\n\n Returns\n -------\n keep : Tensor\n int64 tensor with the indices of\n the elements that have been kept by NMS, sorted\n in decreasing order of scores\n \"\"\"\n if boxes.numel() == 0:\n return torch.empty((0,), dtype=torch.int64, device=boxes.device)\n # strategy: in order to perform NMS independently per class.\n # we add an offset to all the boxes. The offset is dependent\n # only on the class idx, and is large enough so that boxes\n # from different classes do not overlap\n max_coordinate = boxes.max()\n offsets = idxs.to(boxes) * (max_coordinate + 1)\n boxes_for_nms = boxes + offsets[:, None]\n keep = nms(boxes_for_nms, scores, iou_threshold)\n return keep\n\n\ndef remove_small_boxes(boxes, min_size):\n # type: (Tensor, float)\n \"\"\"\n Remove boxes which contains at least one side smaller than min_size.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format\n min_size (float): minimum size\n\n Returns:\n keep (Tensor[K]): indices of the boxes that have both sides\n larger than min_size\n \"\"\"\n ws, hs = boxes[:, 2] - boxes[:, 0], boxes[:, 3] - boxes[:, 1]\n keep = (ws >= min_size) & (hs >= min_size)\n keep = keep.nonzero().squeeze(1)\n return keep\n\n\ndef clip_boxes_to_image(boxes, size):\n # type: (Tensor, Tuple[int, int])\n \"\"\"\n Clip boxes so that they lie inside an image of size `size`.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes in (x1, y1, x2, y2) format\n size (Tuple[height, width]): size of the image\n\n Returns:\n clipped_boxes (Tensor[N, 4])\n \"\"\"\n dim = boxes.dim()\n boxes_x = boxes[..., 0::2]\n boxes_y = boxes[..., 1::2]\n height, width = size\n\n if torchvision._is_tracing():\n boxes_x = torch.max(boxes_x, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))\n boxes_x = torch.min(boxes_x, torch.tensor(width, dtype=boxes.dtype, device=boxes.device))\n boxes_y = torch.max(boxes_y, torch.tensor(0, dtype=boxes.dtype, device=boxes.device))\n boxes_y = torch.min(boxes_y, torch.tensor(height, dtype=boxes.dtype, device=boxes.device))\n else:\n boxes_x = boxes_x.clamp(min=0, max=width)\n boxes_y = boxes_y.clamp(min=0, max=height)\n\n clipped_boxes = torch.stack((boxes_x, boxes_y), dim=dim)\n return clipped_boxes.reshape(boxes.shape)\n\n\ndef box_area(boxes):\n \"\"\"\n Computes the area of a set of bounding boxes, which are specified by its\n (x1, y1, x2, y2) coordinates.\n\n Arguments:\n boxes (Tensor[N, 4]): boxes for which the area will be computed. 
They\n are expected to be in (x1, y1, x2, y2) format\n\n Returns:\n area (Tensor[N]): area for each box\n \"\"\"\n return (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])\n\n\n# implementation from https://github.com/kuangliu/torchcv/blob/master/torchcv/utils/box.py\n# with slight modifications\ndef box_iou(boxes1, boxes2):\n \"\"\"\n Return intersection-over-union (Jaccard index) of boxes.\n\n Both sets of boxes are expected to be in (x1, y1, x2, y2) format.\n\n Arguments:\n boxes1 (Tensor[N, 4])\n boxes2 (Tensor[M, 4])\n\n Returns:\n iou (Tensor[N, M]): the NxM matrix containing the pairwise\n IoU values for every element in boxes1 and boxes2\n \"\"\"\n area1 = box_area(boxes1)\n area2 = box_area(boxes2)\n\n lt = torch.max(boxes1[:, None, :2], boxes2[:, :2]) # [N,M,2]\n rb = torch.min(boxes1[:, None, 2:], boxes2[:, 2:]) # [N,M,2]\n\n wh = (rb - lt).clamp(min=0) # [N,M,2]\n inter = wh[:, :, 0] * wh[:, :, 1] # [N,M]\n\n iou = inter / (area1[:, None] + area2 - inter)\n return iou\n", "path": "torchvision/ops/boxes.py"}]} | 3,349 | 156 |
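
A note on the patch above: it is documentation-only because the reported difference is a tie-break rather than a wrong result. The snippet below reuses the boxes from the bug report together with `torchvision.ops.box_iou`, which is defined in the same `boxes.py`; it assumes a torchvision build that exports `box_iou` from `torchvision.ops`. Every off-diagonal IoU comes out around 0.98 to 0.99, far above the 0.2 threshold, and the scores in the report are identical, so NMS keeps exactly one box, and which index survives is exactly the unspecified tie-break the new docstring describes.

```python
import torch
from torchvision.ops import box_iou

boxes = torch.tensor([[0.3764, 0.0905, 0.6533, 0.4487],
                      [0.3744, 0.0899, 0.6535, 0.4513],
                      [0.3753, 0.0916, 0.6532, 0.4512]])

# The off-diagonal entries are all well above the 0.2 threshold used in
# the report, so with equal scores only one box can survive NMS.
print(box_iou(boxes, boxes))
```
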
gh_patches_debug_3637 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-3246 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
#3491 [mB] add video embed to interactive event
**URL:** https://meinberlin-dev.liqd.net/projekte/design-project/
**device & browser:** *Safari Version 14.0 (15610.1.28.1.9, 15610)*
**Comment/Question:**
*Just to confirm, the live stream field should appear just when the project is published? Cause, I can't select the live stream section before being published, otherwise all good*
<img width="1361" alt="Screenshot 2020-11-10 at 16 03 41" src="https://user-images.githubusercontent.com/59610786/98691968-e462ff80-236e-11eb-904b-755ff83b79cc.png">
<img width="1389" alt="Screenshot 2020-11-10 at 16 04 07" src="https://user-images.githubusercontent.com/59610786/98691978-e7f68680-236e-11eb-9a18-53ade0537fa8.png">
<img width="1330" alt="Screenshot 2020-11-10 at 16 04 24" src="https://user-images.githubusercontent.com/59610786/98691980-e927b380-236e-11eb-88a8-ad2c644e58df.png">
</issue>
<code>
[start of meinberlin/apps/livequestions/dashboard.py]
1 from django.urls import reverse
2 from django.utils.translation import ugettext_lazy as _
3
4 from adhocracy4.dashboard import DashboardComponent
5 from adhocracy4.dashboard import components
6
7 from . import views
8
9
10 class LiveStreamComponent(DashboardComponent):
11 identifier = 'live_stream'
12 weight = 20
13 label = _('Live Stream')
14
15 def is_effective(self, module):
16 module_app = module.phases[0].content().app
17 return (module_app == 'meinberlin_livequestions' and
18 not module.project.is_draft)
19
20 def get_progress(self, module):
21 return 0, 0
22
23 def get_base_url(self, module):
24 return reverse('a4dashboard:livequestions-livestream', kwargs={
25 'module_slug': module.slug,
26 })
27
28 def get_urls(self):
29 return [(
30 r'^modules/(?P<module_slug>[-\w_]+)/livestream/$',
31 views.LiveStreamDashboardView.as_view(component=self),
32 'livequestions-livestream'
33 )]
34
35
36 components.register_module(LiveStreamComponent())
37
[end of meinberlin/apps/livequestions/dashboard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/meinberlin/apps/livequestions/dashboard.py b/meinberlin/apps/livequestions/dashboard.py
--- a/meinberlin/apps/livequestions/dashboard.py
+++ b/meinberlin/apps/livequestions/dashboard.py
@@ -14,8 +14,7 @@
def is_effective(self, module):
module_app = module.phases[0].content().app
- return (module_app == 'meinberlin_livequestions' and
- not module.project.is_draft)
+ return (module_app == 'meinberlin_livequestions')
def get_progress(self, module):
return 0, 0
| {"golden_diff": "diff --git a/meinberlin/apps/livequestions/dashboard.py b/meinberlin/apps/livequestions/dashboard.py\n--- a/meinberlin/apps/livequestions/dashboard.py\n+++ b/meinberlin/apps/livequestions/dashboard.py\n@@ -14,8 +14,7 @@\n \n def is_effective(self, module):\n module_app = module.phases[0].content().app\n- return (module_app == 'meinberlin_livequestions' and\n- not module.project.is_draft)\n+ return (module_app == 'meinberlin_livequestions')\n \n def get_progress(self, module):\n return 0, 0\n", "issue": "#3491 [mB] add video embed to interactive event \n**URL:** https://meinberlin-dev.liqd.net/projekte/design-project/\r\n**device & browser:** *Safari Version 14.0 (15610.1.28.1.9, 15610)*\r\n**Comment/Question:** \r\n*Just to confirm, the live stream field should appear just when the project is published? Cause, I can't select the live stream section before being published, otherwise all good* \r\n\r\n<img width=\"1361\" alt=\"Screenshot 2020-11-10 at 16 03 41\" src=\"https://user-images.githubusercontent.com/59610786/98691968-e462ff80-236e-11eb-904b-755ff83b79cc.png\">\r\n<img width=\"1389\" alt=\"Screenshot 2020-11-10 at 16 04 07\" src=\"https://user-images.githubusercontent.com/59610786/98691978-e7f68680-236e-11eb-9a18-53ade0537fa8.png\">\r\n<img width=\"1330\" alt=\"Screenshot 2020-11-10 at 16 04 24\" src=\"https://user-images.githubusercontent.com/59610786/98691980-e927b380-236e-11eb-88a8-ad2c644e58df.png\">\r\n\r\n\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.dashboard import DashboardComponent\nfrom adhocracy4.dashboard import components\n\nfrom . import views\n\n\nclass LiveStreamComponent(DashboardComponent):\n identifier = 'live_stream'\n weight = 20\n label = _('Live Stream')\n\n def is_effective(self, module):\n module_app = module.phases[0].content().app\n return (module_app == 'meinberlin_livequestions' and\n not module.project.is_draft)\n\n def get_progress(self, module):\n return 0, 0\n\n def get_base_url(self, module):\n return reverse('a4dashboard:livequestions-livestream', kwargs={\n 'module_slug': module.slug,\n })\n\n def get_urls(self):\n return [(\n r'^modules/(?P<module_slug>[-\\w_]+)/livestream/$',\n views.LiveStreamDashboardView.as_view(component=self),\n 'livequestions-livestream'\n )]\n\n\ncomponents.register_module(LiveStreamComponent())\n", "path": "meinberlin/apps/livequestions/dashboard.py"}]} | 1,240 | 141 |
gh_patches_debug_11143 | rasdani/github-patches | git_diff | qtile__qtile-2811 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set version using importlib.metadata
<!--
Please do not ask general questions here! There are [community
contact](https://github.com/qtile/qtile#community) options for that.
If you are suggesting a new feature/enhancement please instead post it on the
discussions board as an idea: https://github.com/qtile/qtile/discussions/categories/ideas
-->
# Issue description
Currently, if setuptools is not installed on the system running qtile, qtile will run into issues on startup.
An Arch user reported this downstream: https://bugs.archlinux.org/task/71804
Apart from also guarding against `ModuleNotFoundError` I think it could be a great idea to [use importlib.metadata to provide qtile's version](https://docs.python.org/3.9/library/importlib.metadata.html?highlight=importlib%20metadata#distribution-versions) instead for newer python versions.
<!--
A brief discussion of what failed and how it failed. A description of
what you tried is helpful, i.e. "When I use lazy.kill() on a window I get
the following stack trace" instead of "Closing windows doesn't work".
-->
# Qtile version
0.18.1
# Stack traces
Copied verbatim from the issue reported downstream:
```
Traceback (most recent call last):
File "/usr/lib/python3.9/site-packages/libqtile/scripts/main.py", line 9, in <module>
import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/qtile", line 33, in <module>
sys.exit(load_entry_point('qtile==0.18.1.dev0+g8e7ecc0a.d20210719', 'console_scripts', 'qtile')())
File "/usr/bin/qtile", line 25, in importlib_load_entry_point
return next(matches).load()
File "/usr/lib/python3.9/importlib/metadata.py", line 77, in load
module = import_module(match.group('module'))
File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 850, in exec_module
File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
File "/usr/lib/python3.9/site-packages/libqtile/scripts/main.py", line 11, in <module>
except (pkg_resources.DistributionNotFound, ImportError):
NameError: name 'pkg_resources' is not defined
```
# Configuration
not important for this issue
</issue>
<code>
[start of libqtile/scripts/main.py]
1 import argparse
2 import logging
3 import sys
4
5 from libqtile.log_utils import init_log
6 from libqtile.scripts import check, cmd_obj, migrate, run_cmd, shell, start, top
7
8 try:
9 import pkg_resources
10 VERSION = pkg_resources.require("qtile")[0].version
11 except (pkg_resources.DistributionNotFound, ImportError):
12 VERSION = 'dev'
13
14
15 def main():
16 parent_parser = argparse.ArgumentParser(add_help=False)
17 parent_parser.add_argument(
18 '-l', '--log-level',
19 default='WARNING',
20 dest='log_level',
21 type=str.upper,
22 choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),
23 help='Set qtile log level'
24 )
25
26 main_parser = argparse.ArgumentParser(
27 prog='qtile',
28 description='A full-featured, pure-Python tiling window manager.',
29 )
30 main_parser.add_argument(
31 '-v', '--version',
32 action='version',
33 version=VERSION,
34 )
35
36 subparsers = main_parser.add_subparsers()
37 start.add_subcommand(subparsers, [parent_parser])
38 shell.add_subcommand(subparsers, [parent_parser])
39 top.add_subcommand(subparsers, [parent_parser])
40 run_cmd.add_subcommand(subparsers, [parent_parser])
41 cmd_obj.add_subcommand(subparsers, [parent_parser])
42 check.add_subcommand(subparsers, [parent_parser])
43 migrate.add_subcommand(subparsers, [parent_parser])
44
45 # `qtile help` should print help
46 def print_help(options):
47 main_parser.print_help()
48 help_ = subparsers.add_parser("help", help="Print help information and exit")
49 help_.set_defaults(func=print_help)
50
51 options = main_parser.parse_args()
52 try:
53 log_level = getattr(logging, options.log_level)
54 init_log(log_level=log_level, log_color=sys.stdout.isatty())
55 options.func(options)
56 except AttributeError:
57 main_parser.print_usage()
58 print("")
59 print("Did you mean:")
60 print(" ".join(sys.argv + ['start']))
61 sys.exit(1)
62
63
64 if __name__ == "__main__":
65 main()
66
[end of libqtile/scripts/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libqtile/scripts/main.py b/libqtile/scripts/main.py
--- a/libqtile/scripts/main.py
+++ b/libqtile/scripts/main.py
@@ -6,10 +6,16 @@
from libqtile.scripts import check, cmd_obj, migrate, run_cmd, shell, start, top
try:
- import pkg_resources
- VERSION = pkg_resources.require("qtile")[0].version
-except (pkg_resources.DistributionNotFound, ImportError):
- VERSION = 'dev'
+ # Python>3.7 can get the version from importlib
+ from importlib.metadata import distribution
+ VERSION = distribution("qtile").version
+except ModuleNotFoundError:
+ try:
+ # pkg_resources is required for 3.7
+ import pkg_resources
+ VERSION = pkg_resources.require("qtile")[0].version
+ except (pkg_resources.DistributionNotFound, ModuleNotFoundError):
+ VERSION = 'dev'
def main():
| {"golden_diff": "diff --git a/libqtile/scripts/main.py b/libqtile/scripts/main.py\n--- a/libqtile/scripts/main.py\n+++ b/libqtile/scripts/main.py\n@@ -6,10 +6,16 @@\n from libqtile.scripts import check, cmd_obj, migrate, run_cmd, shell, start, top\n \n try:\n- import pkg_resources\n- VERSION = pkg_resources.require(\"qtile\")[0].version\n-except (pkg_resources.DistributionNotFound, ImportError):\n- VERSION = 'dev'\n+ # Python>3.7 can get the version from importlib\n+ from importlib.metadata import distribution\n+ VERSION = distribution(\"qtile\").version\n+except ModuleNotFoundError:\n+ try:\n+ # pkg_resources is required for 3.7\n+ import pkg_resources\n+ VERSION = pkg_resources.require(\"qtile\")[0].version\n+ except (pkg_resources.DistributionNotFound, ModuleNotFoundError):\n+ VERSION = 'dev'\n \n \n def main():\n", "issue": "Set version using importlib.metadata\n<!--\r\nPlease do not ask general questions here! There are [community\r\ncontact](https://github.com/qtile/qtile#community) options for that.\r\n\r\nIf you are suggesting a new feature/enhancement please instead post it on the\r\ndiscussions board as an idea: https://github.com/qtile/qtile/discussions/categories/ideas\r\n-->\r\n\r\n# Issue description\r\n\r\nCurrently, if setuptools is not installed on the system running qtile, it will run into issues upon start.\r\nAn Arch user reported this downstream: https://bugs.archlinux.org/task/71804\r\n\r\nApart from also guarding against `ModuleNotFoundError` I think it could be a great idea to [use importlib.metadata to provide qtile's version](https://docs.python.org/3.9/library/importlib.metadata.html?highlight=importlib%20metadata#distribution-versions) instead for newer python versions.\r\n<!--\r\nA brief discussion of what failed and how it failed. A description of\r\nwhat you tried is helpful, i.e. 
\"When I use lazy.kill() on a window I get\r\nthe following stack trace\" instead of \"Closing windows doesn't work\".\r\n-->\r\n\r\n# Qtile version\r\n\r\n0.18.1\r\n\r\n# Stack traces\r\n\r\nCopied verbatim from the issue reported downstream:\r\n\r\n```\r\nTraceback (most recent call last):\r\nFile \"/usr/lib/python3.9/site-packages/libqtile/scripts/main.py\", line 9, in <module>\r\nimport pkg_resources\r\nModuleNotFoundError: No module named 'pkg_resources'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\nFile \"/usr/bin/qtile\", line 33, in <module>\r\nsys.exit(load_entry_point('qtile==0.18.1.dev0+g8e7ecc0a.d20210719', 'console_scripts', 'qtile')())\r\nFile \"/usr/bin/qtile\", line 25, in importlib_load_entry_point\r\nreturn next(matches).load()\r\nFile \"/usr/lib/python3.9/importlib/metadata.py\", line 77, in load\r\nmodule = import_module(match.group('module'))\r\nFile \"/usr/lib/python3.9/importlib/__init__.py\", line 127, in import_module\r\nreturn _bootstrap._gcd_import(name[level:], package, level)\r\nFile \"<frozen importlib._bootstrap>\", line 1030, in _gcd_import\r\nFile \"<frozen importlib._bootstrap>\", line 1007, in _find_and_load\r\nFile \"<frozen importlib._bootstrap>\", line 986, in _find_and_load_unlocked\r\nFile \"<frozen importlib._bootstrap>\", line 680, in _load_unlocked\r\nFile \"<frozen importlib._bootstrap_external>\", line 850, in exec_module\r\nFile \"<frozen importlib._bootstrap>\", line 228, in _call_with_frames_removed\r\nFile \"/usr/lib/python3.9/site-packages/libqtile/scripts/main.py\", line 11, in <module>\r\nexcept (pkg_resources.DistributionNotFound, ImportError):\r\nNameError: name 'pkg_resources' is not defined\r\n```\r\n\r\n# Configuration\r\n\r\nnot important for this issue\n", "before_files": [{"content": "import argparse\nimport logging\nimport sys\n\nfrom libqtile.log_utils import init_log\nfrom libqtile.scripts import check, cmd_obj, migrate, run_cmd, shell, start, top\n\ntry:\n import pkg_resources\n VERSION = pkg_resources.require(\"qtile\")[0].version\nexcept (pkg_resources.DistributionNotFound, ImportError):\n VERSION = 'dev'\n\n\ndef main():\n parent_parser = argparse.ArgumentParser(add_help=False)\n parent_parser.add_argument(\n '-l', '--log-level',\n default='WARNING',\n dest='log_level',\n type=str.upper,\n choices=('DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL'),\n help='Set qtile log level'\n )\n\n main_parser = argparse.ArgumentParser(\n prog='qtile',\n description='A full-featured, pure-Python tiling window manager.',\n )\n main_parser.add_argument(\n '-v', '--version',\n action='version',\n version=VERSION,\n )\n\n subparsers = main_parser.add_subparsers()\n start.add_subcommand(subparsers, [parent_parser])\n shell.add_subcommand(subparsers, [parent_parser])\n top.add_subcommand(subparsers, [parent_parser])\n run_cmd.add_subcommand(subparsers, [parent_parser])\n cmd_obj.add_subcommand(subparsers, [parent_parser])\n check.add_subcommand(subparsers, [parent_parser])\n migrate.add_subcommand(subparsers, [parent_parser])\n\n # `qtile help` should print help\n def print_help(options):\n main_parser.print_help()\n help_ = subparsers.add_parser(\"help\", help=\"Print help information and exit\")\n help_.set_defaults(func=print_help)\n\n options = main_parser.parse_args()\n try:\n log_level = getattr(logging, options.log_level)\n init_log(log_level=log_level, log_color=sys.stdout.isatty())\n options.func(options)\n except AttributeError:\n main_parser.print_usage()\n 
print(\"\")\n print(\"Did you mean:\")\n print(\" \".join(sys.argv + ['start']))\n sys.exit(1)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "libqtile/scripts/main.py"}]} | 1,797 | 215 |
gh_patches_debug_11452 | rasdani/github-patches | git_diff | cupy__cupy-1138 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cupy.random.permutation() overwrites its argument.
`cupy.random.permutation()` overwrites its argument.
This is incompatible with `numpy.random.permutation`.
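For reference, a tiny editor-added illustration of the NumPy behaviour the report expects CuPy to match (the argument is left untouched and a shuffled copy is returned):
```python
import numpy as np

a = np.arange(5)
b = np.random.permutation(a)      # returns a new permuted array
assert (a == np.arange(5)).all()  # `a` itself is not modified
```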
</issue>
<code>
[start of cupy/random/permutations.py]
1 from cupy.random import generator
2 import six
3
4
5 def shuffle(a):
6 """Shuffles an array.
7
8 Args:
9 a (cupy.ndarray): The array to be shuffled.
10
11 .. seealso:: :func:`numpy.random.shuffle`
12
13 """
14 rs = generator.get_random_state()
15 return rs.shuffle(a)
16
17
18 def permutation(a):
19 """Returns a permuted range or shuffles an array."""
20 if isinstance(a, six.integer_types):
21 rs = generator.get_random_state()
22 return rs.permutation(a)
23 else:
24 return shuffle(a)
25
[end of cupy/random/permutations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cupy/random/permutations.py b/cupy/random/permutations.py
--- a/cupy/random/permutations.py
+++ b/cupy/random/permutations.py
@@ -16,9 +16,20 @@
def permutation(a):
- """Returns a permuted range or shuffles an array."""
+ """Returns a permuted range or a permutation of an array.
+
+ Args:
+ a (int or cupy.ndarray): The range or the array to be shuffled.
+
+ Returns:
+ cupy.ndarray: If `a` is an integer, it is permutation range between 0
+ and `a` - 1.
+ Otherwise, it is a permutation of `a`.
+
+ .. seealso:: :func:`numpy.random.permutation`
+ """
+ rs = generator.get_random_state()
if isinstance(a, six.integer_types):
- rs = generator.get_random_state()
return rs.permutation(a)
else:
- return shuffle(a)
+ return a[rs.permutation(len(a))]
| {"golden_diff": "diff --git a/cupy/random/permutations.py b/cupy/random/permutations.py\n--- a/cupy/random/permutations.py\n+++ b/cupy/random/permutations.py\n@@ -16,9 +16,20 @@\n \n \n def permutation(a):\n- \"\"\"Returns a permuted range or shuffles an array.\"\"\"\n+ \"\"\"Returns a permuted range or a permutation of an array.\n+\n+ Args:\n+ a (int or cupy.ndarray): The range or the array to be shuffled.\n+\n+ Returns:\n+ cupy.ndarray: If `a` is an integer, it is permutation range between 0\n+ and `a` - 1.\n+ Otherwise, it is a permutation of `a`.\n+\n+ .. seealso:: :func:`numpy.random.permutation`\n+ \"\"\"\n+ rs = generator.get_random_state()\n if isinstance(a, six.integer_types):\n- rs = generator.get_random_state()\n return rs.permutation(a)\n else:\n- return shuffle(a)\n+ return a[rs.permutation(len(a))]\n", "issue": "cupy.random.permutation() overwrites its argument.\n`cupy.random.permutation()` overwrites its argument.\r\nThis is incompatible with `numpy.random.permutation`.\r\n\n", "before_files": [{"content": "from cupy.random import generator\nimport six\n\n\ndef shuffle(a):\n \"\"\"Shuffles an array.\n\n Args:\n a (cupy.ndarray): The array to be shuffled.\n\n .. seealso:: :func:`numpy.random.shuffle`\n\n \"\"\"\n rs = generator.get_random_state()\n return rs.shuffle(a)\n\n\ndef permutation(a):\n \"\"\"Returns a permuted range or shuffles an array.\"\"\"\n if isinstance(a, six.integer_types):\n rs = generator.get_random_state()\n return rs.permutation(a)\n else:\n return shuffle(a)\n", "path": "cupy/random/permutations.py"}]} | 734 | 228 |
gh_patches_debug_31686 | rasdani/github-patches | git_diff | translate__translate-4045 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
RemovedInTTK2Warning seems strange
There is ``RemovedInTTK2Warning``, which apparently was meant to flag features that will be removed in translate-toolkit 2. However, translate-toolkit 2 is already out and that removal did not happen :-).
Either RemovedInTTK2Warning should be renamed, as translate-toolkit 2 has already been released, or the deprecation should be applied.
However, quite a lot of the code seems to rely on that behavior.
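As a purely illustrative sketch (the new warning name and the alias are hypothetical, not a decision made in this issue), the rename option could keep existing code working like this:
```python
import warnings


class RemovedInTTK3Warning(DeprecationWarning):
    """Hypothetical replacement tied to the next major release."""


# Backwards-compatible alias so existing filters and except clauses keep working.
RemovedInTTK2Warning = RemovedInTTK3Warning

warnings.warn("this feature is deprecated", RemovedInTTK3Warning, stacklevel=2)
```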
</issue>
<code>
[start of translate/misc/multistring.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2006 Zuza Software Foundation
4 #
5 # This file is part of translate.
6 #
7 # translate is free software; you can redistribute it and/or modify
8 # it under the terms of the GNU General Public License as published by
9 # the Free Software Foundation; either version 2 of the License, or
10 # (at your option) any later version.
11 #
12 # translate is distributed in the hope that it will be useful,
13 # but WITHOUT ANY WARRANTY; without even the implied warranty of
14 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
15 # GNU General Public License for more details.
16 #
17 # You should have received a copy of the GNU General Public License
18 # along with this program; if not, see <http://www.gnu.org/licenses/>.
19
20 """Supports a hybrid Unicode string that can also have a list of alternate
21 strings in the strings attribute
22 """
23
24 import warnings
25
26
27 from .deprecation import RemovedInTTK2Warning
28
29
30 def _create_text_type(newtype, string, encoding):
31 """Helper to construct a text type out of characters or bytes. Required to
32 temporarily preserve backwards compatibility. Must be removed in TTK2.
33 """
34 if string is None:
35 string = ''
36 if isinstance(string, str):
37 return str.__new__(newtype, string)
38
39 warnings.warn(
40 'Passing non-ASCII bytes as well as the `encoding` argument to '
41 '`multistring` is deprecated. Always pass unicode characters instead.',
42 RemovedInTTK2Warning, stacklevel=2,
43 )
44 return str.__new__(newtype, string, encoding)
45
46
47 class multistring(str):
48
49 def __new__(newtype, string=u"", *args, **kwargs):
50 encoding = kwargs.pop('encoding', 'utf-8')
51 if isinstance(string, list):
52 if not string:
53 raise ValueError("multistring must contain at least one string")
54 newstring = _create_text_type(newtype, string[0], encoding)
55 newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]
56 else:
57 newstring = _create_text_type(newtype, string, encoding)
58 newstring.strings = [newstring]
59 return newstring
60
61 def __init__(self, *args, **kwargs):
62 super().__init__()
63 if not hasattr(self, "strings"):
64 self.strings = []
65
66 def __cmp__(self, otherstring):
67 def cmp_compat(s1, s2):
68 # Python 3 compatible cmp() equivalent
69 return (s1 > s2) - (s1 < s2)
70 if isinstance(otherstring, multistring):
71 parentcompare = cmp_compat(str(self), otherstring)
72 if parentcompare:
73 return parentcompare
74 else:
75 return cmp_compat(self.strings[1:], otherstring.strings[1:])
76 elif isinstance(otherstring, str):
77 return cmp_compat(str(self), otherstring)
78 elif isinstance(otherstring, bytes):
79 return cmp_compat(self.encode('utf-8'), otherstring)
80 elif isinstance(otherstring, list) and otherstring:
81 return cmp_compat(self, multistring(otherstring))
82 else:
83 return cmp_compat(str(type(self)), str(type(otherstring)))
84
85 def __hash__(self):
86 return hash(str(self))
87
88 def __ne__(self, otherstring):
89 return self.__cmp__(otherstring) != 0
90
91 def __eq__(self, otherstring):
92 return self.__cmp__(otherstring) == 0
93
94 def __repr__(self):
95 return u"multistring(%r)" % (
96 [str(item) for item in self.strings]
97 )
98
99 def replace(self, old, new, count=None):
100 if count is None:
101 newstr = multistring(super().replace(old, new))
102 else:
103 newstr = multistring(super().replace(old, new, count))
104 for s in self.strings[1:]:
105 if count is None:
106 newstr.strings.append(s.replace(old, new))
107 else:
108 newstr.strings.append(s.replace(old, new, count))
109 return newstr
110
[end of translate/misc/multistring.py]
[start of translate/misc/deprecation.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright 2014 Zuza Software Foundation
4 #
5 # This file is part of translate.
6 #
7 # translate is free software; you can redistribute it and/or modify it under
8 # the terms of the GNU General Public License as published by the Free Software
9 # Foundation; either version 2 of the License, or (at your option) any later
10 # version.
11 #
12 # translate is distributed in the hope that it will be useful, but WITHOUT ANY
13 # WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
14 # A PARTICULAR PURPOSE. See the GNU General Public License for more details.
15 #
16 # You should have received a copy of the GNU General Public License along with
17 # this program; if not, see <http://www.gnu.org/licenses/>.
18
19 import warnings
20 from functools import wraps
21
22
23 class RemovedInTTK2Warning(DeprecationWarning):
24 pass
25
26
27 def deprecated(message=""):
28 """Decorator that marks functions and methods as deprecated.
29
30 A warning will be emitted when the function or method is used. If a custom
31 message is provided, it will be shown after the default warning message.
32 """
33 def inner_render(func):
34 @wraps(func)
35 def new_func(*args, **kwargs):
36 msg = message # Hack to avoid UnboundLocalError.
37 if msg:
38 msg = "\n" + msg
39 func_code = func.__code__
40 warnings.warn_explicit(
41 "Call to deprecated function {0}.{1}".format(func.__name__,
42 msg),
43 category=DeprecationWarning,
44 filename=func_code.co_filename,
45 lineno=func_code.co_firstlineno + 1
46 )
47 return func(*args, **kwargs)
48 return new_func
49 return inner_render
50
[end of translate/misc/deprecation.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/translate/misc/deprecation.py b/translate/misc/deprecation.py
--- a/translate/misc/deprecation.py
+++ b/translate/misc/deprecation.py
@@ -20,10 +20,6 @@
from functools import wraps
-class RemovedInTTK2Warning(DeprecationWarning):
- pass
-
-
def deprecated(message=""):
"""Decorator that marks functions and methods as deprecated.
diff --git a/translate/misc/multistring.py b/translate/misc/multistring.py
--- a/translate/misc/multistring.py
+++ b/translate/misc/multistring.py
@@ -21,40 +21,17 @@
strings in the strings attribute
"""
-import warnings
-
-
-from .deprecation import RemovedInTTK2Warning
-
-
-def _create_text_type(newtype, string, encoding):
- """Helper to construct a text type out of characters or bytes. Required to
- temporarily preserve backwards compatibility. Must be removed in TTK2.
- """
- if string is None:
- string = ''
- if isinstance(string, str):
- return str.__new__(newtype, string)
-
- warnings.warn(
- 'Passing non-ASCII bytes as well as the `encoding` argument to '
- '`multistring` is deprecated. Always pass unicode characters instead.',
- RemovedInTTK2Warning, stacklevel=2,
- )
- return str.__new__(newtype, string, encoding)
-
class multistring(str):
- def __new__(newtype, string=u"", *args, **kwargs):
- encoding = kwargs.pop('encoding', 'utf-8')
+ def __new__(newtype, string=""):
if isinstance(string, list):
if not string:
raise ValueError("multistring must contain at least one string")
- newstring = _create_text_type(newtype, string[0], encoding)
+ newstring = str.__new__(newtype, string[0])
newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]
else:
- newstring = _create_text_type(newtype, string, encoding)
+ newstring = str.__new__(newtype, string)
newstring.strings = [newstring]
return newstring
| {"golden_diff": "diff --git a/translate/misc/deprecation.py b/translate/misc/deprecation.py\n--- a/translate/misc/deprecation.py\n+++ b/translate/misc/deprecation.py\n@@ -20,10 +20,6 @@\n from functools import wraps\n \n \n-class RemovedInTTK2Warning(DeprecationWarning):\n- pass\n-\n-\n def deprecated(message=\"\"):\n \"\"\"Decorator that marks functions and methods as deprecated.\n \ndiff --git a/translate/misc/multistring.py b/translate/misc/multistring.py\n--- a/translate/misc/multistring.py\n+++ b/translate/misc/multistring.py\n@@ -21,40 +21,17 @@\n strings in the strings attribute\n \"\"\"\n \n-import warnings\n-\n-\n-from .deprecation import RemovedInTTK2Warning\n-\n-\n-def _create_text_type(newtype, string, encoding):\n- \"\"\"Helper to construct a text type out of characters or bytes. Required to\n- temporarily preserve backwards compatibility. Must be removed in TTK2.\n- \"\"\"\n- if string is None:\n- string = ''\n- if isinstance(string, str):\n- return str.__new__(newtype, string)\n-\n- warnings.warn(\n- 'Passing non-ASCII bytes as well as the `encoding` argument to '\n- '`multistring` is deprecated. Always pass unicode characters instead.',\n- RemovedInTTK2Warning, stacklevel=2,\n- )\n- return str.__new__(newtype, string, encoding)\n-\n \n class multistring(str):\n \n- def __new__(newtype, string=u\"\", *args, **kwargs):\n- encoding = kwargs.pop('encoding', 'utf-8')\n+ def __new__(newtype, string=\"\"):\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n- newstring = _create_text_type(newtype, string[0], encoding)\n+ newstring = str.__new__(newtype, string[0])\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n- newstring = _create_text_type(newtype, string, encoding)\n+ newstring = str.__new__(newtype, string)\n newstring.strings = [newstring]\n return newstring\n", "issue": "RemovedInTTK2Warning seems strange\nThere is ``RemovedInTTK2Warning`` which apparently was meant to flag feature which will be removed in translate-toolkit 2. However it is already out and that did not happen :-).\r\n\r\nEither RemovedInTTK2Warning should be renamed as translate-toolkit 2 has already been released, or the deprecation should be applied.\r\n\r\nHowever it seems that quite a lot of the code seems to rely on that behavior.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2006 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify\n# it under the terms of the GNU General Public License as published by\n# the Free Software Foundation; either version 2 of the License, or\n# (at your option) any later version.\n#\n# translate is distributed in the hope that it will be useful,\n# but WITHOUT ANY WARRANTY; without even the implied warranty of\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n# GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License\n# along with this program; if not, see <http://www.gnu.org/licenses/>.\n\n\"\"\"Supports a hybrid Unicode string that can also have a list of alternate\nstrings in the strings attribute\n\"\"\"\n\nimport warnings\n\n\nfrom .deprecation import RemovedInTTK2Warning\n\n\ndef _create_text_type(newtype, string, encoding):\n \"\"\"Helper to construct a text type out of characters or bytes. Required to\n temporarily preserve backwards compatibility. 
Must be removed in TTK2.\n \"\"\"\n if string is None:\n string = ''\n if isinstance(string, str):\n return str.__new__(newtype, string)\n\n warnings.warn(\n 'Passing non-ASCII bytes as well as the `encoding` argument to '\n '`multistring` is deprecated. Always pass unicode characters instead.',\n RemovedInTTK2Warning, stacklevel=2,\n )\n return str.__new__(newtype, string, encoding)\n\n\nclass multistring(str):\n\n def __new__(newtype, string=u\"\", *args, **kwargs):\n encoding = kwargs.pop('encoding', 'utf-8')\n if isinstance(string, list):\n if not string:\n raise ValueError(\"multistring must contain at least one string\")\n newstring = _create_text_type(newtype, string[0], encoding)\n newstring.strings = [newstring] + [multistring.__new__(newtype, altstring) for altstring in string[1:]]\n else:\n newstring = _create_text_type(newtype, string, encoding)\n newstring.strings = [newstring]\n return newstring\n\n def __init__(self, *args, **kwargs):\n super().__init__()\n if not hasattr(self, \"strings\"):\n self.strings = []\n\n def __cmp__(self, otherstring):\n def cmp_compat(s1, s2):\n # Python 3 compatible cmp() equivalent\n return (s1 > s2) - (s1 < s2)\n if isinstance(otherstring, multistring):\n parentcompare = cmp_compat(str(self), otherstring)\n if parentcompare:\n return parentcompare\n else:\n return cmp_compat(self.strings[1:], otherstring.strings[1:])\n elif isinstance(otherstring, str):\n return cmp_compat(str(self), otherstring)\n elif isinstance(otherstring, bytes):\n return cmp_compat(self.encode('utf-8'), otherstring)\n elif isinstance(otherstring, list) and otherstring:\n return cmp_compat(self, multistring(otherstring))\n else:\n return cmp_compat(str(type(self)), str(type(otherstring)))\n\n def __hash__(self):\n return hash(str(self))\n\n def __ne__(self, otherstring):\n return self.__cmp__(otherstring) != 0\n\n def __eq__(self, otherstring):\n return self.__cmp__(otherstring) == 0\n\n def __repr__(self):\n return u\"multistring(%r)\" % (\n [str(item) for item in self.strings]\n )\n\n def replace(self, old, new, count=None):\n if count is None:\n newstr = multistring(super().replace(old, new))\n else:\n newstr = multistring(super().replace(old, new, count))\n for s in self.strings[1:]:\n if count is None:\n newstr.strings.append(s.replace(old, new))\n else:\n newstr.strings.append(s.replace(old, new, count))\n return newstr\n", "path": "translate/misc/multistring.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# Copyright 2014 Zuza Software Foundation\n#\n# This file is part of translate.\n#\n# translate is free software; you can redistribute it and/or modify it under\n# the terms of the GNU General Public License as published by the Free Software\n# Foundation; either version 2 of the License, or (at your option) any later\n# version.\n#\n# translate is distributed in the hope that it will be useful, but WITHOUT ANY\n# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR\n# A PARTICULAR PURPOSE. See the GNU General Public License for more details.\n#\n# You should have received a copy of the GNU General Public License along with\n# this program; if not, see <http://www.gnu.org/licenses/>.\n\nimport warnings\nfrom functools import wraps\n\n\nclass RemovedInTTK2Warning(DeprecationWarning):\n pass\n\n\ndef deprecated(message=\"\"):\n \"\"\"Decorator that marks functions and methods as deprecated.\n\n A warning will be emitted when the function or method is used. 
If a custom\n message is provided, it will be shown after the default warning message.\n \"\"\"\n def inner_render(func):\n @wraps(func)\n def new_func(*args, **kwargs):\n msg = message # Hack to avoid UnboundLocalError.\n if msg:\n msg = \"\\n\" + msg\n func_code = func.__code__\n warnings.warn_explicit(\n \"Call to deprecated function {0}.{1}\".format(func.__name__,\n msg),\n category=DeprecationWarning,\n filename=func_code.co_filename,\n lineno=func_code.co_firstlineno + 1\n )\n return func(*args, **kwargs)\n return new_func\n return inner_render\n", "path": "translate/misc/deprecation.py"}]} | 2,234 | 512 |
gh_patches_debug_64393 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider longhorn_steakhouse is broken
During the global build at 2021-10-20-14-42-48, spider **longhorn_steakhouse** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/logs/longhorn_steakhouse.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/longhorn_steakhouse.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/longhorn_steakhouse.geojson))
</issue>
<code>
[start of locations/spiders/longhorn_steakhouse.py]
1 # -*- coding: utf-8 -*-
2 import json
3 import re
4
5 import scrapy
6
7 from locations.items import GeojsonPointItem
8 from locations.hours import OpeningHours
9
10
11 class LongHornSteakhouseSpider(scrapy.Spider):
12 name = "longhorn_steakhouse"
13 item_attributes = {'brand': 'LongHorn Steakhouse', 'brand_wikidata': "Q3259007"}
14 allowed_domains = []
15 start_urls = [
16 'https://www.longhornsteakhouse.com/locations-sitemap.xml',
17 ]
18 custom_settings = {
19 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
20 }
21 download_delay = 5
22
23 def parse_hours(self, hours):
24 opening_hours = OpeningHours()
25
26 for hour in hours:
27 day, open_close = hour.split(' ')
28 open_time, close_time = open_close.split('-')
29 opening_hours.add_range(day=day, open_time=open_time, close_time=close_time, time_format='%H:%M')
30 return opening_hours.as_opening_hours()
31
32 def parse(self, response):
33 response.selector.remove_namespaces()
34 urls = response.xpath('//url/loc/text()').extract()
35 for url in urls:
36 yield scrapy.Request(url=url, callback=self.parse_store)
37
38 def parse_store(self, response):
39 store_data = response.xpath('//script[@type="application/ld+json" and contains(text(), "streetAddress")]/text()').extract_first()
40 if store_data:
41 data = json.loads(store_data)
42 ref = re.search(r'.+/(.+?)/?(?:\.html|$)', response.url).group(1)
43
44 # Handle store pages that are missing the application/ld+json data
45 addr, city_state_zip, phone = response.xpath('//p[@id="info-link-webhead"]/text()').extract()
46 city, state, postcode = re.search(r'(.*?),\s([A-Z]{2})\s([\d-]+)$', city_state_zip).groups()
47
48 properties = {
49 'name': data.get("name") or response.xpath('//h1[@class="style_h1"]/text()').extract_first().strip(),
50 'ref': data["branchCode"] or ref,
51 'addr_full': data["address"]["streetAddress"].strip() or addr.strip(),
52 'city': data["address"]["addressLocality"] or city,
53 'state': data["address"]["addressRegion"] or state,
54 'postcode': data["address"]["postalCode"] or postcode,
55 'country': data["address"]["addressCountry"],
56 'phone': data.get("telephone") or phone.strip(),
57 'website': data.get("url") or response.url,
58 'lat': float(data["geo"]["latitude"]),
59 'lon': float(data["geo"]["longitude"]),
60 }
61
62 hours = data.get("openingHours")
63 if hours:
64 store_hours = self.parse_hours(hours)
65 properties["opening_hours"] = store_hours
66
67 yield GeojsonPointItem(**properties)
68
[end of locations/spiders/longhorn_steakhouse.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/longhorn_steakhouse.py b/locations/spiders/longhorn_steakhouse.py
--- a/locations/spiders/longhorn_steakhouse.py
+++ b/locations/spiders/longhorn_steakhouse.py
@@ -18,7 +18,7 @@
custom_settings = {
'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',
}
- download_delay = 5
+ download_delay = 1
def parse_hours(self, hours):
opening_hours = OpeningHours()
| {"golden_diff": "diff --git a/locations/spiders/longhorn_steakhouse.py b/locations/spiders/longhorn_steakhouse.py\n--- a/locations/spiders/longhorn_steakhouse.py\n+++ b/locations/spiders/longhorn_steakhouse.py\n@@ -18,7 +18,7 @@\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n- download_delay = 5\n+ download_delay = 1\n \n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n", "issue": "Spider longhorn_steakhouse is broken\nDuring the global build at 2021-10-20-14-42-48, spider **longhorn_steakhouse** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/logs/longhorn_steakhouse.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/longhorn_steakhouse.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-20-14-42-48/output/longhorn_steakhouse.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass LongHornSteakhouseSpider(scrapy.Spider):\n name = \"longhorn_steakhouse\"\n item_attributes = {'brand': 'LongHorn Steakhouse', 'brand_wikidata': \"Q3259007\"}\n allowed_domains = []\n start_urls = [\n 'https://www.longhornsteakhouse.com/locations-sitemap.xml',\n ]\n custom_settings = {\n 'USER_AGENT': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36',\n }\n download_delay = 5\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n day, open_close = hour.split(' ')\n open_time, close_time = open_close.split('-')\n opening_hours.add_range(day=day, open_time=open_time, close_time=close_time, time_format='%H:%M')\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n response.selector.remove_namespaces()\n urls = response.xpath('//url/loc/text()').extract()\n for url in urls:\n yield scrapy.Request(url=url, callback=self.parse_store)\n\n def parse_store(self, response):\n store_data = response.xpath('//script[@type=\"application/ld+json\" and contains(text(), \"streetAddress\")]/text()').extract_first()\n if store_data:\n data = json.loads(store_data)\n ref = re.search(r'.+/(.+?)/?(?:\\.html|$)', response.url).group(1)\n\n # Handle store pages that are missing the application/ld+json data\n addr, city_state_zip, phone = response.xpath('//p[@id=\"info-link-webhead\"]/text()').extract()\n city, state, postcode = re.search(r'(.*?),\\s([A-Z]{2})\\s([\\d-]+)$', city_state_zip).groups()\n\n properties = {\n 'name': data.get(\"name\") or response.xpath('//h1[@class=\"style_h1\"]/text()').extract_first().strip(),\n 'ref': data[\"branchCode\"] or ref,\n 'addr_full': data[\"address\"][\"streetAddress\"].strip() or addr.strip(),\n 'city': data[\"address\"][\"addressLocality\"] or city,\n 'state': data[\"address\"][\"addressRegion\"] or state,\n 'postcode': data[\"address\"][\"postalCode\"] or postcode,\n 'country': data[\"address\"][\"addressCountry\"],\n 'phone': data.get(\"telephone\") or phone.strip(),\n 'website': data.get(\"url\") or response.url,\n 'lat': float(data[\"geo\"][\"latitude\"]),\n 'lon': float(data[\"geo\"][\"longitude\"]),\n }\n\n hours = data.get(\"openingHours\")\n if hours:\n store_hours = self.parse_hours(hours)\n properties[\"opening_hours\"] = store_hours\n\n 
yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/longhorn_steakhouse.py"}]} | 1,561 | 167 |
gh_patches_debug_60375 | rasdani/github-patches | git_diff | UTNkar__moore-794 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Translations for footer_en missing in production
I noticed that in the settings the footer option is called footer_en. It seems a translation has gone missing.

</issue>
<code>
[start of src/branding/models.py]
1 from django.db import models
2 from wagtail.contrib.settings.models import BaseSetting, register_setting
3
4 from django.utils.translation import gettext_lazy as _
5 from wagtail.admin.edit_handlers import FieldPanel, FieldRowPanel, \
6 MultiFieldPanel, StreamFieldPanel, TabbedInterface, ObjectList
7 from wagtail.core import blocks
8 from wagtail.core.fields import StreamField
9 from wagtail.images.edit_handlers import ImageChooserPanel
10 from utils.translation import TranslatedField
11
12
13 @register_setting(icon='fa-window-minimize')
14 class FooterSettings(BaseSetting):
15 class Meta:
16 verbose_name = _('footer_en') # quickfix
17
18 footer_en = StreamField(
19 [('column', blocks.StructBlock([
20 ('size', blocks.IntegerBlock(min_value=1, max_value=12)),
21 ('content', blocks.RichTextBlock()),
22 ]))],
23 blank=True,
24 )
25
26 footer_sv = StreamField(
27 [('column', blocks.StructBlock([
28 ('size', blocks.IntegerBlock(min_value=1, max_value=12)),
29 ('content', blocks.RichTextBlock()),
30 ]))],
31 blank=True,
32 )
33
34 footer = TranslatedField('footer_en', 'footer_sv')
35
36 panels_sv = [
37 StreamFieldPanel('footer_sv')
38 ]
39
40 panels_en = [
41 StreamFieldPanel('footer_en')
42 ]
43
44 edit_handler = TabbedInterface([
45 ObjectList(panels_en, heading=_("English")),
46 ObjectList(panels_sv, heading=_("Swedish"))
47 ])
48
49
50 @register_setting(icon='openquote')
51 class SocialMediaSettings(BaseSetting):
52 class Meta:
53 verbose_name = _('social media accounts')
54
55 facebook = models.URLField(
56 help_text=_('Your Facebook page URL'),
57 blank=True,
58 )
59 instagram = models.CharField(
60 max_length=255,
61 help_text=_('Your Instagram username, without the @'),
62 blank=True,
63 )
64 twitter = models.CharField(
65 max_length=255,
66 help_text=_('Your Twitter username, without the @'),
67 blank=True,
68 )
69
70
71 class Logo(models.Model):
72 class Meta:
73 verbose_name = _('logo')
74 verbose_name_plural = _('logos')
75
76 def __str__(self):
77 logotext = str(_('logo'))
78 return logotext.capitalize()
79
80 CATEGORY_CHOICES = (
81 ('committee', _('Committee')),
82 ('section', _('Section')),
83 )
84
85 category = models.CharField(
86 max_length=20,
87 choices=CATEGORY_CHOICES,
88 verbose_name=_('category'),
89 blank=False,
90 null=False,
91 )
92
93 link = models.URLField(
94 verbose_name=_('links to'),
95 null=False,
96 blank=False,
97 )
98
99 logo = models.ForeignKey(
100 'wagtailimages.Image',
101 verbose_name=_('logo'),
102 null=True,
103 blank=True,
104 on_delete=models.SET_NULL,
105 related_name='+'
106 )
107
108 logo_white = models.ForeignKey(
109 'wagtailimages.Image',
110 verbose_name=_('white logo'),
111 null=True,
112 blank=True,
113 on_delete=models.SET_NULL,
114 related_name='+'
115 )
116
117 logo_black = models.ForeignKey(
118 'wagtailimages.Image',
119 verbose_name=_('black logo'),
120 null=True,
121 blank=True,
122 on_delete=models.SET_NULL,
123 related_name='+'
124 )
125
126 belongs_to = models.ForeignKey(
127 'wagtailcore.Site',
128 verbose_name=_('belongs to'),
129 null=True,
130 blank=True,
131 on_delete=models.SET_NULL,
132 )
133
134 # ------ Administrator settings ------
135 panels = [MultiFieldPanel([
136 FieldRowPanel([
137 FieldPanel('category'),
138 FieldPanel('link'),
139 ]),
140 ImageChooserPanel('logo'),
141 ImageChooserPanel('logo_white'),
142 ImageChooserPanel('logo_black'),
143 FieldPanel('belongs_to'),
144 ])]
145
[end of src/branding/models.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/branding/models.py b/src/branding/models.py
--- a/src/branding/models.py
+++ b/src/branding/models.py
@@ -13,7 +13,7 @@
@register_setting(icon='fa-window-minimize')
class FooterSettings(BaseSetting):
class Meta:
- verbose_name = _('footer_en') # quickfix
+ verbose_name = _('footer') # quickfix
footer_en = StreamField(
[('column', blocks.StructBlock([
| {"golden_diff": "diff --git a/src/branding/models.py b/src/branding/models.py\n--- a/src/branding/models.py\n+++ b/src/branding/models.py\n@@ -13,7 +13,7 @@\n @register_setting(icon='fa-window-minimize')\n class FooterSettings(BaseSetting):\n class Meta:\n- verbose_name = _('footer_en') # quickfix\n+ verbose_name = _('footer') # quickfix\n \n footer_en = StreamField(\n [('column', blocks.StructBlock([\n", "issue": "Translations for footer_en missing in production\nI noticed that in the settings the footer option is called footer_en. Seems like a translation has gone missing\r\n\r\n\n", "before_files": [{"content": "from django.db import models\nfrom wagtail.contrib.settings.models import BaseSetting, register_setting\n\nfrom django.utils.translation import gettext_lazy as _\nfrom wagtail.admin.edit_handlers import FieldPanel, FieldRowPanel, \\\n MultiFieldPanel, StreamFieldPanel, TabbedInterface, ObjectList\nfrom wagtail.core import blocks\nfrom wagtail.core.fields import StreamField\nfrom wagtail.images.edit_handlers import ImageChooserPanel\nfrom utils.translation import TranslatedField\n\n\n@register_setting(icon='fa-window-minimize')\nclass FooterSettings(BaseSetting):\n class Meta:\n verbose_name = _('footer_en') # quickfix\n\n footer_en = StreamField(\n [('column', blocks.StructBlock([\n ('size', blocks.IntegerBlock(min_value=1, max_value=12)),\n ('content', blocks.RichTextBlock()),\n ]))],\n blank=True,\n )\n\n footer_sv = StreamField(\n [('column', blocks.StructBlock([\n ('size', blocks.IntegerBlock(min_value=1, max_value=12)),\n ('content', blocks.RichTextBlock()),\n ]))],\n blank=True,\n )\n\n footer = TranslatedField('footer_en', 'footer_sv')\n\n panels_sv = [\n StreamFieldPanel('footer_sv')\n ]\n\n panels_en = [\n StreamFieldPanel('footer_en')\n ]\n\n edit_handler = TabbedInterface([\n ObjectList(panels_en, heading=_(\"English\")),\n ObjectList(panels_sv, heading=_(\"Swedish\"))\n ])\n\n\n@register_setting(icon='openquote')\nclass SocialMediaSettings(BaseSetting):\n class Meta:\n verbose_name = _('social media accounts')\n\n facebook = models.URLField(\n help_text=_('Your Facebook page URL'),\n blank=True,\n )\n instagram = models.CharField(\n max_length=255,\n help_text=_('Your Instagram username, without the @'),\n blank=True,\n )\n twitter = models.CharField(\n max_length=255,\n help_text=_('Your Twitter username, without the @'),\n blank=True,\n )\n\n\nclass Logo(models.Model):\n class Meta:\n verbose_name = _('logo')\n verbose_name_plural = _('logos')\n\n def __str__(self):\n logotext = str(_('logo'))\n return logotext.capitalize()\n\n CATEGORY_CHOICES = (\n ('committee', _('Committee')),\n ('section', _('Section')),\n )\n\n category = models.CharField(\n max_length=20,\n choices=CATEGORY_CHOICES,\n verbose_name=_('category'),\n blank=False,\n null=False,\n )\n\n link = models.URLField(\n verbose_name=_('links to'),\n null=False,\n blank=False,\n )\n\n logo = models.ForeignKey(\n 'wagtailimages.Image',\n verbose_name=_('logo'),\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n related_name='+'\n )\n\n logo_white = models.ForeignKey(\n 'wagtailimages.Image',\n verbose_name=_('white logo'),\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n related_name='+'\n )\n\n logo_black = models.ForeignKey(\n 'wagtailimages.Image',\n verbose_name=_('black logo'),\n null=True,\n blank=True,\n on_delete=models.SET_NULL,\n related_name='+'\n )\n\n belongs_to = models.ForeignKey(\n 'wagtailcore.Site',\n verbose_name=_('belongs to'),\n null=True,\n blank=True,\n 
on_delete=models.SET_NULL,\n )\n\n # ------ Administrator settings ------\n panels = [MultiFieldPanel([\n FieldRowPanel([\n FieldPanel('category'),\n FieldPanel('link'),\n ]),\n ImageChooserPanel('logo'),\n ImageChooserPanel('logo_white'),\n ImageChooserPanel('logo_black'),\n FieldPanel('belongs_to'),\n ])]\n", "path": "src/branding/models.py"}]} | 1,774 | 110 |
gh_patches_debug_12188 | rasdani/github-patches | git_diff | encode__uvicorn-895 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Subprocess returncode is not detected when running Gunicorn with Uvicorn (with fix PR companion)
### Checklist
<!-- Please make sure you check all these items before submitting your bug report. -->
- [x] The bug is reproducible against the latest release and/or `master`.
- [x] There are no similar issues or pull requests to fix it yet.
### Describe the bug
<!-- A clear and concise description of what the bug is. -->
When starting Gunicorn with Uvicorn worker(s), if the app uses `subprocess` to start other processes and captures the output, their `returncode` is in most cases `0`, even if the actual exit code was `1`.
### To reproduce
<!-- Provide a *minimal* example with steps to reproduce the bug locally.
NOTE: try to keep any external dependencies *at an absolute minimum* .
In other words, remove anything that doesn't make the bug go away.
-->
Take this minimal FastAPI app (or replace with Starlette), `main.py`:
```Python
import subprocess
from fastapi import FastAPI
app = FastAPI()
@app.post("/run")
def run_subprocess():
result = subprocess.run(
["python", "-c", "import sys; sys.exit(1)"], capture_output=True
)
return {"returncode": result.returncode}
```
Then run it with:
```console
$ gunicorn -k uvicorn.workers.UvicornWorker main:app
```
Open the browser at http:127.0.0.1:8000/docs and send a request to `/run`.
### Expected behavior
<!-- A clear and concise description of what you expected to happen. -->
The detected `returncode` should always be `1`, as the subprocess always exits with `1`.
### Actual behavior
<!-- A clear and concise description of what actually happens. -->
In most cases it will return a `returncode` of `0`. Strangely enough, in some cases it will return a `returncode` of `1`.
### Debugging material
<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.
NOTE:
- Please list tracebacks in full (don't truncate them).
- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).
- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).
-->
This is because the `UvicornWorker`, which inherits from the base Gunicorn worker, overrides the parent's `init_signals()` method with one that doesn't do anything. I suspect that's because the signal handlers are declared in `Server.install_signal_handlers()` for compatibility with `asyncio`.
But the `UvicornWorker` process is started with `os.fork()` by Gunicorn (if I understand correctly), and by the time it is forked, the Gunicorn "Arbiter" class (which handles worker processes) has already set its own signal handlers.
The signal handlers in the Gunicorn base worker reset those handlers, but the `UvicornWorker` doesn't. So, when a process started with `subprocess` is terminated, the `SIGCHLD` signal is handled by the Gunicorn `Arbiter` (as if the terminated process were a worker) instead of by the `UvicornWorker`.
Disclaimer: why the `SIGCHLD` signal handling in the Gunicorn `Arbiter` alters the `returncode` of a process run with `subprocess`, when capturing output, is still a mystery to me. But I realized the signal handler in the `Arbiter` is expected to handle dead worker processes. And worker subclasses all seem to reset the signal handlers to revert those signals set by the `Arbiter`.
I'm also submitting a PR to fix this: https://github.com/encode/uvicorn/pull/895. It's just 3 lines of code. But debugging it and finding it took me almost a week. :sweat_smile:
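A minimal sketch of the reset described above (the subclass name is made up for illustration; `SIGNALS` and `SIG_DFL` come from Gunicorn's base `Worker` class and the standard `signal` module):

```Python
import signal

from uvicorn.workers import UvicornWorker


class SignalFixedUvicornWorker(UvicornWorker):
    """Illustrative subclass: restore default handlers inherited from the Arbiter."""

    def init_signals(self):
        # Without this, the Arbiter's SIGCHLD handler (meant for dead Gunicorn
        # workers) also fires for the app's own subprocesses and can clobber
        # their return codes.
        for s in self.SIGNALS:
            signal.signal(s, signal.SIG_DFL)
```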
### Environment
- OS / Python / Uvicorn version: just run `uvicorn --version`: `Running uvicorn 0.13.1 with CPython 3.8.5 on Linux` (it's actually installed from source, for debugging)
- Gunicorn version (also installed from source, for debugging): `gunicorn (version 20.0.4)`
- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.
```console
$ gunicorn -k uvicorn.workers.UvicornWorker main:app
```
### Additional context
<!-- Any additional information that can help understanding the problem.
Eg. linked issues, or a description of what you were trying to achieve. -->
I'm pretty sure this issue https://github.com/encode/uvicorn/issues/584 is related to the same problem.
</issue>
<code>
[start of uvicorn/workers.py]
1 import asyncio
2 import logging
3
4 from gunicorn.workers.base import Worker
5
6 from uvicorn.config import Config
7 from uvicorn.main import Server
8
9
10 class UvicornWorker(Worker):
11 """
12 A worker class for Gunicorn that interfaces with an ASGI consumer callable,
13 rather than a WSGI callable.
14 """
15
16 CONFIG_KWARGS = {"loop": "uvloop", "http": "httptools"}
17
18 def __init__(self, *args, **kwargs):
19 super(UvicornWorker, self).__init__(*args, **kwargs)
20
21 logger = logging.getLogger("uvicorn.error")
22 logger.handlers = self.log.error_log.handlers
23 logger.setLevel(self.log.error_log.level)
24 logger.propagate = False
25
26 logger = logging.getLogger("uvicorn.access")
27 logger.handlers = self.log.access_log.handlers
28 logger.setLevel(self.log.access_log.level)
29 logger.propagate = False
30
31 config_kwargs = {
32 "app": None,
33 "log_config": None,
34 "timeout_keep_alive": self.cfg.keepalive,
35 "timeout_notify": self.timeout,
36 "callback_notify": self.callback_notify,
37 "limit_max_requests": self.max_requests,
38 "forwarded_allow_ips": self.cfg.forwarded_allow_ips,
39 }
40
41 if self.cfg.is_ssl:
42 ssl_kwargs = {
43 "ssl_keyfile": self.cfg.ssl_options.get("keyfile"),
44 "ssl_certfile": self.cfg.ssl_options.get("certfile"),
45 "ssl_keyfile_password": self.cfg.ssl_options.get("password"),
46 "ssl_version": self.cfg.ssl_options.get("ssl_version"),
47 "ssl_cert_reqs": self.cfg.ssl_options.get("cert_reqs"),
48 "ssl_ca_certs": self.cfg.ssl_options.get("ca_certs"),
49 "ssl_ciphers": self.cfg.ssl_options.get("ciphers"),
50 }
51 config_kwargs.update(ssl_kwargs)
52
53 if self.cfg.settings["backlog"].value:
54 config_kwargs["backlog"] = self.cfg.settings["backlog"].value
55
56 config_kwargs.update(self.CONFIG_KWARGS)
57
58 self.config = Config(**config_kwargs)
59
60 def init_process(self):
61 self.config.setup_event_loop()
62 super(UvicornWorker, self).init_process()
63
64 def init_signals(self):
65 pass
66
67 def run(self):
68 self.config.app = self.wsgi
69 server = Server(config=self.config)
70 loop = asyncio.get_event_loop()
71 loop.run_until_complete(server.serve(sockets=self.sockets))
72
73 async def callback_notify(self):
74 self.notify()
75
76
77 class UvicornH11Worker(UvicornWorker):
78 CONFIG_KWARGS = {"loop": "asyncio", "http": "h11"}
79
[end of uvicorn/workers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/uvicorn/workers.py b/uvicorn/workers.py
--- a/uvicorn/workers.py
+++ b/uvicorn/workers.py
@@ -1,5 +1,6 @@
import asyncio
import logging
+import signal
from gunicorn.workers.base import Worker
@@ -62,7 +63,11 @@
super(UvicornWorker, self).init_process()
def init_signals(self):
- pass
+ # Reset signals so Gunicorn doesn't swallow subprocess return codes
+ # other signals are set up by Server.install_signal_handlers()
+ # See: https://github.com/encode/uvicorn/issues/894
+ for s in self.SIGNALS:
+ signal.signal(s, signal.SIG_DFL)
def run(self):
self.config.app = self.wsgi
| {"golden_diff": "diff --git a/uvicorn/workers.py b/uvicorn/workers.py\n--- a/uvicorn/workers.py\n+++ b/uvicorn/workers.py\n@@ -1,5 +1,6 @@\n import asyncio\n import logging\n+import signal\n \n from gunicorn.workers.base import Worker\n \n@@ -62,7 +63,11 @@\n super(UvicornWorker, self).init_process()\n \n def init_signals(self):\n- pass\n+ # Reset signals so Gunicorn doesn't swallow subprocess return codes\n+ # other signals are set up by Server.install_signal_handlers()\n+ # See: https://github.com/encode/uvicorn/issues/894\n+ for s in self.SIGNALS:\n+ signal.signal(s, signal.SIG_DFL)\n \n def run(self):\n self.config.app = self.wsgi\n", "issue": "Subprocess returncode is not detected when running Gunicorn with Uvicorn (with fix PR companion)\n### Checklist\r\n\r\n<!-- Please make sure you check all these items before submitting your bug report. -->\r\n\r\n- [x] The bug is reproducible against the latest release and/or `master`.\r\n- [x] There are no similar issues or pull requests to fix it yet.\r\n\r\n### Describe the bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\nWhen starting Gunicorn with Uvicorn worker(s), if the app uses `subprocess` to start other processes and captures the output, their `returncode` is in most cases `0`, even if the actual exit code was `1`.\r\n\r\n### To reproduce\r\n\r\n<!-- Provide a *minimal* example with steps to reproduce the bug locally.\r\n\r\nNOTE: try to keep any external dependencies *at an absolute minimum* .\r\nIn other words, remove anything that doesn't make the bug go away.\r\n\r\n-->\r\n\r\nTake this minimal FastAPI app (or replace with Starlette), `main.py`:\r\n\r\n```Python\r\nimport subprocess\r\n\r\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\[email protected](\"/run\")\r\ndef run_subprocess():\r\n result = subprocess.run(\r\n [\"python\", \"-c\", \"import sys; sys.exit(1)\"], capture_output=True\r\n )\r\n return {\"returncode\": result.returncode}\r\n```\r\n\r\nThen run it with:\r\n\r\n```console\r\n$ gunicorn -k uvicorn.workers.UvicornWorker main:app\r\n```\r\n\r\nOpen the browser at http:127.0.0.1:8000/docs and send a request to `/run`.\r\n\r\n### Expected behavior\r\n\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\nThe detected `returncode` should always be `1`, as the subprocess always exits with `1`.\r\n\r\n### Actual behavior\r\n\r\n<!-- A clear and concise description of what actually happens. -->\r\n\r\nIn most of the cases it will return a `returncode` of `0`. Strangely enough, in some cases, it will return a `returncode` of `1`.\r\n\r\n### Debugging material\r\n\r\n<!-- Any tracebacks, screenshots, etc. that can help understanding the problem.\r\n\r\nNOTE:\r\n- Please list tracebacks in full (don't truncate them).\r\n- If relevant, consider turning on DEBUG or TRACE logs for additional details (see the Logging section on https://www.uvicorn.org/settings/ specifically the `log-level` flag).\r\n- Consider using `<details>` to make tracebacks/logs collapsible if they're very large (see https://gist.github.com/ericclemmons/b146fe5da72ca1f706b2ef72a20ac39d).\r\n-->\r\n\r\nThis is because the `UvicornWorker`, which inherits from the base Gunicorn worker, declares a method `init_signals()` (overriding the parent method) but doesn't do anything. 
I suspect it's because the signal handlers are declared in the `Server.install_signal_handlers()` with compatibility with `asyncio`.\r\n\r\nBut the `UvicornWorker` process is started with `os.fork()` by Gunicorn (if I understand correctly) and by the point it is forked, the Gunicorn \"Arbiter\" class (that handles worker processes) already set its own signal handlers.\r\n\r\nAnd the signal handlers in the Gunicorn base worker reset those handlers, but the `UvicornWorker` doesn't. So, when a process started with `subprocessing` is terminated, the `SIGCHLD` signal is handled by the Gunicorn `Arbiter` (as if the terminated process was a worker) instead of by the `UvicornWorker`.\r\n\r\nDisclaimer: why the `SIGCHLD` signal handling in the Gunicorn `Arbiter` alters the `returncode` of a process run with `subprocess`, when capturing output, is still a mystery to me. But I realized the signal handler in the `Arbiter` is expected to handle dead worker processes. And worker subclasses all seem to reset the signal handlers to revert those signals set by the `Arbiter`.\r\n\r\nI'm also submitting a PR to fix this: https://github.com/encode/uvicorn/pull/895. It's just 3 lines of code. But debugging it and finding it took me almost a week. :sweat_smile: \r\n\r\n### Environment\r\n\r\n- OS / Python / Uvicorn version: just run `uvicorn --version`: `Running uvicorn 0.13.1 with CPython 3.8.5 on Linux` (it's actually installed from source, for debugging)\r\n- Gunicorn version (also installed from source, for debugging): `gunicorn (version 20.0.4)`\r\n- The exact command you're running uvicorn with, all flags you passed included. If you run it with gunicorn please do the same. If there is a reverse-proxy involved and you cannot reproduce without it please give the minimal config of it to reproduce.\r\n\r\n```console\r\n$ gunicorn -k uvicorn.workers.UvicornWorker main:app\r\n```\r\n\r\n### Additional context\r\n\r\n<!-- Any additional information that can help understanding the problem.\r\n\r\nEg. linked issues, or a description of what you were trying to achieve. 
-->\r\n\r\nI'm pretty sure this issue https://github.com/encode/uvicorn/issues/584 is related to the same problem.\n", "before_files": [{"content": "import asyncio\nimport logging\n\nfrom gunicorn.workers.base import Worker\n\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n logger = logging.getLogger(\"uvicorn.error\")\n logger.handlers = self.log.error_log.handlers\n logger.setLevel(self.log.error_log.level)\n logger.propagate = False\n\n logger = logging.getLogger(\"uvicorn.access\")\n logger.handlers = self.log.access_log.handlers\n logger.setLevel(self.log.access_log.level)\n logger.propagate = False\n\n config_kwargs = {\n \"app\": None,\n \"log_config\": None,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n \"limit_max_requests\": self.max_requests,\n \"forwarded_allow_ips\": self.cfg.forwarded_allow_ips,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_keyfile_password\": self.cfg.ssl_options.get(\"password\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n if self.cfg.settings[\"backlog\"].value:\n config_kwargs[\"backlog\"] = self.cfg.settings[\"backlog\"].value\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(server.serve(sockets=self.sockets))\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}]} | 2,392 | 186 |
gh_patches_debug_296 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-959 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PSNR - Higher is better.
## 🐛 Bug
`PSNR.higher_is_better` should be `True`
### Additional context
This is a simple change; [PR #959](https://github.com/PyTorchLightning/metrics/pull/959) has been opened with the fix.
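
As a quick illustration of why (a sketch, not taken from the PR; it only uses the public `PeakSignalNoiseRatio` class quoted in the code section below): predictions closer to the target yield a *larger* PSNR, so higher values mean a better reconstruction.

```python
import torch
from torchmetrics import PeakSignalNoiseRatio

target = torch.tensor([[3.0, 2.0], [1.0, 0.0]])
psnr = PeakSignalNoiseRatio()

print(psnr(target + 0.1, target))  # small error -> large PSNR (about 29.5 dB)
psnr.reset()
print(psnr(target + 2.0, target))  # large error -> small PSNR (about 3.5 dB)
```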
</issue>
<code>
[start of torchmetrics/image/psnr.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Any, Dict, Optional, Sequence, Tuple, Union
15
16 import torch
17 from torch import Tensor, tensor
18 from typing_extensions import Literal
19
20 from torchmetrics.functional.image.psnr import _psnr_compute, _psnr_update
21 from torchmetrics.metric import Metric
22 from torchmetrics.utilities import rank_zero_warn
23
24
25 class PeakSignalNoiseRatio(Metric):
26 r"""
27 Computes `Computes Peak Signal-to-Noise Ratio`_ (PSNR):
28
29 .. math:: \text{PSNR}(I, J) = 10 * \log_{10} \left(\frac{\max(I)^2}{\text{MSE}(I, J)}\right)
30
31 Where :math:`\text{MSE}` denotes the `mean-squared-error`_ function.
32
33 Args:
34 data_range:
35 the range of the data. If None, it is determined from the data (max - min).
36 The ``data_range`` must be given when ``dim`` is not None.
37 base: a base of a logarithm to use.
38 reduction: a method to reduce metric score over labels.
39
40 - ``'elementwise_mean'``: takes the mean (default)
41 - ``'sum'``: takes the sum
42 - ``'none'`` or ``None``: no reduction will be applied
43
44 dim:
45 Dimensions to reduce PSNR scores over, provided as either an integer or a list of integers. Default is
46 None meaning scores will be reduced across all dimensions and all batches.
47 compute_on_step:
48 Forward only calls ``update()`` and returns None if this is set to False.
49
50 .. deprecated:: v0.8
51 Argument has no use anymore and will be removed v0.9.
52
53 kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.
54
55 Raises:
56 ValueError:
57 If ``dim`` is not ``None`` and ``data_range`` is not given.
58
59 Example:
60 >>> from torchmetrics import PeakSignalNoiseRatio
61 >>> psnr = PeakSignalNoiseRatio()
62 >>> preds = torch.tensor([[0.0, 1.0], [2.0, 3.0]])
63 >>> target = torch.tensor([[3.0, 2.0], [1.0, 0.0]])
64 >>> psnr(preds, target)
65 tensor(2.5527)
66
67 .. note::
68 Half precision is only support on GPU for this metric
69
70 """
71 min_target: Tensor
72 max_target: Tensor
73 higher_is_better = False
74
75 def __init__(
76 self,
77 data_range: Optional[float] = None,
78 base: float = 10.0,
79 reduction: Literal["elementwise_mean", "sum", "none", None] = "elementwise_mean",
80 dim: Optional[Union[int, Tuple[int, ...]]] = None,
81 compute_on_step: Optional[bool] = None,
82 **kwargs: Dict[str, Any],
83 ) -> None:
84 super().__init__(compute_on_step=compute_on_step, **kwargs)
85
86 if dim is None and reduction != "elementwise_mean":
87 rank_zero_warn(f"The `reduction={reduction}` will not have any effect when `dim` is None.")
88
89 if dim is None:
90 self.add_state("sum_squared_error", default=tensor(0.0), dist_reduce_fx="sum")
91 self.add_state("total", default=tensor(0), dist_reduce_fx="sum")
92 else:
93 self.add_state("sum_squared_error", default=[])
94 self.add_state("total", default=[])
95
96 if data_range is None:
97 if dim is not None:
98 # Maybe we could use `torch.amax(target, dim=dim) - torch.amin(target, dim=dim)` in PyTorch 1.7 to
99 # calculate `data_range` in the future.
100 raise ValueError("The `data_range` must be given when `dim` is not None.")
101
102 self.data_range = None
103 self.add_state("min_target", default=tensor(0.0), dist_reduce_fx=torch.min)
104 self.add_state("max_target", default=tensor(0.0), dist_reduce_fx=torch.max)
105 else:
106 self.add_state("data_range", default=tensor(float(data_range)), dist_reduce_fx="mean")
107 self.base = base
108 self.reduction = reduction
109 self.dim = tuple(dim) if isinstance(dim, Sequence) else dim
110
111 def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore
112 """Update state with predictions and targets.
113
114 Args:
115 preds: Predictions from model
116 target: Ground truth values
117 """
118 sum_squared_error, n_obs = _psnr_update(preds, target, dim=self.dim)
119 if self.dim is None:
120 if self.data_range is None:
121 # keep track of min and max target values
122 self.min_target = min(target.min(), self.min_target)
123 self.max_target = max(target.max(), self.max_target)
124
125 self.sum_squared_error += sum_squared_error
126 self.total += n_obs
127 else:
128 self.sum_squared_error.append(sum_squared_error)
129 self.total.append(n_obs)
130
131 def compute(self) -> Tensor:
132 """Compute peak signal-to-noise ratio over state."""
133 if self.data_range is not None:
134 data_range = self.data_range
135 else:
136 data_range = self.max_target - self.min_target
137
138 if self.dim is None:
139 sum_squared_error = self.sum_squared_error
140 total = self.total
141 else:
142 sum_squared_error = torch.cat([values.flatten() for values in self.sum_squared_error])
143 total = torch.cat([values.flatten() for values in self.total])
144 return _psnr_compute(sum_squared_error, total, data_range, base=self.base, reduction=self.reduction)
145
[end of torchmetrics/image/psnr.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchmetrics/image/psnr.py b/torchmetrics/image/psnr.py
--- a/torchmetrics/image/psnr.py
+++ b/torchmetrics/image/psnr.py
@@ -70,7 +70,7 @@
"""
min_target: Tensor
max_target: Tensor
- higher_is_better = False
+ higher_is_better = True
def __init__(
self,
| {"golden_diff": "diff --git a/torchmetrics/image/psnr.py b/torchmetrics/image/psnr.py\n--- a/torchmetrics/image/psnr.py\n+++ b/torchmetrics/image/psnr.py\n@@ -70,7 +70,7 @@\n \"\"\"\n min_target: Tensor\n max_target: Tensor\n- higher_is_better = False\n+ higher_is_better = True\n \n def __init__(\n self,\n", "issue": "PSNR - Higher is better.\n## \ud83d\udc1b Bug\r\n\r\n`PSNR.higher_is_better` should be `True`\r\n\r\n### Additional context\r\n\r\nThis is a simple change, created [PR#959](https://github.com/PyTorchLightning/metrics/pull/959) with the change.\r\n\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Any, Dict, Optional, Sequence, Tuple, Union\n\nimport torch\nfrom torch import Tensor, tensor\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.image.psnr import _psnr_compute, _psnr_update\nfrom torchmetrics.metric import Metric\nfrom torchmetrics.utilities import rank_zero_warn\n\n\nclass PeakSignalNoiseRatio(Metric):\n r\"\"\"\n Computes `Computes Peak Signal-to-Noise Ratio`_ (PSNR):\n\n .. math:: \\text{PSNR}(I, J) = 10 * \\log_{10} \\left(\\frac{\\max(I)^2}{\\text{MSE}(I, J)}\\right)\n\n Where :math:`\\text{MSE}` denotes the `mean-squared-error`_ function.\n\n Args:\n data_range:\n the range of the data. If None, it is determined from the data (max - min).\n The ``data_range`` must be given when ``dim`` is not None.\n base: a base of a logarithm to use.\n reduction: a method to reduce metric score over labels.\n\n - ``'elementwise_mean'``: takes the mean (default)\n - ``'sum'``: takes the sum\n - ``'none'`` or ``None``: no reduction will be applied\n\n dim:\n Dimensions to reduce PSNR scores over, provided as either an integer or a list of integers. Default is\n None meaning scores will be reduced across all dimensions and all batches.\n compute_on_step:\n Forward only calls ``update()`` and returns None if this is set to False.\n\n .. deprecated:: v0.8\n Argument has no use anymore and will be removed v0.9.\n\n kwargs: Additional keyword arguments, see :ref:`Metric kwargs` for more info.\n\n Raises:\n ValueError:\n If ``dim`` is not ``None`` and ``data_range`` is not given.\n\n Example:\n >>> from torchmetrics import PeakSignalNoiseRatio\n >>> psnr = PeakSignalNoiseRatio()\n >>> preds = torch.tensor([[0.0, 1.0], [2.0, 3.0]])\n >>> target = torch.tensor([[3.0, 2.0], [1.0, 0.0]])\n >>> psnr(preds, target)\n tensor(2.5527)\n\n .. 
note::\n Half precision is only support on GPU for this metric\n\n \"\"\"\n min_target: Tensor\n max_target: Tensor\n higher_is_better = False\n\n def __init__(\n self,\n data_range: Optional[float] = None,\n base: float = 10.0,\n reduction: Literal[\"elementwise_mean\", \"sum\", \"none\", None] = \"elementwise_mean\",\n dim: Optional[Union[int, Tuple[int, ...]]] = None,\n compute_on_step: Optional[bool] = None,\n **kwargs: Dict[str, Any],\n ) -> None:\n super().__init__(compute_on_step=compute_on_step, **kwargs)\n\n if dim is None and reduction != \"elementwise_mean\":\n rank_zero_warn(f\"The `reduction={reduction}` will not have any effect when `dim` is None.\")\n\n if dim is None:\n self.add_state(\"sum_squared_error\", default=tensor(0.0), dist_reduce_fx=\"sum\")\n self.add_state(\"total\", default=tensor(0), dist_reduce_fx=\"sum\")\n else:\n self.add_state(\"sum_squared_error\", default=[])\n self.add_state(\"total\", default=[])\n\n if data_range is None:\n if dim is not None:\n # Maybe we could use `torch.amax(target, dim=dim) - torch.amin(target, dim=dim)` in PyTorch 1.7 to\n # calculate `data_range` in the future.\n raise ValueError(\"The `data_range` must be given when `dim` is not None.\")\n\n self.data_range = None\n self.add_state(\"min_target\", default=tensor(0.0), dist_reduce_fx=torch.min)\n self.add_state(\"max_target\", default=tensor(0.0), dist_reduce_fx=torch.max)\n else:\n self.add_state(\"data_range\", default=tensor(float(data_range)), dist_reduce_fx=\"mean\")\n self.base = base\n self.reduction = reduction\n self.dim = tuple(dim) if isinstance(dim, Sequence) else dim\n\n def update(self, preds: Tensor, target: Tensor) -> None: # type: ignore\n \"\"\"Update state with predictions and targets.\n\n Args:\n preds: Predictions from model\n target: Ground truth values\n \"\"\"\n sum_squared_error, n_obs = _psnr_update(preds, target, dim=self.dim)\n if self.dim is None:\n if self.data_range is None:\n # keep track of min and max target values\n self.min_target = min(target.min(), self.min_target)\n self.max_target = max(target.max(), self.max_target)\n\n self.sum_squared_error += sum_squared_error\n self.total += n_obs\n else:\n self.sum_squared_error.append(sum_squared_error)\n self.total.append(n_obs)\n\n def compute(self) -> Tensor:\n \"\"\"Compute peak signal-to-noise ratio over state.\"\"\"\n if self.data_range is not None:\n data_range = self.data_range\n else:\n data_range = self.max_target - self.min_target\n\n if self.dim is None:\n sum_squared_error = self.sum_squared_error\n total = self.total\n else:\n sum_squared_error = torch.cat([values.flatten() for values in self.sum_squared_error])\n total = torch.cat([values.flatten() for values in self.total])\n return _psnr_compute(sum_squared_error, total, data_range, base=self.base, reduction=self.reduction)\n", "path": "torchmetrics/image/psnr.py"}]} | 2,337 | 96 |
gh_patches_debug_12948 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-308 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
invalid keyword argument 'range'
Testing on Windows with Qt 5.5.1 and the PyQtgraph GitHub snapshot of 2016-01-02, I get the following error in the "Custom Flowchart Nodes" example:
```
Using PyQt5 (default graphics system)
QWindowsWindow::setGeometryDp: Unable to set geometry 600x900+480+210 on QWidget
Window/'QMainWindowClassWindow'. Resulting geometry: 600x874+480+210 (frame: 8,
30, 8, 8, custom margin: 0, 0, 0, 0, minimum size: 69x69, maximum size: 1677721
5x16777215).
Using PyQt5 (default graphics system)
D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64\lib\sit
e-packages\pyqtgraph\flowchart\eq.py:11: FutureWarning: comparison to `None` wil
l result in an elementwise object comparison in the future.
e = a==b
Traceback (most recent call last):
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\examples\FlowchartCustomNode.py", line 147, in <mod
ule>
fNode = fc.createNode('UnsharpMask', pos=(0, 0))
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\flowchart\Flowchart.py", line 177, in createNode
node = self.library.getNodeType(nodeType)(name)
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\examples\FlowchartCustomNode.py", line 106, in __in
it__
CtrlNode.__init__(self, name, terminals=terminals)
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\flowchart\library\common.py", line 97, in __init__
self.ui, self.stateGroup, self.ctrls = generateUi(ui)
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\flowchart\library\common.py", line 51, in generateU
i
w.setOpts(**o)
File "D:\WinPython\basedir34\buildQt5\winpython-3.4.4.amd64\python-3.4.4.amd64
\lib\site-packages\pyqtgraph\widgets\SpinBox.py", line 160, in setOpts
raise TypeError("Invalid keyword argument '%s'." % k)
TypeError: Invalid keyword argument 'range'.
```
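
For reference, a minimal sketch of how the failing `uiTemplate` entries would look if the spin-box limits are passed as `bounds` instead of `range` (assumption: `bounds` is the option name accepted by this version of `SpinBox.setOpts`):

```python
# Hypothetical corrected uiTemplate for UnsharpMaskNode (sketch only):
uiTemplate = [
    ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'bounds': [0.0, None]}),
    ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'bounds': [0.0, None]}),
]
```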
</issue>
<code>
[start of examples/FlowchartCustomNode.py]
1 # -*- coding: utf-8 -*-
2 """
3 This example demonstrates writing a custom Node subclass for use with flowcharts.
4
5 We implement a couple of simple image processing nodes.
6 """
7 import initExample ## Add path to library (just for examples; you do not need this)
8
9 from pyqtgraph.flowchart import Flowchart, Node
10 import pyqtgraph.flowchart.library as fclib
11 from pyqtgraph.flowchart.library.common import CtrlNode
12 from pyqtgraph.Qt import QtGui, QtCore
13 import pyqtgraph as pg
14 import numpy as np
15
16 app = QtGui.QApplication([])
17
18 ## Create main window with a grid layout inside
19 win = QtGui.QMainWindow()
20 win.setWindowTitle('pyqtgraph example: FlowchartCustomNode')
21 cw = QtGui.QWidget()
22 win.setCentralWidget(cw)
23 layout = QtGui.QGridLayout()
24 cw.setLayout(layout)
25
26 ## Create an empty flowchart with a single input and output
27 fc = Flowchart(terminals={
28 'dataIn': {'io': 'in'},
29 'dataOut': {'io': 'out'}
30 })
31 w = fc.widget()
32
33 layout.addWidget(fc.widget(), 0, 0, 2, 1)
34
35 ## Create two ImageView widgets to display the raw and processed data with contrast
36 ## and color control.
37 v1 = pg.ImageView()
38 v2 = pg.ImageView()
39 layout.addWidget(v1, 0, 1)
40 layout.addWidget(v2, 1, 1)
41
42 win.show()
43
44 ## generate random input data
45 data = np.random.normal(size=(100,100))
46 data = 25 * pg.gaussianFilter(data, (5,5))
47 data += np.random.normal(size=(100,100))
48 data[40:60, 40:60] += 15.0
49 data[30:50, 30:50] += 15.0
50 #data += np.sin(np.linspace(0, 100, 1000))
51 #data = metaarray.MetaArray(data, info=[{'name': 'Time', 'values': np.linspace(0, 1.0, len(data))}, {}])
52
53 ## Set the raw data as the input value to the flowchart
54 fc.setInput(dataIn=data)
55
56
57 ## At this point, we need some custom Node classes since those provided in the library
58 ## are not sufficient. Each node will define a set of input/output terminals, a
59 ## processing function, and optionally a control widget (to be displayed in the
60 ## flowchart control panel)
61
62 class ImageViewNode(Node):
63 """Node that displays image data in an ImageView widget"""
64 nodeName = 'ImageView'
65
66 def __init__(self, name):
67 self.view = None
68 ## Initialize node with only a single input terminal
69 Node.__init__(self, name, terminals={'data': {'io':'in'}})
70
71 def setView(self, view): ## setView must be called by the program
72 self.view = view
73
74 def process(self, data, display=True):
75 ## if process is called with display=False, then the flowchart is being operated
76 ## in batch processing mode, so we should skip displaying to improve performance.
77
78 if display and self.view is not None:
79 ## the 'data' argument is the value given to the 'data' terminal
80 if data is None:
81 self.view.setImage(np.zeros((1,1))) # give a blank array to clear the view
82 else:
83 self.view.setImage(data)
84
85
86
87
88 ## We will define an unsharp masking filter node as a subclass of CtrlNode.
89 ## CtrlNode is just a convenience class that automatically creates its
90 ## control widget based on a simple data structure.
91 class UnsharpMaskNode(CtrlNode):
92 """Return the input data passed through an unsharp mask."""
93 nodeName = "UnsharpMask"
94 uiTemplate = [
95 ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'range': [0.0, None]}),
96 ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'range': [0.0, None]}),
97 ]
98 def __init__(self, name):
99 ## Define the input / output terminals available on this node
100 terminals = {
101 'dataIn': dict(io='in'), # each terminal needs at least a name and
102 'dataOut': dict(io='out'), # to specify whether it is input or output
103 } # other more advanced options are available
104 # as well..
105
106 CtrlNode.__init__(self, name, terminals=terminals)
107
108 def process(self, dataIn, display=True):
109 # CtrlNode has created self.ctrls, which is a dict containing {ctrlName: widget}
110 sigma = self.ctrls['sigma'].value()
111 strength = self.ctrls['strength'].value()
112 output = dataIn - (strength * pg.gaussianFilter(dataIn, (sigma,sigma)))
113 return {'dataOut': output}
114
115
116 ## To make our custom node classes available in the flowchart context menu,
117 ## we can either register them with the default node library or make a
118 ## new library.
119
120
121 ## Method 1: Register to global default library:
122 #fclib.registerNodeType(ImageViewNode, [('Display',)])
123 #fclib.registerNodeType(UnsharpMaskNode, [('Image',)])
124
125 ## Method 2: If we want to make our custom node available only to this flowchart,
126 ## then instead of registering the node type globally, we can create a new
127 ## NodeLibrary:
128 library = fclib.LIBRARY.copy() # start with the default node set
129 library.addNodeType(ImageViewNode, [('Display',)])
130 # Add the unsharp mask node to two locations in the menu to demonstrate
131 # that we can create arbitrary menu structures
132 library.addNodeType(UnsharpMaskNode, [('Image',),
133 ('Submenu_test','submenu2','submenu3')])
134 fc.setLibrary(library)
135
136
137 ## Now we will programmatically add nodes to define the function of the flowchart.
138 ## Normally, the user will do this manually or by loading a pre-generated
139 ## flowchart file.
140
141 v1Node = fc.createNode('ImageView', pos=(0, -150))
142 v1Node.setView(v1)
143
144 v2Node = fc.createNode('ImageView', pos=(150, -150))
145 v2Node.setView(v2)
146
147 fNode = fc.createNode('UnsharpMask', pos=(0, 0))
148 fc.connectTerminals(fc['dataIn'], fNode['dataIn'])
149 fc.connectTerminals(fc['dataIn'], v1Node['data'])
150 fc.connectTerminals(fNode['dataOut'], v2Node['data'])
151 fc.connectTerminals(fNode['dataOut'], fc['dataOut'])
152
153
154
155 ## Start Qt event loop unless running in interactive mode or using pyside.
156 if __name__ == '__main__':
157 import sys
158 if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):
159 QtGui.QApplication.instance().exec_()
160
[end of examples/FlowchartCustomNode.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/FlowchartCustomNode.py b/examples/FlowchartCustomNode.py
--- a/examples/FlowchartCustomNode.py
+++ b/examples/FlowchartCustomNode.py
@@ -92,8 +92,8 @@
"""Return the input data passed through an unsharp mask."""
nodeName = "UnsharpMask"
uiTemplate = [
- ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'range': [0.0, None]}),
- ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'range': [0.0, None]}),
+ ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'bounds': [0.0, None]}),
+ ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'bounds': [0.0, None]}),
]
def __init__(self, name):
## Define the input / output terminals available on this node
| {"golden_diff": "diff --git a/examples/FlowchartCustomNode.py b/examples/FlowchartCustomNode.py\n--- a/examples/FlowchartCustomNode.py\n+++ b/examples/FlowchartCustomNode.py\n@@ -92,8 +92,8 @@\n \"\"\"Return the input data passed through an unsharp mask.\"\"\"\n nodeName = \"UnsharpMask\"\n uiTemplate = [\n- ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'range': [0.0, None]}),\n- ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'range': [0.0, None]}),\n+ ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'bounds': [0.0, None]}),\n+ ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'bounds': [0.0, None]}),\n ]\n def __init__(self, name):\n ## Define the input / output terminals available on this node\n", "issue": "invalid keyword argument 'range'\ntesting on Windows Qt5.5.1/ PyQtgraph github of 20160102, I have the following error on the \"Custom Flowchart Nodes\" test:\n\n```\nUsing PyQt5 (default graphics system)\nQWindowsWindow::setGeometryDp: Unable to set geometry 600x900+480+210 on QWidget\nWindow/'QMainWindowClassWindow'. Resulting geometry: 600x874+480+210 (frame: 8,\n 30, 8, 8, custom margin: 0, 0, 0, 0, minimum size: 69x69, maximum size: 1677721\n5x16777215).\nUsing PyQt5 (default graphics system)\nD:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\\lib\\sit\ne-packages\\pyqtgraph\\flowchart\\eq.py:11: FutureWarning: comparison to `None` wil\nl result in an elementwise object comparison in the future.\n e = a==b\nTraceback (most recent call last):\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\examples\\FlowchartCustomNode.py\", line 147, in <mod\nule>\n fNode = fc.createNode('UnsharpMask', pos=(0, 0))\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\flowchart\\Flowchart.py\", line 177, in createNode\n node = self.library.getNodeType(nodeType)(name)\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\examples\\FlowchartCustomNode.py\", line 106, in __in\nit__\n CtrlNode.__init__(self, name, terminals=terminals)\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\flowchart\\library\\common.py\", line 97, in __init__\n self.ui, self.stateGroup, self.ctrls = generateUi(ui)\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\flowchart\\library\\common.py\", line 51, in generateU\ni\n w.setOpts(**o)\n File \"D:\\WinPython\\basedir34\\buildQt5\\winpython-3.4.4.amd64\\python-3.4.4.amd64\n\\lib\\site-packages\\pyqtgraph\\widgets\\SpinBox.py\", line 160, in setOpts\n raise TypeError(\"Invalid keyword argument '%s'.\" % k)\nTypeError: Invalid keyword argument 'range'.\n```\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nThis example demonstrates writing a custom Node subclass for use with flowcharts.\n\nWe implement a couple of simple image processing nodes.\n\"\"\"\nimport initExample ## Add path to library (just for examples; you do not need this)\n\nfrom pyqtgraph.flowchart import Flowchart, Node\nimport pyqtgraph.flowchart.library as fclib\nfrom pyqtgraph.flowchart.library.common import CtrlNode\nfrom pyqtgraph.Qt import QtGui, QtCore\nimport pyqtgraph as pg\nimport numpy as np\n\napp = QtGui.QApplication([])\n\n## Create main window with 
a grid layout inside\nwin = QtGui.QMainWindow()\nwin.setWindowTitle('pyqtgraph example: FlowchartCustomNode')\ncw = QtGui.QWidget()\nwin.setCentralWidget(cw)\nlayout = QtGui.QGridLayout()\ncw.setLayout(layout)\n\n## Create an empty flowchart with a single input and output\nfc = Flowchart(terminals={\n 'dataIn': {'io': 'in'},\n 'dataOut': {'io': 'out'} \n})\nw = fc.widget()\n\nlayout.addWidget(fc.widget(), 0, 0, 2, 1)\n\n## Create two ImageView widgets to display the raw and processed data with contrast\n## and color control.\nv1 = pg.ImageView()\nv2 = pg.ImageView()\nlayout.addWidget(v1, 0, 1)\nlayout.addWidget(v2, 1, 1)\n\nwin.show()\n\n## generate random input data\ndata = np.random.normal(size=(100,100))\ndata = 25 * pg.gaussianFilter(data, (5,5))\ndata += np.random.normal(size=(100,100))\ndata[40:60, 40:60] += 15.0\ndata[30:50, 30:50] += 15.0\n#data += np.sin(np.linspace(0, 100, 1000))\n#data = metaarray.MetaArray(data, info=[{'name': 'Time', 'values': np.linspace(0, 1.0, len(data))}, {}])\n\n## Set the raw data as the input value to the flowchart\nfc.setInput(dataIn=data)\n\n\n## At this point, we need some custom Node classes since those provided in the library\n## are not sufficient. Each node will define a set of input/output terminals, a \n## processing function, and optionally a control widget (to be displayed in the \n## flowchart control panel)\n\nclass ImageViewNode(Node):\n \"\"\"Node that displays image data in an ImageView widget\"\"\"\n nodeName = 'ImageView'\n \n def __init__(self, name):\n self.view = None\n ## Initialize node with only a single input terminal\n Node.__init__(self, name, terminals={'data': {'io':'in'}})\n \n def setView(self, view): ## setView must be called by the program\n self.view = view\n \n def process(self, data, display=True):\n ## if process is called with display=False, then the flowchart is being operated\n ## in batch processing mode, so we should skip displaying to improve performance.\n \n if display and self.view is not None:\n ## the 'data' argument is the value given to the 'data' terminal\n if data is None:\n self.view.setImage(np.zeros((1,1))) # give a blank array to clear the view\n else:\n self.view.setImage(data)\n\n\n\n \n## We will define an unsharp masking filter node as a subclass of CtrlNode.\n## CtrlNode is just a convenience class that automatically creates its\n## control widget based on a simple data structure.\nclass UnsharpMaskNode(CtrlNode):\n \"\"\"Return the input data passed through an unsharp mask.\"\"\"\n nodeName = \"UnsharpMask\"\n uiTemplate = [\n ('sigma', 'spin', {'value': 1.0, 'step': 1.0, 'range': [0.0, None]}),\n ('strength', 'spin', {'value': 1.0, 'dec': True, 'step': 0.5, 'minStep': 0.01, 'range': [0.0, None]}),\n ]\n def __init__(self, name):\n ## Define the input / output terminals available on this node\n terminals = {\n 'dataIn': dict(io='in'), # each terminal needs at least a name and\n 'dataOut': dict(io='out'), # to specify whether it is input or output\n } # other more advanced options are available\n # as well..\n \n CtrlNode.__init__(self, name, terminals=terminals)\n \n def process(self, dataIn, display=True):\n # CtrlNode has created self.ctrls, which is a dict containing {ctrlName: widget}\n sigma = self.ctrls['sigma'].value()\n strength = self.ctrls['strength'].value()\n output = dataIn - (strength * pg.gaussianFilter(dataIn, (sigma,sigma)))\n return {'dataOut': output}\n\n\n## To make our custom node classes available in the flowchart context menu,\n## we can either register them with the 
default node library or make a\n## new library.\n\n \n## Method 1: Register to global default library:\n#fclib.registerNodeType(ImageViewNode, [('Display',)])\n#fclib.registerNodeType(UnsharpMaskNode, [('Image',)])\n\n## Method 2: If we want to make our custom node available only to this flowchart,\n## then instead of registering the node type globally, we can create a new \n## NodeLibrary:\nlibrary = fclib.LIBRARY.copy() # start with the default node set\nlibrary.addNodeType(ImageViewNode, [('Display',)])\n# Add the unsharp mask node to two locations in the menu to demonstrate\n# that we can create arbitrary menu structures\nlibrary.addNodeType(UnsharpMaskNode, [('Image',), \n ('Submenu_test','submenu2','submenu3')])\nfc.setLibrary(library)\n\n\n## Now we will programmatically add nodes to define the function of the flowchart.\n## Normally, the user will do this manually or by loading a pre-generated\n## flowchart file.\n\nv1Node = fc.createNode('ImageView', pos=(0, -150))\nv1Node.setView(v1)\n\nv2Node = fc.createNode('ImageView', pos=(150, -150))\nv2Node.setView(v2)\n\nfNode = fc.createNode('UnsharpMask', pos=(0, 0))\nfc.connectTerminals(fc['dataIn'], fNode['dataIn'])\nfc.connectTerminals(fc['dataIn'], v1Node['data'])\nfc.connectTerminals(fNode['dataOut'], v2Node['data'])\nfc.connectTerminals(fNode['dataOut'], fc['dataOut'])\n\n\n\n## Start Qt event loop unless running in interactive mode or using pyside.\nif __name__ == '__main__':\n import sys\n if (sys.flags.interactive != 1) or not hasattr(QtCore, 'PYQT_VERSION'):\n QtGui.QApplication.instance().exec_()\n", "path": "examples/FlowchartCustomNode.py"}]} | 3,233 | 274 |
gh_patches_debug_86 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-2754 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Transitive import of mitmproxy.version causes warning
Since #1837, we import `.script`, which imports `.flow`, which imports `.version`.
This causes the following warning in pytest:
```
test/mitmproxy/test_version.py::test_version
/Users/kriechi/.pyenv/versions/3.5.3/lib/python3.5/runpy.py:125:
RuntimeWarning: 'mitmproxy.version' found in sys.modules after import of package
'mitmproxy', but prior to execution of 'mitmproxy.version'; this may result in
unpredictable behaviour
warn(RuntimeWarning(msg))
-- Docs: http://doc.pytest.org/en/latest/warnings.html
```
[Note](http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html#the-double-import-trap)
> This next trap exists in all current versions of Python, including 3.3, and can be summed up in the following general guideline: “Never add a package directory, or any directory inside a package, directly to the Python path”.
> The reason this is problematic is that every module in that directory is now potentially accessible under two different names: as a top level module (since the directory is on sys.path) and as a submodule of the package (if the higher level directory containing the package itself is also on sys.path).
Maybe using the approach described [here](https://stackoverflow.com/questions/27947639/how-to-properly-create-a-pyinstaller-hook-or-maybe-hidden-import) works better?
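
A minimal sketch of the mechanism (illustrative only; the import chain is the one described above, and `runpy` stands in for whatever executes `mitmproxy.version` as a script):

```python
import runpy
import sys

import mitmproxy.version  # normally pulled in transitively via .script -> .flow -> .version

assert "mitmproxy.version" in sys.modules
# Executing the same submodule as __main__ now finds it in sys.modules before running it,
# which is exactly the condition runpy warns about:
runpy.run_module("mitmproxy.version", run_name="__main__")
```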
</issue>
<code>
[start of mitmproxy/version.py]
1 import os
2 import subprocess
3
4 # The actual version string. For precompiled binaries, this will be changed to include the build
5 # tag, e.g. "3.0.0.dev0042-0xcafeabc"
6 VERSION = "3.0.0"
7 PATHOD = "pathod " + VERSION
8 MITMPROXY = "mitmproxy " + VERSION
9
10 # Serialization format version. This is displayed nowhere, it just needs to be incremented by one
11 # for each change in the file format.
12 FLOW_FORMAT_VERSION = 5
13
14
15 def get_version(dev: bool = False, build: bool = False, refresh: bool = False) -> str:
16 """
17 Return a detailed version string, sourced either from a hardcoded VERSION constant
18 or obtained dynamically using git.
19
20 Args:
21 dev: If True, non-tagged releases will include a ".devXXXX" suffix, where XXXX is the number
22 of commits since the last tagged release.
23 build: If True, non-tagged releases will include a "-0xXXXXXXX" suffix, where XXXXXXX are
24 the first seven digits of the commit hash.
25 refresh: If True, always try to use git instead of a potentially hardcoded constant.
26 """
27
28 mitmproxy_version = VERSION
29
30 if "dev" in VERSION and not refresh:
31 pass # There is a hardcoded build tag, so we just use what's there.
32 elif dev or build:
33 here = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
34 try:
35 git_describe = subprocess.check_output(
36 ['git', 'describe', '--tags', '--long'],
37 stderr=subprocess.STDOUT,
38 cwd=here,
39 )
40 last_tag, tag_dist, commit = git_describe.decode().strip().rsplit("-", 2)
41 commit = commit.lstrip("g")[:7]
42 tag_dist = int(tag_dist)
43 except Exception:
44 pass
45 else:
46 # Remove current suffix
47 mitmproxy_version = mitmproxy_version.split(".dev")[0]
48
49 # Add suffix for non-tagged releases
50 if tag_dist > 0:
51 mitmproxy_version += ".dev{tag_dist}".format(tag_dist=tag_dist)
52 # The wheel build tag (we use the commit) must start with a digit, so we include "0x"
53 mitmproxy_version += "-0x{commit}".format(commit=commit)
54
55 if not dev:
56 mitmproxy_version = mitmproxy_version.split(".dev")[0]
57 elif not build:
58 mitmproxy_version = mitmproxy_version.split("-0x")[0]
59
60 return mitmproxy_version
61
62
63 if __name__ == "__main__":
64 print(VERSION)
65
[end of mitmproxy/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/version.py b/mitmproxy/version.py
--- a/mitmproxy/version.py
+++ b/mitmproxy/version.py
@@ -60,5 +60,5 @@
return mitmproxy_version
-if __name__ == "__main__":
+if __name__ == "__main__": # pragma: no cover
print(VERSION)
| {"golden_diff": "diff --git a/mitmproxy/version.py b/mitmproxy/version.py\n--- a/mitmproxy/version.py\n+++ b/mitmproxy/version.py\n@@ -60,5 +60,5 @@\n return mitmproxy_version\n \n \n-if __name__ == \"__main__\":\n+if __name__ == \"__main__\": # pragma: no cover\n print(VERSION)\n", "issue": "Transitive import of mitmproxy.version causes warning\nSince #1837, we import `.script`, will imports `.flow`, which imports `.version`.\r\nThis causes the following warning in pytest:\r\n\r\n```\r\ntest/mitmproxy/test_version.py::test_version\r\n /Users/kriechi/.pyenv/versions/3.5.3/lib/python3.5/runpy.py:125: \r\nRuntimeWarning: 'mitmproxy.version' found in sys.modules after import of package \r\n'mitmproxy', but prior to execution of 'mitmproxy.version'; this may result in \r\nunpredictable behaviour\r\n warn(RuntimeWarning(msg))\r\n\r\n-- Docs: http://doc.pytest.org/en/latest/warnings.html\r\n```\r\n\r\n[Note](http://python-notes.curiousefficiency.org/en/latest/python_concepts/import_traps.html#the-double-import-trap)\r\n> This next trap exists in all current versions of Python, including 3.3, and can be summed up in the following general guideline: \u201cNever add a package directory, or any directory inside a package, directly to the Python path\u201d.\r\n\r\n> The reason this is problematic is that every module in that directory is now potentially accessible under two different names: as a top level module (since the directory is on sys.path) and as a submodule of the package (if the higher level directory containing the package itself is also on sys.path).\r\n\r\nMaybe using the approach described [here](https://stackoverflow.com/questions/27947639/how-to-properly-create-a-pyinstaller-hook-or-maybe-hidden-import) works better?\n", "before_files": [{"content": "import os\nimport subprocess\n\n# The actual version string. For precompiled binaries, this will be changed to include the build\n# tag, e.g. \"3.0.0.dev0042-0xcafeabc\"\nVERSION = \"3.0.0\"\nPATHOD = \"pathod \" + VERSION\nMITMPROXY = \"mitmproxy \" + VERSION\n\n# Serialization format version. 
This is displayed nowhere, it just needs to be incremented by one\n# for each change in the file format.\nFLOW_FORMAT_VERSION = 5\n\n\ndef get_version(dev: bool = False, build: bool = False, refresh: bool = False) -> str:\n \"\"\"\n Return a detailed version string, sourced either from a hardcoded VERSION constant\n or obtained dynamically using git.\n\n Args:\n dev: If True, non-tagged releases will include a \".devXXXX\" suffix, where XXXX is the number\n of commits since the last tagged release.\n build: If True, non-tagged releases will include a \"-0xXXXXXXX\" suffix, where XXXXXXX are\n the first seven digits of the commit hash.\n refresh: If True, always try to use git instead of a potentially hardcoded constant.\n \"\"\"\n\n mitmproxy_version = VERSION\n\n if \"dev\" in VERSION and not refresh:\n pass # There is a hardcoded build tag, so we just use what's there.\n elif dev or build:\n here = os.path.abspath(os.path.join(os.path.dirname(__file__), \"..\"))\n try:\n git_describe = subprocess.check_output(\n ['git', 'describe', '--tags', '--long'],\n stderr=subprocess.STDOUT,\n cwd=here,\n )\n last_tag, tag_dist, commit = git_describe.decode().strip().rsplit(\"-\", 2)\n commit = commit.lstrip(\"g\")[:7]\n tag_dist = int(tag_dist)\n except Exception:\n pass\n else:\n # Remove current suffix\n mitmproxy_version = mitmproxy_version.split(\".dev\")[0]\n\n # Add suffix for non-tagged releases\n if tag_dist > 0:\n mitmproxy_version += \".dev{tag_dist}\".format(tag_dist=tag_dist)\n # The wheel build tag (we use the commit) must start with a digit, so we include \"0x\"\n mitmproxy_version += \"-0x{commit}\".format(commit=commit)\n\n if not dev:\n mitmproxy_version = mitmproxy_version.split(\".dev\")[0]\n elif not build:\n mitmproxy_version = mitmproxy_version.split(\"-0x\")[0]\n\n return mitmproxy_version\n\n\nif __name__ == \"__main__\":\n print(VERSION)\n", "path": "mitmproxy/version.py"}]} | 1,577 | 82 |
gh_patches_debug_3742 | rasdani/github-patches | git_diff | netket__netket-1193 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bose-Hubbard model fails in extended Hilbert space for newer versions
Hello,
we are working in an extended Hilbert space in order to calculate the ground-state energy of the Bose-Hubbard model. We added spin-1/2 sites between every boson site in the one-dimensional chain, but left the Hamiltonian unchanged. We expected the ground-state energy to be the same in this extended Hilbert space, but in newer versions of Netket (after 3.3.3) this energy differs from the actual solution. What is more, every time we run the exact diagonalization of the Hamiltonian matrix (with either the full_ed or the lanczos_ed method) its eigenvalues change; we also diagonalized the same Hamiltonian matrix with numpy's linalg package, and that result also changes from run to run. This does not happen in the older versions; the problem begins with Netket 3.3.4.
We believe the problem lies in how the newer versions of Netket handle Hamiltonian matrices built on mixed (composite) Hilbert spaces. When we build the same Bose-Hubbard model on a plain bosonic Fock space, the calculation gives the correct value of the ground-state energy.
Here is the code that we ran in both versions of Netket, together with videos of the discrepancy.
Notebook run in Netket 3.3.3: https://www.youtube.com/watch?v=ENhRJfYg7dg
Notebook run in Netket 3.4.1: https://www.youtube.com/watch?v=Q3XfWrnR7LU
You can download the Jupyter notebook here:
https://www.dropbox.com/s/dza4kbyem2ycg6v/BoseHubbardNetket.ipynb?dl=0
```
# Extended Hilbert space
n_max = 3
L = 3 # Number of spin sites
hil_gen = nk.hilbert.Fock(n_max = n_max)*nk.hilbert.Spin(1/2)
hi_ext = hil_gen
for i in range(L-1):
hi_ext = hi_ext * hil_gen
hi_ext = hi_ext * nk.hilbert.Fock(n_max = n_max)
# Boson-Fock Hilbert space
N = 4 # Number of bosons in the chain
# Chain graph
g = nk.graph.Chain(length=N, pbc=False)
hi_fock = nk.hilbert.Fock(n_max=n_max, N=N)
# Bose-Hubbard Hamiltonian for extended Hilbert space
J = 0.5
U = 0.5
h = create(hi_ext,0)*destroy(hi_ext,2) + create(hi_ext,2)*destroy(hi_ext,4) + create(hi_ext,4)*destroy(hi_ext,6)
h_hc = h.H.collect()
h_u = (number(hi_ext,0)*(number(hi_ext,0)-1) + number(hi_ext,2)*(number(hi_ext,2)-1) + number(hi_ext,4)*(number(hi_ext,4)-1) +
number(hi_ext,6)*(number(hi_ext,6)-1))
H_bh = -J*(h + h_hc) + (U/2)*h_u
# Bose-Hubbard Hamiltonian for the Fock-Bose Hilbert space
H_bh2 = nk.operator.BoseHubbard(hilbert=hi_fock,U=0.5,J=0.5,graph=g)
# lanczos computation of the eigenvalues of the Hamiltonian with the extended Hilbert space
E_bh, ket_bh = nk.exact.lanczos_ed(H_bh, compute_eigenvectors=True)
print("Exact ground state energy = {0:.3f}".format(E_bh[0]))
# full computation of the eigenvalues of the Hamiltonian with the extended Hilbert space
E_bhfull = nk.exact.full_ed(H_bh)
print("Exact ground state energy = {0:.3f}".format(E_bhfull[0]))
# numpy.linalg computation of the eigenvalues of the Hamiltonian with the extended Hilbert space
M_bh = H_bh.to_dense()
E_bhla = min(np.linalg.eig(M_bh)[0])
print("Ground state energy:", E_bhla.real)
# scipy sparse computation of the eigenvalues of the Hamiltonian with the extended Hilbert space
H_bhsparse = H_bh.to_sparse()
eig_vals, eig_vecs = eigsh(H_bhsparse, k=1, which="SA")
print("eigenvalues with scipy sparse:", eig_vals)
# lanczos computation of the eigenvalues of the Hamiltonian with the Fock Hilbert space
E_bh2, ket_bh2 = nk.exact.lanczos_ed(H_bh2, compute_eigenvectors=True)
print("Exact ground state energy = {0:.3f}".format(E_bh2[0]))
# full computation of the eigenvalues of the Hamiltonian with the Fock Hilbert space
E_bhfull2= nk.exact.full_ed(H_bh2)
print("Exact ground state energy = {0:.3f}".format(E_bhfull2[0]))
# numpy.linalg computation of the eigenvalues of the Hamiltonian with the Fock Hilbert space
M_bh2 = H_bh2.to_dense()
E_bhla2 = min(np.linalg.eig(M_bh2)[0])
print("Ground state energy:", E_bhla2.real)
# scipy sparse computation of the eigenvalues of the Hamiltonian with the extended Hilbert space
H_bhsparse2 = H_bh2.to_sparse()
eig_vals2, eig_vecs2 = eigsh(H_bhsparse2, k=1, which="SA")
print("eigenvalues with scipy sparse:", eig_vals2)
```
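
A minimal reproducibility check for the run-to-run changes described above (a sketch reusing the `H_bh` operator defined in the snippet; nothing beyond it is assumed):

```python
# Diagonalize the same extended-space Hamiltonian twice; on Netket >= 3.3.4 the
# two ground-state energies printed below can differ between runs.
E_a, _ = nk.exact.lanczos_ed(H_bh, compute_eigenvectors=True)
E_b, _ = nk.exact.lanczos_ed(H_bh, compute_eigenvectors=True)
print(E_a[0], E_b[0])
```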
</issue>
<code>
[start of netket/operator/_local_operator_compile_helpers.py]
1 # Copyright 2022 The NetKet Authors - All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 This file contains functions generating the numba-packed representation of local
17 operators.
18 """
19
20 import numpy as np
21 import numba
22
23 from netket.hilbert import AbstractHilbert
24 from netket.utils.types import DType
25
26
27 def pack_internals(
28 hilbert: AbstractHilbert,
29 operators_dict: dict,
30 constant,
31 dtype: DType,
32 mel_cutoff: float,
33 ):
34 """
35 Take the internal lazy representation of a local operator and returns the arrays
36 needed for the numba implementation.
37
38 This takes as input a dictionary with Tuples as keys, the `acting_on` and matrices as values.
39 The keys represent the sites upon which the matrix acts.
40 It is assumed that the integer in the tuples are sorted.
41
42 Returns a dictionary with all the data fields
43 """
44 op_acting_on = list(operators_dict.keys())
45 operators = list(operators_dict.values())
46 n_operators = len(operators_dict)
47
48 """Analyze the operator strings and precompute arrays for get_conn inference"""
49 acting_size = np.array([len(aon) for aon in op_acting_on], dtype=np.intp)
50 max_acting_on_sz = np.max(acting_size)
51 max_local_hilbert_size = max(
52 [max(map(hilbert.size_at_index, aon)) for aon in op_acting_on]
53 )
54 max_op_size = max(map(lambda x: x.shape[0], operators))
55
56 acting_on = np.full((n_operators, max_acting_on_sz), -1, dtype=np.intp)
57 for (i, aon) in enumerate(op_acting_on):
58 acting_on[i][: len(aon)] = aon
59
60 local_states = np.full(
61 (n_operators, max_acting_on_sz, max_local_hilbert_size), np.nan
62 )
63 basis = np.full((n_operators, max_acting_on_sz), 1e10, dtype=np.int64)
64
65 diag_mels = np.full((n_operators, max_op_size), np.nan, dtype=dtype)
66 mels = np.full(
67 (n_operators, max_op_size, max_op_size - 1),
68 np.nan,
69 dtype=dtype,
70 )
71 x_prime = np.full(
72 (n_operators, max_op_size, max_op_size - 1, max_acting_on_sz),
73 -1,
74 dtype=np.float64,
75 )
76 n_conns = np.full((n_operators, max_op_size), -1, dtype=np.intp)
77
78 for (i, (aon, op)) in enumerate(operators_dict.items()):
79 aon_size = len(aon)
80 n_local_states_per_site = np.asarray([hilbert.size_at_index(i) for i in aon])
81
82 ## add an operator to local_states
83 for (j, site) in enumerate(aon):
84 local_states[i, j, : hilbert.shape[site]] = np.asarray(
85 hilbert.states_at_index(site)
86 )
87
88 ba = 1
89 for s in range(aon_size):
90 basis[i, s] = ba
91 ba *= hilbert.shape[aon_size - s - 1]
92
93 # eventually could support sparse matrices
94 # if isinstance(op, sparse.spmatrix):
95 # op = op.todense()
96
97 _append_matrix(
98 op,
99 diag_mels[i],
100 mels[i],
101 x_prime[i],
102 n_conns[i],
103 aon_size,
104 local_states[i],
105 mel_cutoff,
106 n_local_states_per_site,
107 )
108
109 nonzero_diagonal = (
110 np.any(np.abs(diag_mels) >= mel_cutoff) or np.abs(constant) >= mel_cutoff
111 )
112
113 max_conn_size = 1 if nonzero_diagonal else 0
114 for op in operators:
115 nnz_mat = np.abs(op) > mel_cutoff
116 nnz_mat[np.diag_indices(nnz_mat.shape[0])] = False
117 nnz_rows = np.sum(nnz_mat, axis=1)
118 max_conn_size += np.max(nnz_rows)
119
120 return {
121 "acting_on": acting_on,
122 "acting_size": acting_size,
123 "diag_mels": diag_mels,
124 "mels": mels,
125 "x_prime": x_prime,
126 "n_conns": n_conns,
127 "local_states": local_states,
128 "basis": basis,
129 "nonzero_diagonal": nonzero_diagonal,
130 "max_conn_size": max_conn_size,
131 }
132
133
134 @numba.jit(nopython=True)
135 def _append_matrix(
136 operator,
137 diag_mels,
138 mels,
139 x_prime,
140 n_conns,
141 acting_size,
142 local_states_per_site,
143 epsilon,
144 hilb_size_per_site,
145 ):
146 op_size = operator.shape[0]
147 assert op_size == operator.shape[1]
148 for i in range(op_size):
149 diag_mels[i] = operator[i, i]
150 n_conns[i] = 0
151 for j in range(op_size):
152 if i != j and np.abs(operator[i, j]) > epsilon:
153 k_conn = n_conns[i]
154 mels[i, k_conn] = operator[i, j]
155 _number_to_state(
156 j,
157 hilb_size_per_site,
158 local_states_per_site[:acting_size, :],
159 x_prime[i, k_conn, :acting_size],
160 )
161 n_conns[i] += 1
162
163
164 @numba.jit(nopython=True)
165 def _number_to_state(number, hilbert_size_per_site, local_states_per_site, out):
166
167 out[:] = local_states_per_site[:, 0]
168 size = out.shape[0]
169
170 ip = number
171 k = size - 1
172 while ip > 0:
173 local_size = hilbert_size_per_site[k]
174 out[k] = local_states_per_site[k, ip % local_size]
175 ip = ip // local_size
176 k -= 1
177
178 return out
179
[end of netket/operator/_local_operator_compile_helpers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/netket/operator/_local_operator_compile_helpers.py b/netket/operator/_local_operator_compile_helpers.py
--- a/netket/operator/_local_operator_compile_helpers.py
+++ b/netket/operator/_local_operator_compile_helpers.py
@@ -88,7 +88,7 @@
ba = 1
for s in range(aon_size):
basis[i, s] = ba
- ba *= hilbert.shape[aon_size - s - 1]
+ ba *= hilbert.shape[aon[aon_size - s - 1]]
# eventually could support sparse matrices
# if isinstance(op, sparse.spmatrix):
| {"golden_diff": "diff --git a/netket/operator/_local_operator_compile_helpers.py b/netket/operator/_local_operator_compile_helpers.py\n--- a/netket/operator/_local_operator_compile_helpers.py\n+++ b/netket/operator/_local_operator_compile_helpers.py\n@@ -88,7 +88,7 @@\n ba = 1\n for s in range(aon_size):\n basis[i, s] = ba\n- ba *= hilbert.shape[aon_size - s - 1]\n+ ba *= hilbert.shape[aon[aon_size - s - 1]]\n \n # eventually could support sparse matrices\n # if isinstance(op, sparse.spmatrix):\n", "issue": "Bose-Hubbard model fails in extended Hilbert space for newer versions\nHello, \r\nwe are working on an extended Hilbert space in order to calculate the ground state energy of the Bose-Hubbard model. We added spin 1/2 sites between every boson site in the one-dimensional chain, but we left the Hamiltonian unchanged. We expected that the ground state energy should be the same in this extended Hilbert space, but in the newer versions of Netket (post 3.3.3), this energy differs from the actual solution. Even more, every time we run the exact diagonalization of the Hamiltonian matrix (either with the full_ed or lanczos_ed method), its eigenvalues change; we also diagonalized the same Hamiltonian matrix with the linalg package of numpy and the result also differs every time we do it. This doesn't happen in the older versions; the problem begins in the 3.3.4 version of Netket.\r\n\r\nWe claim that the problem resides in the way that the newer versions of Netket interpret the Hamiltonian matrices with mixed Hilbert spaces. When we do the same Bose-Hubbard model in a Fock-Boson space, the calculation gives the correct value of the ground state energy.\r\n\r\nHere is the code that we run in both versions of Netket, and a video of the discrepancy. \r\n\r\nNotebook ran in Nektet 3.3.3: https://www.youtube.com/watch?v=ENhRJfYg7dg\r\nNotebook ran in Netket 3.4.1: https://www.youtube.com/watch?v=Q3XfWrnR7LU\r\n\r\nYou can download the Jupyter Script here:\r\nhttps://www.dropbox.com/s/dza4kbyem2ycg6v/BoseHubbardNetket.ipynb?dl=0\r\n\r\n```\r\n# Extended Hilbert space\r\nn_max = 3\r\nL = 3 # Number of spin sites\r\nhil_gen = nk.hilbert.Fock(n_max = n_max)*nk.hilbert.Spin(1/2)\r\nhi_ext = hil_gen\r\nfor i in range(L-1):\r\n hi_ext = hi_ext * hil_gen \r\nhi_ext = hi_ext * nk.hilbert.Fock(n_max = n_max)\r\n# Boson-Fock Hilbert space\r\nN = 4 # Number of bosons in the chain\r\n# Chain graph\r\ng = nk.graph.Chain(length=N, pbc=False)\r\nhi_fock = nk.hilbert.Fock(n_max=n_max, N=N)\r\n\r\n# Bose-Hubbard Hamiltonian for extended Hilbert space\r\nJ = 0.5\r\nU = 0.5\r\nh = create(hi_ext,0)*destroy(hi_ext,2) + create(hi_ext,2)*destroy(hi_ext,4) + create(hi_ext,4)*destroy(hi_ext,6)\r\nh_hc = h.H.collect()\r\nh_u = (number(hi_ext,0)*(number(hi_ext,0)-1) + number(hi_ext,2)*(number(hi_ext,2)-1) + number(hi_ext,4)*(number(hi_ext,4)-1) + \r\n number(hi_ext,6)*(number(hi_ext,6)-1))\r\nH_bh = -J*(h + h_hc) + (U/2)*h_u\r\n\r\n# Bose-Hubbard Hamiltonian for the Fock-Bose Hilbert space\r\nH_bh2 = nk.operator.BoseHubbard(hilbert=hi_fock,U=0.5,J=0.5,graph=g)\r\n\r\n# lanczos computation of the eigenvalues of the Hamiltonian with the extended Hilbert space\r\nE_bh, ket_bh = nk.exact.lanczos_ed(H_bh, compute_eigenvectors=True)\r\nprint(\"Exact ground state energy = {0:.3f}\".format(E_bh[0]))\r\n\r\n# full computation of the eigenvalues of the Hamiltonian with the extended Hilbert space\r\nE_bhfull = nk.exact.full_ed(H_bh)\r\nprint(\"Exact ground state energy = {0:.3f}\".format(E_bhfull[0]))\r\n\r\n# numpy.linalg computation 
of the eigenvalues of the Hamiltonian with the extended Hilbert space\r\nM_bh = H_bh.to_dense()\r\nE_bhla = min(np.linalg.eig(M_bh)[0])\r\nprint(\"Ground state energy:\", E_bhla.real)\r\n\r\n# scipy sparse computation of the eigenvalues of the Hamiltonian with the extended Hilbert space\r\nH_bhsparse = H_bh.to_sparse()\r\neig_vals, eig_vecs = eigsh(H_bhsparse, k=1, which=\"SA\")\r\nprint(\"eigenvalues with scipy sparse:\", eig_vals)\r\n\r\n# lanczos computation of the eigenvalues of the Hamiltonian with the Fock Hilbert space\r\nE_bh2, ket_bh2 = nk.exact.lanczos_ed(H_bh2, compute_eigenvectors=True)\r\nprint(\"Exact ground state energy = {0:.3f}\".format(E_bh2[0]))\r\n\r\n# full computation of the eigenvalues of the Hamiltonian with the Fock Hilbert space\r\nE_bhfull2= nk.exact.full_ed(H_bh2)\r\nprint(\"Exact ground state energy = {0:.3f}\".format(E_bhfull2[0]))\r\n\r\n# numpy.linalg computation of the eigenvalues of the Hamiltonian with the Fock Hilbert space\r\nM_bh2 = H_bh2.to_dense()\r\nE_bhla2 = min(np.linalg.eig(M_bh2)[0])\r\nprint(\"Ground state energy:\", E_bhla2.real)\r\n\r\n# scipy sparse computation of the eigenvalues of the Hamiltonian with the extended Hilbert space\r\nH_bhsparse2 = H_bh2.to_sparse()\r\neig_vals2, eig_vecs2 = eigsh(H_bhsparse2, k=1, which=\"SA\")\r\nprint(\"eigenvalues with scipy sparse:\", eig_vals2)\r\n```\n", "before_files": [{"content": "# Copyright 2022 The NetKet Authors - All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nThis file contains functions generating the numba-packed representation of local\noperators.\n\"\"\"\n\nimport numpy as np\nimport numba\n\nfrom netket.hilbert import AbstractHilbert\nfrom netket.utils.types import DType\n\n\ndef pack_internals(\n hilbert: AbstractHilbert,\n operators_dict: dict,\n constant,\n dtype: DType,\n mel_cutoff: float,\n):\n \"\"\"\n Take the internal lazy representation of a local operator and returns the arrays\n needed for the numba implementation.\n\n This takes as input a dictionary with Tuples as keys, the `acting_on` and matrices as values.\n The keys represent the sites upon which the matrix acts.\n It is assumed that the integer in the tuples are sorted.\n\n Returns a dictionary with all the data fields\n \"\"\"\n op_acting_on = list(operators_dict.keys())\n operators = list(operators_dict.values())\n n_operators = len(operators_dict)\n\n \"\"\"Analyze the operator strings and precompute arrays for get_conn inference\"\"\"\n acting_size = np.array([len(aon) for aon in op_acting_on], dtype=np.intp)\n max_acting_on_sz = np.max(acting_size)\n max_local_hilbert_size = max(\n [max(map(hilbert.size_at_index, aon)) for aon in op_acting_on]\n )\n max_op_size = max(map(lambda x: x.shape[0], operators))\n\n acting_on = np.full((n_operators, max_acting_on_sz), -1, dtype=np.intp)\n for (i, aon) in enumerate(op_acting_on):\n acting_on[i][: len(aon)] = aon\n\n local_states = np.full(\n (n_operators, max_acting_on_sz, max_local_hilbert_size), np.nan\n )\n basis = np.full((n_operators, 
max_acting_on_sz), 1e10, dtype=np.int64)\n\n diag_mels = np.full((n_operators, max_op_size), np.nan, dtype=dtype)\n mels = np.full(\n (n_operators, max_op_size, max_op_size - 1),\n np.nan,\n dtype=dtype,\n )\n x_prime = np.full(\n (n_operators, max_op_size, max_op_size - 1, max_acting_on_sz),\n -1,\n dtype=np.float64,\n )\n n_conns = np.full((n_operators, max_op_size), -1, dtype=np.intp)\n\n for (i, (aon, op)) in enumerate(operators_dict.items()):\n aon_size = len(aon)\n n_local_states_per_site = np.asarray([hilbert.size_at_index(i) for i in aon])\n\n ## add an operator to local_states\n for (j, site) in enumerate(aon):\n local_states[i, j, : hilbert.shape[site]] = np.asarray(\n hilbert.states_at_index(site)\n )\n\n ba = 1\n for s in range(aon_size):\n basis[i, s] = ba\n ba *= hilbert.shape[aon_size - s - 1]\n\n # eventually could support sparse matrices\n # if isinstance(op, sparse.spmatrix):\n # op = op.todense()\n\n _append_matrix(\n op,\n diag_mels[i],\n mels[i],\n x_prime[i],\n n_conns[i],\n aon_size,\n local_states[i],\n mel_cutoff,\n n_local_states_per_site,\n )\n\n nonzero_diagonal = (\n np.any(np.abs(diag_mels) >= mel_cutoff) or np.abs(constant) >= mel_cutoff\n )\n\n max_conn_size = 1 if nonzero_diagonal else 0\n for op in operators:\n nnz_mat = np.abs(op) > mel_cutoff\n nnz_mat[np.diag_indices(nnz_mat.shape[0])] = False\n nnz_rows = np.sum(nnz_mat, axis=1)\n max_conn_size += np.max(nnz_rows)\n\n return {\n \"acting_on\": acting_on,\n \"acting_size\": acting_size,\n \"diag_mels\": diag_mels,\n \"mels\": mels,\n \"x_prime\": x_prime,\n \"n_conns\": n_conns,\n \"local_states\": local_states,\n \"basis\": basis,\n \"nonzero_diagonal\": nonzero_diagonal,\n \"max_conn_size\": max_conn_size,\n }\n\n\[email protected](nopython=True)\ndef _append_matrix(\n operator,\n diag_mels,\n mels,\n x_prime,\n n_conns,\n acting_size,\n local_states_per_site,\n epsilon,\n hilb_size_per_site,\n):\n op_size = operator.shape[0]\n assert op_size == operator.shape[1]\n for i in range(op_size):\n diag_mels[i] = operator[i, i]\n n_conns[i] = 0\n for j in range(op_size):\n if i != j and np.abs(operator[i, j]) > epsilon:\n k_conn = n_conns[i]\n mels[i, k_conn] = operator[i, j]\n _number_to_state(\n j,\n hilb_size_per_site,\n local_states_per_site[:acting_size, :],\n x_prime[i, k_conn, :acting_size],\n )\n n_conns[i] += 1\n\n\[email protected](nopython=True)\ndef _number_to_state(number, hilbert_size_per_site, local_states_per_site, out):\n\n out[:] = local_states_per_site[:, 0]\n size = out.shape[0]\n\n ip = number\n k = size - 1\n while ip > 0:\n local_size = hilbert_size_per_site[k]\n out[k] = local_states_per_site[k, ip % local_size]\n ip = ip // local_size\n k -= 1\n\n return out\n", "path": "netket/operator/_local_operator_compile_helpers.py"}]} | 3,652 | 138 |
gh_patches_debug_31447 | rasdani/github-patches | git_diff | sunpy__sunpy-1551 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove the need to have astropy installed before installing SunPy
Currently you can not have a clean python environment and do a `pip install sunpy` you have to have astropy + numpy installed first.
</issue>
<code>
[start of sunpy/io/setup_package.py]
1 from __future__ import absolute_import
2
3 import os
4 import platform
5
6 from distutils.core import Extension
7 from glob import glob
8
9 from astropy_helpers import setup_helpers
10 from astropy.extern import six
11
12
13 def get_extensions():
14
15 if platform.system() == 'Windows' or six.PY3:
16 return list()
17 else:
18 # 'numpy' will be replaced with the proper path to the numpy includes
19 cfg = setup_helpers.DistutilsExtensionArgs()
20 cfg['include_dirs'].append('numpy')
21 cfg['sources'].extend(glob(os.path.join(os.path.dirname(__file__), 'src', 'ana', '*.c')))
22 cfg['extra_compile_args'].extend(['-std=c99', '-O3'])
23 # Squash some warnings
24 cfg['extra_compile_args'].extend(['-Wno-unused-but-set-variable',
25 '-Wno-unused-variable',
26 '-Wno-unused-result'])
27
28 e = Extension('sunpy.io._pyana', **cfg)
29 return [e]
30
31 def requires_2to3():
32 return False
33
[end of sunpy/io/setup_package.py]
[start of setup.py]
1 #!/usr/bin/env python
2 # This file is based havily on the astropy version here:
3 # https://github.com/astropy/package-template/blob/master/setup.py
4 # Which is licensed under the astropy license.
5
6 import glob
7 import os
8 import sys
9
10 import ah_bootstrap
11 from setuptools import setup
12
13 # A dirty hack to get around some early import/configurations ambiguities
14 if sys.version_info[0] >= 3:
15 import builtins
16 else:
17 import __builtin__ as builtins
18 builtins._ASTROPY_SETUP_ = True
19
20 # -- Read the Docs Setup -----------------------------------------------------
21
22 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
23
24 if on_rtd:
25 os.environ['HOME'] = '/home/docs/checkouts/readthedocs.org/user_builds/sunpy/'
26 os.environ['SUNPY_CONFIGDIR'] = '/home/docs/checkouts/readthedocs.org/user_builds/sunpy/'
27
28 from astropy_helpers.setup_helpers import (
29 register_commands, adjust_compiler, get_debug_option, get_package_info)
30 from astropy_helpers.git_helpers import get_git_devstr
31 from astropy_helpers.version_helpers import generate_version_py
32 from sunpy.tests.setup_command import SunPyTest
33
34 # Get some values from the setup.cfg
35 from distutils import config
36 conf = config.ConfigParser()
37 conf.read(['setup.cfg'])
38 metadata = dict(conf.items('metadata'))
39
40 PACKAGENAME = metadata.get('package_name', 'packagename')
41 DESCRIPTION = metadata.get('description', 'SunPy: Python for Solar Physics')
42 AUTHOR = metadata.get('author', 'The SunPy Community')
43 AUTHOR_EMAIL = metadata.get('author_email', '[email protected]')
44 LICENSE = metadata.get('license', 'BSD 2-Clause')
45 URL = metadata.get('url', 'http://sunpy.org')
46
47 LONG_DESCRIPTION = "SunPy is a Python library for solar physics data analysis."
48
49 # Store the package name in a built-in variable so it's easy
50 # to get from other parts of the setup infrastructure
51 builtins._ASTROPY_PACKAGE_NAME_ = PACKAGENAME
52
53 # VERSION should be PEP386 compatible (http://www.python.org/dev/peps/pep-0386)
54 VERSION = '0.7.dev'
55
56 # Indicates if this version is a release version
57 RELEASE = 'dev' not in VERSION
58
59 if not RELEASE:
60 VERSION += get_git_devstr(False)
61
62 # Populate the dict of setup command overrides; this should be done before
63 # invoking any other functionality from distutils since it can potentially
64 # modify distutils' behavior.
65 cmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)
66
67 # Overwrite the Astropy Testing framework
68 cmdclassd['test'] = type('SunPyTest', (SunPyTest,),
69 {'package_name': 'sunpy'})
70
71 # Adjust the compiler in case the default on this platform is to use a
72 # broken one.
73 adjust_compiler(PACKAGENAME)
74
75 # Freeze build information in version.py
76 generate_version_py(PACKAGENAME, VERSION, RELEASE,
77 get_debug_option(PACKAGENAME))
78
79 # Treat everything in scripts except README.rst as a script to be installed
80 scripts = [fname for fname in glob.glob(os.path.join('scripts', '*'))
81 if os.path.basename(fname) != 'README.rst']
82
83
84 # Get configuration information from all of the various subpackages.
85 # See the docstring for setup_helpers.update_package_files for more
86 # details.
87 package_info = get_package_info()
88
89 # Add the project-global data
90 package_info['package_data'].setdefault(PACKAGENAME, [])
91
92 # Include all .c files, recursively, including those generated by
93 # Cython, since we can not do this in MANIFEST.in with a "dynamic"
94 # directory name.
95 c_files = []
96 for root, dirs, files in os.walk(PACKAGENAME):
97 for filename in files:
98 if filename.endswith('.c'):
99 c_files.append(
100 os.path.join(
101 os.path.relpath(root, PACKAGENAME), filename))
102 package_info['package_data'][PACKAGENAME].extend(c_files)
103
104 extras_require = {'database': ["sqlalchemy"],
105 'image': ["scikit-image"],
106 'jpeg2000': ["glymur"],
107 'net': ["suds", "beautifulsoup4", "requests"]}
108 extras_require['all'] = extras_require['database'] + extras_require['image'] + \
109 extras_require['net'] + ["wcsaxes>=0.6"]
110
111 setup(name=PACKAGENAME,
112 version=VERSION,
113 description=DESCRIPTION,
114 scripts=scripts,
115 setup_requires=['numpy>1.7.1'],
116 install_requires=['numpy>1.7.1',
117 'astropy>=1.0.0',
118 'scipy',
119 'pandas>=0.12.0',
120 'matplotlib>=1.1'],
121 extras_require=extras_require,
122 provides=[PACKAGENAME],
123 author=AUTHOR,
124 author_email=AUTHOR_EMAIL,
125 license=LICENSE,
126 url=URL,
127 long_description=LONG_DESCRIPTION,
128 cmdclass=cmdclassd,
129 zip_safe=False,
130 use_2to3=False,
131 include_package_data=True,
132 **package_info
133 )
134
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,7 +29,6 @@
register_commands, adjust_compiler, get_debug_option, get_package_info)
from astropy_helpers.git_helpers import get_git_devstr
from astropy_helpers.version_helpers import generate_version_py
-from sunpy.tests.setup_command import SunPyTest
# Get some values from the setup.cfg
from distutils import config
@@ -64,9 +63,14 @@
# modify distutils' behavior.
cmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)
-# Overwrite the Astropy Testing framework
-cmdclassd['test'] = type('SunPyTest', (SunPyTest,),
- {'package_name': 'sunpy'})
+try:
+ from sunpy.tests.setup_command import SunPyTest
+ # Overwrite the Astropy Testing framework
+ cmdclassd['test'] = type('SunPyTest', (SunPyTest,),
+ {'package_name': 'sunpy'})
+except Exception:
+ # Catch everything, if it doesn't work, we still want SunPy to install.
+ pass
# Adjust the compiler in case the default on this platform is to use a
# broken one.
diff --git a/sunpy/io/setup_package.py b/sunpy/io/setup_package.py
--- a/sunpy/io/setup_package.py
+++ b/sunpy/io/setup_package.py
@@ -1,18 +1,18 @@
from __future__ import absolute_import
import os
+import sys
import platform
from distutils.core import Extension
from glob import glob
from astropy_helpers import setup_helpers
-from astropy.extern import six
def get_extensions():
- if platform.system() == 'Windows' or six.PY3:
+ if platform.system() == 'Windows' or sys.version_info.major == 3:
return list()
else:
# 'numpy' will be replaced with the proper path to the numpy includes
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,7 +29,6 @@\n register_commands, adjust_compiler, get_debug_option, get_package_info)\n from astropy_helpers.git_helpers import get_git_devstr\n from astropy_helpers.version_helpers import generate_version_py\n-from sunpy.tests.setup_command import SunPyTest\n \n # Get some values from the setup.cfg\n from distutils import config\n@@ -64,9 +63,14 @@\n # modify distutils' behavior.\n cmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)\n \n-# Overwrite the Astropy Testing framework\n-cmdclassd['test'] = type('SunPyTest', (SunPyTest,),\n- {'package_name': 'sunpy'})\n+try:\n+ from sunpy.tests.setup_command import SunPyTest\n+ # Overwrite the Astropy Testing framework\n+ cmdclassd['test'] = type('SunPyTest', (SunPyTest,),\n+ {'package_name': 'sunpy'})\n+except Exception:\n+ # Catch everything, if it doesn't work, we still want SunPy to install.\n+ pass\n \n # Adjust the compiler in case the default on this platform is to use a\n # broken one.\ndiff --git a/sunpy/io/setup_package.py b/sunpy/io/setup_package.py\n--- a/sunpy/io/setup_package.py\n+++ b/sunpy/io/setup_package.py\n@@ -1,18 +1,18 @@\n from __future__ import absolute_import\n \n import os\n+import sys\n import platform\n \n from distutils.core import Extension\n from glob import glob\n \n from astropy_helpers import setup_helpers\n-from astropy.extern import six\n \n \n def get_extensions():\n \n- if platform.system() == 'Windows' or six.PY3:\n+ if platform.system() == 'Windows' or sys.version_info.major == 3:\n return list()\n else:\n # 'numpy' will be replaced with the proper path to the numpy includes\n", "issue": "Remove the need to have astropy installed before installing SunPy\nCurrently you can not have a clean python environment and do a `pip install sunpy` you have to have astropy + numpy installed first.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport os\nimport platform\n\nfrom distutils.core import Extension\nfrom glob import glob\n\nfrom astropy_helpers import setup_helpers\nfrom astropy.extern import six\n\n\ndef get_extensions():\n\n if platform.system() == 'Windows' or six.PY3:\n return list()\n else:\n # 'numpy' will be replaced with the proper path to the numpy includes\n cfg = setup_helpers.DistutilsExtensionArgs()\n cfg['include_dirs'].append('numpy')\n cfg['sources'].extend(glob(os.path.join(os.path.dirname(__file__), 'src', 'ana', '*.c')))\n cfg['extra_compile_args'].extend(['-std=c99', '-O3'])\n # Squash some warnings\n cfg['extra_compile_args'].extend(['-Wno-unused-but-set-variable',\n '-Wno-unused-variable',\n '-Wno-unused-result'])\n\n e = Extension('sunpy.io._pyana', **cfg)\n return [e]\n\ndef requires_2to3():\n return False\n", "path": "sunpy/io/setup_package.py"}, {"content": "#!/usr/bin/env python\n# This file is based havily on the astropy version here:\n# https://github.com/astropy/package-template/blob/master/setup.py\n# Which is licensed under the astropy license.\n\nimport glob\nimport os\nimport sys\n\nimport ah_bootstrap\nfrom setuptools import setup\n\n# A dirty hack to get around some early import/configurations ambiguities\nif sys.version_info[0] >= 3:\n import builtins\nelse:\n import __builtin__ as builtins\nbuiltins._ASTROPY_SETUP_ = True\n\n# -- Read the Docs Setup -----------------------------------------------------\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\nif on_rtd:\n os.environ['HOME'] = 
'/home/docs/checkouts/readthedocs.org/user_builds/sunpy/'\n os.environ['SUNPY_CONFIGDIR'] = '/home/docs/checkouts/readthedocs.org/user_builds/sunpy/'\n\nfrom astropy_helpers.setup_helpers import (\n register_commands, adjust_compiler, get_debug_option, get_package_info)\nfrom astropy_helpers.git_helpers import get_git_devstr\nfrom astropy_helpers.version_helpers import generate_version_py\nfrom sunpy.tests.setup_command import SunPyTest\n\n# Get some values from the setup.cfg\nfrom distutils import config\nconf = config.ConfigParser()\nconf.read(['setup.cfg'])\nmetadata = dict(conf.items('metadata'))\n\nPACKAGENAME = metadata.get('package_name', 'packagename')\nDESCRIPTION = metadata.get('description', 'SunPy: Python for Solar Physics')\nAUTHOR = metadata.get('author', 'The SunPy Community')\nAUTHOR_EMAIL = metadata.get('author_email', '[email protected]')\nLICENSE = metadata.get('license', 'BSD 2-Clause')\nURL = metadata.get('url', 'http://sunpy.org')\n\nLONG_DESCRIPTION = \"SunPy is a Python library for solar physics data analysis.\"\n\n# Store the package name in a built-in variable so it's easy\n# to get from other parts of the setup infrastructure\nbuiltins._ASTROPY_PACKAGE_NAME_ = PACKAGENAME\n\n# VERSION should be PEP386 compatible (http://www.python.org/dev/peps/pep-0386)\nVERSION = '0.7.dev'\n\n# Indicates if this version is a release version\nRELEASE = 'dev' not in VERSION\n\nif not RELEASE:\n VERSION += get_git_devstr(False)\n\n# Populate the dict of setup command overrides; this should be done before\n# invoking any other functionality from distutils since it can potentially\n# modify distutils' behavior.\ncmdclassd = register_commands(PACKAGENAME, VERSION, RELEASE)\n\n# Overwrite the Astropy Testing framework\ncmdclassd['test'] = type('SunPyTest', (SunPyTest,),\n {'package_name': 'sunpy'})\n\n# Adjust the compiler in case the default on this platform is to use a\n# broken one.\nadjust_compiler(PACKAGENAME)\n\n# Freeze build information in version.py\ngenerate_version_py(PACKAGENAME, VERSION, RELEASE,\n get_debug_option(PACKAGENAME))\n\n# Treat everything in scripts except README.rst as a script to be installed\nscripts = [fname for fname in glob.glob(os.path.join('scripts', '*'))\n if os.path.basename(fname) != 'README.rst']\n\n\n# Get configuration information from all of the various subpackages.\n# See the docstring for setup_helpers.update_package_files for more\n# details.\npackage_info = get_package_info()\n\n# Add the project-global data\npackage_info['package_data'].setdefault(PACKAGENAME, [])\n\n# Include all .c files, recursively, including those generated by\n# Cython, since we can not do this in MANIFEST.in with a \"dynamic\"\n# directory name.\nc_files = []\nfor root, dirs, files in os.walk(PACKAGENAME):\n for filename in files:\n if filename.endswith('.c'):\n c_files.append(\n os.path.join(\n os.path.relpath(root, PACKAGENAME), filename))\npackage_info['package_data'][PACKAGENAME].extend(c_files)\n\nextras_require = {'database': [\"sqlalchemy\"],\n 'image': [\"scikit-image\"],\n 'jpeg2000': [\"glymur\"],\n 'net': [\"suds\", \"beautifulsoup4\", \"requests\"]}\nextras_require['all'] = extras_require['database'] + extras_require['image'] + \\\n extras_require['net'] + [\"wcsaxes>=0.6\"]\n\nsetup(name=PACKAGENAME,\n version=VERSION,\n description=DESCRIPTION,\n scripts=scripts,\n setup_requires=['numpy>1.7.1'],\n install_requires=['numpy>1.7.1',\n 'astropy>=1.0.0',\n 'scipy',\n 'pandas>=0.12.0',\n 'matplotlib>=1.1'],\n extras_require=extras_require,\n 
provides=[PACKAGENAME],\n author=AUTHOR,\n author_email=AUTHOR_EMAIL,\n license=LICENSE,\n url=URL,\n long_description=LONG_DESCRIPTION,\n cmdclass=cmdclassd,\n zip_safe=False,\n use_2to3=False,\n include_package_data=True,\n **package_info\n )\n", "path": "setup.py"}]} | 2,292 | 438 |