problem_id (stringlengths 18-22) | source (stringclasses, 1 value) | task_type (stringclasses, 1 value) | in_source_id (stringlengths 13-58) | prompt (stringlengths 1.71k-18.9k) | golden_diff (stringlengths 145-5.13k) | verification_info (stringlengths 465-23.6k) | num_tokens_prompt (int64, 556-4.1k) | num_tokens_diff (int64, 47-1.02k)
---|---|---|---|---|---|---|---|---|
gh_patches_debug_16747 | rasdani/github-patches | git_diff | streamlit__streamlit-6321 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug in clearing cache of cached function via .clear() in streamlit==1.20.0
### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
In streamlit==1.20.0 I am unable to clear the cache of a cached function via `.clear()` (this was possible in streamlit 1.19.0).
I suspect this is due to changes made in the latest streamlit version to `streamlit\runtime\caching\`, and in particular to the `storage` module: now, even if `st.cache_data` is used without any custom argument - and therefore with `persist=None` - streamlit tries to clear the local cache as well, and searching for the local cache folder eventually raises the error `FileNotFoundError: [Errno 2] No such file or directory` / `FileNotFoundError: [WinError 3] The system cannot find the path specified`
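Concretely, `clear()` ends up calling `os.listdir` on a cache folder that was never created because nothing was ever persisted. A minimal sketch of the kind of guard that avoids this (illustrative only; the function and argument names here are made up, not Streamlit's actual API):

```python
import os

def clear_cached_files(cache_dir: str, is_cache_file) -> None:
    # os.listdir raises FileNotFoundError when the directory is missing,
    # so only walk the cache folder if it actually exists (it is never
    # created when persist is None/False and nothing was written to disk).
    if os.path.isdir(cache_dir):
        for file_name in os.listdir(cache_dir):
            if is_cache_file(file_name):
                os.remove(os.path.join(cache_dir, file_name))
```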
### Reproducible Code Example
```Python
import streamlit as st
import time
@st.cache_data
def dummy():
st.write('Hello from dummy function! I will wait for 5 seconds...')
time.sleep(5)
st.write('Done!')
left, right = st.columns(2)
with left:
if st.button('Run dummy function'):
dummy()
with right:
if st.button('Clear cache'):
dummy.clear()
```
### Steps To Reproduce
1. `pip install streamlit==1.20.0`
2. Copy above snippet to `app.py`
3. `streamlit run app.py`
4. Click a first time on "Run dummy function" button (-> see cache spinner)
5. Click a second time on "Run dummy function" (-> confirm that function has been cached)
6. Click on "Clear cache" button (-> `FileNotFoundError` should be raised)
### Expected Behavior
As in previous streamlit 1.19.0 version, cache can be cleared without searching for local cache if `persist = None | False`
### Current Behavior
Cannot properly clear cache due to `FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\XXXXXXXXX\\.streamlit\\cache'` (on local Windows machine) or `FileNotFoundError: [Errno 2] No such file or directory: '/root/.streamlit/cache'` (on remote containerized execution)
Full traceback:
```
FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\Users\\XXXXXXXXX\\.streamlit\\cache'
File "C:\Users\XXXXXXXXX\Miniconda3\envs\hc\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 565, in _run_script
exec(code, module.__dict__)
File "XXXXXXXXX\app.py", line 16, in <module>
dummy.clear()
File "C:\Users\XXXXXXXXX\Miniconda3\envs\hc\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 331, in clear
cache.clear()
File "C:\Users\XXXXXXXXX\Miniconda3\envs\hc\lib\site-packages\streamlit\runtime\caching\cache_utils.py", line 137, in clear
self._clear()
File "C:\Users\XXXXXXXXX\Miniconda3\envs\hc\lib\site-packages\streamlit\runtime\caching\cache_data_api.py", line 659, in _clear
self.storage.clear()
File "C:\Users\XXXXXXXXX\Miniconda3\envs\hc\lib\site-packages\streamlit\runtime\caching\storage\in_memory_cache_storage_wrapper.py", line 107, in clear
self._persist_storage.clear()
File "C:\Users\XXXXXXXXX\Miniconda3\envs\hc\lib\site-packages\streamlit\runtime\caching\storage\local_disk_cache_storage.py", line 174, in clear
for file_name in os.listdir(cache_dir):
```
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.20.0
- Python version: 3.10.8
- Operating System: Windows 10 Enterprise v22H2 build 19045.2604
- Browser: Brave
- Virtual environment: miniconda environment
### Additional Information
_No response_
### Are you willing to submit a PR?
- [ ] Yes, I am willing to submit a PR!
</issue>
<code>
[start of lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py]
1 # Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. (2022)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Declares the LocalDiskCacheStorageManager class, which is used
16 to create LocalDiskCacheStorage instances wrapped by InMemoryCacheStorageWrapper,
17 InMemoryCacheStorageWrapper wrapper allows to have first layer of in-memory cache,
18 before accessing to LocalDiskCacheStorage itself.
19
20 Declares the LocalDiskCacheStorage class, which is used to store cached
21 values on disk.
22
23 How these classes work together
24 -------------------------------
25
26 - LocalDiskCacheStorageManager : each instance of this is able
27 to create LocalDiskCacheStorage instances wrapped by InMemoryCacheStorageWrapper,
28 and to clear data from cache storage folder. It is also LocalDiskCacheStorageManager
29 responsibility to check if the context is valid for the storage, and to log warning
30 if the context is not valid.
31
32 - LocalDiskCacheStorage : each instance of this is able to get, set, delete, and clear
33 entries from disk for a single `@st.cache_data` decorated function if `persist="disk"`
34 is used in CacheStorageContext.
35
36
37     ┌───────────────────────────────┐
38     │ LocalDiskCacheStorageManager  │
39     │                               │
40     │   - clear_all                 │
41     │   - check_context             │
42     │                               │
43     └──┬────────────────────────────┘
44        │
45        │                ┌──────────────────────────────┐
46        │                │                              │
47        │ create(context)│ InMemoryCacheStorageWrapper  │
48        └───────────────►│                              │
49                         │    ┌─────────────────────┐   │
50                         │    │                     │   │
51                         │    │  LocalDiskStorage   │   │
52                         │    │                     │   │
53                         │    └─────────────────────┘   │
54                         │                              │
55                         └──────────────────────────────┘
56
57 """
58
59 from __future__ import annotations
60
61 import math
62 import os
63 import shutil
64
65 from streamlit import util
66 from streamlit.file_util import get_streamlit_file_path, streamlit_read, streamlit_write
67 from streamlit.logger import get_logger
68 from streamlit.runtime.caching.storage.cache_storage_protocol import (
69 CacheStorage,
70 CacheStorageContext,
71 CacheStorageError,
72 CacheStorageKeyNotFoundError,
73 CacheStorageManager,
74 )
75 from streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper import (
76 InMemoryCacheStorageWrapper,
77 )
78
79 # Streamlit directory where persisted @st.cache_data objects live.
80 # (This is the same directory that @st.cache persisted objects live.
81 # But @st.cache_data uses a different extension, so they don't overlap.)
82 _CACHE_DIR_NAME = "cache"
83
84 # The extension for our persisted @st.cache_data objects.
85 # (`@st.cache_data` was originally called `@st.memo`)
86 _CACHED_FILE_EXTENSION = "memo"
87
88 _LOGGER = get_logger(__name__)
89
90
91 class LocalDiskCacheStorageManager(CacheStorageManager):
92 def create(self, context: CacheStorageContext) -> CacheStorage:
93 """Creates a new cache storage instance wrapped with in-memory cache layer"""
94 persist_storage = LocalDiskCacheStorage(context)
95 return InMemoryCacheStorageWrapper(
96 persist_storage=persist_storage, context=context
97 )
98
99 def clear_all(self) -> None:
100 cache_path = get_cache_folder_path()
101 if os.path.isdir(cache_path):
102 shutil.rmtree(cache_path)
103
104 def check_context(self, context: CacheStorageContext) -> None:
105 if (
106 context.persist == "disk"
107 and context.ttl_seconds is not None
108 and not math.isinf(context.ttl_seconds)
109 ):
110 _LOGGER.warning(
111 f"The cached function '{context.function_display_name}' has a TTL "
112 "that will be ignored. Persistent cached functions currently don't "
113 "support TTL."
114 )
115
116
117 class LocalDiskCacheStorage(CacheStorage):
118 """Cache storage that persists data to disk
119 This is the default cache persistence layer for `@st.cache_data`
120 """
121
122 def __init__(self, context: CacheStorageContext):
123 self.function_key = context.function_key
124 self.persist = context.persist
125 self._ttl_seconds = context.ttl_seconds
126 self._max_entries = context.max_entries
127
128 @property
129 def ttl_seconds(self) -> float:
130 return self._ttl_seconds if self._ttl_seconds is not None else math.inf
131
132 @property
133 def max_entries(self) -> float:
134 return float(self._max_entries) if self._max_entries is not None else math.inf
135
136 def get(self, key: str) -> bytes:
137 """
138 Returns the stored value for the key if persisted,
139 raise CacheStorageKeyNotFoundError if not found, or not configured
140 with persist="disk"
141 """
142 if self.persist == "disk":
143 path = self._get_cache_file_path(key)
144 try:
145 with streamlit_read(path, binary=True) as input:
146 value = input.read()
147 _LOGGER.debug("Disk cache HIT: %s", key)
148 return bytes(value)
149 except FileNotFoundError:
150 raise CacheStorageKeyNotFoundError("Key not found in disk cache")
151 except Exception as ex:
152 _LOGGER.error(ex)
153 raise CacheStorageError("Unable to read from cache") from ex
154 else:
155 raise CacheStorageKeyNotFoundError(
156 f"Local disk cache storage is disabled (persist={self.persist})"
157 )
158
159 def set(self, key: str, value: bytes) -> None:
160 """Sets the value for a given key"""
161 if self.persist == "disk":
162 path = self._get_cache_file_path(key)
163 try:
164 with streamlit_write(path, binary=True) as output:
165 output.write(value)
166 except util.Error as e:
167 _LOGGER.debug(e)
168 # Clean up file so we don't leave zero byte files.
169 try:
170 os.remove(path)
171 except (FileNotFoundError, IOError, OSError):
172 # If we can't remove the file, it's not a big deal.
173 pass
174 raise CacheStorageError("Unable to write to cache") from e
175
176 def delete(self, key: str) -> None:
177 """Delete a cache file from disk. If the file does not exist on disk,
178 return silently. If another exception occurs, log it. Does not throw.
179 """
180 if self.persist == "disk":
181 path = self._get_cache_file_path(key)
182 try:
183 os.remove(path)
184 except FileNotFoundError:
185 # The file is already removed.
186 pass
187 except Exception as ex:
188 _LOGGER.exception(
189 "Unable to remove a file from the disk cache", exc_info=ex
190 )
191
192 def clear(self) -> None:
193 """Delete all keys for the current storage"""
194 cache_dir = get_cache_folder_path()
195
196 for file_name in os.listdir(cache_dir):
197 if self._is_cache_file(file_name):
198 os.remove(os.path.join(cache_dir, file_name))
199
200 def close(self) -> None:
201 """Dummy implementation of close, we don't need to actually "close" anything"""
202
203 def _get_cache_file_path(self, value_key: str) -> str:
204 """Return the path of the disk cache file for the given value."""
205 cache_dir = get_cache_folder_path()
206 return os.path.join(
207 cache_dir, f"{self.function_key}-{value_key}.{_CACHED_FILE_EXTENSION}"
208 )
209
210 def _is_cache_file(self, fname: str) -> bool:
211 """Return true if the given file name is a cache file for this storage."""
212 return fname.startswith(f"{self.function_key}-") and fname.endswith(
213 f".{_CACHED_FILE_EXTENSION}"
214 )
215
216
217 def get_cache_folder_path() -> str:
218 return get_streamlit_file_path(_CACHE_DIR_NAME)
219
[end of lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py b/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py
--- a/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py
+++ b/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py
@@ -193,9 +193,13 @@
"""Delete all keys for the current storage"""
cache_dir = get_cache_folder_path()
- for file_name in os.listdir(cache_dir):
- if self._is_cache_file(file_name):
- os.remove(os.path.join(cache_dir, file_name))
+ if os.path.isdir(cache_dir):
+ # We try to remove all files in the cache directory that start with
+ # the function key, whether `clear` called for `self.persist`
+ # storage or not, to avoid leaving orphaned files in the cache directory.
+ for file_name in os.listdir(cache_dir):
+ if self._is_cache_file(file_name):
+ os.remove(os.path.join(cache_dir, file_name))
def close(self) -> None:
"""Dummy implementation of close, we don't need to actually "close" anything"""
| {"golden_diff": "diff --git a/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py b/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py\n--- a/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py\n+++ b/lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py\n@@ -193,9 +193,13 @@\n \"\"\"Delete all keys for the current storage\"\"\"\n cache_dir = get_cache_folder_path()\n \n- for file_name in os.listdir(cache_dir):\n- if self._is_cache_file(file_name):\n- os.remove(os.path.join(cache_dir, file_name))\n+ if os.path.isdir(cache_dir):\n+ # We try to remove all files in the cache directory that start with\n+ # the function key, whether `clear` called for `self.persist`\n+ # storage or not, to avoid leaving orphaned files in the cache directory.\n+ for file_name in os.listdir(cache_dir):\n+ if self._is_cache_file(file_name):\n+ os.remove(os.path.join(cache_dir, file_name))\n \n def close(self) -> None:\n \"\"\"Dummy implementation of close, we don't need to actually \"close\" anything\"\"\"\n", "issue": "Bug in clearing cache of cached function via .clear() in streamlit==1.20.0\n### Checklist\n\n- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.\n- [X] I added a very descriptive title to this issue.\n- [X] I have provided sufficient information below to help reproduce this issue.\n\n### Summary\n\nIn streamlit==1.20.0 I am unable to clear cache of a cached functions via `.clear()` (this was possible in streamlit 1.19.0).\r\n\r\nI suspect this is due to changes made in latest streamlit version to `streamlit\\runtime\\caching\\`, and in particular to `storage` module: now, indeed, even if `st.cache_data` is used without any custom argument - and, then, with `persist=None` - streamlit try to clear also local cache, and searching for the local cache folder eventually raise the error `FileNotFoundError: [Errno 2] No such file or directory` / `FileNotFoundError: [WinError 3] The system cannot find the path specified`\n\n### Reproducible Code Example\n\n```Python\nimport streamlit as st\r\nimport time\r\n\r\[email protected]_data\r\ndef dummy():\r\n st.write('Hello from dummy function! I will wait for 5 seconds...')\r\n time.sleep(5)\r\n st.write('Done!')\r\n\r\nleft, right = st.columns(2)\r\nwith left:\r\n if st.button('Run dummy function'):\r\n dummy()\r\nwith right:\r\n if st.button('Clear cache'):\r\n dummy.clear()\n```\n\n\n### Steps To Reproduce\n\n1. `pip install streamlit==1.20.0`\r\n2. Copy above snippet to `app.py`\r\n3. `streamlit run app.py`\r\n4. Click a first time on \"Run dummy function\" button (-> see cache spinner)\r\n5. Click a second time on \"Run dummy function\" (-> confirm that function has been cached)\r\n6. 
Click on \"Clear cache\" button (-> `FileNotFoundError` should be raised)\n\n### Expected Behavior\n\nAs in previous streamlit 1.19.0 version, cache can be cleared without searching for local cache if `persist = None | False`\n\n### Current Behavior\n\nCannot properly clear cache due to `FileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\\\Users\\\\XXXXXXXXX\\\\.streamlit\\\\cache'` (on local Windows machine) or `FileNotFoundError: [Errno 2] No such file or directory: '/root/.streamlit/cache'` (on remote containerized execution)\r\n\r\nFull traceback:\r\n\r\n```\r\nFileNotFoundError: [WinError 3] The system cannot find the path specified: 'C:\\\\Users\\\\XXXXXXXXX\\\\.streamlit\\\\cache'\r\n\r\nFile \"C:\\Users\\XXXXXXXXX\\Miniconda3\\envs\\hc\\lib\\site-packages\\streamlit\\runtime\\scriptrunner\\script_runner.py\", line 565, in _run_script\r\n exec(code, module.__dict__)\r\nFile \"XXXXXXXXX\\app.py\", line 16, in <module>\r\n dummy.clear()\r\nFile \"C:\\Users\\XXXXXXXXX\\Miniconda3\\envs\\hc\\lib\\site-packages\\streamlit\\runtime\\caching\\cache_utils.py\", line 331, in clear\r\n cache.clear()\r\nFile \"C:\\Users\\XXXXXXXXX\\Miniconda3\\envs\\hc\\lib\\site-packages\\streamlit\\runtime\\caching\\cache_utils.py\", line 137, in clear\r\n self._clear()\r\nFile \"C:\\Users\\XXXXXXXXX\\Miniconda3\\envs\\hc\\lib\\site-packages\\streamlit\\runtime\\caching\\cache_data_api.py\", line 659, in _clear\r\n self.storage.clear()\r\nFile \"C:\\Users\\XXXXXXXXX\\Miniconda3\\envs\\hc\\lib\\site-packages\\streamlit\\runtime\\caching\\storage\\in_memory_cache_storage_wrapper.py\", line 107, in clear\r\n self._persist_storage.clear()\r\nFile \"C:\\Users\\XXXXXXXXX\\Miniconda3\\envs\\hc\\lib\\site-packages\\streamlit\\runtime\\caching\\storage\\local_disk_cache_storage.py\", line 174, in clear\r\n for file_name in os.listdir(cache_dir):\r\n```\n\n### Is this a regression?\n\n- [X] Yes, this used to work in a previous version.\n\n### Debug info\n\n- Streamlit version: 1.20.0\r\n- Python version: 3.10.8\r\n- Operating System: Windows 10 Enterprise v22H2 build 19045.2604\r\n- Browser: Brave\r\n- Virtual environment: miniconda environment\r\n\n\n### Additional Information\n\n_No response_\n\n### Are you willing to submit a PR?\n\n- [ ] Yes, I am willing to submit a PR!\n", "before_files": [{"content": "# Copyright (c) Streamlit Inc. (2018-2022) Snowflake Inc. 
(2022)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Declares the LocalDiskCacheStorageManager class, which is used\nto create LocalDiskCacheStorage instances wrapped by InMemoryCacheStorageWrapper,\nInMemoryCacheStorageWrapper wrapper allows to have first layer of in-memory cache,\nbefore accessing to LocalDiskCacheStorage itself.\n\nDeclares the LocalDiskCacheStorage class, which is used to store cached\nvalues on disk.\n\nHow these classes work together\n-------------------------------\n\n- LocalDiskCacheStorageManager : each instance of this is able\nto create LocalDiskCacheStorage instances wrapped by InMemoryCacheStorageWrapper,\nand to clear data from cache storage folder. It is also LocalDiskCacheStorageManager\nresponsibility to check if the context is valid for the storage, and to log warning\nif the context is not valid.\n\n- LocalDiskCacheStorage : each instance of this is able to get, set, delete, and clear\nentries from disk for a single `@st.cache_data` decorated function if `persist=\"disk\"`\nis used in CacheStorageContext.\n\n\n \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 LocalDiskCacheStorageManager \u2502\n \u2502 \u2502\n \u2502 - clear_all \u2502\n \u2502 - check_context \u2502\n \u2502 \u2502\n \u2514\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\n \u2502 \u2502 \u2502\n \u2502 create(context)\u2502 InMemoryCacheStorageWrapper \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u25ba \u2502\n \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\n \u2502 \u2502 \u2502 \u2502\n \u2502 \u2502 LocalDiskStorage \u2502 \u2502\n \u2502 \u2502 \u2502 \u2502\n \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\n \u2502 \u2502\n \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\n\n\"\"\"\n\nfrom __future__ import annotations\n\nimport math\nimport os\nimport shutil\n\nfrom streamlit import util\nfrom streamlit.file_util import get_streamlit_file_path, streamlit_read, streamlit_write\nfrom streamlit.logger import get_logger\nfrom streamlit.runtime.caching.storage.cache_storage_protocol import (\n CacheStorage,\n CacheStorageContext,\n CacheStorageError,\n 
CacheStorageKeyNotFoundError,\n CacheStorageManager,\n)\nfrom streamlit.runtime.caching.storage.in_memory_cache_storage_wrapper import (\n InMemoryCacheStorageWrapper,\n)\n\n# Streamlit directory where persisted @st.cache_data objects live.\n# (This is the same directory that @st.cache persisted objects live.\n# But @st.cache_data uses a different extension, so they don't overlap.)\n_CACHE_DIR_NAME = \"cache\"\n\n# The extension for our persisted @st.cache_data objects.\n# (`@st.cache_data` was originally called `@st.memo`)\n_CACHED_FILE_EXTENSION = \"memo\"\n\n_LOGGER = get_logger(__name__)\n\n\nclass LocalDiskCacheStorageManager(CacheStorageManager):\n def create(self, context: CacheStorageContext) -> CacheStorage:\n \"\"\"Creates a new cache storage instance wrapped with in-memory cache layer\"\"\"\n persist_storage = LocalDiskCacheStorage(context)\n return InMemoryCacheStorageWrapper(\n persist_storage=persist_storage, context=context\n )\n\n def clear_all(self) -> None:\n cache_path = get_cache_folder_path()\n if os.path.isdir(cache_path):\n shutil.rmtree(cache_path)\n\n def check_context(self, context: CacheStorageContext) -> None:\n if (\n context.persist == \"disk\"\n and context.ttl_seconds is not None\n and not math.isinf(context.ttl_seconds)\n ):\n _LOGGER.warning(\n f\"The cached function '{context.function_display_name}' has a TTL \"\n \"that will be ignored. Persistent cached functions currently don't \"\n \"support TTL.\"\n )\n\n\nclass LocalDiskCacheStorage(CacheStorage):\n \"\"\"Cache storage that persists data to disk\n This is the default cache persistence layer for `@st.cache_data`\n \"\"\"\n\n def __init__(self, context: CacheStorageContext):\n self.function_key = context.function_key\n self.persist = context.persist\n self._ttl_seconds = context.ttl_seconds\n self._max_entries = context.max_entries\n\n @property\n def ttl_seconds(self) -> float:\n return self._ttl_seconds if self._ttl_seconds is not None else math.inf\n\n @property\n def max_entries(self) -> float:\n return float(self._max_entries) if self._max_entries is not None else math.inf\n\n def get(self, key: str) -> bytes:\n \"\"\"\n Returns the stored value for the key if persisted,\n raise CacheStorageKeyNotFoundError if not found, or not configured\n with persist=\"disk\"\n \"\"\"\n if self.persist == \"disk\":\n path = self._get_cache_file_path(key)\n try:\n with streamlit_read(path, binary=True) as input:\n value = input.read()\n _LOGGER.debug(\"Disk cache HIT: %s\", key)\n return bytes(value)\n except FileNotFoundError:\n raise CacheStorageKeyNotFoundError(\"Key not found in disk cache\")\n except Exception as ex:\n _LOGGER.error(ex)\n raise CacheStorageError(\"Unable to read from cache\") from ex\n else:\n raise CacheStorageKeyNotFoundError(\n f\"Local disk cache storage is disabled (persist={self.persist})\"\n )\n\n def set(self, key: str, value: bytes) -> None:\n \"\"\"Sets the value for a given key\"\"\"\n if self.persist == \"disk\":\n path = self._get_cache_file_path(key)\n try:\n with streamlit_write(path, binary=True) as output:\n output.write(value)\n except util.Error as e:\n _LOGGER.debug(e)\n # Clean up file so we don't leave zero byte files.\n try:\n os.remove(path)\n except (FileNotFoundError, IOError, OSError):\n # If we can't remove the file, it's not a big deal.\n pass\n raise CacheStorageError(\"Unable to write to cache\") from e\n\n def delete(self, key: str) -> None:\n \"\"\"Delete a cache file from disk. If the file does not exist on disk,\n return silently. 
If another exception occurs, log it. Does not throw.\n \"\"\"\n if self.persist == \"disk\":\n path = self._get_cache_file_path(key)\n try:\n os.remove(path)\n except FileNotFoundError:\n # The file is already removed.\n pass\n except Exception as ex:\n _LOGGER.exception(\n \"Unable to remove a file from the disk cache\", exc_info=ex\n )\n\n def clear(self) -> None:\n \"\"\"Delete all keys for the current storage\"\"\"\n cache_dir = get_cache_folder_path()\n\n for file_name in os.listdir(cache_dir):\n if self._is_cache_file(file_name):\n os.remove(os.path.join(cache_dir, file_name))\n\n def close(self) -> None:\n \"\"\"Dummy implementation of close, we don't need to actually \"close\" anything\"\"\"\n\n def _get_cache_file_path(self, value_key: str) -> str:\n \"\"\"Return the path of the disk cache file for the given value.\"\"\"\n cache_dir = get_cache_folder_path()\n return os.path.join(\n cache_dir, f\"{self.function_key}-{value_key}.{_CACHED_FILE_EXTENSION}\"\n )\n\n def _is_cache_file(self, fname: str) -> bool:\n \"\"\"Return true if the given file name is a cache file for this storage.\"\"\"\n return fname.startswith(f\"{self.function_key}-\") and fname.endswith(\n f\".{_CACHED_FILE_EXTENSION}\"\n )\n\n\ndef get_cache_folder_path() -> str:\n return get_streamlit_file_path(_CACHE_DIR_NAME)\n", "path": "lib/streamlit/runtime/caching/storage/local_disk_cache_storage.py"}]} | 3,945 | 253 |
gh_patches_debug_15489 | rasdani/github-patches | git_diff | pyro-ppl__pyro-3220 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`ProvenanceTensor` bug when used with `torch.set_default_device`
When trying to debug the code from the [forum post](https://forum.pyro.ai/t/running-pyro-plate-on-gpu/5097/8) I came across this bug:
```py
import torch
from pyro.ops.provenance import ProvenanceTensor
device = torch.device("cuda")
torch.set_default_device(device)
x = torch.tensor([1., 2., 3.])
y = ProvenanceTensor(x, frozenset(["x"]))
print(torch.as_tensor(y))
```
returns `tensor([], device='cuda:0')`
**Two observations:**
1. This doesn't happen if `torch.set_default_device` is not used: `print(torch.as_tensor(y.cuda()))` works fine
2. When `torch.set_default_device` is used then the following code gets invoked
https://github.com/pytorch/pytorch/blob/main/torch/utils/_device.py#L72-L76
where `func(*args, **kwargs)` returns `tensor([], device='cuda:0')`. This doesn't happen when `.cuda()` is used like in 1).
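The golden diff further down resolves this by building the subclass view directly from `data` via `Tensor.as_subclass` instead of `super().__new__(cls)`, so the wrapper keeps `data`'s storage and device. A minimal sketch of that wrapping pattern (hypothetical class and attribute names, not Pyro's actual code):

```python
import torch

class TaggedTensor(torch.Tensor):
    """Toy subclass that wraps an existing tensor and carries a tag."""

    def __new__(cls, data: torch.Tensor, tag=frozenset()):
        # as_subclass re-views `data` as TaggedTensor, keeping its storage,
        # dtype and device, instead of allocating a fresh (empty) tensor.
        ret = data.as_subclass(cls)
        ret._tag = tag
        return ret

x = torch.tensor([1.0, 2.0, 3.0])
y = TaggedTensor(x, frozenset({"x"}))
print(torch.as_tensor(y))  # prints the original values, not an empty tensor
```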
</issue>
<code>
[start of pyro/ops/provenance.py]
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from functools import singledispatch
5 from typing import Tuple
6
7 import torch
8
9
10 class ProvenanceTensor(torch.Tensor):
11 """
12 Provenance tracking implementation in Pytorch.
13
14 This class wraps a :class:`torch.Tensor` to track provenance through
15 PyTorch ops, where provenance is a user-defined frozenset of objects. The
16 provenance of the output tensors of any op is the union of provenances of
17 input tensors.
18
19 - To start tracking provenance, wrap a :class:`torch.Tensor` in a
20 :class:`ProvenanceTensor` with user-defined initial provenance.
21 - To read the provenance of a tensor use :meth:`get_provenance` .
22 - To detach provenance during a computation (similar to
23 :meth:`~torch.Tensor.detach` to detach gradients during Pytorch
24 computations), use the :meth:`detach_provenance` . This is useful to
25 distinguish direct vs indirect provenance.
26
27 Example::
28
29 >>> a = ProvenanceTensor(torch.randn(3), frozenset({"a"}))
30 >>> b = ProvenanceTensor(torch.randn(3), frozenset({"b"}))
31 >>> c = torch.randn(3)
32 >>> assert get_provenance(a + b + c) == frozenset({"a", "b"})
33 >>> assert get_provenance(a + detach_provenance(b) + c) == frozenset({"a"})
34
35 **References**
36
37 [1] David Wingate, Noah Goodman, Andreas Stuhlmüller, Jeffrey Siskind (2011)
38 Nonstandard Interpretations of Probabilistic Programs for Efficient Inference
39 http://papers.neurips.cc/paper/4309-nonstandard-interpretations-of-probabilistic-programs-for-efficient-inference.pdf
40
41 :param torch.Tensor data: An initial tensor to start tracking.
42 :param frozenset provenance: An initial provenance set.
43 """
44
45 def __new__(cls, data: torch.Tensor, provenance=frozenset(), **kwargs):
46 assert not isinstance(data, ProvenanceTensor)
47 if not provenance:
48 return data
49 return super().__new__(cls)
50
51 def __init__(self, data, provenance=frozenset()):
52 assert isinstance(provenance, frozenset)
53 if isinstance(data, ProvenanceTensor):
54 provenance |= data._provenance
55 data = data._t
56 self._t = data
57 self._provenance = provenance
58
59 def __repr__(self):
60 return "Provenance:\n{}\nTensor:\n{}".format(self._provenance, self._t)
61
62 @classmethod
63 def __torch_function__(cls, func, types, args=(), kwargs=None):
64 if kwargs is None:
65 kwargs = {}
66 # collect provenance information from args
67 provenance = frozenset()
68 # extract ProvenanceTensor._t data from args and kwargs
69 _args = []
70 for arg in args:
71 _arg, _provenance = extract_provenance(arg)
72 _args.append(_arg)
73 provenance |= _provenance
74 _kwargs = {}
75 for k, v in kwargs.items():
76 _v, _provenance = extract_provenance(v)
77 _kwargs[k] = _v
78 provenance |= provenance
79 ret = func(*_args, **_kwargs)
80 _ret = track_provenance(ret, provenance)
81 return _ret
82
83
84 @singledispatch
85 def track_provenance(x, provenance: frozenset):
86 """
87 Adds provenance info to the :class:`torch.Tensor` leaves of a data structure.
88
89 :param x: an object to add provenence info to.
90 :param frozenset provenance: A provenence set.
91 :returns: A provenence-tracking version of ``x``.
92 """
93 return x
94
95
96 track_provenance.register(torch.Tensor)(ProvenanceTensor)
97
98
99 @track_provenance.register(frozenset)
100 @track_provenance.register(list)
101 @track_provenance.register(set)
102 @track_provenance.register(tuple)
103 def _track_provenance_list(x, provenance: frozenset):
104 return type(x)(track_provenance(part, provenance) for part in x)
105
106
107 @track_provenance.register
108 def _track_provenance_provenancetensor(x: ProvenanceTensor, provenance: frozenset):
109 x_value, old_provenance = extract_provenance(x)
110 return track_provenance(x_value, old_provenance | provenance)
111
112
113 @singledispatch
114 def extract_provenance(x) -> Tuple[object, frozenset]:
115 """
116 Extracts the provenance of a data structure possibly containing
117 :class:`torch.Tensor` s as leaves, and separates into a detached object and
118 provenance.
119
120 :param x: An input data structure.
121 :returns: a tuple ``(detached_value, provenance)``
122 :rtype: tuple
123 """
124 return x, frozenset()
125
126
127 @extract_provenance.register(ProvenanceTensor)
128 def _extract_provenance_tensor(x):
129 return x._t, x._provenance
130
131
132 @extract_provenance.register(frozenset)
133 @extract_provenance.register(list)
134 @extract_provenance.register(set)
135 @extract_provenance.register(tuple)
136 def _extract_provenance_list(x):
137 provenance = frozenset()
138 values = []
139 for part in x:
140 v, p = extract_provenance(part)
141 values.append(v)
142 provenance |= p
143 value = type(x)(values)
144 return value, provenance
145
146
147 def get_provenance(x) -> frozenset:
148 """
149 Reads the provenance of a recursive datastructure possibly containing
150 :class:`torch.Tensor` s.
151
152 :param torch.Tensor tensor: An input tensor.
153 :returns: A provenance frozenset.
154 :rtype: frozenset
155 """
156 _, provenance = extract_provenance(x)
157 return provenance
158
159
160 def detach_provenance(x):
161 """
162 Blocks provenance tracking through a tensor, similar to :meth:`torch.Tensor.detach`.
163
164 :param torch.Tensor tensor: An input tensor.
165 :returns: A tensor sharing the same data but with no provenance.
166 :rtype: torch.Tensor
167 """
168 value, _ = extract_provenance(x)
169 return value
170
[end of pyro/ops/provenance.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyro/ops/provenance.py b/pyro/ops/provenance.py
--- a/pyro/ops/provenance.py
+++ b/pyro/ops/provenance.py
@@ -46,14 +46,15 @@
assert not isinstance(data, ProvenanceTensor)
if not provenance:
return data
- return super().__new__(cls)
+ ret = data.as_subclass(cls)
+ ret._t = data # this makes sure that detach_provenance always
+ # returns the same object. This is important when
+ # using the tensor as key in a dict, e.g. the global
+ # param store
+ return ret
def __init__(self, data, provenance=frozenset()):
assert isinstance(provenance, frozenset)
- if isinstance(data, ProvenanceTensor):
- provenance |= data._provenance
- data = data._t
- self._t = data
self._provenance = provenance
def __repr__(self):
| {"golden_diff": "diff --git a/pyro/ops/provenance.py b/pyro/ops/provenance.py\n--- a/pyro/ops/provenance.py\n+++ b/pyro/ops/provenance.py\n@@ -46,14 +46,15 @@\n assert not isinstance(data, ProvenanceTensor)\n if not provenance:\n return data\n- return super().__new__(cls)\n+ ret = data.as_subclass(cls)\n+ ret._t = data # this makes sure that detach_provenance always\n+ # returns the same object. This is important when\n+ # using the tensor as key in a dict, e.g. the global\n+ # param store\n+ return ret\n \n def __init__(self, data, provenance=frozenset()):\n assert isinstance(provenance, frozenset)\n- if isinstance(data, ProvenanceTensor):\n- provenance |= data._provenance\n- data = data._t\n- self._t = data\n self._provenance = provenance\n \n def __repr__(self):\n", "issue": "`ProvenanceTensor` bug when used with `torch.set_default_device`\nWhen trying to debug the code from the [forum post](https://forum.pyro.ai/t/running-pyro-plate-on-gpu/5097/8) I came across this bug:\r\n\r\n```py\r\nimport torch\r\nfrom pyro.ops.provenance import ProvenanceTensor\r\n\r\ndevice = torch.device(\"cuda\")\r\ntorch.set_default_device(device)\r\nx = torch.tensor([1., 2., 3.])\r\ny = ProvenanceTensor(x, frozenset([\"x\"]))\r\nprint(torch.as_tensor(y))\r\n```\r\nreturns `tensor([], device='cuda:0')`\r\n\r\n**Two observations:**\r\n1. This doesn't happen if `torch.set_default_device` is not used: `print(torch.as_tensor(y.cuda()))` works fine\r\n2. When `torch.set_default_device` is used then the following code gets invoked\r\nhttps://github.com/pytorch/pytorch/blob/main/torch/utils/_device.py#L72-L76\r\nwhere `func(*args, **kwargs)` returns `tensor([], device='cuda:0')`. This doesn't happen when `.cuda()` is used like in 1).\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom functools import singledispatch\nfrom typing import Tuple\n\nimport torch\n\n\nclass ProvenanceTensor(torch.Tensor):\n \"\"\"\n Provenance tracking implementation in Pytorch.\n\n This class wraps a :class:`torch.Tensor` to track provenance through\n PyTorch ops, where provenance is a user-defined frozenset of objects. The\n provenance of the output tensors of any op is the union of provenances of\n input tensors.\n\n - To start tracking provenance, wrap a :class:`torch.Tensor` in a\n :class:`ProvenanceTensor` with user-defined initial provenance.\n - To read the provenance of a tensor use :meth:`get_provenance` .\n - To detach provenance during a computation (similar to\n :meth:`~torch.Tensor.detach` to detach gradients during Pytorch\n computations), use the :meth:`detach_provenance` . 
This is useful to\n distinguish direct vs indirect provenance.\n\n Example::\n\n >>> a = ProvenanceTensor(torch.randn(3), frozenset({\"a\"}))\n >>> b = ProvenanceTensor(torch.randn(3), frozenset({\"b\"}))\n >>> c = torch.randn(3)\n >>> assert get_provenance(a + b + c) == frozenset({\"a\", \"b\"})\n >>> assert get_provenance(a + detach_provenance(b) + c) == frozenset({\"a\"})\n\n **References**\n\n [1] David Wingate, Noah Goodman, Andreas Stuhlm\u00fcller, Jeffrey Siskind (2011)\n Nonstandard Interpretations of Probabilistic Programs for Efficient Inference\n http://papers.neurips.cc/paper/4309-nonstandard-interpretations-of-probabilistic-programs-for-efficient-inference.pdf\n\n :param torch.Tensor data: An initial tensor to start tracking.\n :param frozenset provenance: An initial provenance set.\n \"\"\"\n\n def __new__(cls, data: torch.Tensor, provenance=frozenset(), **kwargs):\n assert not isinstance(data, ProvenanceTensor)\n if not provenance:\n return data\n return super().__new__(cls)\n\n def __init__(self, data, provenance=frozenset()):\n assert isinstance(provenance, frozenset)\n if isinstance(data, ProvenanceTensor):\n provenance |= data._provenance\n data = data._t\n self._t = data\n self._provenance = provenance\n\n def __repr__(self):\n return \"Provenance:\\n{}\\nTensor:\\n{}\".format(self._provenance, self._t)\n\n @classmethod\n def __torch_function__(cls, func, types, args=(), kwargs=None):\n if kwargs is None:\n kwargs = {}\n # collect provenance information from args\n provenance = frozenset()\n # extract ProvenanceTensor._t data from args and kwargs\n _args = []\n for arg in args:\n _arg, _provenance = extract_provenance(arg)\n _args.append(_arg)\n provenance |= _provenance\n _kwargs = {}\n for k, v in kwargs.items():\n _v, _provenance = extract_provenance(v)\n _kwargs[k] = _v\n provenance |= provenance\n ret = func(*_args, **_kwargs)\n _ret = track_provenance(ret, provenance)\n return _ret\n\n\n@singledispatch\ndef track_provenance(x, provenance: frozenset):\n \"\"\"\n Adds provenance info to the :class:`torch.Tensor` leaves of a data structure.\n\n :param x: an object to add provenence info to.\n :param frozenset provenance: A provenence set.\n :returns: A provenence-tracking version of ``x``.\n \"\"\"\n return x\n\n\ntrack_provenance.register(torch.Tensor)(ProvenanceTensor)\n\n\n@track_provenance.register(frozenset)\n@track_provenance.register(list)\n@track_provenance.register(set)\n@track_provenance.register(tuple)\ndef _track_provenance_list(x, provenance: frozenset):\n return type(x)(track_provenance(part, provenance) for part in x)\n\n\n@track_provenance.register\ndef _track_provenance_provenancetensor(x: ProvenanceTensor, provenance: frozenset):\n x_value, old_provenance = extract_provenance(x)\n return track_provenance(x_value, old_provenance | provenance)\n\n\n@singledispatch\ndef extract_provenance(x) -> Tuple[object, frozenset]:\n \"\"\"\n Extracts the provenance of a data structure possibly containing\n :class:`torch.Tensor` s as leaves, and separates into a detached object and\n provenance.\n\n :param x: An input data structure.\n :returns: a tuple ``(detached_value, provenance)``\n :rtype: tuple\n \"\"\"\n return x, frozenset()\n\n\n@extract_provenance.register(ProvenanceTensor)\ndef _extract_provenance_tensor(x):\n return x._t, x._provenance\n\n\n@extract_provenance.register(frozenset)\n@extract_provenance.register(list)\n@extract_provenance.register(set)\n@extract_provenance.register(tuple)\ndef _extract_provenance_list(x):\n provenance = frozenset()\n 
values = []\n for part in x:\n v, p = extract_provenance(part)\n values.append(v)\n provenance |= p\n value = type(x)(values)\n return value, provenance\n\n\ndef get_provenance(x) -> frozenset:\n \"\"\"\n Reads the provenance of a recursive datastructure possibly containing\n :class:`torch.Tensor` s.\n\n :param torch.Tensor tensor: An input tensor.\n :returns: A provenance frozenset.\n :rtype: frozenset\n \"\"\"\n _, provenance = extract_provenance(x)\n return provenance\n\n\ndef detach_provenance(x):\n \"\"\"\n Blocks provenance tracking through a tensor, similar to :meth:`torch.Tensor.detach`.\n\n :param torch.Tensor tensor: An input tensor.\n :returns: A tensor sharing the same data but with no provenance.\n :rtype: torch.Tensor\n \"\"\"\n value, _ = extract_provenance(x)\n return value\n", "path": "pyro/ops/provenance.py"}]} | 2,635 | 241 |
gh_patches_debug_67296 | rasdani/github-patches | git_diff | qutip__qutip-1390 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installation of qutip shows various warnings on ubuntu 20.04.1
**Describe the bug**
Installing the qutip on ubuntu 20.04.1 shows the following warnings:
```
$ sudo apt install python3-qutip
…
Entpacken von python3-qutip (4.4.1-6build1) ...
python3-qutip (4.4.1-6build1) wird eingerichtet ...
/usr/lib/python3/dist-packages/qutip/_mkl/spmv.py:53: SyntaxWarning: "is"
with a literal. Did you mean "=="?
if x.ndim is 1:
/usr/lib/python3/dist-packages/qutip/qobjevo.py:776: SyntaxWarning: "is no
t" with a literal. Did you mean "!="?
if self.compiled and self.compiled.split()[2] is not "cte":
/usr/lib/python3/dist-packages/qutip/qobjevo.py:1045: SyntaxWarning: "is"
with a literal. Did you mean "=="?
elif op1.type is "array":
/usr/lib/python3/dist-packages/qutip/qobjevo.py:1070: SyntaxWarning: "is"
with a literal. Did you mean "=="?
elif self.ops[_set[0]].type is "string":
/usr/lib/python3/dist-packages/qutip/qobjevo.py:1079: SyntaxWarning: "is"
with a literal. Did you mean "=="?
elif self.ops[_set[0]].type is "array":
/usr/lib/python3/dist-packages/qutip/qobjevo.py:1534: SyntaxWarning: "is n
ot" with a literal. Did you mean "!="?
for key in self.__dict__ if key is not "compiled_qobjevo"}
```
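Every warning above flags an identity comparison (`is` / `is not`) against a literal where a value comparison (`==` / `!=`) is intended. A small self-contained illustration of why the identity check is fragile (plain Python, not qutip code):

```python
a = int("1000")      # a freshly created int object
print(a == 1000)     # True: value equality, always safe
print(a is 1000)     # False on CPython, plus a SyntaxWarning on Python 3.8+
# `x.ndim is 1` only appears to work because CPython caches small integers;
# `x.ndim == 1` states the intent and is what the fix below uses.
```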
**To Reproduce**
Install qutip on ubuntu 20.04.1 via `sudo apt install python3-qutip`.
**Expected behavior**
No warnings during the installation of qutip.
**Your Environment**
```
>>> qutip.about()
QuTiP: Quantum Toolbox in Python
================================
Copyright (c) QuTiP team 2011 and later.
Original developers: R. J. Johansson & P. D. Nation.
Current admin team: Alexander Pitchford, Paul D. Nation, Nathan Shammah, Shahnawaz Ahmed, Neill Lambert, and Eric Giguère.
Project Manager: Franco Nori.
Currently developed through wide collaboration. See https://github.com/qutip for details.
QuTiP Version: 4.4.1
Numpy Version: 1.17.4
Scipy Version: 1.4.1
Cython Version: 0.29.14
Matplotlib Version: 3.1.2
Python Version: 3.8.5
Number of CPUs: 2
BLAS Info: OPENBLAS
OPENMP Installed: True
INTEL MKL Ext: False
Platform Info: Linux (x86_64)
Installation path: /usr/lib/python3/dist-packages/qutip
```
</issue>
<code>
[start of qutip/_mkl/spmv.py]
1 # This file is part of QuTiP: Quantum Toolbox in Python.
2 #
3 # Copyright (c) 2011 and later, Paul D. Nation.
4 # All rights reserved.
5 #
6 # Redistribution and use in source and binary forms, with or without
7 # modification, are permitted provided that the following conditions are
8 # met:
9 #
10 # 1. Redistributions of source code must retain the above copyright notice,
11 # this list of conditions and the following disclaimer.
12 #
13 # 2. Redistributions in binary form must reproduce the above copyright
14 # notice, this list of conditions and the following disclaimer in the
15 # documentation and/or other materials provided with the distribution.
16 #
17 # 3. Neither the name of the QuTiP: Quantum Toolbox in Python nor the names
18 # of its contributors may be used to endorse or promote products derived
19 # from this software without specific prior written permission.
20 #
21 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
22 # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
23 # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
24 # PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
25 # HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
26 # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
27 # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
28 # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
29 # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
30 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
31 # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
32 ###############################################################################
33 import numpy as np
34 import scipy.sparse as sp
35 import ctypes
36 from ctypes import POINTER,c_int,c_char,c_double, byref
37 from numpy import ctypeslib
38 import qutip.settings as qset
39 zcsrgemv = qset.mkl_lib.mkl_cspblas_zcsrgemv
40
41 def mkl_spmv(A, x):
42 """
43 sparse csr_spmv using MKL
44 """
45 (m,n) = A.shape
46
47 # Pointers to data of the matrix
48 data = A.data.ctypes.data_as(ctypeslib.ndpointer(np.complex128, ndim=1, flags='C'))
49 indptr = A.indptr.ctypes.data_as(POINTER(c_int))
50 indices = A.indices.ctypes.data_as(POINTER(c_int))
51
52 # Allocate output, using same conventions as input
53 if x.ndim is 1:
54 y = np.empty(m,dtype=np.complex,order='C')
55 elif x.ndim==2 and x.shape[1]==1:
56 y = np.empty((m,1),dtype=np.complex,order='C')
57 else:
58 raise Exception('Input vector must be 1D row or 2D column vector')
59
60 np_x = x.ctypes.data_as(ctypeslib.ndpointer(np.complex128, ndim=1, flags='C'))
61 np_y = y.ctypes.data_as(ctypeslib.ndpointer(np.complex128, ndim=1, flags='C'))
62 # now call MKL. This returns the answer in np_y, which points to y
63 zcsrgemv(byref(c_char(bytes(b'N'))), byref(c_int(m)), data ,indptr, indices, np_x, np_y )
64 return y
65
[end of qutip/_mkl/spmv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutip/_mkl/spmv.py b/qutip/_mkl/spmv.py
--- a/qutip/_mkl/spmv.py
+++ b/qutip/_mkl/spmv.py
@@ -50,7 +50,7 @@
indices = A.indices.ctypes.data_as(POINTER(c_int))
# Allocate output, using same conventions as input
- if x.ndim is 1:
+ if x.ndim == 1:
y = np.empty(m,dtype=np.complex,order='C')
elif x.ndim==2 and x.shape[1]==1:
y = np.empty((m,1),dtype=np.complex,order='C')
| {"golden_diff": "diff --git a/qutip/_mkl/spmv.py b/qutip/_mkl/spmv.py\n--- a/qutip/_mkl/spmv.py\n+++ b/qutip/_mkl/spmv.py\n@@ -50,7 +50,7 @@\n indices = A.indices.ctypes.data_as(POINTER(c_int))\n \n # Allocate output, using same conventions as input\n- if x.ndim is 1:\n+ if x.ndim == 1:\n y = np.empty(m,dtype=np.complex,order='C')\n elif x.ndim==2 and x.shape[1]==1:\n y = np.empty((m,1),dtype=np.complex,order='C')\n", "issue": "Installation of qutip shows various warnings on ubuntu 20.04.1\n**Describe the bug**\r\nInstalling the qutip on ubuntu 20.04.1 shows the following warnings:\r\n```\r\n$ sudo apt install python3-qutip \r\n\u2026\r\nEntpacken von python3-qutip (4.4.1-6build1) ...\r\npython3-qutip (4.4.1-6build1) wird eingerichtet ...\r\n/usr/lib/python3/dist-packages/qutip/_mkl/spmv.py:53: SyntaxWarning: \"is\" \r\nwith a literal. Did you mean \"==\"?\r\n if x.ndim is 1:\r\n/usr/lib/python3/dist-packages/qutip/qobjevo.py:776: SyntaxWarning: \"is no\r\nt\" with a literal. Did you mean \"!=\"?\r\n if self.compiled and self.compiled.split()[2] is not \"cte\":\r\n/usr/lib/python3/dist-packages/qutip/qobjevo.py:1045: SyntaxWarning: \"is\" \r\nwith a literal. Did you mean \"==\"?\r\n elif op1.type is \"array\":\r\n/usr/lib/python3/dist-packages/qutip/qobjevo.py:1070: SyntaxWarning: \"is\" \r\nwith a literal. Did you mean \"==\"?\r\n elif self.ops[_set[0]].type is \"string\":\r\n/usr/lib/python3/dist-packages/qutip/qobjevo.py:1079: SyntaxWarning: \"is\" \r\nwith a literal. Did you mean \"==\"?\r\n elif self.ops[_set[0]].type is \"array\":\r\n/usr/lib/python3/dist-packages/qutip/qobjevo.py:1534: SyntaxWarning: \"is n\r\not\" with a literal. Did you mean \"!=\"?\r\n for key in self.__dict__ if key is not \"compiled_qobjevo\"}\r\n```\r\n\r\n\r\n**To Reproduce**\r\nInstall qutip on ubuntu 20.04.1 via `sudo apt install python3-qutip`.\r\n\r\n**Expected behavior**\r\nNo warnings during the installation of qutip.\r\n\r\n**Your Environment**\r\n```\r\n>>> qutip.about()\r\n\r\nQuTiP: Quantum Toolbox in Python\r\n================================\r\nCopyright (c) QuTiP team 2011 and later.\r\nOriginal developers: R. J. Johansson & P. D. Nation.\r\nCurrent admin team: Alexander Pitchford, Paul D. Nation, Nathan Shammah, Shahnawaz Ahmed, Neill Lambert, and Eric Gigu\u00e8re.\r\nProject Manager: Franco Nori.\r\nCurrently developed through wide collaboration. See https://github.com/qutip for details.\r\n\r\nQuTiP Version: 4.4.1\r\nNumpy Version: 1.17.4\r\nScipy Version: 1.4.1\r\nCython Version: 0.29.14\r\nMatplotlib Version: 3.1.2\r\nPython Version: 3.8.5\r\nNumber of CPUs: 2\r\nBLAS Info: OPENBLAS\r\nOPENMP Installed: True\r\nINTEL MKL Ext: False\r\nPlatform Info: Linux (x86_64)\r\nInstallation path: /usr/lib/python3/dist-packages/qutip\r\n```\n", "before_files": [{"content": "# This file is part of QuTiP: Quantum Toolbox in Python.\n#\n# Copyright (c) 2011 and later, Paul D. Nation.\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are\n# met:\n#\n# 1. Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n#\n# 2. Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n#\n# 3. 
Neither the name of the QuTiP: Quantum Toolbox in Python nor the names\n# of its contributors may be used to endorse or promote products derived\n# from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n# \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A\n# PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n# HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n###############################################################################\nimport numpy as np\nimport scipy.sparse as sp\nimport ctypes\nfrom ctypes import POINTER,c_int,c_char,c_double, byref\nfrom numpy import ctypeslib\nimport qutip.settings as qset\nzcsrgemv = qset.mkl_lib.mkl_cspblas_zcsrgemv\n\ndef mkl_spmv(A, x):\n \"\"\"\n sparse csr_spmv using MKL\n \"\"\"\n (m,n) = A.shape\n\n # Pointers to data of the matrix\n data = A.data.ctypes.data_as(ctypeslib.ndpointer(np.complex128, ndim=1, flags='C'))\n indptr = A.indptr.ctypes.data_as(POINTER(c_int))\n indices = A.indices.ctypes.data_as(POINTER(c_int))\n\n # Allocate output, using same conventions as input\n if x.ndim is 1:\n y = np.empty(m,dtype=np.complex,order='C')\n elif x.ndim==2 and x.shape[1]==1:\n y = np.empty((m,1),dtype=np.complex,order='C')\n else:\n raise Exception('Input vector must be 1D row or 2D column vector')\n \n np_x = x.ctypes.data_as(ctypeslib.ndpointer(np.complex128, ndim=1, flags='C'))\n np_y = y.ctypes.data_as(ctypeslib.ndpointer(np.complex128, ndim=1, flags='C'))\n # now call MKL. This returns the answer in np_y, which points to y\n zcsrgemv(byref(c_char(bytes(b'N'))), byref(c_int(m)), data ,indptr, indices, np_x, np_y ) \n return y\n", "path": "qutip/_mkl/spmv.py"}]} | 2,124 | 154 |
gh_patches_debug_21565 | rasdani/github-patches | git_diff | OCA__server-tools-508 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix travis build on 9.0 branch
the 9.0 branch of the project is red, with really strange errors about computed fields. There have been such errors since the merge of #469 but for some reason they were neither fatal nor breaking the build.
Splitting `database_cleanup` tests in .travis.yml seems to let the other modules go green, but the tests for database_cleanup are red.
</issue>
<code>
[start of database_cleanup/models/purge_wizard.py]
1 # -*- coding: utf-8 -*-
2 # © 2014-2016 Therp BV <http://therp.nl>
3 # License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).
4 import logging
5 from openerp import _, api, fields, models
6 from openerp.exceptions import AccessDenied
7
8
9 class CleanupPurgeLine(models.AbstractModel):
10 """ Abstract base class for the purge wizard lines """
11 _name = 'cleanup.purge.line'
12 _order = 'name'
13
14 name = fields.Char('Name', readonly=True)
15 purged = fields.Boolean('Purged', readonly=True)
16 wizard_id = fields.Many2one('cleanup.purge.wizard')
17
18 logger = logging.getLogger('openerp.addons.database_cleanup')
19
20 @api.multi
21 def purge(self):
22 raise NotImplementedError
23
24 @api.model
25 def create(self, values):
26 # make sure the user trying this is actually supposed to do it
27 if not self.env.ref('database_cleanup.menu_database_cleanup')\
28 .parent_id._filter_visible_menus():
29 raise AccessDenied
30 return super(CleanupPurgeLine, self).create(values)
31
32
33 class PurgeWizard(models.AbstractModel):
34 """ Abstract base class for the purge wizards """
35 _name = 'cleanup.purge.wizard'
36 _description = 'Purge stuff'
37
38 @api.model
39 def default_get(self, fields_list):
40 res = super(PurgeWizard, self).default_get(fields_list)
41 if 'purge_line_ids' in fields_list:
42 res['purge_line_ids'] = self.find()
43 return res
44
45 @api.multi
46 def find(self):
47 raise NotImplementedError
48
49 @api.multi
50 def purge_all(self):
51 self.mapped('purge_line_ids').purge()
52 return True
53
54 @api.model
55 def get_wizard_action(self):
56 wizard = self.create({})
57 return {
58 'type': 'ir.actions.act_window',
59 'name': wizard.display_name,
60 'views': [(False, 'form')],
61 'res_model': self._name,
62 'res_id': wizard.id,
63 'flags': {
64 'action_buttons': False,
65 'sidebar': False,
66 },
67 }
68
69 @api.multi
70 def select_lines(self):
71 return {
72 'type': 'ir.actions.act_window',
73 'name': _('Select lines to purge'),
74 'views': [(False, 'tree'), (False, 'form')],
75 'res_model': self._fields['purge_line_ids'].comodel_name,
76 'domain': [('wizard_id', 'in', self.ids)],
77 }
78
79 @api.multi
80 def name_get(self):
81 return [
82 (this.id, self._description)
83 for this in self
84 ]
85
86 @api.model
87 def create(self, values):
88 # make sure the user trying this is actually supposed to do it
89 if not self.env.ref('database_cleanup.menu_database_cleanup')\
90 .parent_id._filter_visible_menus():
91 raise AccessDenied
92 return super(PurgeWizard, self).create(values)
93
94 purge_line_ids = fields.One2many('cleanup.purge.line', 'wizard_id')
95
[end of database_cleanup/models/purge_wizard.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/database_cleanup/models/purge_wizard.py b/database_cleanup/models/purge_wizard.py
--- a/database_cleanup/models/purge_wizard.py
+++ b/database_cleanup/models/purge_wizard.py
@@ -24,8 +24,8 @@
@api.model
def create(self, values):
# make sure the user trying this is actually supposed to do it
- if not self.env.ref('database_cleanup.menu_database_cleanup')\
- .parent_id._filter_visible_menus():
+ if self.env.ref(
+ 'base.group_erp_manager') not in self.env.user.groups_id:
raise AccessDenied
return super(CleanupPurgeLine, self).create(values)
@@ -86,8 +86,8 @@
@api.model
def create(self, values):
# make sure the user trying this is actually supposed to do it
- if not self.env.ref('database_cleanup.menu_database_cleanup')\
- .parent_id._filter_visible_menus():
+ if self.env.ref(
+ 'base.group_erp_manager') not in self.env.user.groups_id:
raise AccessDenied
return super(PurgeWizard, self).create(values)
| {"golden_diff": "diff --git a/database_cleanup/models/purge_wizard.py b/database_cleanup/models/purge_wizard.py\n--- a/database_cleanup/models/purge_wizard.py\n+++ b/database_cleanup/models/purge_wizard.py\n@@ -24,8 +24,8 @@\n @api.model\n def create(self, values):\n # make sure the user trying this is actually supposed to do it\n- if not self.env.ref('database_cleanup.menu_database_cleanup')\\\n- .parent_id._filter_visible_menus():\n+ if self.env.ref(\n+ 'base.group_erp_manager') not in self.env.user.groups_id:\n raise AccessDenied\n return super(CleanupPurgeLine, self).create(values)\n \n@@ -86,8 +86,8 @@\n @api.model\n def create(self, values):\n # make sure the user trying this is actually supposed to do it\n- if not self.env.ref('database_cleanup.menu_database_cleanup')\\\n- .parent_id._filter_visible_menus():\n+ if self.env.ref(\n+ 'base.group_erp_manager') not in self.env.user.groups_id:\n raise AccessDenied\n return super(PurgeWizard, self).create(values)\n", "issue": "Fix travis build on 9.0 branch\nthe 9.0 branch of the project is red, with really strange errors about computed fields. There have been such errors since the merge of #469 but for some reason they were neither fatal nor breaking the build. \n\nSplitting `database_cleanup` tests in .travis.yml seems to let the other modules go green, but the tests for database_cleanup are red. \n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# \u00a9 2014-2016 Therp BV <http://therp.nl>\n# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl.html).\nimport logging\nfrom openerp import _, api, fields, models\nfrom openerp.exceptions import AccessDenied\n\n\nclass CleanupPurgeLine(models.AbstractModel):\n \"\"\" Abstract base class for the purge wizard lines \"\"\"\n _name = 'cleanup.purge.line'\n _order = 'name'\n\n name = fields.Char('Name', readonly=True)\n purged = fields.Boolean('Purged', readonly=True)\n wizard_id = fields.Many2one('cleanup.purge.wizard')\n\n logger = logging.getLogger('openerp.addons.database_cleanup')\n\n @api.multi\n def purge(self):\n raise NotImplementedError\n\n @api.model\n def create(self, values):\n # make sure the user trying this is actually supposed to do it\n if not self.env.ref('database_cleanup.menu_database_cleanup')\\\n .parent_id._filter_visible_menus():\n raise AccessDenied\n return super(CleanupPurgeLine, self).create(values)\n\n\nclass PurgeWizard(models.AbstractModel):\n \"\"\" Abstract base class for the purge wizards \"\"\"\n _name = 'cleanup.purge.wizard'\n _description = 'Purge stuff'\n\n @api.model\n def default_get(self, fields_list):\n res = super(PurgeWizard, self).default_get(fields_list)\n if 'purge_line_ids' in fields_list:\n res['purge_line_ids'] = self.find()\n return res\n\n @api.multi\n def find(self):\n raise NotImplementedError\n\n @api.multi\n def purge_all(self):\n self.mapped('purge_line_ids').purge()\n return True\n\n @api.model\n def get_wizard_action(self):\n wizard = self.create({})\n return {\n 'type': 'ir.actions.act_window',\n 'name': wizard.display_name,\n 'views': [(False, 'form')],\n 'res_model': self._name,\n 'res_id': wizard.id,\n 'flags': {\n 'action_buttons': False,\n 'sidebar': False,\n },\n }\n\n @api.multi\n def select_lines(self):\n return {\n 'type': 'ir.actions.act_window',\n 'name': _('Select lines to purge'),\n 'views': [(False, 'tree'), (False, 'form')],\n 'res_model': self._fields['purge_line_ids'].comodel_name,\n 'domain': [('wizard_id', 'in', self.ids)],\n }\n\n @api.multi\n def name_get(self):\n return [\n (this.id, 
self._description)\n for this in self\n ]\n\n @api.model\n def create(self, values):\n # make sure the user trying this is actually supposed to do it\n if not self.env.ref('database_cleanup.menu_database_cleanup')\\\n .parent_id._filter_visible_menus():\n raise AccessDenied\n return super(PurgeWizard, self).create(values)\n\n purge_line_ids = fields.One2many('cleanup.purge.line', 'wizard_id')\n", "path": "database_cleanup/models/purge_wizard.py"}]} | 1,501 | 257 |
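For the record above, the whole of the golden diff is a swap of the permission guard in `create()`: instead of asking whether the database-cleanup menu is currently visible, it checks membership in the `base.group_erp_manager` group, which avoids going through the menu machinery altogether. The sketch below restates that guard on a made-up model so it can be read in isolation; only the group check itself is taken from the diff, everything else is illustrative.

```python
# -*- coding: utf-8 -*-
from openerp import api, models
from openerp.exceptions import AccessDenied


class ExampleGuardedWizard(models.AbstractModel):
    # Hypothetical model, used only to host the guard from the golden diff.
    _name = 'example.guarded.wizard'

    @api.model
    def create(self, values):
        # Check group membership instead of menu visibility, as in the diff.
        if self.env.ref(
                'base.group_erp_manager') not in self.env.user.groups_id:
            raise AccessDenied
        return super(ExampleGuardedWizard, self).create(values)
```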
gh_patches_debug_17088 | rasdani/github-patches | git_diff | mne-tools__mne-bids-199 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
align readme.md and readme.rst
Our README and the landing page of our homepage are very dissimilar: this is not good.
We should align them somewhat.
</issue>
<code>
[start of setup.py]
1 #! /usr/bin/env python
2 from setuptools import setup, find_packages
3
4 descr = """Experimental code for BIDS using MNE."""
5
6 DISTNAME = 'mne-bids'
7 DESCRIPTION = descr
8 MAINTAINER = 'Mainak Jas'
9 MAINTAINER_EMAIL = '[email protected]'
10 URL = 'https://mne-tools.github.io/mne-bids/'
11 LICENSE = 'BSD (3-clause)'
12 DOWNLOAD_URL = 'http://github.com/mne-tools/mne-bids'
13 VERSION = '0.2.dev0'
14
15 if __name__ == "__main__":
16 setup(name=DISTNAME,
17 maintainer=MAINTAINER,
18 maintainer_email=MAINTAINER_EMAIL,
19 description=DESCRIPTION,
20 license=LICENSE,
21 url=URL,
22 version=VERSION,
23 download_url=DOWNLOAD_URL,
24 long_description=open('README.md').read(),
25 long_description_content_type='text/markdown',
26 classifiers=[
27 'Intended Audience :: Science/Research',
28 'Intended Audience :: Developers',
29 'License :: OSI Approved',
30 'Programming Language :: Python',
31 'Topic :: Software Development',
32 'Topic :: Scientific/Engineering',
33 'Operating System :: Microsoft :: Windows',
34 'Operating System :: POSIX',
35 'Operating System :: Unix',
36 'Operating System :: MacOS',
37 ],
38 platforms='any',
39 packages=find_packages(),
40 scripts=['bin/mne_bids']
41 )
42
[end of setup.py]
[start of doc/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # mne_bids documentation build configuration file, created by
4 # sphinx-quickstart on Wed Sep 6 04:42:26 2017.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 # If extensions (or modules to document with autodoc) are in another directory,
16 # add these directories to sys.path here. If the directory is relative to the
17 # documentation root, use os.path.abspath to make it absolute, like shown here.
18 #
19 # import os
20 # import sys
21 # sys.path.insert(0, os.path.abspath('.'))
22
23 from datetime import date
24 import sphinx_gallery # noqa
25 import sphinx_bootstrap_theme
26
27 # -- General configuration ------------------------------------------------
28
29 # If your documentation needs a minimal Sphinx version, state it here.
30 #
31 # needs_sphinx = '1.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = [
37 'sphinx.ext.autodoc',
38 'sphinx.ext.mathjax',
39 'sphinx.ext.viewcode',
40 'numpydoc',
41 'sphinx.ext.autosummary',
42 'sphinx.ext.doctest',
43 'sphinx_gallery.gen_gallery'
44 ]
45
46 # generate autosummary even if no references
47 autosummary_generate = True
48
49 # Add any paths that contain templates here, relative to this directory.
50 templates_path = ['_templates']
51
52 # The suffix(es) of source filenames.
53 # You can specify multiple suffix as a list of string:
54 #
55 # source_suffix = ['.rst', '.md']
56 source_suffix = '.rst'
57
58 # The master toctree document.
59 master_doc = 'index'
60
61 # General information about the project.
62 project = u'mne_bids'
63 td = date.today()
64 copyright = u'%s, MNE Developers. Last updated on %s' % (td.year,
65 td.isoformat())
66
67 author = u'Mainak Jas'
68
69 # The version info for the project you're documenting, acts as replacement for
70 # |version| and |release|, also used in various other places throughout the
71 # built documents.
72 #
73 # The short X.Y version.
74 version = u'0.2.dev0'
75 # The full version, including alpha/beta/rc tags.
76 release = u'0.2.dev0'
77
78 # The language for content autogenerated by Sphinx. Refer to documentation
79 # for a list of supported languages.
80 #
81 # This is also used if you do content translation via gettext catalogs.
82 # Usually you set "language" from the command line for these cases.
83 language = None
84
85 # List of patterns, relative to source directory, that match files and
86 # directories to ignore when looking for source files.
87 # This patterns also effect to html_static_path and html_extra_path
88 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
89
90 # The name of the Pygments (syntax highlighting) style to use.
91 pygments_style = 'sphinx'
92
93 # If true, `todo` and `todoList` produce output, else they produce nothing.
94 todo_include_todos = False
95
96
97 # -- Options for HTML output ----------------------------------------------
98
99 # The theme to use for HTML and HTML Help pages. See the documentation for
100 # a list of builtin themes.
101 #
102 html_theme = 'bootstrap'
103 html_theme_path = sphinx_bootstrap_theme.get_html_theme_path()
104
105 # Theme options are theme-specific and customize the look and feel of a theme
106 # further. For a list of options available for each theme, see the
107 # documentation.
108 #
109 html_theme_options = {
110 'navbar_title': 'MNE-BIDS',
111 'bootswatch_theme': "flatly",
112 'navbar_sidebarrel': False,
113 'bootstrap_version': "3",
114 'navbar_links': [
115 ("Gallery", "auto_examples/index"),
116 ("API", "api"),
117 ("Github", "https://github.com/mne-tools/mne-bids", True),
118 ]}
119
120 # Add any paths that contain custom static files (such as style sheets) here,
121 # relative to this directory. They are copied after the builtin static files,
122 # so a file named "default.css" will overwrite the builtin "default.css".
123 html_static_path = ['_static']
124
125
126 # -- Options for HTMLHelp output ------------------------------------------
127
128 # Output file base name for HTML help builder.
129 htmlhelp_basename = 'mne_bidsdoc'
130
131
132 # -- Options for LaTeX output ---------------------------------------------
133
134 latex_elements = {
135 # The paper size ('letterpaper' or 'a4paper').
136 #
137 # 'papersize': 'letterpaper',
138
139 # The font size ('10pt', '11pt' or '12pt').
140 #
141 # 'pointsize': '10pt',
142
143 # Additional stuff for the LaTeX preamble.
144 #
145 # 'preamble': '',
146
147 # Latex figure (float) alignment
148 #
149 # 'figure_align': 'htbp',
150 }
151
152 # Grouping the document tree into LaTeX files. List of tuples
153 # (source start file, target name, title,
154 # author, documentclass [howto, manual, or own class]).
155 latex_documents = [
156 (master_doc, 'mne_bids.tex', u'mne\\_bids Documentation',
157 u'Mainak Jas', 'manual'),
158 ]
159
160
161 # -- Options for manual page output ---------------------------------------
162
163 # One entry per manual page. List of tuples
164 # (source start file, name, description, authors, manual section).
165 man_pages = [
166 (master_doc, 'mne_bids', u'mne_bids Documentation',
167 [author], 1)
168 ]
169
170
171 # -- Options for Texinfo output -------------------------------------------
172
173 # Grouping the document tree into Texinfo files. List of tuples
174 # (source start file, target name, title, author,
175 # dir menu entry, description, category)
176 texinfo_documents = [
177 (master_doc, 'mne_bids', u'mne_bids Documentation',
178 author, 'mne_bids', 'One line description of project.',
179 'Miscellaneous'),
180 ]
181
182
183 sphinx_gallery_conf = {
184 'examples_dirs': '../examples',
185 'gallery_dirs': 'auto_examples',
186 'filename_pattern': '^((?!sgskip).)*$',
187 'backreferences_dir': 'generated',
188 'reference_url': {
189 'mne': 'http://mne-tools.github.io/stable/',
190 'numpy': 'http://docs.scipy.org/doc/numpy-1.9.1',
191 'scipy': 'http://docs.scipy.org/doc/scipy-0.17.0/reference'
192 }
193 }
194
[end of doc/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/conf.py b/doc/conf.py
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -114,6 +114,7 @@
'navbar_links': [
("Gallery", "auto_examples/index"),
("API", "api"),
+ ("What's new", "whats_new"),
("Github", "https://github.com/mne-tools/mne-bids", True),
]}
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -21,8 +21,8 @@
url=URL,
version=VERSION,
download_url=DOWNLOAD_URL,
- long_description=open('README.md').read(),
- long_description_content_type='text/markdown',
+ long_description=open('README.rst').read(),
+ long_description_content_type='text/x-rst',
classifiers=[
'Intended Audience :: Science/Research',
'Intended Audience :: Developers',
| {"golden_diff": "diff --git a/doc/conf.py b/doc/conf.py\n--- a/doc/conf.py\n+++ b/doc/conf.py\n@@ -114,6 +114,7 @@\n 'navbar_links': [\n (\"Gallery\", \"auto_examples/index\"),\n (\"API\", \"api\"),\n+ (\"What's new\", \"whats_new\"),\n (\"Github\", \"https://github.com/mne-tools/mne-bids\", True),\n ]}\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -21,8 +21,8 @@\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n- long_description=open('README.md').read(),\n- long_description_content_type='text/markdown',\n+ long_description=open('README.rst').read(),\n+ long_description_content_type='text/x-rst',\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n", "issue": "align readme.md and readme.rst\nour README and the landing page of our homepage are very dissimilar: This is not good.\r\n\r\nWe should align them somewhat.\n", "before_files": [{"content": "#! /usr/bin/env python\nfrom setuptools import setup, find_packages\n\ndescr = \"\"\"Experimental code for BIDS using MNE.\"\"\"\n\nDISTNAME = 'mne-bids'\nDESCRIPTION = descr\nMAINTAINER = 'Mainak Jas'\nMAINTAINER_EMAIL = '[email protected]'\nURL = 'https://mne-tools.github.io/mne-bids/'\nLICENSE = 'BSD (3-clause)'\nDOWNLOAD_URL = 'http://github.com/mne-tools/mne-bids'\nVERSION = '0.2.dev0'\n\nif __name__ == \"__main__\":\n setup(name=DISTNAME,\n maintainer=MAINTAINER,\n maintainer_email=MAINTAINER_EMAIL,\n description=DESCRIPTION,\n license=LICENSE,\n url=URL,\n version=VERSION,\n download_url=DOWNLOAD_URL,\n long_description=open('README.md').read(),\n long_description_content_type='text/markdown',\n classifiers=[\n 'Intended Audience :: Science/Research',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved',\n 'Programming Language :: Python',\n 'Topic :: Software Development',\n 'Topic :: Scientific/Engineering',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX',\n 'Operating System :: Unix',\n 'Operating System :: MacOS',\n ],\n platforms='any',\n packages=find_packages(),\n scripts=['bin/mne_bids']\n )\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n#\n# mne_bids documentation build configuration file, created by\n# sphinx-quickstart on Wed Sep 6 04:42:26 2017.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\nfrom datetime import date\nimport sphinx_gallery # noqa\nimport sphinx_bootstrap_theme\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\n# needs_sphinx = '1.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.mathjax',\n 'sphinx.ext.viewcode',\n 'numpydoc',\n 'sphinx.ext.autosummary',\n 'sphinx.ext.doctest',\n 'sphinx_gallery.gen_gallery'\n]\n\n# generate autosummary even if no references\nautosummary_generate = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = u'mne_bids'\ntd = date.today()\ncopyright = u'%s, MNE Developers. Last updated on %s' % (td.year,\n td.isoformat())\n\nauthor = u'Mainak Jas'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\n# The short X.Y version.\nversion = u'0.2.dev0'\n# The full version, including alpha/beta/rc tags.\nrelease = u'0.2.dev0'\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'bootstrap'\nhtml_theme_path = sphinx_bootstrap_theme.get_html_theme_path()\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\nhtml_theme_options = {\n 'navbar_title': 'MNE-BIDS',\n 'bootswatch_theme': \"flatly\",\n 'navbar_sidebarrel': False,\n 'bootstrap_version': \"3\",\n 'navbar_links': [\n (\"Gallery\", \"auto_examples/index\"),\n (\"API\", \"api\"),\n (\"Github\", \"https://github.com/mne-tools/mne-bids\", True),\n ]}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'mne_bidsdoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n (master_doc, 'mne_bids.tex', u'mne\\\\_bids Documentation',\n u'Mainak Jas', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'mne_bids', u'mne_bids Documentation',\n [author], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'mne_bids', u'mne_bids Documentation',\n author, 'mne_bids', 'One line description of project.',\n 'Miscellaneous'),\n]\n\n\nsphinx_gallery_conf = {\n 'examples_dirs': '../examples',\n 'gallery_dirs': 'auto_examples',\n 'filename_pattern': '^((?!sgskip).)*$',\n 'backreferences_dir': 'generated',\n 'reference_url': {\n 'mne': 'http://mne-tools.github.io/stable/',\n 'numpy': 'http://docs.scipy.org/doc/numpy-1.9.1',\n 'scipy': 'http://docs.scipy.org/doc/scipy-0.17.0/reference'\n }\n}\n", "path": "doc/conf.py"}]} | 2,898 | 210 |
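On the packaging half of the diff above: `long_description` is read from the reStructuredText README and the matching `long_description_content_type` is declared. The fragment below is a generic, hypothetical `setup.py` excerpt showing that pairing; the package name and version are placeholders, and the explicit encoding when reading the file is an extra precaution rather than something the diff adds.

```python
from setuptools import setup

# Placeholder metadata; only the long_description handling mirrors the diff.
with open('README.rst', encoding='utf-8') as readme:
    long_description = readme.read()

setup(
    name='example-package',
    version='0.0.0',
    long_description=long_description,
    # Must agree with the file format; accepted values are 'text/plain',
    # 'text/x-rst' and 'text/markdown'.
    long_description_content_type='text/x-rst',
)
```

If in doubt, running `twine check` on the built distributions is a common way to catch a long description that will not render properly on PyPI.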
gh_patches_debug_40944 | rasdani/github-patches | git_diff | keras-team__keras-nlp-908 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Return All Beams from Beam Search Utility
Follow-up to https://github.com/keras-team/keras-nlp/issues/750#issuecomment-1439963933
- Return all beams (tensor of shape `(batch_size, num_beams, sequence_length)`).
- Return the beam scores as well (tensor of shape `(batch_size, num_beams)`).
The changes are to be made to this utility function: https://github.com/keras-team/keras-nlp/blob/master/keras_nlp/utils/text_generation.py#L213.
@mattdangerw, do we want to sort the outputs based on log prob and return? Or is that unnecessary overhead?
Some basic tinkering was done because a user wanted a quick fix: https://github.com/abheesht17/keras-nlp/blob/beam_search_return_all_seqs/keras_nlp/utils/text_generation.py#L400-L414
</issue>
<code>
[start of keras_nlp/samplers/beam_sampler.py]
1 # Copyright 2023 The KerasNLP Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Beam Sampler."""
15
16 import tensorflow as tf
17 from tensorflow import keras
18 from tensorflow.compiler.tf2xla.python.xla import dynamic_update_slice
19
20 from keras_nlp.api_export import keras_nlp_export
21 from keras_nlp.samplers.sampler import Sampler
22 from keras_nlp.samplers.sampler import call_args_docstring
23 from keras_nlp.utils.python_utils import format_docstring
24
25
26 @format_docstring(call_args=call_args_docstring)
27 @keras_nlp_export("keras_nlp.samplers.BeamSampler")
28 class BeamSampler(Sampler):
29 """Beam Sampler class.
30
31 This sampler implements beam search algorithm. At each time-step, beam
32 search keeps the beams (sequences) of the top `num_beams` highest
33 accumulated probabilities, and uses each one of the beams to predict
34 candidate next tokens.
35
36 Args:
37 num_beams: int. The number of beams that should be kept at each
38 time-step. `num_beams` should be strictly positive.
39
40 Call Args:
41 {{call_args}}
42
43 Examples:
44 ```python
45 # Use a simple alphabet of lowercase characters to [0, 26).
46 int_lookup = {i: chr(i + ord('a')) for i in range(26)}
47 char_lookup = {v: k for k, v in int_lookup.items()}
48 batch_size, length, vocab_size = 1, 12, len(int_lookup)
49
50 def next(prompt, state, index):
51 # A uniform distribution over our alphabet.
52 logits = tf.ones((batch_size, vocab_size))
53 return logits, state
54
55 output = keras_nlp.samplers.BeamSampler()(
56 next=next,
57 prompt=tf.fill((batch_size, length,), char_lookup['z']),
58 index=5,
59 )
60 print(["".join([int_lookup[i] for i in s]) for s in output.numpy()])
61 # >>> "zzzzzaaaaaaa"
62 ```
63 """
64
65 def __init__(
66 self,
67 num_beams=5,
68 ):
69 super().__init__()
70 self.num_beams = num_beams
71
72 def __call__(
73 self,
74 next,
75 prompt,
76 state=None,
77 index=0,
78 mask=None,
79 end_token_id=None,
80 ):
81 batch_size, max_length = tf.shape(prompt)[0], tf.shape(prompt)[1]
82 # Make sure max length and start index are the same dtype.
83 index = tf.cast(index, max_length.dtype)
84
85 def create_beams(x):
86 """Add initial beam state."""
87 return tf.repeat(x, self.num_beams, axis=0)
88
89 def flatten_beams(x):
90 """Combine the beam dim and batch dim."""
91 flat_shape = [batch_size * self.num_beams] + x.shape.as_list()[2:]
92 return tf.reshape(x, shape=flat_shape)
93
94 def unflatten_beams(x):
95 """Separate the beam dim and batch dim."""
96 unflat_shape = [batch_size, self.num_beams] + x.shape.as_list()[1:]
97 return tf.reshape(x, shape=unflat_shape)
98
99 mask = tf.zeros_like(prompt, dtype=tf.bool) if mask is None else mask
100 # `tf.while_loop` will not accept `None` as a value for `loop_vars`.
101 state = () if state is None else state
102 # Add extra sequences for each beam.
103 prompt, mask = create_beams(prompt), create_beams(mask)
104 state = tf.nest.map_structure(create_beams, state)
105 # Setup the initial beam log-likelihoods.
106 # On the first loop, make sure only the original beam is considered.
107 log_probs = tf.constant([[0.0] + [-1e9] * (self.num_beams - 1)])
108 log_probs = flatten_beams(tf.repeat(log_probs, batch_size, axis=0))
109
110 def cond(prompt, state, index, log_probs):
111 if end_token_id is None:
112 return True
113 # Stop if all sequences have produced a *new* end_token_id.
114 end_tokens = (prompt == end_token_id) & (~mask)
115 prompt_done = tf.reduce_any(end_tokens, axis=-1)
116 return not tf.reduce_all(prompt_done)
117
118 def body(prompt, state, index, log_probs):
119 # Compute the softmax distribution for the next token.
120 logits, state = next(prompt, state, index)
121 vocab_size = tf.shape(logits)[-1]
122 probs = keras.activations.softmax(logits)
123
124 # Compute the running log-likelihood of each new candidate.
125 next_log_probs = tf.math.log(probs) + log_probs[..., tf.newaxis]
126 # Reshape `preds` to shape `(batch_size, num_beams * vocab_size)`.
127 next_log_probs = tf.reshape(next_log_probs, shape=[batch_size, -1])
128
129 # Compute the top beam indices and next tokens.
130 next_log_probs, indices = tf.math.top_k(
131 next_log_probs, k=self.num_beams, sorted=False
132 )
133 beam_indices = indices // vocab_size
134 next_token = flatten_beams(indices % vocab_size)
135 # We need `ensure_shape` as `top_k` will change the static shape.
136 next_log_probs = flatten_beams(next_log_probs)
137 log_probs = tf.ensure_shape(next_log_probs, log_probs.shape)
138
139 def gather_beams(x):
140 x = unflatten_beams(x)
141 x = tf.gather(x, beam_indices, axis=1, batch_dims=1)
142 return flatten_beams(x)
143
144 prompt = gather_beams(prompt)
145 state = tf.nest.map_structure(gather_beams, state)
146
147 # Update each beam with the next token.
148 next_token = tf.cast(next_token, prompt.dtype)
149 # Don't overwrite anywhere mask is True.
150 next_token = tf.where(mask[:, index], prompt[:, index], next_token)
151 # Update the prompt with the next token.
152 next_token = next_token[:, tf.newaxis]
153 prompt = dynamic_update_slice(prompt, next_token, [0, index])
154 # Return the iteration of the loop state.
155 return (prompt, state, index + 1, log_probs)
156
157 prompt, _, _, log_probs = tf.while_loop(
158 cond=cond,
159 body=body,
160 loop_vars=(prompt, state, index, log_probs),
161 maximum_iterations=(max_length - index),
162 )
163
164 # Gather the top beam at each batch index.
165 prompt, log_probs = unflatten_beams(prompt), unflatten_beams(log_probs)
166 top_beams = tf.math.argmax(log_probs, axis=-1)[:, tf.newaxis]
167 prompt = tf.gather(prompt, top_beams, axis=1, batch_dims=1)
168 return tf.squeeze(prompt, axis=1)
169
170 def get_config(self):
171 config = super().get_config()
172 config.update(
173 {
174 "num_beams": self.num_beams,
175 }
176 )
177 return config
178
[end of keras_nlp/samplers/beam_sampler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/keras_nlp/samplers/beam_sampler.py b/keras_nlp/samplers/beam_sampler.py
--- a/keras_nlp/samplers/beam_sampler.py
+++ b/keras_nlp/samplers/beam_sampler.py
@@ -36,11 +36,14 @@
Args:
num_beams: int. The number of beams that should be kept at each
time-step. `num_beams` should be strictly positive.
+ return_all_beams: bool. When set to `True`, the sampler will return the top prompt,
+ all prompts and their respective probabilities score.
Call Args:
{{call_args}}
Examples:
+ Return only the beam with the highest accumulated probability.
```python
# Use a simple alphabet of lowercase characters to [0, 26).
int_lookup = {i: chr(i + ord('a')) for i in range(26)}
@@ -60,14 +63,41 @@
print(["".join([int_lookup[i] for i in s]) for s in output.numpy()])
# >>> "zzzzzaaaaaaa"
```
+ Return all beams and their probabilities.
+ ```python
+ # Use a simple alphabet of lowercase characters to [0, 26).
+ int_lookup = {i: chr(i + ord('a')) for i in range(26)}
+ char_lookup = {v: k for k, v in int_lookup.items()}
+ batch_size, length, vocab_size = 1, 12, len(int_lookup)
+
+ def next(prompt, state, index):
+ # A uniform distribution over our alphabet.
+ logits = tf.ones((batch_size, vocab_size))
+ return logits, state
+
+ output = keras_nlp.samplers.BeamSampler(return_all_beams=True)(
+ next=next,
+ prompt=tf.fill((batch_size, length,), char_lookup['z']),
+ index=5,
+ )
+
+ print(output[0].shape)
+ # >>> (1, 5, 12)
+ print(output[1].shape)
+ # >>> (1, 5)
+ print(["".join([int_lookup[i] for i in s]) for s in output[0][0].numpy()])
+ # >>> "zzzzzaaaaaaa"
+ ```
"""
def __init__(
self,
num_beams=5,
+ return_all_beams=False,
):
super().__init__()
self.num_beams = num_beams
+ self.return_all_beams = return_all_beams
def __call__(
self,
@@ -161,17 +191,32 @@
maximum_iterations=(max_length - index),
)
- # Gather the top beam at each batch index.
- prompt, log_probs = unflatten_beams(prompt), unflatten_beams(log_probs)
- top_beams = tf.math.argmax(log_probs, axis=-1)[:, tf.newaxis]
- prompt = tf.gather(prompt, top_beams, axis=1, batch_dims=1)
- return tf.squeeze(prompt, axis=1)
+ all_prompts = unflatten_beams(prompt)
+ all_log_probs = unflatten_beams(log_probs)
+
+ if self.return_all_beams:
+ sorted_indices = tf.argsort(
+ all_log_probs, axis=-1, direction="DESCENDING"
+ )
+ sorted_log_probs = tf.gather(
+ all_log_probs, sorted_indices, axis=-1, batch_dims=1
+ )
+ sorted_prompts = tf.gather(
+ all_prompts, sorted_indices, axis=1, batch_dims=1
+ )
+ return sorted_prompts, sorted_log_probs
+ else:
+ # Gather the top beam at each batch index.
+ top_beams = tf.math.argmax(all_log_probs, axis=-1)[:, tf.newaxis]
+ prompt = tf.gather(all_prompts, top_beams, axis=1, batch_dims=1)
+ return tf.squeeze(prompt, axis=1)
def get_config(self):
config = super().get_config()
config.update(
{
"num_beams": self.num_beams,
+ "return_all_beams": self.return_all_beams,
}
)
return config
| {"golden_diff": "diff --git a/keras_nlp/samplers/beam_sampler.py b/keras_nlp/samplers/beam_sampler.py\n--- a/keras_nlp/samplers/beam_sampler.py\n+++ b/keras_nlp/samplers/beam_sampler.py\n@@ -36,11 +36,14 @@\n Args:\n num_beams: int. The number of beams that should be kept at each\n time-step. `num_beams` should be strictly positive.\n+ return_all_beams: bool. When set to `True`, the sampler will return the top prompt,\n+ all prompts and their respective probabilities score.\n \n Call Args:\n {{call_args}}\n \n Examples:\n+ Return only the beam with the highest accumulated probability.\n ```python\n # Use a simple alphabet of lowercase characters to [0, 26).\n int_lookup = {i: chr(i + ord('a')) for i in range(26)}\n@@ -60,14 +63,41 @@\n print([\"\".join([int_lookup[i] for i in s]) for s in output.numpy()])\n # >>> \"zzzzzaaaaaaa\"\n ```\n+ Return all beams and their probabilities.\n+ ```python\n+ # Use a simple alphabet of lowercase characters to [0, 26).\n+ int_lookup = {i: chr(i + ord('a')) for i in range(26)}\n+ char_lookup = {v: k for k, v in int_lookup.items()}\n+ batch_size, length, vocab_size = 1, 12, len(int_lookup)\n+\n+ def next(prompt, state, index):\n+ # A uniform distribution over our alphabet.\n+ logits = tf.ones((batch_size, vocab_size))\n+ return logits, state\n+\n+ output = keras_nlp.samplers.BeamSampler(return_all_beams=True)(\n+ next=next,\n+ prompt=tf.fill((batch_size, length,), char_lookup['z']),\n+ index=5,\n+ )\n+\n+ print(output[0].shape)\n+ # >>> (1, 5, 12)\n+ print(output[1].shape)\n+ # >>> (1, 5)\n+ print([\"\".join([int_lookup[i] for i in s]) for s in output[0][0].numpy()])\n+ # >>> \"zzzzzaaaaaaa\"\n+ ```\n \"\"\"\n \n def __init__(\n self,\n num_beams=5,\n+ return_all_beams=False,\n ):\n super().__init__()\n self.num_beams = num_beams\n+ self.return_all_beams = return_all_beams\n \n def __call__(\n self,\n@@ -161,17 +191,32 @@\n maximum_iterations=(max_length - index),\n )\n \n- # Gather the top beam at each batch index.\n- prompt, log_probs = unflatten_beams(prompt), unflatten_beams(log_probs)\n- top_beams = tf.math.argmax(log_probs, axis=-1)[:, tf.newaxis]\n- prompt = tf.gather(prompt, top_beams, axis=1, batch_dims=1)\n- return tf.squeeze(prompt, axis=1)\n+ all_prompts = unflatten_beams(prompt)\n+ all_log_probs = unflatten_beams(log_probs)\n+\n+ if self.return_all_beams:\n+ sorted_indices = tf.argsort(\n+ all_log_probs, axis=-1, direction=\"DESCENDING\"\n+ )\n+ sorted_log_probs = tf.gather(\n+ all_log_probs, sorted_indices, axis=-1, batch_dims=1\n+ )\n+ sorted_prompts = tf.gather(\n+ all_prompts, sorted_indices, axis=1, batch_dims=1\n+ )\n+ return sorted_prompts, sorted_log_probs\n+ else:\n+ # Gather the top beam at each batch index.\n+ top_beams = tf.math.argmax(all_log_probs, axis=-1)[:, tf.newaxis]\n+ prompt = tf.gather(all_prompts, top_beams, axis=1, batch_dims=1)\n+ return tf.squeeze(prompt, axis=1)\n \n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"num_beams\": self.num_beams,\n+ \"return_all_beams\": self.return_all_beams,\n }\n )\n return config\n", "issue": "Return All Beams from Beam Search Utility\nFollow-up to https://github.com/keras-team/keras-nlp/issues/750#issuecomment-1439963933\r\n\r\n- Return all beams (tensor of shape `(batch_size, num_beams, sequence_length)`).\r\n- Return the beam scores as well (tensor of shape `(batch_size, num_beams)`).\r\n\r\nThe changes are to be made to this utility function: 
https://github.com/keras-team/keras-nlp/blob/master/keras_nlp/utils/text_generation.py#L213.\r\n\r\n@mattdangerw, do we want to sort the outputs based on log prob and return? Or is that unnecessary overhead?\r\n\r\nSome basic tinkering done because a user wanted a quick fix: https://github.com/abheesht17/keras-nlp/blob/beam_search_return_all_seqs/keras_nlp/utils/text_generation.py#L400-L414\n", "before_files": [{"content": "# Copyright 2023 The KerasNLP Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Beam Sampler.\"\"\"\n\nimport tensorflow as tf\nfrom tensorflow import keras\nfrom tensorflow.compiler.tf2xla.python.xla import dynamic_update_slice\n\nfrom keras_nlp.api_export import keras_nlp_export\nfrom keras_nlp.samplers.sampler import Sampler\nfrom keras_nlp.samplers.sampler import call_args_docstring\nfrom keras_nlp.utils.python_utils import format_docstring\n\n\n@format_docstring(call_args=call_args_docstring)\n@keras_nlp_export(\"keras_nlp.samplers.BeamSampler\")\nclass BeamSampler(Sampler):\n \"\"\"Beam Sampler class.\n\n This sampler implements beam search algorithm. At each time-step, beam\n search keeps the beams (sequences) of the top `num_beams` highest\n accumulated probabilities, and uses each one of the beams to predict\n candidate next tokens.\n\n Args:\n num_beams: int. The number of beams that should be kept at each\n time-step. 
`num_beams` should be strictly positive.\n\n Call Args:\n {{call_args}}\n\n Examples:\n ```python\n # Use a simple alphabet of lowercase characters to [0, 26).\n int_lookup = {i: chr(i + ord('a')) for i in range(26)}\n char_lookup = {v: k for k, v in int_lookup.items()}\n batch_size, length, vocab_size = 1, 12, len(int_lookup)\n\n def next(prompt, state, index):\n # A uniform distribution over our alphabet.\n logits = tf.ones((batch_size, vocab_size))\n return logits, state\n\n output = keras_nlp.samplers.BeamSampler()(\n next=next,\n prompt=tf.fill((batch_size, length,), char_lookup['z']),\n index=5,\n )\n print([\"\".join([int_lookup[i] for i in s]) for s in output.numpy()])\n # >>> \"zzzzzaaaaaaa\"\n ```\n \"\"\"\n\n def __init__(\n self,\n num_beams=5,\n ):\n super().__init__()\n self.num_beams = num_beams\n\n def __call__(\n self,\n next,\n prompt,\n state=None,\n index=0,\n mask=None,\n end_token_id=None,\n ):\n batch_size, max_length = tf.shape(prompt)[0], tf.shape(prompt)[1]\n # Make sure max length and start index are the same dtype.\n index = tf.cast(index, max_length.dtype)\n\n def create_beams(x):\n \"\"\"Add initial beam state.\"\"\"\n return tf.repeat(x, self.num_beams, axis=0)\n\n def flatten_beams(x):\n \"\"\"Combine the beam dim and batch dim.\"\"\"\n flat_shape = [batch_size * self.num_beams] + x.shape.as_list()[2:]\n return tf.reshape(x, shape=flat_shape)\n\n def unflatten_beams(x):\n \"\"\"Separate the beam dim and batch dim.\"\"\"\n unflat_shape = [batch_size, self.num_beams] + x.shape.as_list()[1:]\n return tf.reshape(x, shape=unflat_shape)\n\n mask = tf.zeros_like(prompt, dtype=tf.bool) if mask is None else mask\n # `tf.while_loop` will not accept `None` as a value for `loop_vars`.\n state = () if state is None else state\n # Add extra sequences for each beam.\n prompt, mask = create_beams(prompt), create_beams(mask)\n state = tf.nest.map_structure(create_beams, state)\n # Setup the initial beam log-likelihoods.\n # On the first loop, make sure only the original beam is considered.\n log_probs = tf.constant([[0.0] + [-1e9] * (self.num_beams - 1)])\n log_probs = flatten_beams(tf.repeat(log_probs, batch_size, axis=0))\n\n def cond(prompt, state, index, log_probs):\n if end_token_id is None:\n return True\n # Stop if all sequences have produced a *new* end_token_id.\n end_tokens = (prompt == end_token_id) & (~mask)\n prompt_done = tf.reduce_any(end_tokens, axis=-1)\n return not tf.reduce_all(prompt_done)\n\n def body(prompt, state, index, log_probs):\n # Compute the softmax distribution for the next token.\n logits, state = next(prompt, state, index)\n vocab_size = tf.shape(logits)[-1]\n probs = keras.activations.softmax(logits)\n\n # Compute the running log-likelihood of each new candidate.\n next_log_probs = tf.math.log(probs) + log_probs[..., tf.newaxis]\n # Reshape `preds` to shape `(batch_size, num_beams * vocab_size)`.\n next_log_probs = tf.reshape(next_log_probs, shape=[batch_size, -1])\n\n # Compute the top beam indices and next tokens.\n next_log_probs, indices = tf.math.top_k(\n next_log_probs, k=self.num_beams, sorted=False\n )\n beam_indices = indices // vocab_size\n next_token = flatten_beams(indices % vocab_size)\n # We need `ensure_shape` as `top_k` will change the static shape.\n next_log_probs = flatten_beams(next_log_probs)\n log_probs = tf.ensure_shape(next_log_probs, log_probs.shape)\n\n def gather_beams(x):\n x = unflatten_beams(x)\n x = tf.gather(x, beam_indices, axis=1, batch_dims=1)\n return flatten_beams(x)\n\n prompt = gather_beams(prompt)\n 
state = tf.nest.map_structure(gather_beams, state)\n\n # Update each beam with the next token.\n next_token = tf.cast(next_token, prompt.dtype)\n # Don't overwrite anywhere mask is True.\n next_token = tf.where(mask[:, index], prompt[:, index], next_token)\n # Update the prompt with the next token.\n next_token = next_token[:, tf.newaxis]\n prompt = dynamic_update_slice(prompt, next_token, [0, index])\n # Return the iteration of the loop state.\n return (prompt, state, index + 1, log_probs)\n\n prompt, _, _, log_probs = tf.while_loop(\n cond=cond,\n body=body,\n loop_vars=(prompt, state, index, log_probs),\n maximum_iterations=(max_length - index),\n )\n\n # Gather the top beam at each batch index.\n prompt, log_probs = unflatten_beams(prompt), unflatten_beams(log_probs)\n top_beams = tf.math.argmax(log_probs, axis=-1)[:, tf.newaxis]\n prompt = tf.gather(prompt, top_beams, axis=1, batch_dims=1)\n return tf.squeeze(prompt, axis=1)\n\n def get_config(self):\n config = super().get_config()\n config.update(\n {\n \"num_beams\": self.num_beams,\n }\n )\n return config\n", "path": "keras_nlp/samplers/beam_sampler.py"}]} | 2,806 | 964 |
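The core of the golden diff above is the `return_all_beams` branch, which orders the beams of each batch element by accumulated log-probability before returning them. The standalone sketch below reproduces just that sorting step on random tensors so the advertised shapes, `(batch_size, num_beams, sequence_length)` for the beams and `(batch_size, num_beams)` for the scores, are easy to verify. The shapes and dummy data are arbitrary; nothing here comes from keras-nlp beyond the argsort/gather pattern in the diff.

```python
import tensorflow as tf

batch_size, num_beams, sequence_length = 2, 3, 5

# Dummy stand-ins for the tensors the sampler accumulates internally.
all_prompts = tf.random.uniform(
    (batch_size, num_beams, sequence_length), maxval=26, dtype=tf.int32
)
all_log_probs = -tf.random.uniform((batch_size, num_beams), maxval=10.0)

# Order the beams of each batch element from most to least likely.
sorted_indices = tf.argsort(all_log_probs, axis=-1, direction="DESCENDING")
sorted_log_probs = tf.gather(all_log_probs, sorted_indices, axis=-1, batch_dims=1)
sorted_prompts = tf.gather(all_prompts, sorted_indices, axis=1, batch_dims=1)

print(sorted_prompts.shape)    # (2, 3, 5)  i.e. (batch_size, num_beams, sequence_length)
print(sorted_log_probs.shape)  # (2, 3)     i.e. (batch_size, num_beams)
```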
gh_patches_debug_5371 | rasdani/github-patches | git_diff | getnikola__nikola-3511 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect time zone guessing on macOS
On macOS Big Sur 11.2, we guess the full path to the zoneinfo file (instead of just the zone name).
```
$ nikola init
β¦
Time zone [private/var/db/timezone/tz/2021a.1.0/zoneinfo/Europe/Warsaw]:
```
</issue>
<code>
[start of nikola/packages/tzlocal/unix.py]
1 """Unix support for tzlocal."""
2 import os
3 import re
4
5 import dateutil.tz
6
7 _cache_tz = None
8
9
10 def _try_tz_from_env():
11 tzenv = os.environ.get("TZ")
12 if tzenv and tzenv[0] == ":":
13 tzenv = tzenv[1:]
14 try:
15 if tzenv:
16 dateutil.tz.gettz(tzenv)
17 return tzenv
18 except Exception:
19 pass
20
21
22 def _get_localzone(_root="/"):
23 """Try to find the local timezone configuration.
24
25 The parameter _root makes the function look for files like /etc/localtime
26 beneath the _root directory. This is primarily used by the tests.
27 In normal usage you call the function without parameters.
28 """
29 tzenv = _try_tz_from_env()
30 if tzenv:
31 return tzenv
32
33 # Are we under Termux on Android?
34 if os.path.exists("/system/bin/getprop"):
35 import subprocess
36
37 androidtz = (
38 subprocess.check_output(["getprop", "persist.sys.timezone"])
39 .strip()
40 .decode()
41 )
42 return androidtz
43
44 # Now look for distribution specific configuration files
45 # that contain the timezone name.
46 for configfile in ("etc/timezone", "var/db/zoneinfo"):
47 tzpath = os.path.join(_root, configfile)
48 try:
49 with open(tzpath, "rb") as tzfile:
50 data = tzfile.read()
51
52 # Issue #3 was that /etc/timezone was a zoneinfo file.
53 # That's a misconfiguration, but we need to handle it gracefully:
54 if data[:5] == b"TZif2":
55 continue
56
57 etctz = data.strip().decode()
58 if not etctz:
59 # Empty file, skip
60 continue
61 for etctz in data.decode().splitlines():
62 # Get rid of host definitions and comments:
63 if " " in etctz:
64 etctz, dummy = etctz.split(" ", 1)
65 if "#" in etctz:
66 etctz, dummy = etctz.split("#", 1)
67 if not etctz:
68 continue
69 tz = etctz.replace(" ", "_")
70 return tz
71
72 except IOError:
73 # File doesn't exist or is a directory
74 continue
75
76 # CentOS has a ZONE setting in /etc/sysconfig/clock,
77 # OpenSUSE has a TIMEZONE setting in /etc/sysconfig/clock and
78 # Gentoo has a TIMEZONE setting in /etc/conf.d/clock
79 # We look through these files for a timezone:
80
81 zone_re = re.compile(r"\s*ZONE\s*=\s*\"")
82 timezone_re = re.compile(r"\s*TIMEZONE\s*=\s*\"")
83 end_re = re.compile('"')
84
85 for filename in ("etc/sysconfig/clock", "etc/conf.d/clock"):
86 tzpath = os.path.join(_root, filename)
87 try:
88 with open(tzpath, "rt") as tzfile:
89 data = tzfile.readlines()
90
91 for line in data:
92 # Look for the ZONE= setting.
93 match = zone_re.match(line)
94 if match is None:
95 # No ZONE= setting. Look for the TIMEZONE= setting.
96 match = timezone_re.match(line)
97 if match is not None:
98 # Some setting existed
99 line = line[match.end():]
100 etctz = line[: end_re.search(line).start()]
101
102 # We found a timezone
103 tz = etctz.replace(" ", "_")
104 return tz
105
106 except IOError:
107 # File doesn't exist or is a directory
108 continue
109
110 # systemd distributions use symlinks that include the zone name,
111 # see manpage of localtime(5) and timedatectl(1)
112 tzpath = os.path.join(_root, "etc/localtime")
113 if os.path.exists(tzpath) and os.path.islink(tzpath):
114 tzpath = os.path.realpath(tzpath)
115 start = tzpath.find("/") + 1
116 while start != 0:
117 tzpath = tzpath[start:]
118 try:
119 dateutil.tz.gettz(tzpath)
120 return tzpath
121 except Exception:
122 pass
123 start = tzpath.find("/") + 1
124
125 # Nothing found, return UTC
126 return None
127
128
129 def get_localzone():
130 """Get the computers configured local timezone, if any."""
131 global _cache_tz
132 if _cache_tz is None:
133 _cache_tz = _get_localzone()
134
135 return _cache_tz
136
137
138 def reload_localzone():
139 """Reload the cached localzone. You need to call this if the timezone has changed."""
140 global _cache_tz
141 _cache_tz = _get_localzone()
142 return _cache_tz
143
[end of nikola/packages/tzlocal/unix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nikola/packages/tzlocal/unix.py b/nikola/packages/tzlocal/unix.py
--- a/nikola/packages/tzlocal/unix.py
+++ b/nikola/packages/tzlocal/unix.py
@@ -116,8 +116,9 @@
while start != 0:
tzpath = tzpath[start:]
try:
- dateutil.tz.gettz(tzpath)
- return tzpath
+ tested_tz = dateutil.tz.gettz(tzpath)
+ if tested_tz:
+ return tzpath
except Exception:
pass
start = tzpath.find("/") + 1
| {"golden_diff": "diff --git a/nikola/packages/tzlocal/unix.py b/nikola/packages/tzlocal/unix.py\n--- a/nikola/packages/tzlocal/unix.py\n+++ b/nikola/packages/tzlocal/unix.py\n@@ -116,8 +116,9 @@\n while start != 0:\n tzpath = tzpath[start:]\n try:\n- dateutil.tz.gettz(tzpath)\n- return tzpath\n+ tested_tz = dateutil.tz.gettz(tzpath)\n+ if tested_tz:\n+ return tzpath\n except Exception:\n pass\n start = tzpath.find(\"/\") + 1\n", "issue": "Incorrect time zone guessing on macOS\nOn macOS Big Sur 11.2, we guess the full path to the zoneinfo file (instead of just the zone name).\r\n\r\n```\r\n$ nikola init\r\n\u2026\r\nTime zone [private/var/db/timezone/tz/2021a.1.0/zoneinfo/Europe/Warsaw]:\r\n```\n", "before_files": [{"content": "\"\"\"Unix support for tzlocal.\"\"\"\nimport os\nimport re\n\nimport dateutil.tz\n\n_cache_tz = None\n\n\ndef _try_tz_from_env():\n tzenv = os.environ.get(\"TZ\")\n if tzenv and tzenv[0] == \":\":\n tzenv = tzenv[1:]\n try:\n if tzenv:\n dateutil.tz.gettz(tzenv)\n return tzenv\n except Exception:\n pass\n\n\ndef _get_localzone(_root=\"/\"):\n \"\"\"Try to find the local timezone configuration.\n\n The parameter _root makes the function look for files like /etc/localtime\n beneath the _root directory. This is primarily used by the tests.\n In normal usage you call the function without parameters.\n \"\"\"\n tzenv = _try_tz_from_env()\n if tzenv:\n return tzenv\n\n # Are we under Termux on Android?\n if os.path.exists(\"/system/bin/getprop\"):\n import subprocess\n\n androidtz = (\n subprocess.check_output([\"getprop\", \"persist.sys.timezone\"])\n .strip()\n .decode()\n )\n return androidtz\n\n # Now look for distribution specific configuration files\n # that contain the timezone name.\n for configfile in (\"etc/timezone\", \"var/db/zoneinfo\"):\n tzpath = os.path.join(_root, configfile)\n try:\n with open(tzpath, \"rb\") as tzfile:\n data = tzfile.read()\n\n # Issue #3 was that /etc/timezone was a zoneinfo file.\n # That's a misconfiguration, but we need to handle it gracefully:\n if data[:5] == b\"TZif2\":\n continue\n\n etctz = data.strip().decode()\n if not etctz:\n # Empty file, skip\n continue\n for etctz in data.decode().splitlines():\n # Get rid of host definitions and comments:\n if \" \" in etctz:\n etctz, dummy = etctz.split(\" \", 1)\n if \"#\" in etctz:\n etctz, dummy = etctz.split(\"#\", 1)\n if not etctz:\n continue\n tz = etctz.replace(\" \", \"_\")\n return tz\n\n except IOError:\n # File doesn't exist or is a directory\n continue\n\n # CentOS has a ZONE setting in /etc/sysconfig/clock,\n # OpenSUSE has a TIMEZONE setting in /etc/sysconfig/clock and\n # Gentoo has a TIMEZONE setting in /etc/conf.d/clock\n # We look through these files for a timezone:\n\n zone_re = re.compile(r\"\\s*ZONE\\s*=\\s*\\\"\")\n timezone_re = re.compile(r\"\\s*TIMEZONE\\s*=\\s*\\\"\")\n end_re = re.compile('\"')\n\n for filename in (\"etc/sysconfig/clock\", \"etc/conf.d/clock\"):\n tzpath = os.path.join(_root, filename)\n try:\n with open(tzpath, \"rt\") as tzfile:\n data = tzfile.readlines()\n\n for line in data:\n # Look for the ZONE= setting.\n match = zone_re.match(line)\n if match is None:\n # No ZONE= setting. 
Look for the TIMEZONE= setting.\n match = timezone_re.match(line)\n if match is not None:\n # Some setting existed\n line = line[match.end():]\n etctz = line[: end_re.search(line).start()]\n\n # We found a timezone\n tz = etctz.replace(\" \", \"_\")\n return tz\n\n except IOError:\n # File doesn't exist or is a directory\n continue\n\n # systemd distributions use symlinks that include the zone name,\n # see manpage of localtime(5) and timedatectl(1)\n tzpath = os.path.join(_root, \"etc/localtime\")\n if os.path.exists(tzpath) and os.path.islink(tzpath):\n tzpath = os.path.realpath(tzpath)\n start = tzpath.find(\"/\") + 1\n while start != 0:\n tzpath = tzpath[start:]\n try:\n dateutil.tz.gettz(tzpath)\n return tzpath\n except Exception:\n pass\n start = tzpath.find(\"/\") + 1\n\n # Nothing found, return UTC\n return None\n\n\ndef get_localzone():\n \"\"\"Get the computers configured local timezone, if any.\"\"\"\n global _cache_tz\n if _cache_tz is None:\n _cache_tz = _get_localzone()\n\n return _cache_tz\n\n\ndef reload_localzone():\n \"\"\"Reload the cached localzone. You need to call this if the timezone has changed.\"\"\"\n global _cache_tz\n _cache_tz = _get_localzone()\n return _cache_tz\n", "path": "nikola/packages/tzlocal/unix.py"}]} | 2,010 | 149 |
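The one-line substance of the record above: `dateutil.tz.gettz()` signals an unknown zone name by returning `None` instead of raising, so the original `try`/`except` in `_get_localzone()` never rejected the bogus macOS path and happily returned it. The sketch below walks a few of the candidate names the loop in `unix.py` would try (the macOS path is the one quoted in the issue, trimmed one leading component at a time and abridged here) and keeps only a name that actually resolves; it illustrates the failure mode and is not the patched function.

```python
import dateutil.tz

# Successive suffixes of the macOS /etc/localtime target from the issue,
# in (abridged) trimming order.
candidates = [
    "private/var/db/timezone/tz/2021a.1.0/zoneinfo/Europe/Warsaw",
    "var/db/timezone/tz/2021a.1.0/zoneinfo/Europe/Warsaw",
    "zoneinfo/Europe/Warsaw",
    "Europe/Warsaw",
]

for name in candidates:
    tz = dateutil.tz.gettz(name)
    if tz is not None:          # the check the golden diff adds
        print("usable zone name:", name)
        break
    print("rejected, gettz() returned None:", name)
```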
gh_patches_debug_2221 | rasdani/github-patches | git_diff | pre-commit__pre-commit-2740 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
/dev/null not found with pre-commit 3.0.2
### search you tried in the issue tracker
/dev/null
### describe your issue
After upgrading to pre-commit 3.0.2, one of my users (on up-to-date macos) is reporting that invoking ruby actions fails with `/dev/null` not found.
Relevant output:
```
rubocop..................................................................Failed
- hook id: rubocop
- exit code: 2
/dev/null not found
/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/bundler/definition.rb:36:in `build'
/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/bundler.rb:207:in `definition'
/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/bundler.rb:190:in `load'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:270:in `gem_config_path'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:65:in `block (2 levels) in resolve_inheritance_from_gems'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:63:in `reverse_each'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:63:in `block in resolve_inheritance_from_gems'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:57:in `each_pair'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:57:in `resolve_inheritance_from_gems'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader.rb:49:in `load_file'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader.rb:104:in `configuration_from_file'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_store.rb:68:in `for_dir'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_store.rb:47:in `for_pwd'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/cli.rb:147:in `apply_default_formatter'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/cli.rb:47:in `run'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/exe/rubocop:19:in `block in <top (required)>'
/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/benchmark.rb:311:in `realtime'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/exe/rubocop:19:in `<top (required)>'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/rubocop:25:in `load'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/rubocop:25:in `<main>'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/ruby_executable_hooks:22:in `eval'
/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/ruby_executable_hooks:22:in `<main>'
```
This looks closely related to #2727.
For what it's worth, I did confirm that the user actually has a working `/dev/null` on their system.
### pre-commit --version
pre-commit 3.0.2
### .pre-commit-config.yaml
```yaml
repos:
- repo: https://github.com/rubocop/rubocop.git
rev: '0f7416a0b3ea4a3d4edb1f2091ce8706ea3e6640'
hooks:
- id: rubocop
additional_dependencies: ["standard:1.22.1"]
```
### ~/.cache/pre-commit/pre-commit.log (if present)
_No response_
</issue>
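One plausible reading of the traceback — not confirmed in the report, so treat it as an assumption — is that Bundler refuses `/dev/null` as a Gemfile because it is a character device rather than a regular file, which would explain why the error appears even though the user confirmed `/dev/null` exists. The hook environment in `ruby.py` below sets `BUNDLE_GEMFILE` to `os.devnull`, and the fix at the end of this entry simply drops that override. A minimal Python sketch of the regular-file versus character-device distinction:

```python
import os
import stat

# os.devnull is "/dev/null" on POSIX: it exists, but it is a character
# device, not a regular file, so a "must be a regular file" check would
# report it as not found.
info = os.stat(os.devnull)
print(os.path.exists(os.devnull))   # True
print(stat.S_ISREG(info.st_mode))   # False
print(stat.S_ISCHR(info.st_mode))   # True
```

If that reading holds, removing the `('BUNDLE_GEMFILE', os.devnull)` entry from `get_env_patch` sidesteps the Gemfile lookup entirely, while `BUNDLE_IGNORE_CONFIG=1` still keeps the hook's gem environment isolated.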
<code>
[start of pre_commit/languages/ruby.py]
1 from __future__ import annotations
2
3 import contextlib
4 import functools
5 import os.path
6 import shutil
7 import tarfile
8 from typing import Generator
9 from typing import Sequence
10
11 import pre_commit.constants as C
12 from pre_commit.envcontext import envcontext
13 from pre_commit.envcontext import PatchesT
14 from pre_commit.envcontext import UNSET
15 from pre_commit.envcontext import Var
16 from pre_commit.languages import helpers
17 from pre_commit.prefix import Prefix
18 from pre_commit.util import CalledProcessError
19 from pre_commit.util import resource_bytesio
20
21 ENVIRONMENT_DIR = 'rbenv'
22 health_check = helpers.basic_health_check
23 run_hook = helpers.basic_run_hook
24
25
26 @functools.lru_cache(maxsize=1)
27 def get_default_version() -> str:
28 if all(helpers.exe_exists(exe) for exe in ('ruby', 'gem')):
29 return 'system'
30 else:
31 return C.DEFAULT
32
33
34 def get_env_patch(
35 venv: str,
36 language_version: str,
37 ) -> PatchesT:
38 patches: PatchesT = (
39 ('GEM_HOME', os.path.join(venv, 'gems')),
40 ('GEM_PATH', UNSET),
41 ('BUNDLE_IGNORE_CONFIG', '1'),
42 ('BUNDLE_GEMFILE', os.devnull),
43 )
44 if language_version == 'system':
45 patches += (
46 (
47 'PATH', (
48 os.path.join(venv, 'gems', 'bin'), os.pathsep,
49 Var('PATH'),
50 ),
51 ),
52 )
53 else: # pragma: win32 no cover
54 patches += (
55 ('RBENV_ROOT', venv),
56 (
57 'PATH', (
58 os.path.join(venv, 'gems', 'bin'), os.pathsep,
59 os.path.join(venv, 'shims'), os.pathsep,
60 os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),
61 ),
62 ),
63 )
64 if language_version not in {'system', 'default'}: # pragma: win32 no cover
65 patches += (('RBENV_VERSION', language_version),)
66
67 return patches
68
69
70 @contextlib.contextmanager
71 def in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:
72 envdir = helpers.environment_dir(prefix, ENVIRONMENT_DIR, version)
73 with envcontext(get_env_patch(envdir, version)):
74 yield
75
76
77 def _extract_resource(filename: str, dest: str) -> None:
78 with resource_bytesio(filename) as bio:
79 with tarfile.open(fileobj=bio) as tf:
80 tf.extractall(dest)
81
82
83 def _install_rbenv(
84 prefix: Prefix,
85 version: str,
86 ) -> None: # pragma: win32 no cover
87 envdir = helpers.environment_dir(prefix, ENVIRONMENT_DIR, version)
88
89 _extract_resource('rbenv.tar.gz', prefix.path('.'))
90 shutil.move(prefix.path('rbenv'), envdir)
91
92 # Only install ruby-build if the version is specified
93 if version != C.DEFAULT:
94 plugins_dir = os.path.join(envdir, 'plugins')
95 _extract_resource('ruby-download.tar.gz', plugins_dir)
96 _extract_resource('ruby-build.tar.gz', plugins_dir)
97
98
99 def _install_ruby(
100 prefix: Prefix,
101 version: str,
102 ) -> None: # pragma: win32 no cover
103 try:
104 helpers.run_setup_cmd(prefix, ('rbenv', 'download', version))
105 except CalledProcessError: # pragma: no cover (usually find with download)
106 # Failed to download from mirror for some reason, build it instead
107 helpers.run_setup_cmd(prefix, ('rbenv', 'install', version))
108
109
110 def install_environment(
111 prefix: Prefix, version: str, additional_dependencies: Sequence[str],
112 ) -> None:
113 if version != 'system': # pragma: win32 no cover
114 _install_rbenv(prefix, version)
115 with in_env(prefix, version):
116 # Need to call this before installing so rbenv's directories
117 # are set up
118 helpers.run_setup_cmd(prefix, ('rbenv', 'init', '-'))
119 if version != C.DEFAULT:
120 _install_ruby(prefix, version)
121 # Need to call this after installing to set up the shims
122 helpers.run_setup_cmd(prefix, ('rbenv', 'rehash'))
123
124 with in_env(prefix, version):
125 helpers.run_setup_cmd(
126 prefix, ('gem', 'build', *prefix.star('.gemspec')),
127 )
128 helpers.run_setup_cmd(
129 prefix,
130 (
131 'gem', 'install',
132 '--no-document', '--no-format-executable',
133 '--no-user-install',
134 *prefix.star('.gem'), *additional_dependencies,
135 ),
136 )
137
[end of pre_commit/languages/ruby.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py
--- a/pre_commit/languages/ruby.py
+++ b/pre_commit/languages/ruby.py
@@ -39,7 +39,6 @@
('GEM_HOME', os.path.join(venv, 'gems')),
('GEM_PATH', UNSET),
('BUNDLE_IGNORE_CONFIG', '1'),
- ('BUNDLE_GEMFILE', os.devnull),
)
if language_version == 'system':
patches += (
| {"golden_diff": "diff --git a/pre_commit/languages/ruby.py b/pre_commit/languages/ruby.py\n--- a/pre_commit/languages/ruby.py\n+++ b/pre_commit/languages/ruby.py\n@@ -39,7 +39,6 @@\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('GEM_PATH', UNSET),\n ('BUNDLE_IGNORE_CONFIG', '1'),\n- ('BUNDLE_GEMFILE', os.devnull),\n )\n if language_version == 'system':\n patches += (\n", "issue": "/dev/null not found with pre-commit 3.0.2\n### search you tried in the issue tracker\n\n/dev/null\n\n### describe your issue\n\nAfter upgrading to pre-commit 3.0.2, one of my users (on up-to-date macos) is reporting that invoking ruby actions fails with `/dev/null` not found.\r\n\r\nRelevant output:\r\n\r\n```\r\nrubocop..................................................................Failed\r\n- hook id: rubocop\r\n- exit code: 2\r\n\r\n/dev/null not found\r\n/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/bundler/definition.rb:36:in `build'\r\n/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/bundler.rb:207:in `definition'\r\n/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/bundler.rb:190:in `load'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:270:in `gem_config_path'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:65:in `block (2 levels) in resolve_inheritance_from_gems'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:63:in `reverse_each'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:63:in `block in resolve_inheritance_from_gems'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:57:in `each_pair'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader_resolver.rb:57:in `resolve_inheritance_from_gems'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader.rb:49:in `load_file'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_loader.rb:104:in `configuration_from_file'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_store.rb:68:in `for_dir'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/config_store.rb:47:in `for_pwd'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/cli.rb:147:in `apply_default_formatter'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/lib/rubocop/cli.rb:47:in `run'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/exe/rubocop:19:in `block in <top (required)>'\r\n/Users/user/.rvm/rubies/ruby-3.1.3/lib/ruby/3.1.0/benchmark.rb:311:in `realtime'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/gems/rubocop-1.42.0/exe/rubocop:19:in `<top (required)>'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/rubocop:25:in `load'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/rubocop:25:in `<main>'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/ruby_executable_hooks:22:in `eval'\r\n/Users/user/.cache/pre-commit/repojquw8lys/rbenv-default/gems/bin/ruby_executable_hooks:22:in 
`<main>'\r\n```\r\n\r\nThis looks closely related to #2727.\r\n\r\nFor what it's worth, I did confirm that the user actually has a working `/dev/null` on their system.\n\n### pre-commit --version\n\npre-commit 3.0.2\n\n### .pre-commit-config.yaml\n\n```yaml\nrepos:\r\n- repo: https://github.com/rubocop/rubocop.git\r\n rev: '0f7416a0b3ea4a3d4edb1f2091ce8706ea3e6640'\r\n hooks:\r\n - id: rubocop\r\n additional_dependencies: [\"standard:1.22.1\"]\n```\n\n\n### ~/.cache/pre-commit/pre-commit.log (if present)\n\n_No response_\n", "before_files": [{"content": "from __future__ import annotations\n\nimport contextlib\nimport functools\nimport os.path\nimport shutil\nimport tarfile\nfrom typing import Generator\nfrom typing import Sequence\n\nimport pre_commit.constants as C\nfrom pre_commit.envcontext import envcontext\nfrom pre_commit.envcontext import PatchesT\nfrom pre_commit.envcontext import UNSET\nfrom pre_commit.envcontext import Var\nfrom pre_commit.languages import helpers\nfrom pre_commit.prefix import Prefix\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import resource_bytesio\n\nENVIRONMENT_DIR = 'rbenv'\nhealth_check = helpers.basic_health_check\nrun_hook = helpers.basic_run_hook\n\n\[email protected]_cache(maxsize=1)\ndef get_default_version() -> str:\n if all(helpers.exe_exists(exe) for exe in ('ruby', 'gem')):\n return 'system'\n else:\n return C.DEFAULT\n\n\ndef get_env_patch(\n venv: str,\n language_version: str,\n) -> PatchesT:\n patches: PatchesT = (\n ('GEM_HOME', os.path.join(venv, 'gems')),\n ('GEM_PATH', UNSET),\n ('BUNDLE_IGNORE_CONFIG', '1'),\n ('BUNDLE_GEMFILE', os.devnull),\n )\n if language_version == 'system':\n patches += (\n (\n 'PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n Var('PATH'),\n ),\n ),\n )\n else: # pragma: win32 no cover\n patches += (\n ('RBENV_ROOT', venv),\n (\n 'PATH', (\n os.path.join(venv, 'gems', 'bin'), os.pathsep,\n os.path.join(venv, 'shims'), os.pathsep,\n os.path.join(venv, 'bin'), os.pathsep, Var('PATH'),\n ),\n ),\n )\n if language_version not in {'system', 'default'}: # pragma: win32 no cover\n patches += (('RBENV_VERSION', language_version),)\n\n return patches\n\n\[email protected]\ndef in_env(prefix: Prefix, version: str) -> Generator[None, None, None]:\n envdir = helpers.environment_dir(prefix, ENVIRONMENT_DIR, version)\n with envcontext(get_env_patch(envdir, version)):\n yield\n\n\ndef _extract_resource(filename: str, dest: str) -> None:\n with resource_bytesio(filename) as bio:\n with tarfile.open(fileobj=bio) as tf:\n tf.extractall(dest)\n\n\ndef _install_rbenv(\n prefix: Prefix,\n version: str,\n) -> None: # pragma: win32 no cover\n envdir = helpers.environment_dir(prefix, ENVIRONMENT_DIR, version)\n\n _extract_resource('rbenv.tar.gz', prefix.path('.'))\n shutil.move(prefix.path('rbenv'), envdir)\n\n # Only install ruby-build if the version is specified\n if version != C.DEFAULT:\n plugins_dir = os.path.join(envdir, 'plugins')\n _extract_resource('ruby-download.tar.gz', plugins_dir)\n _extract_resource('ruby-build.tar.gz', plugins_dir)\n\n\ndef _install_ruby(\n prefix: Prefix,\n version: str,\n) -> None: # pragma: win32 no cover\n try:\n helpers.run_setup_cmd(prefix, ('rbenv', 'download', version))\n except CalledProcessError: # pragma: no cover (usually find with download)\n # Failed to download from mirror for some reason, build it instead\n helpers.run_setup_cmd(prefix, ('rbenv', 'install', version))\n\n\ndef install_environment(\n prefix: Prefix, version: str, additional_dependencies: 
Sequence[str],\n) -> None:\n if version != 'system': # pragma: win32 no cover\n _install_rbenv(prefix, version)\n with in_env(prefix, version):\n # Need to call this before installing so rbenv's directories\n # are set up\n helpers.run_setup_cmd(prefix, ('rbenv', 'init', '-'))\n if version != C.DEFAULT:\n _install_ruby(prefix, version)\n # Need to call this after installing to set up the shims\n helpers.run_setup_cmd(prefix, ('rbenv', 'rehash'))\n\n with in_env(prefix, version):\n helpers.run_setup_cmd(\n prefix, ('gem', 'build', *prefix.star('.gemspec')),\n )\n helpers.run_setup_cmd(\n prefix,\n (\n 'gem', 'install',\n '--no-document', '--no-format-executable',\n '--no-user-install',\n *prefix.star('.gem'), *additional_dependencies,\n ),\n )\n", "path": "pre_commit/languages/ruby.py"}]} | 3,085 | 114 |
gh_patches_debug_26724 | rasdani/github-patches | git_diff | ivy-llc__ivy-14028 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
diag_indices_from
mask_indices
</issue>
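The two names refer to NumPy indexing helpers that the jax.numpy frontend is asked to mirror. A short reference sketch of their semantics, written against plain NumPy (not the ivy frontend) purely to illustrate the expected behaviour:

```python
import numpy as np

a = np.arange(16).reshape(4, 4)

# diag_indices_from: index arrays addressing the main diagonal of `a`.
rows, cols = np.diag_indices_from(a)
print(a[rows, cols])    # [ 0  5 10 15]

# mask_indices: indices where mask_func(ones((n, n)), k) is nonzero,
# e.g. the strictly upper-triangular entries for np.triu with k=1.
iu = np.mask_indices(4, np.triu, k=1)
print(a[iu])            # [ 1  2  3  6  7 11]
```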
<code>
[start of ivy/functional/frontends/jax/numpy/indexing.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import (
4 to_ivy_arrays_and_back,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def diagonal(a, offset=0, axis1=0, axis2=1):
10 return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)
11
12
13 @to_ivy_arrays_and_back
14 def diag(v, k=0):
15 return ivy.diag(v, k=k)
16
17
18 @to_ivy_arrays_and_back
19 def diag_indices(n, ndim=2):
20 idx = ivy.arange(n, dtype=int)
21 return (idx,) * ndim
22
23
24 # take_along_axis
25 @to_ivy_arrays_and_back
26 def take_along_axis(arr, indices, axis, mode="fill"):
27 return ivy.take_along_axis(arr, indices, axis, mode=mode)
28
29
30 @to_ivy_arrays_and_back
31 def tril_indices(n_rows, n_cols=None, k=0):
32 return ivy.tril_indices(n_rows, n_cols, k)
33
34
35 @to_ivy_arrays_and_back
36 def triu_indices(n, k=0, m=None):
37 return ivy.triu_indices(n, m, k)
38
39
40 @to_ivy_arrays_and_back
41 def triu_indices_from(arr, k=0):
42 return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)
43
44
45 def tril_indices_from(arr, k=0):
46 return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
47
48
49 # unravel_index
50 @to_ivy_arrays_and_back
51 def unravel_index(indices, shape):
52 ret = [x.astype("int64") for x in ivy.unravel_index(indices, shape)]
53 return tuple(ret)
54
[end of ivy/functional/frontends/jax/numpy/indexing.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py
--- a/ivy/functional/frontends/jax/numpy/indexing.py
+++ b/ivy/functional/frontends/jax/numpy/indexing.py
@@ -1,3 +1,6 @@
+# global
+import inspect
+
# local
import ivy
from ivy.functional.frontends.jax.func_wrapper import (
@@ -42,6 +45,7 @@
return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)
+@to_ivy_arrays_and_back
def tril_indices_from(arr, k=0):
return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)
@@ -51,3 +55,28 @@
def unravel_index(indices, shape):
ret = [x.astype("int64") for x in ivy.unravel_index(indices, shape)]
return tuple(ret)
+
+
+@to_ivy_arrays_and_back
+def mask_indices(n, mask_func, k=0):
+ mask_func_obj = inspect.unwrap(mask_func)
+ mask_func_name = mask_func_obj.__name__
+ try:
+ ivy_mask_func_obj = getattr(ivy.functional.frontends.jax.numpy, mask_func_name)
+ a = ivy.ones((n, n))
+ mask = ivy_mask_func_obj(a, k=k)
+ indices = ivy.argwhere(mask.ivy_array)
+ return indices[:, 0], indices[:, 1]
+ except AttributeError as e:
+ print(f"Attribute error: {e}")
+
+
+@to_ivy_arrays_and_back
+def diag_indices_from(arr):
+ print(arr)
+ n = arr.shape[0]
+ ndim = ivy.get_num_dims(arr)
+ if not all(arr.shape[i] == n for i in range(ndim)):
+ raise ValueError("All dimensions of input must be of equal length")
+ idx = ivy.arange(n, dtype=int)
+ return (idx,) * ndim
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/indexing.py b/ivy/functional/frontends/jax/numpy/indexing.py\n--- a/ivy/functional/frontends/jax/numpy/indexing.py\n+++ b/ivy/functional/frontends/jax/numpy/indexing.py\n@@ -1,3 +1,6 @@\n+# global\n+import inspect\n+\n # local\n import ivy\n from ivy.functional.frontends.jax.func_wrapper import (\n@@ -42,6 +45,7 @@\n return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)\n \n \n+@to_ivy_arrays_and_back\n def tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n \n@@ -51,3 +55,28 @@\n def unravel_index(indices, shape):\n ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n return tuple(ret)\n+\n+\n+@to_ivy_arrays_and_back\n+def mask_indices(n, mask_func, k=0):\n+ mask_func_obj = inspect.unwrap(mask_func)\n+ mask_func_name = mask_func_obj.__name__\n+ try:\n+ ivy_mask_func_obj = getattr(ivy.functional.frontends.jax.numpy, mask_func_name)\n+ a = ivy.ones((n, n))\n+ mask = ivy_mask_func_obj(a, k=k)\n+ indices = ivy.argwhere(mask.ivy_array)\n+ return indices[:, 0], indices[:, 1]\n+ except AttributeError as e:\n+ print(f\"Attribute error: {e}\")\n+\n+\n+@to_ivy_arrays_and_back\n+def diag_indices_from(arr):\n+ print(arr)\n+ n = arr.shape[0]\n+ ndim = ivy.get_num_dims(arr)\n+ if not all(arr.shape[i] == n for i in range(ndim)):\n+ raise ValueError(\"All dimensions of input must be of equal length\")\n+ idx = ivy.arange(n, dtype=int)\n+ return (idx,) * ndim\n", "issue": "diag_indices_from\n\nmask_indices\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@to_ivy_arrays_and_back\ndef diagonal(a, offset=0, axis1=0, axis2=1):\n return ivy.diagonal(a, offset=offset, axis1=axis1, axis2=axis2)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag_indices(n, ndim=2):\n idx = ivy.arange(n, dtype=int)\n return (idx,) * ndim\n\n\n# take_along_axis\n@to_ivy_arrays_and_back\ndef take_along_axis(arr, indices, axis, mode=\"fill\"):\n return ivy.take_along_axis(arr, indices, axis, mode=mode)\n\n\n@to_ivy_arrays_and_back\ndef tril_indices(n_rows, n_cols=None, k=0):\n return ivy.tril_indices(n_rows, n_cols, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices(n, k=0, m=None):\n return ivy.triu_indices(n, m, k)\n\n\n@to_ivy_arrays_and_back\ndef triu_indices_from(arr, k=0):\n return ivy.triu_indices(arr.shape[-2], arr.shape[-1], k)\n\n\ndef tril_indices_from(arr, k=0):\n return ivy.tril_indices(arr.shape[-2], arr.shape[-1], k)\n\n\n# unravel_index\n@to_ivy_arrays_and_back\ndef unravel_index(indices, shape):\n ret = [x.astype(\"int64\") for x in ivy.unravel_index(indices, shape)]\n return tuple(ret)\n", "path": "ivy/functional/frontends/jax/numpy/indexing.py"}]} | 1,052 | 464 |
gh_patches_debug_20739 | rasdani/github-patches | git_diff | googleapis__python-bigquery-725 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
free(): invalid pointer Aborted error when importing from google.cloud import bigquery in python
I started facing this peculiar issue while importing BigQuery in Python scripts as well as in the terminal. 
Can anyone help me out here?
I even tried to downgrade the bigquery package from 2.20 to 2.1, but it still gives me the same error.
```
root@f6968ce75f9d:~# python
Python 3.7.10 (default, May 28 2021, 10:36:10)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from google.cloud import bigquery
free(): invalid pointer
Aborted
```
<img width="858" alt="Screenshot 2021-06-07 at 10 31 45 PM" src="https://user-images.githubusercontent.com/20011105/121060374-2c9cb000-c7e0-11eb-8819-8153137ac41a.png">
</issue>
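The abort happens in native code at import time, which suggests a compiled dependency rather than `google-cloud-bigquery` itself; the fix below pins a newer `grpcio`. A small diagnostic sketch — the `1.38.1` threshold is taken from the patch, the check reads package metadata only (so it does not import the native extension that crashes), and `importlib.metadata` assumes Python 3.8+ (on 3.7 use the `importlib_metadata` backport or `pkg_resources`):

```python
from importlib.metadata import version   # Python 3.8+
from packaging.version import parse      # `packaging` is already a dependency in setup.py

installed = version("grpcio")
print("grpcio", installed)
if parse(installed) < parse("1.38.1"):
    print("grpcio predates 1.38.1, the minimum the fix requires")
```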
<code>
[start of setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = "google-cloud-bigquery"
24 description = "Google BigQuery API client library"
25
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = "Development Status :: 5 - Production/Stable"
31 dependencies = [
32 "google-api-core[grpc] >= 1.29.0, < 2.0.0dev",
33 "proto-plus >= 1.10.0",
34 "google-cloud-core >= 1.4.1, < 2.0dev",
35 "google-resumable-media >= 0.6.0, < 2.0dev",
36 "packaging >= 14.3",
37 "protobuf >= 3.12.0",
38 "requests >= 2.18.0, < 3.0.0dev",
39 ]
40 extras = {
41 "bqstorage": [
42 "google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev",
43 # Due to an issue in pip's dependency resolver, the `grpc` extra is not
44 # installed, even though `google-cloud-bigquery-storage` specifies it
45 # as `google-api-core[grpc]`. We thus need to explicitly specify it here.
46 # See: https://github.com/googleapis/python-bigquery/issues/83 The
47 # grpc.Channel.close() method isn't added until 1.32.0.
48 # https://github.com/grpc/grpc/pull/15254
49 "grpcio >= 1.32.0, < 2.0dev",
50 "pyarrow >= 1.0.0, < 5.0dev",
51 ],
52 "pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 5.0dev"],
53 "bignumeric_type": ["pyarrow >= 3.0.0, < 5.0dev"],
54 "tqdm": ["tqdm >= 4.7.4, <5.0.0dev"],
55 "opentelemetry": [
56 "opentelemetry-api >= 0.11b0",
57 "opentelemetry-sdk >= 0.11b0",
58 "opentelemetry-instrumentation >= 0.11b0",
59 ],
60 }
61
62 all_extras = []
63
64 for extra in extras:
65 # Exclude this extra from all to avoid overly strict dependencies on core
66 # libraries such as pyarrow.
67 # https://github.com/googleapis/python-bigquery/issues/563
68 if extra in {"bignumeric_type"}:
69 continue
70 all_extras.extend(extras[extra])
71
72 extras["all"] = all_extras
73
74 # Setup boilerplate below this line.
75
76 package_root = os.path.abspath(os.path.dirname(__file__))
77
78 readme_filename = os.path.join(package_root, "README.rst")
79 with io.open(readme_filename, encoding="utf-8") as readme_file:
80 readme = readme_file.read()
81
82 version = {}
83 with open(os.path.join(package_root, "google/cloud/bigquery/version.py")) as fp:
84 exec(fp.read(), version)
85 version = version["__version__"]
86
87 # Only include packages under the 'google' namespace. Do not include tests,
88 # benchmarks, etc.
89 packages = [
90 package
91 for package in setuptools.PEP420PackageFinder.find()
92 if package.startswith("google")
93 ]
94
95 # Determine which namespaces are needed.
96 namespaces = ["google"]
97 if "google.cloud" in packages:
98 namespaces.append("google.cloud")
99
100
101 setuptools.setup(
102 name=name,
103 version=version,
104 description=description,
105 long_description=readme,
106 author="Google LLC",
107 author_email="[email protected]",
108 license="Apache 2.0",
109 url="https://github.com/googleapis/python-bigquery",
110 classifiers=[
111 release_status,
112 "Intended Audience :: Developers",
113 "License :: OSI Approved :: Apache Software License",
114 "Programming Language :: Python",
115 "Programming Language :: Python :: 3",
116 "Programming Language :: Python :: 3.6",
117 "Programming Language :: Python :: 3.7",
118 "Programming Language :: Python :: 3.8",
119 "Programming Language :: Python :: 3.9",
120 "Operating System :: OS Independent",
121 "Topic :: Internet",
122 ],
123 platforms="Posix; MacOS X; Windows",
124 packages=packages,
125 namespace_packages=namespaces,
126 install_requires=dependencies,
127 extras_require=extras,
128 python_requires=">=3.6, <3.10",
129 include_package_data=True,
130 zip_safe=False,
131 )
132
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -29,6 +29,7 @@
# 'Development Status :: 5 - Production/Stable'
release_status = "Development Status :: 5 - Production/Stable"
dependencies = [
+ "grpcio >= 1.38.1, < 2.0dev", # https://github.com/googleapis/python-bigquery/issues/695
"google-api-core[grpc] >= 1.29.0, < 2.0.0dev",
"proto-plus >= 1.10.0",
"google-cloud-core >= 1.4.1, < 2.0dev",
@@ -46,7 +47,7 @@
# See: https://github.com/googleapis/python-bigquery/issues/83 The
# grpc.Channel.close() method isn't added until 1.32.0.
# https://github.com/grpc/grpc/pull/15254
- "grpcio >= 1.32.0, < 2.0dev",
+ "grpcio >= 1.38.1, < 2.0dev",
"pyarrow >= 1.0.0, < 5.0dev",
],
"pandas": ["pandas>=0.23.0", "pyarrow >= 1.0.0, < 5.0dev"],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -29,6 +29,7 @@\n # 'Development Status :: 5 - Production/Stable'\n release_status = \"Development Status :: 5 - Production/Stable\"\n dependencies = [\n+ \"grpcio >= 1.38.1, < 2.0dev\", # https://github.com/googleapis/python-bigquery/issues/695\n \"google-api-core[grpc] >= 1.29.0, < 2.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n@@ -46,7 +47,7 @@\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n- \"grpcio >= 1.32.0, < 2.0dev\",\n+ \"grpcio >= 1.38.1, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 5.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 5.0dev\"],\n", "issue": "free(): invalid pointer Aborted error when importing from google.cloud import bigquery in python\nI started facing this peculiar issue while import importing BigQuery in python scripts as well as in the terminal. \r\nCan anyone help me out here?\r\nI even tried to downgrade the bigquery package from 2.20 to 2.1 its still give me the same error\r\n\r\n```\r\nroot@f6968ce75f9d:~# python \r\nPython 3.7.10 (default, May 28 2021, 10:36:10) \r\n[GCC 8.3.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from google.cloud import bigquery\r\nfree(): invalid pointer\r\nAborted\r\n```\r\n\r\n<img width=\"858\" alt=\"Screenshot 2021-06-07 at 10 31 45 PM\" src=\"https://user-images.githubusercontent.com/20011105/121060374-2c9cb000-c7e0-11eb-8819-8153137ac41a.png\">\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = \"google-cloud-bigquery\"\ndescription = \"Google BigQuery API client library\"\n\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = \"Development Status :: 5 - Production/Stable\"\ndependencies = [\n \"google-api-core[grpc] >= 1.29.0, < 2.0.0dev\",\n \"proto-plus >= 1.10.0\",\n \"google-cloud-core >= 1.4.1, < 2.0dev\",\n \"google-resumable-media >= 0.6.0, < 2.0dev\",\n \"packaging >= 14.3\",\n \"protobuf >= 3.12.0\",\n \"requests >= 2.18.0, < 3.0.0dev\",\n]\nextras = {\n \"bqstorage\": [\n \"google-cloud-bigquery-storage >= 2.0.0, <3.0.0dev\",\n # Due to an issue in pip's dependency resolver, the `grpc` extra is not\n # installed, even though `google-cloud-bigquery-storage` specifies it\n # as `google-api-core[grpc]`. 
We thus need to explicitly specify it here.\n # See: https://github.com/googleapis/python-bigquery/issues/83 The\n # grpc.Channel.close() method isn't added until 1.32.0.\n # https://github.com/grpc/grpc/pull/15254\n \"grpcio >= 1.32.0, < 2.0dev\",\n \"pyarrow >= 1.0.0, < 5.0dev\",\n ],\n \"pandas\": [\"pandas>=0.23.0\", \"pyarrow >= 1.0.0, < 5.0dev\"],\n \"bignumeric_type\": [\"pyarrow >= 3.0.0, < 5.0dev\"],\n \"tqdm\": [\"tqdm >= 4.7.4, <5.0.0dev\"],\n \"opentelemetry\": [\n \"opentelemetry-api >= 0.11b0\",\n \"opentelemetry-sdk >= 0.11b0\",\n \"opentelemetry-instrumentation >= 0.11b0\",\n ],\n}\n\nall_extras = []\n\nfor extra in extras:\n # Exclude this extra from all to avoid overly strict dependencies on core\n # libraries such as pyarrow.\n # https://github.com/googleapis/python-bigquery/issues/563\n if extra in {\"bignumeric_type\"}:\n continue\n all_extras.extend(extras[extra])\n\nextras[\"all\"] = all_extras\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, \"README.rst\")\nwith io.open(readme_filename, encoding=\"utf-8\") as readme_file:\n readme = readme_file.read()\n\nversion = {}\nwith open(os.path.join(package_root, \"google/cloud/bigquery/version.py\")) as fp:\n exec(fp.read(), version)\nversion = version[\"__version__\"]\n\n# Only include packages under the 'google' namespace. Do not include tests,\n# benchmarks, etc.\npackages = [\n package\n for package in setuptools.PEP420PackageFinder.find()\n if package.startswith(\"google\")\n]\n\n# Determine which namespaces are needed.\nnamespaces = [\"google\"]\nif \"google.cloud\" in packages:\n namespaces.append(\"google.cloud\")\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author=\"Google LLC\",\n author_email=\"[email protected]\",\n license=\"Apache 2.0\",\n url=\"https://github.com/googleapis/python-bigquery\",\n classifiers=[\n release_status,\n \"Intended Audience :: Developers\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: OS Independent\",\n \"Topic :: Internet\",\n ],\n platforms=\"Posix; MacOS X; Windows\",\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n python_requires=\">=3.6, <3.10\",\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "setup.py"}]} | 2,262 | 322 |
gh_patches_debug_3574 | rasdani/github-patches | git_diff | google__TensorNetwork-746 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add dynamic programming contractor of opt_einsum
We should update this asap
</issue>
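opt_einsum exposes its dynamic-programming path search as the `'dp'` optimizer (assuming opt_einsum >= 3.2, where it was added). A standalone sketch of what the requested contractor delegates to:

```python
import numpy as np
import opt_einsum as oe

# Three small random tensors; the shapes are arbitrary, for illustration only.
a, b, c = (np.random.rand(8, 8) for _ in range(3))

# 'dp' selects the dynamic-programming path optimizer.
path, info = oe.contract_path("ij,jk,kl->il", a, b, c, optimize="dp")
print(path)
print(info)
```

The patch at the end of this entry wires this search into `optimal()` via `opt_einsum.paths.dynamic_programming`.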
<code>
[start of tensornetwork/contractors/opt_einsum_paths/path_contractors.py]
1 # pylint: disable=cyclic-import
2 # Copyright 2019 The TensorNetwork Authors
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Contractors based on `opt_einsum`'s path algorithms."""
16
17 import functools
18 import opt_einsum
19 # pylint: disable=line-too-long
20 from tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling
21 # pylint: disable=line-too-long
22 from tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between
23 from tensornetwork.network_components import Edge, AbstractNode
24 from tensornetwork.contractors.opt_einsum_paths import utils
25 from typing import Any, Optional, Sequence, Iterable
26
27 #TODO (martin): add return types of functions back once TensorNetwork is gone
28 # remove _base_network
29 # _base_nodes -> base
30
31
32 def base(nodes: Iterable[AbstractNode],
33 algorithm: utils.Algorithm,
34 output_edge_order: Optional[Sequence[Edge]] = None,
35 ignore_edge_order: bool = False) -> AbstractNode:
36 """Base method for all `opt_einsum` contractors.
37
38 Args:
39 nodes: A collection of connected nodes.
40 algorithm: `opt_einsum` contraction method to use.
41 output_edge_order: An optional list of edges. Edges of the
42 final node in `nodes_set`
43 are reordered into `output_edge_order`;
44 if final node has more than one edge,
45 `output_edge_order` must be pronvided.
46 ignore_edge_order: An option to ignore the output edge
47 order.
48
49 Returns:
50 Final node after full contraction.
51 """
52 nodes_set = set(nodes)
53 edges = get_all_edges(nodes_set)
54 #output edge order has to be determinded before any contraction
55 #(edges are refreshed after contractions)
56
57 if not ignore_edge_order:
58 if output_edge_order is None:
59 output_edge_order = list(get_subgraph_dangling(nodes))
60 if len(output_edge_order) > 1:
61 raise ValueError("The final node after contraction has more than "
62 "one remaining edge. In this case `output_edge_order` "
63 "has to be provided.")
64
65 if set(output_edge_order) != get_subgraph_dangling(nodes):
66 raise ValueError("output edges are not equal to the remaining "
67 "non-contracted edges of the final node.")
68
69 for edge in edges:
70 if not edge.is_disabled: #if its disabled we already contracted it
71 if edge.is_trace():
72 nodes_set.remove(edge.node1)
73 nodes_set.add(contract_parallel(edge))
74
75 if len(nodes_set) == 1:
76 # There's nothing to contract.
77 if ignore_edge_order:
78 return list(nodes_set)[0]
79 return list(nodes_set)[0].reorder_edges(output_edge_order)
80
81 # Then apply `opt_einsum`'s algorithm
82 path, nodes = utils.get_path(nodes_set, algorithm)
83 for a, b in path:
84 new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)
85 nodes.append(new_node)
86 nodes = utils.multi_remove(nodes, [a, b])
87
88 # if the final node has more than one edge,
89 # output_edge_order has to be specified
90 final_node = nodes[0] # nodes were connected, we checked this
91 if not ignore_edge_order:
92 final_node.reorder_edges(output_edge_order)
93 return final_node
94
95
96 def optimal(nodes: Iterable[AbstractNode],
97 output_edge_order: Optional[Sequence[Edge]] = None,
98 memory_limit: Optional[int] = None,
99 ignore_edge_order: bool = False) -> AbstractNode:
100 """Optimal contraction order via `opt_einsum`.
101
102 This method will find the truly optimal contraction order via
103 `opt_einsum`'s depth first search algorithm. Since this search is
104 exhaustive, if your network is large (n>10), then the search may
105 take longer than just contracting in a suboptimal way.
106
107 Args:
108 nodes: an iterable of Nodes
109 output_edge_order: An optional list of edges.
110 Edges of the final node in `nodes_set`
111 are reordered into `output_edge_order`;
112 if final node has more than one edge,
113 `output_edge_order` must be provided.
114 memory_limit: Maximum number of elements in an array during contractions.
115 ignore_edge_order: An option to ignore the output edge order.
116
117 Returns:
118 The final node after full contraction.
119 """
120 alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)
121 return base(nodes, alg, output_edge_order, ignore_edge_order)
122
123
124 def branch(nodes: Iterable[AbstractNode],
125 output_edge_order: Optional[Sequence[Edge]] = None,
126 memory_limit: Optional[int] = None,
127 nbranch: Optional[int] = None,
128 ignore_edge_order: bool = False) -> AbstractNode:
129 """Branch contraction path via `opt_einsum`.
130
131 This method uses the DFS approach of `optimal` while sorting potential
132 contractions based on a heuristic cost, in order to reduce time spent
133 in exploring paths which are unlikely to be optimal.
134 More details on `branching path`_.
135
136 .. _branching path:
137 https://optimized-einsum.readthedocs.io/en/latest/branching_path.html
138
139 Args:
140 nodes: an iterable of Nodes
141 output_edge_order: An optional list of edges.
142 Edges of the final node in `nodes_set`
143 are reordered into `output_edge_order`;
144 if final node has more than one edge,
145 `output_edge_order` must be provided.
146 memory_limit: Maximum number of elements in an array during contractions.
147 nbranch: Number of best contractions to explore.
148 If None it explores all inner products starting with those that
149 have the best cost heuristic.
150 ignore_edge_order: An option to ignore the output edge order.
151
152 Returns:
153 The final node after full contraction.
154 """
155 alg = functools.partial(
156 opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)
157 return base(nodes, alg, output_edge_order, ignore_edge_order)
158
159
160 def greedy(nodes: Iterable[AbstractNode],
161 output_edge_order: Optional[Sequence[Edge]] = None,
162 memory_limit: Optional[int] = None,
163 ignore_edge_order: bool = False) -> AbstractNode:
164 """Greedy contraction path via `opt_einsum`.
165
166 This provides a more efficient strategy than `optimal` for finding
167 contraction paths in large networks. First contracts pairs of tensors
168 by finding the pair with the lowest cost at each step. Then it performs
169 the outer products. More details on `greedy path`_.
170
171 .. _greedy path:
172 https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html
173
174 Args:
175 nodes: an iterable of Nodes
176 output_edge_order: An optional list of edges.
177 Edges of the final node in `nodes_set`
178 are reordered into `output_edge_order`;
179 if final node has more than one edge,
180 `output_edge_order` must be provided.
181 memory_limit: Maximum number of elements in an array during contractions.
182 ignore_edge_order: An option to ignore the output edge order.
183
184 Returns:
185 The final node after full contraction.
186 """
187 alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)
188 return base(nodes, alg, output_edge_order, ignore_edge_order)
189
190
191 # pylint: disable=too-many-return-statements
192 def auto(nodes: Iterable[AbstractNode],
193 output_edge_order: Optional[Sequence[Edge]] = None,
194 memory_limit: Optional[int] = None,
195 ignore_edge_order: bool = False) -> AbstractNode:
196 """Chooses one of the above algorithms according to network size.
197
198 Default behavior is based on `opt_einsum`'s `auto` contractor.
199
200 Args:
201 nodes: A collection of connected nodes.
202 output_edge_order: An optional list of edges.
203 Edges of the final node in `nodes_set`
204 are reordered into `output_edge_order`;
205 if final node has more than one edge,
206 `output_edge_order` must be provided.
207 memory_limit: Maximum number of elements in an array during contractions.
208 ignore_edge_order: An option to ignore the output edge order.
209
210 Returns:
211 Final node after full contraction.
212 """
213
214 n = len(list(nodes)) #pytype thing
215 _nodes = nodes
216 if n <= 0:
217 raise ValueError("Cannot contract empty tensor network.")
218 if n == 1:
219 if not ignore_edge_order:
220 if output_edge_order is None:
221 output_edge_order = list(
222 (get_all_edges(_nodes) - get_all_nondangling(_nodes)))
223 if len(output_edge_order) > 1:
224 raise ValueError(
225 "The final node after contraction has more than "
226 "one dangling edge. In this case `output_edge_order` "
227 "has to be provided.")
228
229 edges = get_all_nondangling(_nodes)
230 if edges:
231 final_node = contract_parallel(edges.pop())
232 else:
233 final_node = list(_nodes)[0]
234 final_node.reorder_edges(output_edge_order)
235 if not ignore_edge_order:
236 final_node.reorder_edges(output_edge_order)
237 return final_node
238
239 if n < 5:
240 return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)
241 if n < 7:
242 return branch(
243 nodes,
244 output_edge_order=output_edge_order,
245 memory_limit=memory_limit,
246 ignore_edge_order=ignore_edge_order)
247 if n < 9:
248 return branch(
249 nodes,
250 output_edge_order=output_edge_order,
251 memory_limit=memory_limit,
252 nbranch=2,
253 ignore_edge_order=ignore_edge_order)
254 if n < 15:
255 return branch(
256 nodes,
257 output_edge_order=output_edge_order,
258 nbranch=1,
259 ignore_edge_order=ignore_edge_order)
260 return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)
261
262
263 def custom(nodes: Iterable[AbstractNode],
264 optimizer: Any,
265 output_edge_order: Sequence[Edge] = None,
266 memory_limit: Optional[int] = None,
267 ignore_edge_order: bool = False) -> AbstractNode:
268 """Uses a custom path optimizer created by the user to calculate paths.
269
270 The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.
271 See `custom paths`_.
272
273 .. _custom paths:
274 https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html
275
276 Args:
277 nodes: an iterable of Nodes
278 output_edge_order: An optional list of edges.
279 Edges of the final node in `nodes_set`
280 are reordered into `output_edge_order`;
281 if final node has more than one edge,
282 output_edge_order` must be provided.
283 optimizer: A custom `opt_einsum.PathOptimizer` object.
284 memory_limit: Maximum number of elements in an array during contractions.
285 ignore_edge_order: An option to ignore the output edge order.
286
287 Returns:
288 Final node after full contraction.
289 """
290 alg = functools.partial(optimizer, memory_limit=memory_limit)
291 return base(nodes, alg, output_edge_order, ignore_edge_order)
292
[end of tensornetwork/contractors/opt_einsum_paths/path_contractors.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
--- a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
+++ b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py
@@ -117,7 +117,8 @@
Returns:
The final node after full contraction.
"""
- alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)
+ alg = functools.partial(
+ opt_einsum.paths.dynamic_programming, memory_limit=memory_limit)
return base(nodes, alg, output_edge_order, ignore_edge_order)
| {"golden_diff": "diff --git a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n--- a/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n+++ b/tensornetwork/contractors/opt_einsum_paths/path_contractors.py\n@@ -117,7 +117,8 @@\n Returns:\n The final node after full contraction.\n \"\"\"\n- alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)\n+ alg = functools.partial(\n+ opt_einsum.paths.dynamic_programming, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n", "issue": "add dynamic programming contractor of opt_einsum\nWe should update this asap\n", "before_files": [{"content": "# pylint: disable=cyclic-import\n# Copyright 2019 The TensorNetwork Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Contractors based on `opt_einsum`'s path algorithms.\"\"\"\n\nimport functools\nimport opt_einsum\n# pylint: disable=line-too-long\nfrom tensornetwork.network_operations import check_connected, get_all_edges, get_subgraph_dangling\n# pylint: disable=line-too-long\nfrom tensornetwork.network_components import get_all_nondangling, contract_parallel, contract_between\nfrom tensornetwork.network_components import Edge, AbstractNode\nfrom tensornetwork.contractors.opt_einsum_paths import utils\nfrom typing import Any, Optional, Sequence, Iterable\n\n#TODO (martin): add return types of functions back once TensorNetwork is gone\n# remove _base_network\n# _base_nodes -> base\n\n\ndef base(nodes: Iterable[AbstractNode],\n algorithm: utils.Algorithm,\n output_edge_order: Optional[Sequence[Edge]] = None,\n ignore_edge_order: bool = False) -> AbstractNode:\n \"\"\"Base method for all `opt_einsum` contractors.\n\n Args:\n nodes: A collection of connected nodes.\n algorithm: `opt_einsum` contraction method to use.\n output_edge_order: An optional list of edges. Edges of the\n final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be pronvided.\n ignore_edge_order: An option to ignore the output edge\n order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n nodes_set = set(nodes)\n edges = get_all_edges(nodes_set)\n #output edge order has to be determinded before any contraction\n #(edges are refreshed after contractions)\n\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(get_subgraph_dangling(nodes))\n if len(output_edge_order) > 1:\n raise ValueError(\"The final node after contraction has more than \"\n \"one remaining edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n if set(output_edge_order) != get_subgraph_dangling(nodes):\n raise ValueError(\"output edges are not equal to the remaining \"\n \"non-contracted edges of the final node.\")\n\n for edge in edges:\n if not edge.is_disabled: #if its disabled we already contracted it\n if edge.is_trace():\n nodes_set.remove(edge.node1)\n nodes_set.add(contract_parallel(edge))\n\n if len(nodes_set) == 1:\n # There's nothing to contract.\n if ignore_edge_order:\n return list(nodes_set)[0]\n return list(nodes_set)[0].reorder_edges(output_edge_order)\n\n # Then apply `opt_einsum`'s algorithm\n path, nodes = utils.get_path(nodes_set, algorithm)\n for a, b in path:\n new_node = contract_between(nodes[a], nodes[b], allow_outer_product=True)\n nodes.append(new_node)\n nodes = utils.multi_remove(nodes, [a, b])\n\n # if the final node has more than one edge,\n # output_edge_order has to be specified\n final_node = nodes[0] # nodes were connected, we checked this\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n\ndef optimal(nodes: Iterable[AbstractNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> AbstractNode:\n \"\"\"Optimal contraction order via `opt_einsum`.\n\n This method will find the truly optimal contraction order via\n `opt_einsum`'s depth first search algorithm. Since this search is\n exhaustive, if your network is large (n>10), then the search may\n take longer than just contracting in a suboptimal way.\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.optimal, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef branch(nodes: Iterable[AbstractNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n nbranch: Optional[int] = None,\n ignore_edge_order: bool = False) -> AbstractNode:\n \"\"\"Branch contraction path via `opt_einsum`.\n\n This method uses the DFS approach of `optimal` while sorting potential\n contractions based on a heuristic cost, in order to reduce time spent\n in exploring paths which are unlikely to be optimal.\n More details on `branching path`_.\n\n .. 
_branching path:\n https://optimized-einsum.readthedocs.io/en/latest/branching_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n nbranch: Number of best contractions to explore.\n If None it explores all inner products starting with those that\n have the best cost heuristic.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(\n opt_einsum.paths.branch, memory_limit=memory_limit, nbranch=nbranch)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\ndef greedy(nodes: Iterable[AbstractNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> AbstractNode:\n \"\"\"Greedy contraction path via `opt_einsum`.\n\n This provides a more efficient strategy than `optimal` for finding\n contraction paths in large networks. First contracts pairs of tensors\n by finding the pair with the lowest cost at each step. Then it performs\n the outer products. More details on `greedy path`_.\n\n .. _greedy path:\n https://optimized-einsum.readthedocs.io/en/latest/greedy_path.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n The final node after full contraction.\n \"\"\"\n alg = functools.partial(opt_einsum.paths.greedy, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n\n\n# pylint: disable=too-many-return-statements\ndef auto(nodes: Iterable[AbstractNode],\n output_edge_order: Optional[Sequence[Edge]] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> AbstractNode:\n \"\"\"Chooses one of the above algorithms according to network size.\n\n Default behavior is based on `opt_einsum`'s `auto` contractor.\n\n Args:\n nodes: A collection of connected nodes.\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n `output_edge_order` must be provided.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n\n n = len(list(nodes)) #pytype thing\n _nodes = nodes\n if n <= 0:\n raise ValueError(\"Cannot contract empty tensor network.\")\n if n == 1:\n if not ignore_edge_order:\n if output_edge_order is None:\n output_edge_order = list(\n (get_all_edges(_nodes) - get_all_nondangling(_nodes)))\n if len(output_edge_order) > 1:\n raise ValueError(\n \"The final node after contraction has more than \"\n \"one dangling edge. 
In this case `output_edge_order` \"\n \"has to be provided.\")\n\n edges = get_all_nondangling(_nodes)\n if edges:\n final_node = contract_parallel(edges.pop())\n else:\n final_node = list(_nodes)[0]\n final_node.reorder_edges(output_edge_order)\n if not ignore_edge_order:\n final_node.reorder_edges(output_edge_order)\n return final_node\n\n if n < 5:\n return optimal(nodes, output_edge_order, memory_limit, ignore_edge_order)\n if n < 7:\n return branch(\n nodes,\n output_edge_order=output_edge_order,\n memory_limit=memory_limit,\n ignore_edge_order=ignore_edge_order)\n if n < 9:\n return branch(\n nodes,\n output_edge_order=output_edge_order,\n memory_limit=memory_limit,\n nbranch=2,\n ignore_edge_order=ignore_edge_order)\n if n < 15:\n return branch(\n nodes,\n output_edge_order=output_edge_order,\n nbranch=1,\n ignore_edge_order=ignore_edge_order)\n return greedy(nodes, output_edge_order, memory_limit, ignore_edge_order)\n\n\ndef custom(nodes: Iterable[AbstractNode],\n optimizer: Any,\n output_edge_order: Sequence[Edge] = None,\n memory_limit: Optional[int] = None,\n ignore_edge_order: bool = False) -> AbstractNode:\n \"\"\"Uses a custom path optimizer created by the user to calculate paths.\n\n The custom path optimizer should inherit `opt_einsum`'s `PathOptimizer`.\n See `custom paths`_.\n\n .. _custom paths:\n https://optimized-einsum.readthedocs.io/en/latest/custom_paths.html\n\n Args:\n nodes: an iterable of Nodes\n output_edge_order: An optional list of edges.\n Edges of the final node in `nodes_set`\n are reordered into `output_edge_order`;\n if final node has more than one edge,\n output_edge_order` must be provided.\n optimizer: A custom `opt_einsum.PathOptimizer` object.\n memory_limit: Maximum number of elements in an array during contractions.\n ignore_edge_order: An option to ignore the output edge order.\n\n Returns:\n Final node after full contraction.\n \"\"\"\n alg = functools.partial(optimizer, memory_limit=memory_limit)\n return base(nodes, alg, output_edge_order, ignore_edge_order)\n", "path": "tensornetwork/contractors/opt_einsum_paths/path_contractors.py"}]} | 3,901 | 162 |
gh_patches_debug_11928 | rasdani/github-patches | git_diff | keras-team__keras-2986 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Why does the TF-IDF matrix generated by keras.preprocessing.text.Tokenizer() have negative values?
Say, if you run the following script:

>>> import keras
>>> tk = keras.preprocessing.text.Tokenizer()
>>> texts = ['I love you.', 'I love you, too.']
>>> tk.fit_on_texts(texts)
>>> tk.texts_to_matrix(texts, mode='tfidf')
The output will be:
array([[ 0. , -1.09861229, -1.09861229, -1.09861229, 0. ],
[ 0. , -1.38629436, -1.38629436, -1.38629436, -1.38629436]])
But tf-idf values seem like they should be non-negative?
By the way, is there a neat way to get a word by its index, or the vocabulary (in the order of word indices) of the Tokenizer() class? Say, sometimes I want to know the most frequent word in the documents, so I want to access the word with index 1.
I can do it by running:
>>> vocab = tk.word_index.items()
>>> vocab.sort(key=lambda x:x[1])
This gives:
>>> vocab
[('i', 1), ('you', 2), ('love', 3), ('too', 4)]
But isn't that somewhat hacky?
Thank you!
</issue>
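For reference, the "weighting scheme 2" tf-idf that the fix below adopts uses tf = 1 + log(count) and idf = log(1 + N / (1 + docs_containing_term)); since count >= 1, both factors are non-negative. A minimal sketch in plain NumPy (not the Keras API; the helper and variable names are illustrative assumptions), evaluated on the two example texts:

```python
# Illustrative sketch only: standard tf-idf ("weighting scheme 2"), not Keras code.
import numpy as np

docs = [["i", "love", "you"], ["i", "love", "you", "too"]]
vocab = ["i", "you", "love", "too"]   # same order as tk.word_index in the issue
n_docs = len(docs)

def weight(count, docs_with_term):
    tf = 1 + np.log(count)                            # count >= 1, so tf >= 1
    idf = np.log(1 + n_docs / (1 + docs_with_term))   # argument > 1, so idf > 0
    return tf * idf

for word in vocab:
    docs_with_term = sum(word in doc for doc in docs)
    row = [weight(doc.count(word), docs_with_term) if word in doc else 0.0
           for doc in docs]
    print(word, np.round(row, 4))
# Every printed weight is >= 0, unlike the negative values reported above.
```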
<code>
[start of keras/preprocessing/text.py]
1 # -*- coding: utf-8 -*-
2 '''These preprocessing utilities would greatly benefit
3 from a fast Cython rewrite.
4 '''
5 from __future__ import absolute_import
6
7 import string
8 import sys
9 import numpy as np
10 from six.moves import range
11 from six.moves import zip
12
13 if sys.version_info < (3,):
14 maketrans = string.maketrans
15 else:
16 maketrans = str.maketrans
17
18
19 def base_filter():
20 f = string.punctuation
21 f = f.replace("'", '')
22 f += '\t\n'
23 return f
24
25
26 def text_to_word_sequence(text, filters=base_filter(), lower=True, split=" "):
27 '''prune: sequence of characters to filter out
28 '''
29 if lower:
30 text = text.lower()
31 text = text.translate(maketrans(filters, split*len(filters)))
32 seq = text.split(split)
33 return [_f for _f in seq if _f]
34
35
36 def one_hot(text, n, filters=base_filter(), lower=True, split=" "):
37 seq = text_to_word_sequence(text, filters=filters, lower=lower, split=split)
38 return [(abs(hash(w)) % (n - 1) + 1) for w in seq]
39
40
41 class Tokenizer(object):
42 def __init__(self, nb_words=None, filters=base_filter(),
43 lower=True, split=' ', char_level=False):
44 '''The class allows to vectorize a text corpus, by turning each
45 text into either a sequence of integers (each integer being the index
46 of a token in a dictionary) or into a vector where the coefficient
47 for each token could be binary, based on word count, based on tf-idf...
48
49 # Arguments
50 nb_words: the maximum number of words to keep, based
51 on word frequency. Only the most common `nb_words` words will
52 be kept.
53 filters: a string where each element is a character that will be
54 filtered from the texts. The default is all punctuation, plus
55 tabs and line breaks, minus the `'` character.
56 lower: boolean. Whether to convert the texts to lowercase.
57 split: character or string to use for token splitting.
58 char_level: if True, every character will be treated as a word.
59
60 By default, all punctuation is removed, turning the texts into
61 space-separated sequences of words
62 (words maybe include the `'` character). These sequences are then
63 split into lists of tokens. They will then be indexed or vectorized.
64
65 `0` is a reserved index that won't be assigned to any word.
66 '''
67 self.word_counts = {}
68 self.word_docs = {}
69 self.filters = filters
70 self.split = split
71 self.lower = lower
72 self.nb_words = nb_words
73 self.document_count = 0
74 self.char_level = char_level
75
76 def fit_on_texts(self, texts):
77 '''Required before using texts_to_sequences or texts_to_matrix
78
79 # Arguments
80 texts: can be a list of strings,
81 or a generator of strings (for memory-efficiency)
82 '''
83 self.document_count = 0
84 for text in texts:
85 self.document_count += 1
86 seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)
87 for w in seq:
88 if w in self.word_counts:
89 self.word_counts[w] += 1
90 else:
91 self.word_counts[w] = 1
92 for w in set(seq):
93 if w in self.word_docs:
94 self.word_docs[w] += 1
95 else:
96 self.word_docs[w] = 1
97
98 wcounts = list(self.word_counts.items())
99 wcounts.sort(key=lambda x: x[1], reverse=True)
100 sorted_voc = [wc[0] for wc in wcounts]
101 self.word_index = dict(list(zip(sorted_voc, list(range(1, len(sorted_voc) + 1)))))
102
103 self.index_docs = {}
104 for w, c in list(self.word_docs.items()):
105 self.index_docs[self.word_index[w]] = c
106
107 def fit_on_sequences(self, sequences):
108 '''Required before using sequences_to_matrix
109 (if fit_on_texts was never called)
110 '''
111 self.document_count = len(sequences)
112 self.index_docs = {}
113 for seq in sequences:
114 seq = set(seq)
115 for i in seq:
116 if i not in self.index_docs:
117 self.index_docs[i] = 1
118 else:
119 self.index_docs[i] += 1
120
121 def texts_to_sequences(self, texts):
122 '''Transforms each text in texts in a sequence of integers.
123 Only top "nb_words" most frequent words will be taken into account.
124 Only words known by the tokenizer will be taken into account.
125
126 Returns a list of sequences.
127 '''
128 res = []
129 for vect in self.texts_to_sequences_generator(texts):
130 res.append(vect)
131 return res
132
133 def texts_to_sequences_generator(self, texts):
134 '''Transforms each text in texts in a sequence of integers.
135 Only top "nb_words" most frequent words will be taken into account.
136 Only words known by the tokenizer will be taken into account.
137
138 Yields individual sequences.
139
140 # Arguments:
141 texts: list of strings.
142 '''
143 nb_words = self.nb_words
144 for text in texts:
145 seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)
146 vect = []
147 for w in seq:
148 i = self.word_index.get(w)
149 if i is not None:
150 if nb_words and i >= nb_words:
151 continue
152 else:
153 vect.append(i)
154 yield vect
155
156 def texts_to_matrix(self, texts, mode='binary'):
157 '''Convert a list of texts to a Numpy matrix,
158 according to some vectorization mode.
159
160 # Arguments:
161 texts: list of strings.
162 modes: one of "binary", "count", "tfidf", "freq"
163 '''
164 sequences = self.texts_to_sequences(texts)
165 return self.sequences_to_matrix(sequences, mode=mode)
166
167 def sequences_to_matrix(self, sequences, mode='binary'):
168 '''Converts a list of sequences into a Numpy matrix,
169 according to some vectorization mode.
170
171 # Arguments:
172 sequences: list of sequences
173 (a sequence is a list of integer word indices).
174 modes: one of "binary", "count", "tfidf", "freq"
175 '''
176 if not self.nb_words:
177 if self.word_index:
178 nb_words = len(self.word_index) + 1
179 else:
180 raise Exception('Specify a dimension (nb_words argument), '
181 'or fit on some text data first.')
182 else:
183 nb_words = self.nb_words
184
185 if mode == 'tfidf' and not self.document_count:
186 raise Exception('Fit the Tokenizer on some data '
187 'before using tfidf mode.')
188
189 X = np.zeros((len(sequences), nb_words))
190 for i, seq in enumerate(sequences):
191 if not seq:
192 continue
193 counts = {}
194 for j in seq:
195 if j >= nb_words:
196 continue
197 if j not in counts:
198 counts[j] = 1.
199 else:
200 counts[j] += 1
201 for j, c in list(counts.items()):
202 if mode == 'count':
203 X[i][j] = c
204 elif mode == 'freq':
205 X[i][j] = c / len(seq)
206 elif mode == 'binary':
207 X[i][j] = 1
208 elif mode == 'tfidf':
209 # Use weighting scheme 2 in
210 # https://en.wikipedia.org/wiki/Tf%E2%80%93idf
211 tf = 1 + np.log(c)
212 df = np.log(1 + self.index_docs.get(j, 0) / (1 + self.document_count))
213 X[i][j] = tf / df
214 else:
215 raise Exception('Unknown vectorization mode: ' + str(mode))
216 return X
217
[end of keras/preprocessing/text.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/keras/preprocessing/text.py b/keras/preprocessing/text.py
--- a/keras/preprocessing/text.py
+++ b/keras/preprocessing/text.py
@@ -209,8 +209,8 @@
# Use weighting scheme 2 in
# https://en.wikipedia.org/wiki/Tf%E2%80%93idf
tf = 1 + np.log(c)
- df = np.log(1 + self.index_docs.get(j, 0) / (1 + self.document_count))
- X[i][j] = tf / df
+ idf = np.log(1 + self.document_count / (1 + self.index_docs.get(j, 0)))
+ X[i][j] = tf * idf
else:
raise Exception('Unknown vectorization mode: ' + str(mode))
return X
| {"golden_diff": "diff --git a/keras/preprocessing/text.py b/keras/preprocessing/text.py\n--- a/keras/preprocessing/text.py\n+++ b/keras/preprocessing/text.py\n@@ -209,8 +209,8 @@\n # Use weighting scheme 2 in\n # https://en.wikipedia.org/wiki/Tf%E2%80%93idf\n tf = 1 + np.log(c)\n- df = np.log(1 + self.index_docs.get(j, 0) / (1 + self.document_count))\n- X[i][j] = tf / df\n+ idf = np.log(1 + self.document_count / (1 + self.index_docs.get(j, 0)))\n+ X[i][j] = tf * idf\n else:\n raise Exception('Unknown vectorization mode: ' + str(mode))\n return X\n", "issue": "Why TF-IDF matrix generated by keras.preprocessing.text.Tokenizer() has negative values?\nSay, if run the following script:\n\n> > > import keras\n> > > tk = keras.preprocessing.text.Tokenizer()\n> > > texts = ['I love you.', 'I love you, too.']\n> > > tk.fit_on_texts(texts)\n> > > tk.texts_to_matrix(texts, mode='tfidf')\n\nThe output will be:\narray([[ 0. , -1.09861229, -1.09861229, -1.09861229, 0. ],\n [ 0. , -1.38629436, -1.38629436, -1.38629436, -1.38629436]])\n\nBut tf-idf values seems should be non-negative?\n\nBy the way, is there a neat way to get the word by its index, or the vocabulary (in the order of word indices) of the Tokenizer() class? Say, sometimes I want to know what's the most frequent word in the documents, then I want to access word with index 1.\n\nI can do it by running:\n\n> > > vocab = tk.word_index.items()\n> > > vocab.sort(key=lambda x:x[1])\n\nThis gives:\n\n> > > vocab\n\n[('i', 1), ('you', 2), ('love', 3), ('too', 4)]\nBut is it somehow hacky?\n\nThank you!\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n'''These preprocessing utilities would greatly benefit\nfrom a fast Cython rewrite.\n'''\nfrom __future__ import absolute_import\n\nimport string\nimport sys\nimport numpy as np\nfrom six.moves import range\nfrom six.moves import zip\n\nif sys.version_info < (3,):\n maketrans = string.maketrans\nelse:\n maketrans = str.maketrans\n\n\ndef base_filter():\n f = string.punctuation\n f = f.replace(\"'\", '')\n f += '\\t\\n'\n return f\n\n\ndef text_to_word_sequence(text, filters=base_filter(), lower=True, split=\" \"):\n '''prune: sequence of characters to filter out\n '''\n if lower:\n text = text.lower()\n text = text.translate(maketrans(filters, split*len(filters)))\n seq = text.split(split)\n return [_f for _f in seq if _f]\n\n\ndef one_hot(text, n, filters=base_filter(), lower=True, split=\" \"):\n seq = text_to_word_sequence(text, filters=filters, lower=lower, split=split)\n return [(abs(hash(w)) % (n - 1) + 1) for w in seq]\n\n\nclass Tokenizer(object):\n def __init__(self, nb_words=None, filters=base_filter(),\n lower=True, split=' ', char_level=False):\n '''The class allows to vectorize a text corpus, by turning each\n text into either a sequence of integers (each integer being the index\n of a token in a dictionary) or into a vector where the coefficient\n for each token could be binary, based on word count, based on tf-idf...\n\n # Arguments\n nb_words: the maximum number of words to keep, based\n on word frequency. Only the most common `nb_words` words will\n be kept.\n filters: a string where each element is a character that will be\n filtered from the texts. The default is all punctuation, plus\n tabs and line breaks, minus the `'` character.\n lower: boolean. 
Whether to convert the texts to lowercase.\n split: character or string to use for token splitting.\n char_level: if True, every character will be treated as a word.\n\n By default, all punctuation is removed, turning the texts into\n space-separated sequences of words\n (words maybe include the `'` character). These sequences are then\n split into lists of tokens. They will then be indexed or vectorized.\n\n `0` is a reserved index that won't be assigned to any word.\n '''\n self.word_counts = {}\n self.word_docs = {}\n self.filters = filters\n self.split = split\n self.lower = lower\n self.nb_words = nb_words\n self.document_count = 0\n self.char_level = char_level\n\n def fit_on_texts(self, texts):\n '''Required before using texts_to_sequences or texts_to_matrix\n\n # Arguments\n texts: can be a list of strings,\n or a generator of strings (for memory-efficiency)\n '''\n self.document_count = 0\n for text in texts:\n self.document_count += 1\n seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)\n for w in seq:\n if w in self.word_counts:\n self.word_counts[w] += 1\n else:\n self.word_counts[w] = 1\n for w in set(seq):\n if w in self.word_docs:\n self.word_docs[w] += 1\n else:\n self.word_docs[w] = 1\n\n wcounts = list(self.word_counts.items())\n wcounts.sort(key=lambda x: x[1], reverse=True)\n sorted_voc = [wc[0] for wc in wcounts]\n self.word_index = dict(list(zip(sorted_voc, list(range(1, len(sorted_voc) + 1)))))\n\n self.index_docs = {}\n for w, c in list(self.word_docs.items()):\n self.index_docs[self.word_index[w]] = c\n\n def fit_on_sequences(self, sequences):\n '''Required before using sequences_to_matrix\n (if fit_on_texts was never called)\n '''\n self.document_count = len(sequences)\n self.index_docs = {}\n for seq in sequences:\n seq = set(seq)\n for i in seq:\n if i not in self.index_docs:\n self.index_docs[i] = 1\n else:\n self.index_docs[i] += 1\n\n def texts_to_sequences(self, texts):\n '''Transforms each text in texts in a sequence of integers.\n Only top \"nb_words\" most frequent words will be taken into account.\n Only words known by the tokenizer will be taken into account.\n\n Returns a list of sequences.\n '''\n res = []\n for vect in self.texts_to_sequences_generator(texts):\n res.append(vect)\n return res\n\n def texts_to_sequences_generator(self, texts):\n '''Transforms each text in texts in a sequence of integers.\n Only top \"nb_words\" most frequent words will be taken into account.\n Only words known by the tokenizer will be taken into account.\n\n Yields individual sequences.\n\n # Arguments:\n texts: list of strings.\n '''\n nb_words = self.nb_words\n for text in texts:\n seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)\n vect = []\n for w in seq:\n i = self.word_index.get(w)\n if i is not None:\n if nb_words and i >= nb_words:\n continue\n else:\n vect.append(i)\n yield vect\n\n def texts_to_matrix(self, texts, mode='binary'):\n '''Convert a list of texts to a Numpy matrix,\n according to some vectorization mode.\n\n # Arguments:\n texts: list of strings.\n modes: one of \"binary\", \"count\", \"tfidf\", \"freq\"\n '''\n sequences = self.texts_to_sequences(texts)\n return self.sequences_to_matrix(sequences, mode=mode)\n\n def sequences_to_matrix(self, sequences, mode='binary'):\n '''Converts a list of sequences into a Numpy matrix,\n according to some vectorization mode.\n\n # Arguments:\n sequences: list of sequences\n (a sequence is a list of integer 
word indices).\n modes: one of \"binary\", \"count\", \"tfidf\", \"freq\"\n '''\n if not self.nb_words:\n if self.word_index:\n nb_words = len(self.word_index) + 1\n else:\n raise Exception('Specify a dimension (nb_words argument), '\n 'or fit on some text data first.')\n else:\n nb_words = self.nb_words\n\n if mode == 'tfidf' and not self.document_count:\n raise Exception('Fit the Tokenizer on some data '\n 'before using tfidf mode.')\n\n X = np.zeros((len(sequences), nb_words))\n for i, seq in enumerate(sequences):\n if not seq:\n continue\n counts = {}\n for j in seq:\n if j >= nb_words:\n continue\n if j not in counts:\n counts[j] = 1.\n else:\n counts[j] += 1\n for j, c in list(counts.items()):\n if mode == 'count':\n X[i][j] = c\n elif mode == 'freq':\n X[i][j] = c / len(seq)\n elif mode == 'binary':\n X[i][j] = 1\n elif mode == 'tfidf':\n # Use weighting scheme 2 in\n # https://en.wikipedia.org/wiki/Tf%E2%80%93idf\n tf = 1 + np.log(c)\n df = np.log(1 + self.index_docs.get(j, 0) / (1 + self.document_count))\n X[i][j] = tf / df\n else:\n raise Exception('Unknown vectorization mode: ' + str(mode))\n return X\n", "path": "keras/preprocessing/text.py"}]} | 3,173 | 192 |
gh_patches_debug_27207 | rasdani/github-patches | git_diff | pyodide__pyodide-1742 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Passing python dict into a js function
Hi, I just noticed that we now (v0.17.0) convert a Python dict to a `Map` in JS, and it breaks my previous code.
I have 3 questions:
1. What would be the easiest way to send a dictionary from Python to JS so that it becomes a regular object?
2. What is the reason for choosing to convert `dict` to `Map`? I find it a bit inconvenient to get a `Map` instead of an object by default. I mean, `Object`/`{}` is obviously more commonly used than `Map` in JS, so wouldn't it be a better default type for converting a Python `dict`?
3. Previously, if we imported a JS function (e.g. `postMessage`) into Python and then sent it a Python dictionary (e.g. `js.postMessage({'data': 123})`), it automatically converted the argument into a JS object, but now it complains that `JsException: DataCloneError: Failed to execute 'postMessage' on 'DedicatedWorkerGlobalScope': [object Object] could not be cloned.`.
</issue>
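For question 1, a hedged sketch of one way to get a plain JS object rather than a `Map`; it assumes the `dict_converter` argument that the patch below documents, together with the standard `js.Object.fromEntries`:

```python
# Sketch, assuming `to_js(..., dict_converter=...)` as documented by the patch below.
import js
from pyodide import to_js

payload = to_js({"data": 123}, dict_converter=js.Object.fromEntries)
js.postMessage(payload)  # a plain object, so the structured clone succeeds
```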
<code>
[start of src/py/_pyodide/_core_docs.py]
1 # type: ignore
2
3 from typing import Any, Callable
4
5 # All docstrings for public `core` APIs should be extracted from here. We use
6 # the utilities in `docstring.py` and `docstring.c` to format them
7 # appropriately.
8
9 # Sphinx uses __name__ to determine the paths and such. It looks better for it
10 # to refer to e.g., `pyodide.JsProxy` than `_pyodide._core.JsProxy`.
11 _save_name = __name__
12 __name__ = "pyodide"
13 try:
14 # From jsproxy.c
15
16 class JsException(Exception):
17 """
18 A wrapper around a Javascript Error to allow it to be thrown in Python.
19 See :ref:`type-translations-errors`.
20 """
21
22 @property
23 def js_error(self):
24 """The original Javascript error"""
25
26 class ConversionError(Exception):
27 """An error thrown when conversion between Javascript and Python fails."""
28
29 class JsProxy:
30 """A proxy to make a Javascript object behave like a Python object
31
32 For more information see :ref:`type-translations` documentation.
33 """
34
35 def __init__(self):
36 """ """
37
38 def object_entries(self) -> "JsProxy":
39 "The Javascript API ``Object.entries(object)``"
40
41 def object_keys(self) -> "JsProxy":
42 "The Javascript API ``Object.keys(object)``"
43
44 def object_values(self) -> "JsProxy":
45 "The Javascript API ``Object.values(object)``"
46
47 def new(self, *args, **kwargs) -> "JsProxy":
48 """Construct a new instance of the Javascript object"""
49
50 def to_py(self, *, depth: int = -1) -> Any:
51 """Convert the :class:`JsProxy` to a native Python object as best as
52 possible.
53
54 By default does a deep conversion, if a shallow conversion is
55 desired, you can use ``proxy.to_py(depth=1)``. See
56 :ref:`type-translations-jsproxy-to-py` for more information.
57 """
58 pass
59
60 def then(self, onfulfilled: Callable, onrejected: Callable) -> "Promise":
61 """The ``Promise.then`` API, wrapped to manage the lifetimes of the
62 handlers.
63
64 Present only if the wrapped Javascript object has a "then" method.
65 Pyodide will automatically release the references to the handlers
66 when the promise resolves.
67 """
68
69 def catch(self, onrejected: Callable) -> "Promise":
70 """The ``Promise.catch`` API, wrapped to manage the lifetimes of the
71 handler.
72
73 Present only if the wrapped Javascript object has a "then" method.
74 Pyodide will automatically release the references to the handler
75 when the promise resolves.
76 """
77
78 def finally_(self, onfinally: Callable) -> "Promise":
79 """The ``Promise.finally`` API, wrapped to manage the lifetimes of
80 the handler.
81
82 Present only if the wrapped Javascript object has a "then" method.
83 Pyodide will automatically release the references to the handler
84 when the promise resolves. Note the trailing underscore in the name;
85 this is needed because ``finally`` is a reserved keyword in Python.
86 """
87
88 # There are no types for buffers:
89 # https://github.com/python/typing/issues/593
90 # https://bugs.python.org/issue27501
91 # This is just for docs so lets just make something up?
92
93 def assign(self, rhs: "ReadBuffer"):
94 """Assign from a Python buffer into the Javascript buffer.
95
96 Present only if the wrapped Javascript object is an ArrayBuffer or
97 an ArrayBuffer view.
98 """
99
100 def assign_to(self, to: "ReadWriteBuffer"):
101 """Assign to a Python buffer from the Javascript buffer.
102
103 Present only if the wrapped Javascript object is an ArrayBuffer or
104 an ArrayBuffer view.
105 """
106
107 # from pyproxy.c
108
109 def create_once_callable(obj: Callable) -> JsProxy:
110 """Wrap a Python callable in a Javascript function that can be called once.
111
112 After being called the proxy will decrement the reference count
113 of the Callable. The Javascript function also has a ``destroy`` API that
114 can be used to release the proxy without calling it.
115 """
116 return obj
117
118 def create_proxy(obj: Any) -> JsProxy:
119 """Create a ``JsProxy`` of a ``PyProxy``.
120
121 This allows explicit control over the lifetime of the ``PyProxy`` from
122 Python: call the ``destroy`` API when done.
123 """
124 return obj
125
126 # from python2js
127
128 def to_js(
129 obj: Any,
130 *,
131 depth: int = -1,
132 pyproxies: JsProxy = None,
133 create_pyproxies: bool = True
134 ) -> JsProxy:
135 """Convert the object to Javascript.
136
137 This is similar to :any:`PyProxy.toJs`, but for use from Python. If the
138 object would be implicitly translated to Javascript, it will be returned
139 unchanged. If the object cannot be converted into Javascript, this
140 method will return a :any:`JsProxy` of a :any:`PyProxy`, as if you had
141 used :any:`pyodide.create_proxy`.
142
143 See :ref:`type-translations-pyproxy-to-js` for more information.
144
145 Parameters
146 ----------
147 obj : Any
148 The Python object to convert
149
150 depth : int, default=-1
151 The maximum depth to do the conversion. Negative numbers are treated
152 as infinite. Set this to 1 to do a shallow conversion.
153
154 pyproxies: JsProxy, default = None
155 Should be a Javascript ``Array``. If provided, any ``PyProxies`` generated
156 will be stored here. You can later use :any:`destroy_proxies` if you want
157 to destroy the proxies from Python (or from Javascript you can just iterate
158 over the ``Array`` and destroy the proxies).
159
160 create_pyproxies: bool, default=True
161 If you set this to False, :any:`to_js` will raise an error
162
163 """
164 return obj
165
166 def destroy_proxies(pyproxies: JsProxy):
167 """Destroy all PyProxies in a Javascript array.
168
169 pyproxies must be a JsProxy of type PyProxy[]. Intended for use with the
170 arrays created from the "pyproxies" argument of :any:`toJs` and
171 :any:`to_js`. This method is necessary because indexing the Array from
172 Python automatically unwraps the PyProxy into the wrapped Python object.
173 """
174 pass
175
176
177 finally:
178 __name__ = _save_name
179 del _save_name
180
[end of src/py/_pyodide/_core_docs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/py/_pyodide/_core_docs.py b/src/py/_pyodide/_core_docs.py
--- a/src/py/_pyodide/_core_docs.py
+++ b/src/py/_pyodide/_core_docs.py
@@ -1,6 +1,6 @@
# type: ignore
-from typing import Any, Callable
+from typing import Any, Callable, Iterable
# All docstrings for public `core` APIs should be extracted from here. We use
# the utilities in `docstring.py` and `docstring.c` to format them
@@ -130,7 +130,8 @@
*,
depth: int = -1,
pyproxies: JsProxy = None,
- create_pyproxies: bool = True
+ create_pyproxies: bool = True,
+ dict_converter: Callable[[Iterable[JsProxy]], JsProxy] = None,
) -> JsProxy:
"""Convert the object to Javascript.
@@ -160,6 +161,15 @@
create_pyproxies: bool, default=True
If you set this to False, :any:`to_js` will raise an error
+ dict_converter: Callable[[Iterable[JsProxy]], JsProxy], defauilt = None
+ This converter if provided recieves a (Javascript) iterable of
+ (Javascript) pairs [key, value]. It is expected to return the
+ desired result of the dict conversion. Some suggested values for
+ this argument:
+
+ js.Map.new -- similar to the default behavior
+ js.Array.from -- convert to an array of entries
+ js.Object.fromEntries -- convert to a Javascript object
"""
return obj
| {"golden_diff": "diff --git a/src/py/_pyodide/_core_docs.py b/src/py/_pyodide/_core_docs.py\n--- a/src/py/_pyodide/_core_docs.py\n+++ b/src/py/_pyodide/_core_docs.py\n@@ -1,6 +1,6 @@\n # type: ignore\n \n-from typing import Any, Callable\n+from typing import Any, Callable, Iterable\n \n # All docstrings for public `core` APIs should be extracted from here. We use\n # the utilities in `docstring.py` and `docstring.c` to format them\n@@ -130,7 +130,8 @@\n *,\n depth: int = -1,\n pyproxies: JsProxy = None,\n- create_pyproxies: bool = True\n+ create_pyproxies: bool = True,\n+ dict_converter: Callable[[Iterable[JsProxy]], JsProxy] = None,\n ) -> JsProxy:\n \"\"\"Convert the object to Javascript.\n \n@@ -160,6 +161,15 @@\n create_pyproxies: bool, default=True\n If you set this to False, :any:`to_js` will raise an error\n \n+ dict_converter: Callable[[Iterable[JsProxy]], JsProxy], defauilt = None\n+ This converter if provided recieves a (Javascript) iterable of\n+ (Javascript) pairs [key, value]. It is expected to return the\n+ desired result of the dict conversion. Some suggested values for\n+ this argument:\n+\n+ js.Map.new -- similar to the default behavior\n+ js.Array.from -- convert to an array of entries\n+ js.Object.fromEntries -- convert to a Javascript object\n \"\"\"\n return obj\n", "issue": "Passing python dict into a js function\nHi, I just noticed that we now (v0.17.0) convert python dict to a `Map` in js, and it break my previous code.\r\n\r\nI have 3 questions:\r\n1. What would be the easiest way to send a dictionary in Python to Js so it will become a regular object. \r\n2. What is the reason to choose converting `dict` to `Map`? I find it's bit inconvenient if we get a Map instead of an object by default. I mean, `Object`/`{}` is obviously more commonly used than `Map` in JS, wouldn't it be a better default type for converting `dict` in Python?\r\n3. Previously, if we import js function (e.g. `postMessage`) into Python, then send a python dictionary( `js.postMessage({'data': 123})`, it automatically convert the arguments into js object, but now it complains that `JsException: DataCloneError: Failed to execute 'postMessage' on 'DedicatedWorkerGlobalScope': [object Object] could not be cloned.`. \r\n\r\n\r\n\n", "before_files": [{"content": "# type: ignore\n\nfrom typing import Any, Callable\n\n# All docstrings for public `core` APIs should be extracted from here. We use\n# the utilities in `docstring.py` and `docstring.c` to format them\n# appropriately.\n\n# Sphinx uses __name__ to determine the paths and such. 
It looks better for it\n# to refer to e.g., `pyodide.JsProxy` than `_pyodide._core.JsProxy`.\n_save_name = __name__\n__name__ = \"pyodide\"\ntry:\n # From jsproxy.c\n\n class JsException(Exception):\n \"\"\"\n A wrapper around a Javascript Error to allow it to be thrown in Python.\n See :ref:`type-translations-errors`.\n \"\"\"\n\n @property\n def js_error(self):\n \"\"\"The original Javascript error\"\"\"\n\n class ConversionError(Exception):\n \"\"\"An error thrown when conversion between Javascript and Python fails.\"\"\"\n\n class JsProxy:\n \"\"\"A proxy to make a Javascript object behave like a Python object\n\n For more information see :ref:`type-translations` documentation.\n \"\"\"\n\n def __init__(self):\n \"\"\" \"\"\"\n\n def object_entries(self) -> \"JsProxy\":\n \"The Javascript API ``Object.entries(object)``\"\n\n def object_keys(self) -> \"JsProxy\":\n \"The Javascript API ``Object.keys(object)``\"\n\n def object_values(self) -> \"JsProxy\":\n \"The Javascript API ``Object.values(object)``\"\n\n def new(self, *args, **kwargs) -> \"JsProxy\":\n \"\"\"Construct a new instance of the Javascript object\"\"\"\n\n def to_py(self, *, depth: int = -1) -> Any:\n \"\"\"Convert the :class:`JsProxy` to a native Python object as best as\n possible.\n\n By default does a deep conversion, if a shallow conversion is\n desired, you can use ``proxy.to_py(depth=1)``. See\n :ref:`type-translations-jsproxy-to-py` for more information.\n \"\"\"\n pass\n\n def then(self, onfulfilled: Callable, onrejected: Callable) -> \"Promise\":\n \"\"\"The ``Promise.then`` API, wrapped to manage the lifetimes of the\n handlers.\n\n Present only if the wrapped Javascript object has a \"then\" method.\n Pyodide will automatically release the references to the handlers\n when the promise resolves.\n \"\"\"\n\n def catch(self, onrejected: Callable) -> \"Promise\":\n \"\"\"The ``Promise.catch`` API, wrapped to manage the lifetimes of the\n handler.\n\n Present only if the wrapped Javascript object has a \"then\" method.\n Pyodide will automatically release the references to the handler\n when the promise resolves.\n \"\"\"\n\n def finally_(self, onfinally: Callable) -> \"Promise\":\n \"\"\"The ``Promise.finally`` API, wrapped to manage the lifetimes of\n the handler.\n\n Present only if the wrapped Javascript object has a \"then\" method.\n Pyodide will automatically release the references to the handler\n when the promise resolves. Note the trailing underscore in the name;\n this is needed because ``finally`` is a reserved keyword in Python.\n \"\"\"\n\n # There are no types for buffers:\n # https://github.com/python/typing/issues/593\n # https://bugs.python.org/issue27501\n # This is just for docs so lets just make something up?\n\n def assign(self, rhs: \"ReadBuffer\"):\n \"\"\"Assign from a Python buffer into the Javascript buffer.\n\n Present only if the wrapped Javascript object is an ArrayBuffer or\n an ArrayBuffer view.\n \"\"\"\n\n def assign_to(self, to: \"ReadWriteBuffer\"):\n \"\"\"Assign to a Python buffer from the Javascript buffer.\n\n Present only if the wrapped Javascript object is an ArrayBuffer or\n an ArrayBuffer view.\n \"\"\"\n\n # from pyproxy.c\n\n def create_once_callable(obj: Callable) -> JsProxy:\n \"\"\"Wrap a Python callable in a Javascript function that can be called once.\n\n After being called the proxy will decrement the reference count\n of the Callable. 
The Javascript function also has a ``destroy`` API that\n can be used to release the proxy without calling it.\n \"\"\"\n return obj\n\n def create_proxy(obj: Any) -> JsProxy:\n \"\"\"Create a ``JsProxy`` of a ``PyProxy``.\n\n This allows explicit control over the lifetime of the ``PyProxy`` from\n Python: call the ``destroy`` API when done.\n \"\"\"\n return obj\n\n # from python2js\n\n def to_js(\n obj: Any,\n *,\n depth: int = -1,\n pyproxies: JsProxy = None,\n create_pyproxies: bool = True\n ) -> JsProxy:\n \"\"\"Convert the object to Javascript.\n\n This is similar to :any:`PyProxy.toJs`, but for use from Python. If the\n object would be implicitly translated to Javascript, it will be returned\n unchanged. If the object cannot be converted into Javascript, this\n method will return a :any:`JsProxy` of a :any:`PyProxy`, as if you had\n used :any:`pyodide.create_proxy`.\n\n See :ref:`type-translations-pyproxy-to-js` for more information.\n\n Parameters\n ----------\n obj : Any\n The Python object to convert\n\n depth : int, default=-1\n The maximum depth to do the conversion. Negative numbers are treated\n as infinite. Set this to 1 to do a shallow conversion.\n\n pyproxies: JsProxy, default = None\n Should be a Javascript ``Array``. If provided, any ``PyProxies`` generated\n will be stored here. You can later use :any:`destroy_proxies` if you want\n to destroy the proxies from Python (or from Javascript you can just iterate\n over the ``Array`` and destroy the proxies).\n\n create_pyproxies: bool, default=True\n If you set this to False, :any:`to_js` will raise an error\n\n \"\"\"\n return obj\n\n def destroy_proxies(pyproxies: JsProxy):\n \"\"\"Destroy all PyProxies in a Javascript array.\n\n pyproxies must be a JsProxy of type PyProxy[]. Intended for use with the\n arrays created from the \"pyproxies\" argument of :any:`toJs` and\n :any:`to_js`. This method is necessary because indexing the Array from\n Python automatically unwraps the PyProxy into the wrapped Python object.\n \"\"\"\n pass\n\n\nfinally:\n __name__ = _save_name\n del _save_name\n", "path": "src/py/_pyodide/_core_docs.py"}]} | 2,670 | 374 |
gh_patches_debug_31241 | rasdani/github-patches | git_diff | python-discord__bot-1532 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Default duration for !superstar command if no duration is specified
## Context
The `!superstar` command currently requires a duration to be specified when invoked.
Often we want to change someone's nickname away from something very noisy, or something that takes up a lot of room, until the user can respond and change it themselves. The duration is not usually that important in these cases, as we want it to be a quick change.
## Proposal
It would be useful if there were a short default duration for this command, something like 5 or 15 minutes. It would function similarly to the `!mute` command, which opts for a 1 hour duration if no duration is given.
</issue>
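A rough sketch of the shape this could take (the parameter names and the exact default value are assumptions; the patch further below is the authoritative change): make `duration` optional and fall back to a constant when it is omitted.

```python
# Sketch only (method body abridged): optional duration with a fallback default,
# mirroring how !mute behaves when no duration is given.
from bot.converters import Duration

SUPERSTARIFY_DEFAULT_DURATION = "1h"   # assumed default; "5M" or "15M" would work the same way

async def superstarify(self, ctx, member, duration=None, *, reason=""):
    # No duration supplied by the moderator -> fall back to the default one.
    duration = duration or await Duration().convert(ctx, SUPERSTARIFY_DEFAULT_DURATION)
    ...
```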
<code>
[start of bot/exts/moderation/infraction/superstarify.py]
1 import json
2 import logging
3 import random
4 import textwrap
5 import typing as t
6 from pathlib import Path
7
8 from discord import Embed, Member
9 from discord.ext.commands import Cog, Context, command, has_any_role
10 from discord.utils import escape_markdown
11
12 from bot import constants
13 from bot.bot import Bot
14 from bot.converters import Expiry
15 from bot.exts.moderation.infraction import _utils
16 from bot.exts.moderation.infraction._scheduler import InfractionScheduler
17 from bot.utils.messages import format_user
18 from bot.utils.time import format_infraction
19
20 log = logging.getLogger(__name__)
21 NICKNAME_POLICY_URL = "https://pythondiscord.com/pages/rules/#nickname-policy"
22
23 with Path("bot/resources/stars.json").open(encoding="utf-8") as stars_file:
24 STAR_NAMES = json.load(stars_file)
25
26
27 class Superstarify(InfractionScheduler, Cog):
28 """A set of commands to moderate terrible nicknames."""
29
30 def __init__(self, bot: Bot):
31 super().__init__(bot, supported_infractions={"superstar"})
32
33 @Cog.listener()
34 async def on_member_update(self, before: Member, after: Member) -> None:
35 """Revert nickname edits if the user has an active superstarify infraction."""
36 if before.display_name == after.display_name:
37 return # User didn't change their nickname. Abort!
38
39 log.trace(
40 f"{before} ({before.display_name}) is trying to change their nickname to "
41 f"{after.display_name}. Checking if the user is in superstar-prison..."
42 )
43
44 active_superstarifies = await self.bot.api_client.get(
45 "bot/infractions",
46 params={
47 "active": "true",
48 "type": "superstar",
49 "user__id": str(before.id)
50 }
51 )
52
53 if not active_superstarifies:
54 log.trace(f"{before} has no active superstar infractions.")
55 return
56
57 infraction = active_superstarifies[0]
58 forced_nick = self.get_nick(infraction["id"], before.id)
59 if after.display_name == forced_nick:
60 return # Nick change was triggered by this event. Ignore.
61
62 log.info(
63 f"{after.display_name} ({after.id}) tried to escape superstar prison. "
64 f"Changing the nick back to {before.display_name}."
65 )
66 await after.edit(
67 nick=forced_nick,
68 reason=f"Superstarified member tried to escape the prison: {infraction['id']}"
69 )
70
71 notified = await _utils.notify_infraction(
72 user=after,
73 infr_type="Superstarify",
74 expires_at=format_infraction(infraction["expires_at"]),
75 reason=(
76 "You have tried to change your nickname on the **Python Discord** server "
77 f"from **{before.display_name}** to **{after.display_name}**, but as you "
78 "are currently in superstar-prison, you do not have permission to do so."
79 ),
80 icon_url=_utils.INFRACTION_ICONS["superstar"][0]
81 )
82
83 if not notified:
84 log.info("Failed to DM user about why they cannot change their nickname.")
85
86 @Cog.listener()
87 async def on_member_join(self, member: Member) -> None:
88 """Reapply active superstar infractions for returning members."""
89 active_superstarifies = await self.bot.api_client.get(
90 "bot/infractions",
91 params={
92 "active": "true",
93 "type": "superstar",
94 "user__id": member.id
95 }
96 )
97
98 if active_superstarifies:
99 infraction = active_superstarifies[0]
100 action = member.edit(
101 nick=self.get_nick(infraction["id"], member.id),
102 reason=f"Superstarified member tried to escape the prison: {infraction['id']}"
103 )
104
105 await self.reapply_infraction(infraction, action)
106
107 @command(name="superstarify", aliases=("force_nick", "star", "starify", "superstar"))
108 async def superstarify(
109 self,
110 ctx: Context,
111 member: Member,
112 duration: Expiry,
113 *,
114 reason: str = '',
115 ) -> None:
116 """
117 Temporarily force a random superstar name (like Taylor Swift) to be the user's nickname.
118
119 A unit of time should be appended to the duration.
120 Units (∗case-sensitive):
121 \u2003`y` - years
122 \u2003`m` - months∗
123 \u2003`w` - weeks
124 \u2003`d` - days
125 \u2003`h` - hours
126 \u2003`M` - minutes∗
127 \u2003`s` - seconds
128
129 Alternatively, an ISO 8601 timestamp can be provided for the duration.
130
131 An optional reason can be provided, which would be added to a message stating their old nickname
132 and linking to the nickname policy.
133 """
134 if await _utils.get_active_infraction(ctx, member, "superstar"):
135 return
136
137 # Post the infraction to the API
138 old_nick = member.display_name
139 infraction_reason = f'Old nickname: {old_nick}. {reason}'
140 infraction = await _utils.post_infraction(ctx, member, "superstar", infraction_reason, duration, active=True)
141 id_ = infraction["id"]
142
143 forced_nick = self.get_nick(id_, member.id)
144 expiry_str = format_infraction(infraction["expires_at"])
145
146 # Apply the infraction
147 async def action() -> None:
148 log.debug(f"Changing nickname of {member} to {forced_nick}.")
149 self.mod_log.ignore(constants.Event.member_update, member.id)
150 await member.edit(nick=forced_nick, reason=reason)
151
152 old_nick = escape_markdown(old_nick)
153 forced_nick = escape_markdown(forced_nick)
154
155 nickname_info = textwrap.dedent(f"""
156 Old nickname: `{old_nick}`
157 New nickname: `{forced_nick}`
158 """).strip()
159
160 user_message = (
161 f"Your previous nickname, **{old_nick}**, "
162 f"was so bad that we have decided to change it. "
163 f"Your new nickname will be **{forced_nick}**.\n\n"
164 "{reason}"
165 f"You will be unable to change your nickname until **{expiry_str}**. "
166 "If you're confused by this, please read our "
167 f"[official nickname policy]({NICKNAME_POLICY_URL})."
168 ).format
169
170 successful = await self.apply_infraction(
171 ctx, infraction, member, action(),
172 user_reason=user_message(reason=f'**Additional details:** {reason}\n\n' if reason else ''),
173 additional_info=nickname_info
174 )
175
176 # Send an embed with to the invoking context if superstar was successful.
177 if successful:
178 log.trace(f"Sending superstar #{id_} embed.")
179 embed = Embed(
180 title="Superstarified!",
181 colour=constants.Colours.soft_orange,
182 description=user_message(reason='')
183 )
184 await ctx.send(embed=embed)
185
186 @command(name="unsuperstarify", aliases=("release_nick", "unstar", "unstarify", "unsuperstar"))
187 async def unsuperstarify(self, ctx: Context, member: Member) -> None:
188 """Remove the superstarify infraction and allow the user to change their nickname."""
189 await self.pardon_infraction(ctx, "superstar", member)
190
191 async def _pardon_action(self, infraction: _utils.Infraction) -> t.Optional[t.Dict[str, str]]:
192 """Pardon a superstar infraction and return a log dict."""
193 if infraction["type"] != "superstar":
194 return
195
196 guild = self.bot.get_guild(constants.Guild.id)
197 user = guild.get_member(infraction["user"])
198
199 # Don't bother sending a notification if the user left the guild.
200 if not user:
201 log.debug(
202 "User left the guild and therefore won't be notified about superstar "
203 f"{infraction['id']} pardon."
204 )
205 return {}
206
207 # DM the user about the expiration.
208 notified = await _utils.notify_pardon(
209 user=user,
210 title="You are no longer superstarified",
211 content="You may now change your nickname on the server.",
212 icon_url=_utils.INFRACTION_ICONS["superstar"][1]
213 )
214
215 return {
216 "Member": format_user(user),
217 "DM": "Sent" if notified else "**Failed**"
218 }
219
220 @staticmethod
221 def get_nick(infraction_id: int, member_id: int) -> str:
222 """Randomly select a nickname from the Superstarify nickname list."""
223 log.trace(f"Choosing a random nickname for superstar #{infraction_id}.")
224
225 rng = random.Random(str(infraction_id) + str(member_id))
226 return rng.choice(STAR_NAMES)
227
228 # This cannot be static (must have a __func__ attribute).
229 async def cog_check(self, ctx: Context) -> bool:
230 """Only allow moderators to invoke the commands in this cog."""
231 return await has_any_role(*constants.MODERATION_ROLES).predicate(ctx)
232
233
234 def setup(bot: Bot) -> None:
235 """Load the Superstarify cog."""
236 bot.add_cog(Superstarify(bot))
237
[end of bot/exts/moderation/infraction/superstarify.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bot/exts/moderation/infraction/superstarify.py b/bot/exts/moderation/infraction/superstarify.py
--- a/bot/exts/moderation/infraction/superstarify.py
+++ b/bot/exts/moderation/infraction/superstarify.py
@@ -11,7 +11,7 @@
from bot import constants
from bot.bot import Bot
-from bot.converters import Expiry
+from bot.converters import Duration, Expiry
from bot.exts.moderation.infraction import _utils
from bot.exts.moderation.infraction._scheduler import InfractionScheduler
from bot.utils.messages import format_user
@@ -19,6 +19,7 @@
log = logging.getLogger(__name__)
NICKNAME_POLICY_URL = "https://pythondiscord.com/pages/rules/#nickname-policy"
+SUPERSTARIFY_DEFAULT_DURATION = "1h"
with Path("bot/resources/stars.json").open(encoding="utf-8") as stars_file:
STAR_NAMES = json.load(stars_file)
@@ -109,7 +110,7 @@
self,
ctx: Context,
member: Member,
- duration: Expiry,
+ duration: t.Optional[Expiry],
*,
reason: str = '',
) -> None:
@@ -134,6 +135,9 @@
if await _utils.get_active_infraction(ctx, member, "superstar"):
return
+ # Set to default duration if none was provided.
+ duration = duration or await Duration().convert(ctx, SUPERSTARIFY_DEFAULT_DURATION)
+
# Post the infraction to the API
old_nick = member.display_name
infraction_reason = f'Old nickname: {old_nick}. {reason}'
| {"golden_diff": "diff --git a/bot/exts/moderation/infraction/superstarify.py b/bot/exts/moderation/infraction/superstarify.py\n--- a/bot/exts/moderation/infraction/superstarify.py\n+++ b/bot/exts/moderation/infraction/superstarify.py\n@@ -11,7 +11,7 @@\n \n from bot import constants\n from bot.bot import Bot\n-from bot.converters import Expiry\n+from bot.converters import Duration, Expiry\n from bot.exts.moderation.infraction import _utils\n from bot.exts.moderation.infraction._scheduler import InfractionScheduler\n from bot.utils.messages import format_user\n@@ -19,6 +19,7 @@\n \n log = logging.getLogger(__name__)\n NICKNAME_POLICY_URL = \"https://pythondiscord.com/pages/rules/#nickname-policy\"\n+SUPERSTARIFY_DEFAULT_DURATION = \"1h\"\n \n with Path(\"bot/resources/stars.json\").open(encoding=\"utf-8\") as stars_file:\n STAR_NAMES = json.load(stars_file)\n@@ -109,7 +110,7 @@\n self,\n ctx: Context,\n member: Member,\n- duration: Expiry,\n+ duration: t.Optional[Expiry],\n *,\n reason: str = '',\n ) -> None:\n@@ -134,6 +135,9 @@\n if await _utils.get_active_infraction(ctx, member, \"superstar\"):\n return\n \n+ # Set to default duration if none was provided.\n+ duration = duration or await Duration().convert(ctx, SUPERSTARIFY_DEFAULT_DURATION)\n+\n # Post the infraction to the API\n old_nick = member.display_name\n infraction_reason = f'Old nickname: {old_nick}. {reason}'\n", "issue": "Default duration for !superstar command if no duration is specified\n## Context\r\nThe `!superstar` command currently requires a duration to be specified when invoked. \r\n\r\nOften we want to change someone's nickname from something very noisy or that takes up a lot of room until the user can respond and change it themselves. The duration in these cases is not usually that important as we want it to be a quick change.\r\n\r\n## Proposal\r\nIt would be useful if there was a short default duration for this command. Something like 5 or 15 minutes. It would function similarly to the `!mute` command where if no duration is given, it opts for a 1 hour duration.\n", "before_files": [{"content": "import json\nimport logging\nimport random\nimport textwrap\nimport typing as t\nfrom pathlib import Path\n\nfrom discord import Embed, Member\nfrom discord.ext.commands import Cog, Context, command, has_any_role\nfrom discord.utils import escape_markdown\n\nfrom bot import constants\nfrom bot.bot import Bot\nfrom bot.converters import Expiry\nfrom bot.exts.moderation.infraction import _utils\nfrom bot.exts.moderation.infraction._scheduler import InfractionScheduler\nfrom bot.utils.messages import format_user\nfrom bot.utils.time import format_infraction\n\nlog = logging.getLogger(__name__)\nNICKNAME_POLICY_URL = \"https://pythondiscord.com/pages/rules/#nickname-policy\"\n\nwith Path(\"bot/resources/stars.json\").open(encoding=\"utf-8\") as stars_file:\n STAR_NAMES = json.load(stars_file)\n\n\nclass Superstarify(InfractionScheduler, Cog):\n \"\"\"A set of commands to moderate terrible nicknames.\"\"\"\n\n def __init__(self, bot: Bot):\n super().__init__(bot, supported_infractions={\"superstar\"})\n\n @Cog.listener()\n async def on_member_update(self, before: Member, after: Member) -> None:\n \"\"\"Revert nickname edits if the user has an active superstarify infraction.\"\"\"\n if before.display_name == after.display_name:\n return # User didn't change their nickname. Abort!\n\n log.trace(\n f\"{before} ({before.display_name}) is trying to change their nickname to \"\n f\"{after.display_name}. 
Checking if the user is in superstar-prison...\"\n )\n\n active_superstarifies = await self.bot.api_client.get(\n \"bot/infractions\",\n params={\n \"active\": \"true\",\n \"type\": \"superstar\",\n \"user__id\": str(before.id)\n }\n )\n\n if not active_superstarifies:\n log.trace(f\"{before} has no active superstar infractions.\")\n return\n\n infraction = active_superstarifies[0]\n forced_nick = self.get_nick(infraction[\"id\"], before.id)\n if after.display_name == forced_nick:\n return # Nick change was triggered by this event. Ignore.\n\n log.info(\n f\"{after.display_name} ({after.id}) tried to escape superstar prison. \"\n f\"Changing the nick back to {before.display_name}.\"\n )\n await after.edit(\n nick=forced_nick,\n reason=f\"Superstarified member tried to escape the prison: {infraction['id']}\"\n )\n\n notified = await _utils.notify_infraction(\n user=after,\n infr_type=\"Superstarify\",\n expires_at=format_infraction(infraction[\"expires_at\"]),\n reason=(\n \"You have tried to change your nickname on the **Python Discord** server \"\n f\"from **{before.display_name}** to **{after.display_name}**, but as you \"\n \"are currently in superstar-prison, you do not have permission to do so.\"\n ),\n icon_url=_utils.INFRACTION_ICONS[\"superstar\"][0]\n )\n\n if not notified:\n log.info(\"Failed to DM user about why they cannot change their nickname.\")\n\n @Cog.listener()\n async def on_member_join(self, member: Member) -> None:\n \"\"\"Reapply active superstar infractions for returning members.\"\"\"\n active_superstarifies = await self.bot.api_client.get(\n \"bot/infractions\",\n params={\n \"active\": \"true\",\n \"type\": \"superstar\",\n \"user__id\": member.id\n }\n )\n\n if active_superstarifies:\n infraction = active_superstarifies[0]\n action = member.edit(\n nick=self.get_nick(infraction[\"id\"], member.id),\n reason=f\"Superstarified member tried to escape the prison: {infraction['id']}\"\n )\n\n await self.reapply_infraction(infraction, action)\n\n @command(name=\"superstarify\", aliases=(\"force_nick\", \"star\", \"starify\", \"superstar\"))\n async def superstarify(\n self,\n ctx: Context,\n member: Member,\n duration: Expiry,\n *,\n reason: str = '',\n ) -> None:\n \"\"\"\n Temporarily force a random superstar name (like Taylor Swift) to be the user's nickname.\n\n A unit of time should be appended to the duration.\n Units (\u2217case-sensitive):\n \\u2003`y` - years\n \\u2003`m` - months\u2217\n \\u2003`w` - weeks\n \\u2003`d` - days\n \\u2003`h` - hours\n \\u2003`M` - minutes\u2217\n \\u2003`s` - seconds\n\n Alternatively, an ISO 8601 timestamp can be provided for the duration.\n\n An optional reason can be provided, which would be added to a message stating their old nickname\n and linking to the nickname policy.\n \"\"\"\n if await _utils.get_active_infraction(ctx, member, \"superstar\"):\n return\n\n # Post the infraction to the API\n old_nick = member.display_name\n infraction_reason = f'Old nickname: {old_nick}. 
{reason}'\n infraction = await _utils.post_infraction(ctx, member, \"superstar\", infraction_reason, duration, active=True)\n id_ = infraction[\"id\"]\n\n forced_nick = self.get_nick(id_, member.id)\n expiry_str = format_infraction(infraction[\"expires_at\"])\n\n # Apply the infraction\n async def action() -> None:\n log.debug(f\"Changing nickname of {member} to {forced_nick}.\")\n self.mod_log.ignore(constants.Event.member_update, member.id)\n await member.edit(nick=forced_nick, reason=reason)\n\n old_nick = escape_markdown(old_nick)\n forced_nick = escape_markdown(forced_nick)\n\n nickname_info = textwrap.dedent(f\"\"\"\n Old nickname: `{old_nick}`\n New nickname: `{forced_nick}`\n \"\"\").strip()\n\n user_message = (\n f\"Your previous nickname, **{old_nick}**, \"\n f\"was so bad that we have decided to change it. \"\n f\"Your new nickname will be **{forced_nick}**.\\n\\n\"\n \"{reason}\"\n f\"You will be unable to change your nickname until **{expiry_str}**. \"\n \"If you're confused by this, please read our \"\n f\"[official nickname policy]({NICKNAME_POLICY_URL}).\"\n ).format\n\n successful = await self.apply_infraction(\n ctx, infraction, member, action(),\n user_reason=user_message(reason=f'**Additional details:** {reason}\\n\\n' if reason else ''),\n additional_info=nickname_info\n )\n\n # Send an embed with to the invoking context if superstar was successful.\n if successful:\n log.trace(f\"Sending superstar #{id_} embed.\")\n embed = Embed(\n title=\"Superstarified!\",\n colour=constants.Colours.soft_orange,\n description=user_message(reason='')\n )\n await ctx.send(embed=embed)\n\n @command(name=\"unsuperstarify\", aliases=(\"release_nick\", \"unstar\", \"unstarify\", \"unsuperstar\"))\n async def unsuperstarify(self, ctx: Context, member: Member) -> None:\n \"\"\"Remove the superstarify infraction and allow the user to change their nickname.\"\"\"\n await self.pardon_infraction(ctx, \"superstar\", member)\n\n async def _pardon_action(self, infraction: _utils.Infraction) -> t.Optional[t.Dict[str, str]]:\n \"\"\"Pardon a superstar infraction and return a log dict.\"\"\"\n if infraction[\"type\"] != \"superstar\":\n return\n\n guild = self.bot.get_guild(constants.Guild.id)\n user = guild.get_member(infraction[\"user\"])\n\n # Don't bother sending a notification if the user left the guild.\n if not user:\n log.debug(\n \"User left the guild and therefore won't be notified about superstar \"\n f\"{infraction['id']} pardon.\"\n )\n return {}\n\n # DM the user about the expiration.\n notified = await _utils.notify_pardon(\n user=user,\n title=\"You are no longer superstarified\",\n content=\"You may now change your nickname on the server.\",\n icon_url=_utils.INFRACTION_ICONS[\"superstar\"][1]\n )\n\n return {\n \"Member\": format_user(user),\n \"DM\": \"Sent\" if notified else \"**Failed**\"\n }\n\n @staticmethod\n def get_nick(infraction_id: int, member_id: int) -> str:\n \"\"\"Randomly select a nickname from the Superstarify nickname list.\"\"\"\n log.trace(f\"Choosing a random nickname for superstar #{infraction_id}.\")\n\n rng = random.Random(str(infraction_id) + str(member_id))\n return rng.choice(STAR_NAMES)\n\n # This cannot be static (must have a __func__ attribute).\n async def cog_check(self, ctx: Context) -> bool:\n \"\"\"Only allow moderators to invoke the commands in this cog.\"\"\"\n return await has_any_role(*constants.MODERATION_ROLES).predicate(ctx)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Superstarify cog.\"\"\"\n bot.add_cog(Superstarify(bot))\n", 
"path": "bot/exts/moderation/infraction/superstarify.py"}]} | 3,327 | 388 |
gh_patches_debug_1501 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-6332 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 'api_core-1.6.0a2'? or 'api_core-1.5.1'?
I'd like to use the changes from https://github.com/googleapis/google-cloud-python/pull/6310 in the library I'm working on.
Not sure about the version number for that one, since:
- I don't know what else has been released since 1.6.0a1
- I don't know what is intended by 1.6.0a1 in https://github.com/googleapis/google-cloud-python/pull/6267 (is it a pre-release?)
</issue>
<code>
[start of api_core/setup.py]
1 # Copyright 2018 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import io
16 import os
17
18 import setuptools
19
20
21 # Package metadata.
22
23 name = 'google-api-core'
24 description = 'Google API client core library'
25 version = '1.6.0a1'
26 # Should be one of:
27 # 'Development Status :: 3 - Alpha'
28 # 'Development Status :: 4 - Beta'
29 # 'Development Status :: 5 - Production/Stable'
30 release_status = 'Development Status :: 5 - Production/Stable'
31 dependencies = [
32 'googleapis-common-protos<2.0dev,>=1.5.3',
33 'protobuf>=3.4.0',
34 'google-auth<2.0.0dev,>=0.4.0',
35 'requests<3.0.0dev,>=2.18.0',
36 'setuptools>=34.0.0',
37 'six>=1.10.0',
38 'pytz',
39 'futures>=3.2.0;python_version<"3.2"'
40 ]
41 extras = {
42 'grpc': 'grpcio>=1.8.2',
43 'grpcio-gcp': 'grpcio-gcp>=0.2.2'
44 }
45
46
47 # Setup boilerplate below this line.
48
49 package_root = os.path.abspath(os.path.dirname(__file__))
50
51 readme_filename = os.path.join(package_root, 'README.rst')
52 with io.open(readme_filename, encoding='utf-8') as readme_file:
53 readme = readme_file.read()
54
55 # Only include packages under the 'google' namespace. Do not include tests,
56 # benchmarks, etc.
57 packages = [
58 package for package in setuptools.find_packages()
59 if package.startswith('google')]
60
61 # Determine which namespaces are needed.
62 namespaces = ['google']
63 if 'google.cloud' in packages:
64 namespaces.append('google.cloud')
65
66
67 setuptools.setup(
68 name=name,
69 version=version,
70 description=description,
71 long_description=readme,
72 author='Google LLC',
73 author_email='[email protected]',
74 license='Apache 2.0',
75 url='https://github.com/GoogleCloudPlatform/google-cloud-python',
76 classifiers=[
77 release_status,
78 'Intended Audience :: Developers',
79 'License :: OSI Approved :: Apache Software License',
80 'Programming Language :: Python',
81 'Programming Language :: Python :: 2',
82 'Programming Language :: Python :: 2.7',
83 'Programming Language :: Python :: 3',
84 'Programming Language :: Python :: 3.5',
85 'Programming Language :: Python :: 3.6',
86 'Programming Language :: Python :: 3.7',
87 'Operating System :: OS Independent',
88 'Topic :: Internet',
89 ],
90 platforms='Posix; MacOS X; Windows',
91 packages=packages,
92 namespace_packages=namespaces,
93 install_requires=dependencies,
94 extras_require=extras,
95 include_package_data=True,
96 zip_safe=False,
97 )
98
[end of api_core/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api_core/setup.py b/api_core/setup.py
--- a/api_core/setup.py
+++ b/api_core/setup.py
@@ -22,7 +22,7 @@
name = 'google-api-core'
description = 'Google API client core library'
-version = '1.6.0a1'
+version = '1.5.1'
# Should be one of:
# 'Development Status :: 3 - Alpha'
# 'Development Status :: 4 - Beta'
| {"golden_diff": "diff --git a/api_core/setup.py b/api_core/setup.py\n--- a/api_core/setup.py\n+++ b/api_core/setup.py\n@@ -22,7 +22,7 @@\n \n name = 'google-api-core'\n description = 'Google API client core library'\n-version = '1.6.0a1'\n+version = '1.5.1'\n # Should be one of:\n # 'Development Status :: 3 - Alpha'\n # 'Development Status :: 4 - Beta'\n", "issue": "Release 'api_core-1.6.0a2'? or 'api_core-1.5.1'?\nI'd like to use the changes from https://github.com/googleapis/google-cloud-python/pull/6310 in the library I'm working on.\r\n\r\nNot sure about the version number for that one, since:\r\n\r\n- I don't know what else has been released since 1.6.0a1\r\n- I don't know what is intended by 1.6.0a1 in https://github.com/googleapis/google-cloud-python/pull/6267 (is it a pre-release?)\n", "before_files": [{"content": "# Copyright 2018 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport io\nimport os\n\nimport setuptools\n\n\n# Package metadata.\n\nname = 'google-api-core'\ndescription = 'Google API client core library'\nversion = '1.6.0a1'\n# Should be one of:\n# 'Development Status :: 3 - Alpha'\n# 'Development Status :: 4 - Beta'\n# 'Development Status :: 5 - Production/Stable'\nrelease_status = 'Development Status :: 5 - Production/Stable'\ndependencies = [\n 'googleapis-common-protos<2.0dev,>=1.5.3',\n 'protobuf>=3.4.0',\n 'google-auth<2.0.0dev,>=0.4.0',\n 'requests<3.0.0dev,>=2.18.0',\n 'setuptools>=34.0.0',\n 'six>=1.10.0',\n 'pytz',\n 'futures>=3.2.0;python_version<\"3.2\"'\n]\nextras = {\n 'grpc': 'grpcio>=1.8.2',\n 'grpcio-gcp': 'grpcio-gcp>=0.2.2'\n}\n\n\n# Setup boilerplate below this line.\n\npackage_root = os.path.abspath(os.path.dirname(__file__))\n\nreadme_filename = os.path.join(package_root, 'README.rst')\nwith io.open(readme_filename, encoding='utf-8') as readme_file:\n readme = readme_file.read()\n\n# Only include packages under the 'google' namespace. 
Do not include tests,\n# benchmarks, etc.\npackages = [\n package for package in setuptools.find_packages()\n if package.startswith('google')]\n\n# Determine which namespaces are needed.\nnamespaces = ['google']\nif 'google.cloud' in packages:\n namespaces.append('google.cloud')\n\n\nsetuptools.setup(\n name=name,\n version=version,\n description=description,\n long_description=readme,\n author='Google LLC',\n author_email='[email protected]',\n license='Apache 2.0',\n url='https://github.com/GoogleCloudPlatform/google-cloud-python',\n classifiers=[\n release_status,\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: Apache Software License',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Operating System :: OS Independent',\n 'Topic :: Internet',\n ],\n platforms='Posix; MacOS X; Windows',\n packages=packages,\n namespace_packages=namespaces,\n install_requires=dependencies,\n extras_require=extras,\n include_package_data=True,\n zip_safe=False,\n)\n", "path": "api_core/setup.py"}]} | 1,594 | 104 |
gh_patches_debug_4670 | rasdani/github-patches | git_diff | DDMAL__CantusDB-1215 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
typo in "Bibliography"
Source view has "bibilography" and not bibliography:
<img width="250" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/42a71b35-3598-4e66-abd2-1830d44a9ce6">
Source edit seems to be correct though (https://cantusdatabase.org/edit-source/711225):
<img width="202" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/ac6d001e-c2cb-49af-9e77-a4953bc657a7">
</issue>
<code>
[start of django/cantusdb_project/main_app/admin.py]
1 from django.contrib import admin
2 from reversion.admin import VersionAdmin
3 from main_app.models import *
4 from main_app.forms import (
5 AdminCenturyForm,
6 AdminChantForm,
7 AdminFeastForm,
8 AdminGenreForm,
9 AdminNotationForm,
10 AdminOfficeForm,
11 AdminProvenanceForm,
12 AdminRismSiglumForm,
13 AdminSegmentForm,
14 AdminSequenceForm,
15 AdminSourceForm,
16 )
17
18 # these fields should not be editable by all classes
19 EXCLUDE = (
20 "json_info",
21 )
22
23 READ_ONLY = (
24 "created_by",
25 "last_updated_by",
26 )
27
28 class BaseModelAdmin(VersionAdmin):
29 exclude = EXCLUDE
30 readonly_fields = READ_ONLY
31
32 # if an object is created in the admin interface, assign the user to the created_by field
33 # else if an object is updated in the admin interface, assign the user to the last_updated_by field
34 def save_model(self, request, obj, form, change):
35 if change:
36 obj.last_updated_by = request.user
37 else:
38 obj.created_by = request.user
39 super().save_model(request, obj, form, change)
40
41
42 class CenturyAdmin(BaseModelAdmin):
43 search_fields = ("name",)
44 form = AdminCenturyForm
45
46
47 class ChantAdmin(BaseModelAdmin):
48 @admin.display(description="Source Siglum")
49 def get_source_siglum(self, obj):
50 if obj.source:
51 return obj.source.siglum
52
53 list_display = (
54 "incipit",
55 "get_source_siglum",
56 "genre",
57 )
58 search_fields = (
59 "title",
60 "incipit",
61 "cantus_id",
62 "id",
63 )
64
65 readonly_fields = READ_ONLY + (
66 "date_created",
67 "date_updated",
68 )
69
70 list_filter = (
71 "genre",
72 "office",
73 )
74 exclude = EXCLUDE + (
75 "col1",
76 "col2",
77 "col3",
78 "next_chant",
79 "s_sequence",
80 "is_last_chant_in_feast",
81 "visible_status",
82 "date",
83 "volpiano_notes",
84 "volpiano_intervals",
85 "title",
86 "differentiae_database",
87 )
88 form = AdminChantForm
89 raw_id_fields = (
90 "source",
91 "feast",
92 )
93 ordering = ("source__siglum",)
94
95
96 class DifferentiaAdmin(BaseModelAdmin):
97 search_fields = (
98 "differentia_id",
99 "id",
100 )
101
102
103 class FeastAdmin(BaseModelAdmin):
104 search_fields = (
105 "name",
106 "feast_code",
107 )
108 list_display = (
109 "name",
110 "month",
111 "day",
112 "feast_code",
113 )
114 form = AdminFeastForm
115
116
117 class GenreAdmin(BaseModelAdmin):
118 search_fields = ("name",)
119 form = AdminGenreForm
120
121
122 class NotationAdmin(BaseModelAdmin):
123 search_fields = ("name",)
124 form = AdminNotationForm
125
126
127 class OfficeAdmin(BaseModelAdmin):
128 search_fields = ("name",)
129 form = AdminOfficeForm
130
131
132 class ProvenanceAdmin(BaseModelAdmin):
133 search_fields = ("name",)
134 form = AdminProvenanceForm
135
136
137 class RismSiglumAdmin(BaseModelAdmin):
138 search_fields = ("name",)
139 form = AdminRismSiglumForm
140
141
142 class SegmentAdmin(BaseModelAdmin):
143 search_fields = ("name",)
144 form = AdminSegmentForm
145
146
147 class SequenceAdmin(BaseModelAdmin):
148 @admin.display(description="Source Siglum")
149 def get_source_siglum(self, obj):
150 if obj.source:
151 return obj.source.siglum
152
153 search_fields = (
154 "title",
155 "incipit",
156 "cantus_id",
157 "id",
158 )
159 exclude = EXCLUDE + (
160 "c_sequence",
161 "next_chant",
162 "is_last_chant_in_feast",
163 "visible_status",
164 )
165 list_display = ("incipit", "get_source_siglum", "genre")
166 list_filter = (
167 "genre",
168 "office",
169 )
170 raw_id_fields = (
171 "source",
172 "feast",
173 )
174 ordering = ("source__siglum",)
175 form = AdminSequenceForm
176
177
178 class SourceAdmin(BaseModelAdmin):
179 exclude = EXCLUDE + ("source_status",)
180
181 # These search fields are also available on the user-source inline relationship in the user admin page
182 search_fields = (
183 "siglum",
184 "title",
185 "id",
186 )
187 readonly_fields = READ_ONLY + (
188 "number_of_chants",
189 "number_of_melodies",
190 "date_created",
191 "date_updated",
192 )
193 # from the Django docs:
194 # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript βfilterβ interface
195 # that allows searching within the options. The unselected and selected options appear in two boxes side by side.
196 filter_horizontal = (
197 "century",
198 "notation",
199 "current_editors",
200 "inventoried_by",
201 "full_text_entered_by",
202 "melodies_entered_by",
203 "proofreaders",
204 "other_editors",
205 )
206
207 list_display = (
208 "title",
209 "siglum",
210 "id",
211 )
212
213 list_filter = (
214 "full_source",
215 "segment",
216 "source_status",
217 "published",
218 "century",
219 )
220
221 ordering = ("siglum",)
222
223 form = AdminSourceForm
224
225
226 admin.site.register(Century, CenturyAdmin)
227 admin.site.register(Chant, ChantAdmin)
228 admin.site.register(Differentia, DifferentiaAdmin)
229 admin.site.register(Feast, FeastAdmin)
230 admin.site.register(Genre, GenreAdmin)
231 admin.site.register(Notation, NotationAdmin)
232 admin.site.register(Office, OfficeAdmin)
233 admin.site.register(Provenance, ProvenanceAdmin)
234 admin.site.register(RismSiglum, RismSiglumAdmin)
235 admin.site.register(Segment, SegmentAdmin)
236 admin.site.register(Sequence, SequenceAdmin)
237 admin.site.register(Source, SourceAdmin)
238
[end of django/cantusdb_project/main_app/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py
--- a/django/cantusdb_project/main_app/admin.py
+++ b/django/cantusdb_project/main_app/admin.py
@@ -16,15 +16,14 @@
)
# these fields should not be editable by all classes
-EXCLUDE = (
- "json_info",
-)
+EXCLUDE = ("json_info",)
READ_ONLY = (
"created_by",
"last_updated_by",
)
+
class BaseModelAdmin(VersionAdmin):
exclude = EXCLUDE
readonly_fields = READ_ONLY
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/admin.py b/django/cantusdb_project/main_app/admin.py\n--- a/django/cantusdb_project/main_app/admin.py\n+++ b/django/cantusdb_project/main_app/admin.py\n@@ -16,15 +16,14 @@\n )\n \n # these fields should not be editable by all classes\n-EXCLUDE = (\n- \"json_info\",\n-)\n+EXCLUDE = (\"json_info\",)\n \n READ_ONLY = (\n \"created_by\",\n \"last_updated_by\",\n )\n \n+\n class BaseModelAdmin(VersionAdmin):\n exclude = EXCLUDE\n readonly_fields = READ_ONLY\n", "issue": "typo in \"Bibliography\" \nSource view has \"bibilography\" and not bibliography:\r\n<img width=\"250\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/42a71b35-3598-4e66-abd2-1830d44a9ce6\">\r\nSource edit seems to be correct though (https://cantusdatabase.org/edit-source/711225):\r\n<img width=\"202\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/ac6d001e-c2cb-49af-9e77-a4953bc657a7\">\r\n\n", "before_files": [{"content": "from django.contrib import admin\nfrom reversion.admin import VersionAdmin\nfrom main_app.models import *\nfrom main_app.forms import (\n AdminCenturyForm,\n AdminChantForm,\n AdminFeastForm,\n AdminGenreForm,\n AdminNotationForm,\n AdminOfficeForm,\n AdminProvenanceForm,\n AdminRismSiglumForm,\n AdminSegmentForm,\n AdminSequenceForm,\n AdminSourceForm,\n)\n\n# these fields should not be editable by all classes\nEXCLUDE = (\n \"json_info\",\n)\n\nREAD_ONLY = (\n \"created_by\",\n \"last_updated_by\",\n)\n\nclass BaseModelAdmin(VersionAdmin):\n exclude = EXCLUDE\n readonly_fields = READ_ONLY\n\n # if an object is created in the admin interface, assign the user to the created_by field\n # else if an object is updated in the admin interface, assign the user to the last_updated_by field\n def save_model(self, request, obj, form, change):\n if change:\n obj.last_updated_by = request.user\n else:\n obj.created_by = request.user\n super().save_model(request, obj, form, change)\n\n\nclass CenturyAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminCenturyForm\n\n\nclass ChantAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n list_display = (\n \"incipit\",\n \"get_source_siglum\",\n \"genre\",\n )\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n\n readonly_fields = READ_ONLY + (\n \"date_created\",\n \"date_updated\",\n )\n\n list_filter = (\n \"genre\",\n \"office\",\n )\n exclude = EXCLUDE + (\n \"col1\",\n \"col2\",\n \"col3\",\n \"next_chant\",\n \"s_sequence\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n \"date\",\n \"volpiano_notes\",\n \"volpiano_intervals\",\n \"title\",\n \"differentiae_database\",\n )\n form = AdminChantForm\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n\n\nclass DifferentiaAdmin(BaseModelAdmin):\n search_fields = (\n \"differentia_id\",\n \"id\",\n )\n\n\nclass FeastAdmin(BaseModelAdmin):\n search_fields = (\n \"name\",\n \"feast_code\",\n )\n list_display = (\n \"name\",\n \"month\",\n \"day\",\n \"feast_code\",\n )\n form = AdminFeastForm\n\n\nclass GenreAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminGenreForm\n\n\nclass NotationAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminNotationForm\n\n\nclass OfficeAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminOfficeForm\n\n\nclass ProvenanceAdmin(BaseModelAdmin):\n search_fields 
= (\"name\",)\n form = AdminProvenanceForm\n\n\nclass RismSiglumAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminRismSiglumForm\n\n\nclass SegmentAdmin(BaseModelAdmin):\n search_fields = (\"name\",)\n form = AdminSegmentForm\n\n\nclass SequenceAdmin(BaseModelAdmin):\n @admin.display(description=\"Source Siglum\")\n def get_source_siglum(self, obj):\n if obj.source:\n return obj.source.siglum\n\n search_fields = (\n \"title\",\n \"incipit\",\n \"cantus_id\",\n \"id\",\n )\n exclude = EXCLUDE + (\n \"c_sequence\",\n \"next_chant\",\n \"is_last_chant_in_feast\",\n \"visible_status\",\n )\n list_display = (\"incipit\", \"get_source_siglum\", \"genre\")\n list_filter = (\n \"genre\",\n \"office\",\n )\n raw_id_fields = (\n \"source\",\n \"feast\",\n )\n ordering = (\"source__siglum\",)\n form = AdminSequenceForm\n\n\nclass SourceAdmin(BaseModelAdmin):\n exclude = EXCLUDE + (\"source_status\",)\n\n # These search fields are also available on the user-source inline relationship in the user admin page\n search_fields = (\n \"siglum\",\n \"title\",\n \"id\",\n )\n readonly_fields = READ_ONLY + (\n \"number_of_chants\",\n \"number_of_melodies\",\n \"date_created\",\n \"date_updated\",\n )\n # from the Django docs:\n # Adding a ManyToManyField to this list will instead use a nifty unobtrusive JavaScript \u201cfilter\u201d interface\n # that allows searching within the options. The unselected and selected options appear in two boxes side by side.\n filter_horizontal = (\n \"century\",\n \"notation\",\n \"current_editors\",\n \"inventoried_by\",\n \"full_text_entered_by\",\n \"melodies_entered_by\",\n \"proofreaders\",\n \"other_editors\",\n )\n\n list_display = (\n \"title\",\n \"siglum\",\n \"id\",\n )\n\n list_filter = (\n \"full_source\",\n \"segment\",\n \"source_status\",\n \"published\",\n \"century\",\n )\n\n ordering = (\"siglum\",)\n\n form = AdminSourceForm\n\n\nadmin.site.register(Century, CenturyAdmin)\nadmin.site.register(Chant, ChantAdmin)\nadmin.site.register(Differentia, DifferentiaAdmin)\nadmin.site.register(Feast, FeastAdmin)\nadmin.site.register(Genre, GenreAdmin)\nadmin.site.register(Notation, NotationAdmin)\nadmin.site.register(Office, OfficeAdmin)\nadmin.site.register(Provenance, ProvenanceAdmin)\nadmin.site.register(RismSiglum, RismSiglumAdmin)\nadmin.site.register(Segment, SegmentAdmin)\nadmin.site.register(Sequence, SequenceAdmin)\nadmin.site.register(Source, SourceAdmin)\n", "path": "django/cantusdb_project/main_app/admin.py"}]} | 2,677 | 144 |
gh_patches_debug_29753 | rasdani/github-patches | git_diff | google__clusterfuzz-2642 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Endpoints "/fuzz-strategy-selection" and "/fuzzer-and-job-weights" under cron-service failed.
Hi,
Deploy was done from the master, not from the latest tag.
The reason in both cases is the same:
**"Table afl_stats.TestcaseRun does not have a schema.">. Retrying again.**
Cron-jobs:
Below details for an endpoint "/fuzz-strategy-selection":

Stacktrace:
```
<HttpError 400 when requesting https://bigquery.googleapis.com/bigquery/v2/projects/sta-np-11528163/queries?alt=json returned "Table afl_stats.TestcaseRun does not have a schema.">
Traceback (most recent call last):
File "/srv/handlers/base_handler.py", line 277, in dispatch_request
return super(Handler, self).dispatch_request(*args, **kwargs)
File "third_party/flask/views.py", line 163, in dispatch_request
return meth(*args, **kwargs)
File "/srv/libs/handler.py", line 100, in wrapper
result = func(self)
File "/srv/handlers/cron/fuzz_strategy_selection.py", line 218, in get
_query_and_upload_strategy_probabilities(engine)
File "/srv/handlers/cron/fuzz_strategy_selection.py", line 185, in _query_and_upload_strategy_probabilities
data = _query_multi_armed_bandit_probabilities(engine)
File "/srv/handlers/cron/fuzz_strategy_selection.py", line 151, in _query_multi_armed_bandit_probabilities
return client.query(query=formatted_query).rows
File "/srv/clusterfuzz/_internal/google_cloud_utils/big_query.py", line 247, in query
result = self.raw_query(query, max_results=0)
File "/srv/clusterfuzz/_internal/base/retry.py", line 88, in _wrapper
result = func(*args, **kwargs)
File "/srv/clusterfuzz/_internal/google_cloud_utils/big_query.py", line 190, in raw_query
projectId=self.project_id, body=body).execute()
File "third_party/googleapiclient/_helpers.py", line 134, in positional_wrapper
return wrapped(*args, **kwargs)
File "third_party/googleapiclient/http.py", line 907, in execute
raise HttpError(resp, content, uri=self.uri)
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://bigquery.googleapis.com/bigquery/v2/projects/sta-np-11528163/queries?alt=json returned "Table afl_stats.TestcaseRun does not have a schema.">
```
**Is it normal behaviour?**
**Do you have any idea what can be wrong? How to fix it?**
So far we have only tested libfuzzer, afl wasn't touched yet.
Thanks,
</issue>
<code>
[start of src/appengine/handlers/cron/load_bigquery_stats.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Handler used for loading bigquery data."""
15
16 import datetime
17 import random
18 import time
19
20 from googleapiclient.errors import HttpError
21 import httplib2
22
23 from clusterfuzz._internal.base import utils
24 from clusterfuzz._internal.datastore import data_types
25 from clusterfuzz._internal.google_cloud_utils import big_query
26 from clusterfuzz._internal.metrics import fuzzer_stats
27 from clusterfuzz._internal.metrics import fuzzer_stats_schema
28 from clusterfuzz._internal.metrics import logs
29 from handlers import base_handler
30 from libs import handler
31
32 STATS_KINDS = [fuzzer_stats.JobRun, fuzzer_stats.TestcaseRun]
33
34 NUM_RETRIES = 2
35 RETRY_SLEEP_TIME = 5
36
37
38 class Handler(base_handler.Handler):
39 """Cron handler for loading bigquery stats."""
40
41 def _utc_now(self):
42 """Return datetime.datetime.utcnow()."""
43 return datetime.datetime.utcnow()
44
45 def _execute_insert_request(self, request):
46 """Executes a table/dataset insert request, retrying on transport errors."""
47 for i in range(NUM_RETRIES + 1):
48 try:
49 request.execute()
50 return True
51 except HttpError as e:
52 if e.resp.status == 409:
53 # Already exists.
54 return True
55
56 logs.log_error('Failed to insert table/dataset.')
57 return False
58 except httplib2.HttpLib2Error:
59 # Transport error.
60 time.sleep(random.uniform(0, (1 << i) * RETRY_SLEEP_TIME))
61 continue
62
63 logs.log_error('Failed to insert table/dataset.')
64 return False
65
66 def _create_dataset_if_needed(self, bigquery, dataset_id):
67 """Create a new dataset if necessary."""
68 project_id = utils.get_application_id()
69 dataset_body = {
70 'datasetReference': {
71 'datasetId': dataset_id,
72 'projectId': project_id,
73 },
74 }
75 dataset_insert = bigquery.datasets().insert(
76 projectId=project_id, body=dataset_body)
77
78 return self._execute_insert_request(dataset_insert)
79
80 def _create_table_if_needed(self, bigquery, dataset_id, table_id):
81 """Create a new table if needed."""
82 project_id = utils.get_application_id()
83 table_body = {
84 'tableReference': {
85 'datasetId': dataset_id,
86 'projectId': project_id,
87 'tableId': table_id,
88 },
89 'timePartitioning': {
90 'type': 'DAY',
91 },
92 }
93
94 table_insert = bigquery.tables().insert(
95 projectId=project_id, datasetId=dataset_id, body=table_body)
96 return self._execute_insert_request(table_insert)
97
98 def _load_data(self, bigquery, fuzzer):
99 """Load yesterday's stats into BigQuery."""
100 project_id = utils.get_application_id()
101
102 yesterday = (self._utc_now().date() - datetime.timedelta(days=1))
103 date_string = yesterday.strftime('%Y%m%d')
104 timestamp = utils.utc_date_to_timestamp(yesterday)
105
106 dataset_id = fuzzer_stats.dataset_name(fuzzer)
107 if not self._create_dataset_if_needed(bigquery, dataset_id):
108 return
109
110 for kind in STATS_KINDS:
111 kind_name = kind.__name__
112 table_id = kind_name
113 if not self._create_table_if_needed(bigquery, dataset_id, table_id):
114 continue
115
116 if kind == fuzzer_stats.TestcaseRun:
117 schema = fuzzer_stats_schema.get(fuzzer)
118 else:
119 schema = kind.SCHEMA
120
121 gcs_path = fuzzer_stats.get_gcs_stats_path(kind_name, fuzzer, timestamp)
122 load = {
123 'destinationTable': {
124 'projectId': project_id,
125 'tableId': table_id + '$' + date_string,
126 'datasetId': dataset_id,
127 },
128 'schemaUpdateOptions': ['ALLOW_FIELD_ADDITION',],
129 'sourceFormat': 'NEWLINE_DELIMITED_JSON',
130 'sourceUris': ['gs:/' + gcs_path + '*.json'],
131 'writeDisposition': 'WRITE_TRUNCATE',
132 }
133 if schema is not None:
134 load['schema'] = schema
135
136 job_body = {
137 'configuration': {
138 'load': load,
139 },
140 }
141
142 logs.log("Uploading job to BigQuery.", job_body=job_body)
143 request = bigquery.jobs().insert(projectId=project_id, body=job_body)
144 response = request.execute()
145
146 # We cannot really check the response here, as the query might be still
147 # running, but having a BigQuery jobId in the log would make our life
148 # simpler if we ever have to manually check the status of the query.
149 # See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query.
150 logs.log('Response from BigQuery: %s' % response)
151
152 @handler.cron()
153 def get(self):
154 """Load bigquery stats from GCS."""
155 if not big_query.get_bucket():
156 logs.log_error('Loading stats to BigQuery failed: missing bucket name.')
157 return
158
159 # Retrieve list of fuzzers before iterating them, since the query can expire
160 # as we create the load jobs.
161 bigquery_client = big_query.get_api_client()
162 for fuzzer in list(data_types.Fuzzer.query()):
163 logs.log('Loading stats to BigQuery for %s.' % fuzzer.name)
164 self._load_data(bigquery_client, fuzzer.name)
165
[end of src/appengine/handlers/cron/load_bigquery_stats.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/appengine/handlers/cron/load_bigquery_stats.py b/src/appengine/handlers/cron/load_bigquery_stats.py
--- a/src/appengine/handlers/cron/load_bigquery_stats.py
+++ b/src/appengine/handlers/cron/load_bigquery_stats.py
@@ -77,7 +77,7 @@
return self._execute_insert_request(dataset_insert)
- def _create_table_if_needed(self, bigquery, dataset_id, table_id):
+ def _create_table_if_needed(self, bigquery, dataset_id, table_id, schema):
"""Create a new table if needed."""
project_id = utils.get_application_id()
table_body = {
@@ -91,6 +91,9 @@
},
}
+ if schema is not None:
+ table_body['schema'] = schema
+
table_insert = bigquery.tables().insert(
projectId=project_id, datasetId=dataset_id, body=table_body)
return self._execute_insert_request(table_insert)
@@ -110,14 +113,16 @@
for kind in STATS_KINDS:
kind_name = kind.__name__
table_id = kind_name
- if not self._create_table_if_needed(bigquery, dataset_id, table_id):
- continue
if kind == fuzzer_stats.TestcaseRun:
schema = fuzzer_stats_schema.get(fuzzer)
else:
schema = kind.SCHEMA
+ if not self._create_table_if_needed(bigquery, dataset_id, table_id,
+ schema):
+ continue
+
gcs_path = fuzzer_stats.get_gcs_stats_path(kind_name, fuzzer, timestamp)
load = {
'destinationTable': {
| {"golden_diff": "diff --git a/src/appengine/handlers/cron/load_bigquery_stats.py b/src/appengine/handlers/cron/load_bigquery_stats.py\n--- a/src/appengine/handlers/cron/load_bigquery_stats.py\n+++ b/src/appengine/handlers/cron/load_bigquery_stats.py\n@@ -77,7 +77,7 @@\n \n return self._execute_insert_request(dataset_insert)\n \n- def _create_table_if_needed(self, bigquery, dataset_id, table_id):\n+ def _create_table_if_needed(self, bigquery, dataset_id, table_id, schema):\n \"\"\"Create a new table if needed.\"\"\"\n project_id = utils.get_application_id()\n table_body = {\n@@ -91,6 +91,9 @@\n },\n }\n \n+ if schema is not None:\n+ table_body['schema'] = schema\n+\n table_insert = bigquery.tables().insert(\n projectId=project_id, datasetId=dataset_id, body=table_body)\n return self._execute_insert_request(table_insert)\n@@ -110,14 +113,16 @@\n for kind in STATS_KINDS:\n kind_name = kind.__name__\n table_id = kind_name\n- if not self._create_table_if_needed(bigquery, dataset_id, table_id):\n- continue\n \n if kind == fuzzer_stats.TestcaseRun:\n schema = fuzzer_stats_schema.get(fuzzer)\n else:\n schema = kind.SCHEMA\n \n+ if not self._create_table_if_needed(bigquery, dataset_id, table_id,\n+ schema):\n+ continue\n+\n gcs_path = fuzzer_stats.get_gcs_stats_path(kind_name, fuzzer, timestamp)\n load = {\n 'destinationTable': {\n", "issue": "Endpoints \"/fuzz-strategy-selection\" and \"/fuzzer-and-job-weights\" under cron-service failed. \nHi,\r\nDeploy was done from the master, not from the latest tag. \r\n\r\n\r\nThe reason in both cases is the same:\r\n**\"Table afl_stats.TestcaseRun does not have a schema.\">. Retrying again.**\r\n\r\nCron-jobs:\r\n\r\nBelow details for an endpoint \"/fuzz-strategy-selection\":\r\n\r\n\r\nStacktrace:\r\n```\r\n<HttpError 400 when requesting https://bigquery.googleapis.com/bigquery/v2/projects/sta-np-11528163/queries?alt=json returned \"Table afl_stats.TestcaseRun does not have a schema.\">\r\nTraceback (most recent call last):\r\n File \"/srv/handlers/base_handler.py\", line 277, in dispatch_request\r\n return super(Handler, self).dispatch_request(*args, **kwargs)\r\n File \"third_party/flask/views.py\", line 163, in dispatch_request\r\n return meth(*args, **kwargs)\r\n File \"/srv/libs/handler.py\", line 100, in wrapper\r\n result = func(self)\r\n File \"/srv/handlers/cron/fuzz_strategy_selection.py\", line 218, in get\r\n _query_and_upload_strategy_probabilities(engine)\r\n File \"/srv/handlers/cron/fuzz_strategy_selection.py\", line 185, in _query_and_upload_strategy_probabilities\r\n data = _query_multi_armed_bandit_probabilities(engine)\r\n File \"/srv/handlers/cron/fuzz_strategy_selection.py\", line 151, in _query_multi_armed_bandit_probabilities\r\n return client.query(query=formatted_query).rows\r\n File \"/srv/clusterfuzz/_internal/google_cloud_utils/big_query.py\", line 247, in query\r\n result = self.raw_query(query, max_results=0)\r\n File \"/srv/clusterfuzz/_internal/base/retry.py\", line 88, in _wrapper\r\n result = func(*args, **kwargs)\r\n File \"/srv/clusterfuzz/_internal/google_cloud_utils/big_query.py\", line 190, in raw_query\r\n projectId=self.project_id, body=body).execute()\r\n File \"third_party/googleapiclient/_helpers.py\", line 134, in positional_wrapper\r\n return wrapped(*args, **kwargs)\r\n File \"third_party/googleapiclient/http.py\", line 907, in execute\r\n raise HttpError(resp, content, uri=self.uri)\r\ngoogleapiclient.errors.HttpError: <HttpError 400 when requesting 
https://bigquery.googleapis.com/bigquery/v2/projects/sta-np-11528163/queries?alt=json returned \"Table afl_stats.TestcaseRun does not have a schema.\">\r\n```\r\n\r\n**Is it normal behaviour?**\r\n\r\n**Do you have any idea what can be wrong? How to fix it?**\r\n\r\nSo far we have only tested libfuzzer, afl wasn't touched yet.\r\n\r\nThanks,\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Handler used for loading bigquery data.\"\"\"\n\nimport datetime\nimport random\nimport time\n\nfrom googleapiclient.errors import HttpError\nimport httplib2\n\nfrom clusterfuzz._internal.base import utils\nfrom clusterfuzz._internal.datastore import data_types\nfrom clusterfuzz._internal.google_cloud_utils import big_query\nfrom clusterfuzz._internal.metrics import fuzzer_stats\nfrom clusterfuzz._internal.metrics import fuzzer_stats_schema\nfrom clusterfuzz._internal.metrics import logs\nfrom handlers import base_handler\nfrom libs import handler\n\nSTATS_KINDS = [fuzzer_stats.JobRun, fuzzer_stats.TestcaseRun]\n\nNUM_RETRIES = 2\nRETRY_SLEEP_TIME = 5\n\n\nclass Handler(base_handler.Handler):\n \"\"\"Cron handler for loading bigquery stats.\"\"\"\n\n def _utc_now(self):\n \"\"\"Return datetime.datetime.utcnow().\"\"\"\n return datetime.datetime.utcnow()\n\n def _execute_insert_request(self, request):\n \"\"\"Executes a table/dataset insert request, retrying on transport errors.\"\"\"\n for i in range(NUM_RETRIES + 1):\n try:\n request.execute()\n return True\n except HttpError as e:\n if e.resp.status == 409:\n # Already exists.\n return True\n\n logs.log_error('Failed to insert table/dataset.')\n return False\n except httplib2.HttpLib2Error:\n # Transport error.\n time.sleep(random.uniform(0, (1 << i) * RETRY_SLEEP_TIME))\n continue\n\n logs.log_error('Failed to insert table/dataset.')\n return False\n\n def _create_dataset_if_needed(self, bigquery, dataset_id):\n \"\"\"Create a new dataset if necessary.\"\"\"\n project_id = utils.get_application_id()\n dataset_body = {\n 'datasetReference': {\n 'datasetId': dataset_id,\n 'projectId': project_id,\n },\n }\n dataset_insert = bigquery.datasets().insert(\n projectId=project_id, body=dataset_body)\n\n return self._execute_insert_request(dataset_insert)\n\n def _create_table_if_needed(self, bigquery, dataset_id, table_id):\n \"\"\"Create a new table if needed.\"\"\"\n project_id = utils.get_application_id()\n table_body = {\n 'tableReference': {\n 'datasetId': dataset_id,\n 'projectId': project_id,\n 'tableId': table_id,\n },\n 'timePartitioning': {\n 'type': 'DAY',\n },\n }\n\n table_insert = bigquery.tables().insert(\n projectId=project_id, datasetId=dataset_id, body=table_body)\n return self._execute_insert_request(table_insert)\n\n def _load_data(self, bigquery, fuzzer):\n \"\"\"Load yesterday's stats into BigQuery.\"\"\"\n project_id = utils.get_application_id()\n\n yesterday = (self._utc_now().date() - datetime.timedelta(days=1))\n date_string = yesterday.strftime('%Y%m%d')\n timestamp 
= utils.utc_date_to_timestamp(yesterday)\n\n dataset_id = fuzzer_stats.dataset_name(fuzzer)\n if not self._create_dataset_if_needed(bigquery, dataset_id):\n return\n\n for kind in STATS_KINDS:\n kind_name = kind.__name__\n table_id = kind_name\n if not self._create_table_if_needed(bigquery, dataset_id, table_id):\n continue\n\n if kind == fuzzer_stats.TestcaseRun:\n schema = fuzzer_stats_schema.get(fuzzer)\n else:\n schema = kind.SCHEMA\n\n gcs_path = fuzzer_stats.get_gcs_stats_path(kind_name, fuzzer, timestamp)\n load = {\n 'destinationTable': {\n 'projectId': project_id,\n 'tableId': table_id + '$' + date_string,\n 'datasetId': dataset_id,\n },\n 'schemaUpdateOptions': ['ALLOW_FIELD_ADDITION',],\n 'sourceFormat': 'NEWLINE_DELIMITED_JSON',\n 'sourceUris': ['gs:/' + gcs_path + '*.json'],\n 'writeDisposition': 'WRITE_TRUNCATE',\n }\n if schema is not None:\n load['schema'] = schema\n\n job_body = {\n 'configuration': {\n 'load': load,\n },\n }\n\n logs.log(\"Uploading job to BigQuery.\", job_body=job_body)\n request = bigquery.jobs().insert(projectId=project_id, body=job_body)\n response = request.execute()\n\n # We cannot really check the response here, as the query might be still\n # running, but having a BigQuery jobId in the log would make our life\n # simpler if we ever have to manually check the status of the query.\n # See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query.\n logs.log('Response from BigQuery: %s' % response)\n\n @handler.cron()\n def get(self):\n \"\"\"Load bigquery stats from GCS.\"\"\"\n if not big_query.get_bucket():\n logs.log_error('Loading stats to BigQuery failed: missing bucket name.')\n return\n\n # Retrieve list of fuzzers before iterating them, since the query can expire\n # as we create the load jobs.\n bigquery_client = big_query.get_api_client()\n for fuzzer in list(data_types.Fuzzer.query()):\n logs.log('Loading stats to BigQuery for %s.' % fuzzer.name)\n self._load_data(bigquery_client, fuzzer.name)\n", "path": "src/appengine/handlers/cron/load_bigquery_stats.py"}]} | 2,944 | 377 |
gh_patches_debug_22057 | rasdani/github-patches | git_diff | facebookresearch__hydra-2713 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Hydra ray launcher requirements update
# π Feature Request
The requirements of `hydra-ray-launcher` are too restrictive. Please provide wider bounds. In particular, so that is compatible with the latest version of ray and with python 3.10
```
install_requires=[
"boto3==1.22.6",
"hydra-core>=1.1.2",
"ray[default]==1.12.0",
"aiohttp==3.8.1",
"cloudpickle==2.0.0",
"pickle5==0.0.11; python_version < '3.8'",
],
```
## Motivation
**Is your feature request related to a problem? Please describe.**
There are multiple issues pointing to this problem #2407 , #2336 and many others
## Pitch
**Describe the solution you'd like**
<!-- A clear and concise description of what you want to happen. -->
**Describe alternatives you've considered**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
**Are you willing to open a pull request?** (See [CONTRIBUTING](../../CONTRIBUTING.md))
## Additional context
Add any other context or screenshots about the feature request here.
</issue>
<code>
[start of plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2
3 __version__ = "1.2.0"
4
[end of plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py]
[start of plugins/hydra_ray_launcher/setup.py]
1 # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
2 # type: ignore
3 from pathlib import Path
4
5 from read_version import read_version
6 from setuptools import find_namespace_packages, setup
7
8 setup(
9 name="hydra-ray-launcher",
10 version=read_version("hydra_plugins/hydra_ray_launcher", "__init__.py"),
11 author="Jieru Hu",
12 author_email="[email protected]",
13 description="Hydra Ray launcher plugin",
14 long_description=(Path(__file__).parent / "README.md").read_text(),
15 long_description_content_type="text/markdown",
16 url="https://github.com/facebookresearch/hydra/",
17 packages=find_namespace_packages(include=["hydra_plugins.*"]),
18 classifiers=[
19 "License :: OSI Approved :: MIT License",
20 "Programming Language :: Python :: 3.7",
21 "Programming Language :: Python :: 3.8",
22 "Programming Language :: Python :: 3.9",
23 "Operating System :: MacOS",
24 "Operating System :: POSIX :: Linux",
25 ],
26 install_requires=[
27 "boto3==1.22.6",
28 "hydra-core>=1.1.2",
29 "ray[default]~=1.12",
30 "aiohttp==3.8.1",
31 "cloudpickle==2.0.0",
32 "pickle5==0.0.11; python_version < '3.8'",
33 ],
34 include_package_data=True,
35 )
36
[end of plugins/hydra_ray_launcher/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py b/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py
--- a/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py
+++ b/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py
@@ -1,3 +1,3 @@
# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-__version__ = "1.2.0"
+__version__ = "1.2.1"
diff --git a/plugins/hydra_ray_launcher/setup.py b/plugins/hydra_ray_launcher/setup.py
--- a/plugins/hydra_ray_launcher/setup.py
+++ b/plugins/hydra_ray_launcher/setup.py
@@ -20,16 +20,18 @@
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
+ "Programming Language :: Python :: 3.10",
+ "Programming Language :: Python :: 3.11",
"Operating System :: MacOS",
"Operating System :: POSIX :: Linux",
],
install_requires=[
- "boto3==1.22.6",
+ "boto3",
"hydra-core>=1.1.2",
- "ray[default]~=1.12",
- "aiohttp==3.8.1",
- "cloudpickle==2.0.0",
- "pickle5==0.0.11; python_version < '3.8'",
+ "ray[default]<3",
+ "aiohttp<4",
+ "cloudpickle<3",
+ "pickle5==0.0.12; python_version < '3.8'",
],
include_package_data=True,
)
| {"golden_diff": "diff --git a/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py b/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py\n--- a/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py\n+++ b/plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py\n@@ -1,3 +1,3 @@\n # Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n \n-__version__ = \"1.2.0\"\n+__version__ = \"1.2.1\"\ndiff --git a/plugins/hydra_ray_launcher/setup.py b/plugins/hydra_ray_launcher/setup.py\n--- a/plugins/hydra_ray_launcher/setup.py\n+++ b/plugins/hydra_ray_launcher/setup.py\n@@ -20,16 +20,18 @@\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n+ \"Programming Language :: Python :: 3.10\",\n+ \"Programming Language :: Python :: 3.11\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX :: Linux\",\n ],\n install_requires=[\n- \"boto3==1.22.6\",\n+ \"boto3\",\n \"hydra-core>=1.1.2\",\n- \"ray[default]~=1.12\",\n- \"aiohttp==3.8.1\",\n- \"cloudpickle==2.0.0\",\n- \"pickle5==0.0.11; python_version < '3.8'\",\n+ \"ray[default]<3\",\n+ \"aiohttp<4\",\n+ \"cloudpickle<3\",\n+ \"pickle5==0.0.12; python_version < '3.8'\",\n ],\n include_package_data=True,\n )\n", "issue": "Hydra ray launcher requirements update\n# \ud83d\ude80 Feature Request\r\n\r\nThe requirements of `hydra-ray-launcher` are too restrictive. Please provide wider bounds. In particular, so that is compatible with the latest version of ray and with python 3.10\r\n\r\n```\r\ninstall_requires=[\r\n \"boto3==1.22.6\",\r\n \"hydra-core>=1.1.2\",\r\n \"ray[default]==1.12.0\",\r\n \"aiohttp==3.8.1\",\r\n \"cloudpickle==2.0.0\",\r\n \"pickle5==0.0.11; python_version < '3.8'\",\r\n ],\r\n```\r\n\r\n## Motivation\r\n\r\n**Is your feature request related to a problem? Please describe.**\r\n\r\nThere are multiple issues pointing to this problem #2407 , #2336 and many others\r\n\r\n## Pitch\r\n\r\n**Describe the solution you'd like**\r\n<!-- A clear and concise description of what you want to happen. -->\r\n\r\n**Describe alternatives you've considered**\r\n<!-- A clear and concise description of any alternative solutions or features you've considered. -->\r\n\r\n**Are you willing to open a pull request?** (See [CONTRIBUTING](../../CONTRIBUTING.md))\r\n\r\n## Additional context\r\nAdd any other context or screenshots about the feature request here.\r\n\n", "before_files": [{"content": "# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved\n\n__version__ = \"1.2.0\"\n", "path": "plugins/hydra_ray_launcher/hydra_plugins/hydra_ray_launcher/__init__.py"}, {"content": "# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved\n# type: ignore\nfrom pathlib import Path\n\nfrom read_version import read_version\nfrom setuptools import find_namespace_packages, setup\n\nsetup(\n name=\"hydra-ray-launcher\",\n version=read_version(\"hydra_plugins/hydra_ray_launcher\", \"__init__.py\"),\n author=\"Jieru Hu\",\n author_email=\"[email protected]\",\n description=\"Hydra Ray launcher plugin\",\n long_description=(Path(__file__).parent / \"README.md\").read_text(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/facebookresearch/hydra/\",\n packages=find_namespace_packages(include=[\"hydra_plugins.*\"]),\n classifiers=[\n \"License :: OSI Approved :: MIT License\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX :: Linux\",\n ],\n install_requires=[\n \"boto3==1.22.6\",\n \"hydra-core>=1.1.2\",\n \"ray[default]~=1.12\",\n \"aiohttp==3.8.1\",\n \"cloudpickle==2.0.0\",\n \"pickle5==0.0.11; python_version < '3.8'\",\n ],\n include_package_data=True,\n)\n", "path": "plugins/hydra_ray_launcher/setup.py"}]} | 1,280 | 431 |
gh_patches_debug_12563 | rasdani/github-patches | git_diff | pypi__warehouse-669 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Figure out uWSGI Configuration Options
We're currently serving Warehouse on Heroku using gunicorn with the eventlet workers. I started using gunicorn because it's simple, and eventually switched from sync to eventlet workers so that we weren't trivially DoSable by having someone hold open connections (since Heroku's routers don't buffer the request body, only the request headers). In looking at the options, it appears that the following options are sane-ish:
- Continue using gunicorn + eventlet.
- Switch to Waitress which is another Python WSGI server that has an async master process to buffer requests.
- Use nginx or something in front of gunicorn to buffer requests.
- Use uWSGI instead of gunicorn.
Looking at the options, I think uWSGI might be our best option, however we need to figure out a good set of configuration knobs to tune (since uWSGI supports so many of them). We currently have one situation:
- There is no buffering proxy to prevent slowloris or similar attacks.
- We want uploads to go back to the app unbuffered so the app can dynamically choose when to kill an upload (e.g. UserA gets 10MB and UserB gets 50MB).
However, in the future we might be splitting out the upload service to its own app, in which case it'd be useful to also figure out the configuration options for the additional situation:
- There is no buffering proxy to prevent slowloris or similar attacks.
- We don't care if the app gets requests unbuffered because requests are small.
/cc @mattrobenolt
Add basic uwsgi config
Fixes GH-659
</issue>
<code>
[start of setup.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import os
14
15 import setuptools
16
17
18 base_dir = os.path.dirname(__file__)
19
20 about = {}
21 with open(os.path.join(base_dir, "warehouse", "__about__.py")) as f:
22 exec(f.read(), about)
23
24 with open(os.path.join(base_dir, "README.rst")) as f:
25 long_description = f.read()
26
27
28 setuptools.setup(
29 name=about["__title__"],
30 version=about["__version__"],
31
32 description=about["__summary__"],
33 long_description=long_description,
34 license=about["__license__"],
35 url=about["__uri__"],
36
37 author=about["__author__"],
38 author_email=about["__email__"],
39
40 classifiers=[
41 "Intended Audience :: Developers",
42
43 "License :: OSI Approved :: Apache Software License",
44
45 "Programming Language :: Python",
46 "Programming Language :: Python :: 3",
47 "Programming Language :: Python :: 3.4",
48 ],
49
50 packages=[
51 "warehouse",
52 "warehouse.accounts",
53 "warehouse.cache",
54 "warehouse.cache.origin",
55 "warehouse.classifiers",
56 "warehouse.cli",
57 "warehouse.cli.db",
58 "warehouse.i18n",
59 "warehouse.legacy",
60 "warehouse.legacy.api",
61 "warehouse.migrations",
62 "warehouse.packaging",
63 "warehouse.sitemap",
64 "warehouse.utils",
65 ],
66
67 include_package_data=True,
68
69 install_requires=[
70 "alembic>=0.7.0",
71 "Babel",
72 "bcrypt",
73 "boto3",
74 "celery>=3.1",
75 "click",
76 "eventlet",
77 "gunicorn",
78 "hiredis",
79 "html5lib",
80 "itsdangerous",
81 "Jinja2>=2.8",
82 "msgpack-python",
83 "packaging>=15.2",
84 "passlib>=1.6.4",
85 "psycopg2",
86 "pyramid>=1.6a1",
87 "pyramid_jinja2>=2.5",
88 "pyramid_multiauth",
89 "pyramid_rpc>=0.7",
90 "pyramid_services",
91 "pyramid_tm>=0.12",
92 "raven",
93 "readme>=0.6.0",
94 "requests",
95 "redis>=2.8.0",
96 "rfc3986",
97 "setproctitle",
98 "setuptools",
99 "sqlalchemy>=0.9",
100 "sqlalchemy-citext",
101 "structlog",
102 "transaction",
103 "WebOb>=1.5.0a0",
104 "WTForms>=2.0.0",
105 "zope.sqlalchemy",
106 ],
107
108 entry_points={
109 "console_scripts": [
110 "warehouse = warehouse.cli:warehouse",
111 ],
112 },
113 )
114
[end of setup.py]
[start of tasks/pip.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 import os
14 import re
15
16 import invoke
17
18
19 REQUIREMENTS_IN = """
20 -e .
21
22 newrelic
23 """
24
25
26 REQUIREMENTS_HEADER = """
27 #
28 # This file is autogenerated by pip-compile
29 # Make changes in setup.py, then run this to update:
30 #
31 # $ invoke pip.compile
32 #
33
34 """.lstrip()
35
36
37 @invoke.task
38 def compile():
39 with open("requirements.in", "w") as fp:
40 fp.write(REQUIREMENTS_IN)
41
42 try:
43 invoke.run("pip-compile --no-header requirements.in", hide="out")
44 finally:
45 os.remove("requirements.in")
46
47 lines = [REQUIREMENTS_HEADER]
48 with open("requirements.txt", "r") as fp:
49 for line in fp:
50 # The boto3 wheel includes a futures==2.2.0 even though that is a
51 # Python 2 only dependency. This dependency comes by default on
52 # Python 3, so the backport is never needed. See boto/boto3#163.
53 if re.search(r"^futures==2\.2\.0", line.strip()) is not None:
54 continue
55
56 if re.search(r"^-e file:///.+/warehouse$", line.strip()) is None:
57 lines.append(line)
58
59 with open("requirements.txt", "w") as fp:
60 for line in lines:
61 fp.write(line)
62
[end of tasks/pip.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -73,8 +73,6 @@
"boto3",
"celery>=3.1",
"click",
- "eventlet",
- "gunicorn",
"hiredis",
"html5lib",
"itsdangerous",
@@ -94,7 +92,6 @@
"requests",
"redis>=2.8.0",
"rfc3986",
- "setproctitle",
"setuptools",
"sqlalchemy>=0.9",
"sqlalchemy-citext",
diff --git a/tasks/pip.py b/tasks/pip.py
--- a/tasks/pip.py
+++ b/tasks/pip.py
@@ -20,6 +20,7 @@
-e .
newrelic
+uwsgi
"""
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -73,8 +73,6 @@\n \"boto3\",\n \"celery>=3.1\",\n \"click\",\n- \"eventlet\",\n- \"gunicorn\",\n \"hiredis\",\n \"html5lib\",\n \"itsdangerous\",\n@@ -94,7 +92,6 @@\n \"requests\",\n \"redis>=2.8.0\",\n \"rfc3986\",\n- \"setproctitle\",\n \"setuptools\",\n \"sqlalchemy>=0.9\",\n \"sqlalchemy-citext\",\ndiff --git a/tasks/pip.py b/tasks/pip.py\n--- a/tasks/pip.py\n+++ b/tasks/pip.py\n@@ -20,6 +20,7 @@\n -e .\n \n newrelic\n+uwsgi\n \"\"\"\n", "issue": "Figure out uWSGI Configuration Options\nWe're currently serving Warehouse on Heroku using gunicorn with the eventlet workers. I started using gunicorn because it's simple, and eventually switched from sync to eventlet workers so that we weren't trivially DoSable by having someone hold open connections (since Heroku's routers don't buffer the request body, only the request headers). In looking at the options, it appears that the following options are sane-ish:\n- Continue using gunicorn + eventlet.\n- Switch to Waitress which is another Python WSGI server that has an async master process to buffer requests.\n- Use nginx or something in front of gunicorn to buffer requests.\n- Use uWSGI instead of gunicorn.\n\nLooking at the options, I think uWSGI might be our best option, however we need to figure out a good set of configuration knobs to tune (since uWSGI supports so many of them). We currently have one situation:\n- There is no buffering proxy to prevent slowloris or similar attacks.\n- We want uploads to go back to the app unbuffered so the app can dynamically choose when to kill an upload (e.g. UserA gets 10MB and UserB gets 50MB).\n\nHowever, in the future we might be splitting out the upload service to it's own app, in which case it'd be useful to also figure out the configuration options for the additional situation:\n- There is no buffering proxy to prevent slowloris or similar attacks.\n- We don't care if the app gets requests unbuffered because requests are small.\n\n/cc @mattrobenolt\n\nAdd basic uwsgi config\nFixes GH-659\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport setuptools\n\n\nbase_dir = os.path.dirname(__file__)\n\nabout = {}\nwith open(os.path.join(base_dir, \"warehouse\", \"__about__.py\")) as f:\n exec(f.read(), about)\n\nwith open(os.path.join(base_dir, \"README.rst\")) as f:\n long_description = f.read()\n\n\nsetuptools.setup(\n name=about[\"__title__\"],\n version=about[\"__version__\"],\n\n description=about[\"__summary__\"],\n long_description=long_description,\n license=about[\"__license__\"],\n url=about[\"__uri__\"],\n\n author=about[\"__author__\"],\n author_email=about[\"__email__\"],\n\n classifiers=[\n \"Intended Audience :: Developers\",\n\n \"License :: OSI Approved :: Apache Software License\",\n\n \"Programming Language :: Python\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n ],\n\n packages=[\n \"warehouse\",\n 
\"warehouse.accounts\",\n \"warehouse.cache\",\n \"warehouse.cache.origin\",\n \"warehouse.classifiers\",\n \"warehouse.cli\",\n \"warehouse.cli.db\",\n \"warehouse.i18n\",\n \"warehouse.legacy\",\n \"warehouse.legacy.api\",\n \"warehouse.migrations\",\n \"warehouse.packaging\",\n \"warehouse.sitemap\",\n \"warehouse.utils\",\n ],\n\n include_package_data=True,\n\n install_requires=[\n \"alembic>=0.7.0\",\n \"Babel\",\n \"bcrypt\",\n \"boto3\",\n \"celery>=3.1\",\n \"click\",\n \"eventlet\",\n \"gunicorn\",\n \"hiredis\",\n \"html5lib\",\n \"itsdangerous\",\n \"Jinja2>=2.8\",\n \"msgpack-python\",\n \"packaging>=15.2\",\n \"passlib>=1.6.4\",\n \"psycopg2\",\n \"pyramid>=1.6a1\",\n \"pyramid_jinja2>=2.5\",\n \"pyramid_multiauth\",\n \"pyramid_rpc>=0.7\",\n \"pyramid_services\",\n \"pyramid_tm>=0.12\",\n \"raven\",\n \"readme>=0.6.0\",\n \"requests\",\n \"redis>=2.8.0\",\n \"rfc3986\",\n \"setproctitle\",\n \"setuptools\",\n \"sqlalchemy>=0.9\",\n \"sqlalchemy-citext\",\n \"structlog\",\n \"transaction\",\n \"WebOb>=1.5.0a0\",\n \"WTForms>=2.0.0\",\n \"zope.sqlalchemy\",\n ],\n\n entry_points={\n \"console_scripts\": [\n \"warehouse = warehouse.cli:warehouse\",\n ],\n },\n)\n", "path": "setup.py"}, {"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nimport re\n\nimport invoke\n\n\nREQUIREMENTS_IN = \"\"\"\n-e .\n\nnewrelic\n\"\"\"\n\n\nREQUIREMENTS_HEADER = \"\"\"\n#\n# This file is autogenerated by pip-compile\n# Make changes in setup.py, then run this to update:\n#\n# $ invoke pip.compile\n#\n\n\"\"\".lstrip()\n\n\[email protected]\ndef compile():\n with open(\"requirements.in\", \"w\") as fp:\n fp.write(REQUIREMENTS_IN)\n\n try:\n invoke.run(\"pip-compile --no-header requirements.in\", hide=\"out\")\n finally:\n os.remove(\"requirements.in\")\n\n lines = [REQUIREMENTS_HEADER]\n with open(\"requirements.txt\", \"r\") as fp:\n for line in fp:\n # The boto3 wheel includes a futures==2.2.0 even though that is a\n # Python 2 only dependency. This dependency comes by default on\n # Python 3, so the backport is never needed. See boto/boto3#163.\n if re.search(r\"^futures==2\\.2\\.0\", line.strip()) is not None:\n continue\n\n if re.search(r\"^-e file:///.+/warehouse$\", line.strip()) is None:\n lines.append(line)\n\n with open(\"requirements.txt\", \"w\") as fp:\n for line in lines:\n fp.write(line)\n", "path": "tasks/pip.py"}]} | 2,385 | 199 |
gh_patches_debug_1855 | rasdani/github-patches | git_diff | flairNLP__flair-300 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update torch version to 1.0.0
torch version 1.0.0 is available. Use torch version 1.0.0 in Flair.
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2
3 setup(
4 name='flair',
5 version='0.3.2',
6 description='A very simple framework for state-of-the-art NLP',
7 long_description=open("README.md", encoding='utf-8').read(),
8 long_description_content_type="text/markdown",
9 author='Alan Akbik',
10 author_email='[email protected]',
11 url='https://github.com/zalandoresearch/flair',
12 packages=find_packages(exclude='test'), # same as name
13 license='MIT',
14 install_requires=[
15 'torch==0.4.1',
16 'gensim==3.4.0',
17 'typing==3.6.4',
18 'tqdm==4.26.0',
19 'segtok==1.5.7',
20 'matplotlib==3.0.0',
21 'mpld3==0.3',
22 'sklearn',
23 'sqlitedict==1.6.0',
24 'deprecated==1.2.4',
25 'hyperopt==0.1.1',
26 'pytorch-pretrained-bert==0.3.0'
27 ],
28 include_package_data=True,
29 python_requires='>=3.6',
30 )
31
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -12,7 +12,7 @@
packages=find_packages(exclude='test'), # same as name
license='MIT',
install_requires=[
- 'torch==0.4.1',
+ 'torch==1.0.0',
'gensim==3.4.0',
'typing==3.6.4',
'tqdm==4.26.0',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -12,7 +12,7 @@\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n- 'torch==0.4.1',\n+ 'torch==1.0.0',\n 'gensim==3.4.0',\n 'typing==3.6.4',\n 'tqdm==4.26.0',\n", "issue": "Update torch version to 1.0.0\ntorch version 1.0.0 is available. Use torch version 1.0.0 in Flair.\n", "before_files": [{"content": "from setuptools import setup, find_packages\n\nsetup(\n name='flair',\n version='0.3.2',\n description='A very simple framework for state-of-the-art NLP',\n long_description=open(\"README.md\", encoding='utf-8').read(),\n long_description_content_type=\"text/markdown\",\n author='Alan Akbik',\n author_email='[email protected]',\n url='https://github.com/zalandoresearch/flair',\n packages=find_packages(exclude='test'), # same as name\n license='MIT',\n install_requires=[\n 'torch==0.4.1',\n 'gensim==3.4.0',\n 'typing==3.6.4',\n 'tqdm==4.26.0',\n 'segtok==1.5.7',\n 'matplotlib==3.0.0',\n 'mpld3==0.3',\n 'sklearn',\n 'sqlitedict==1.6.0',\n 'deprecated==1.2.4',\n 'hyperopt==0.1.1',\n 'pytorch-pretrained-bert==0.3.0'\n ],\n include_package_data=True,\n python_requires='>=3.6',\n)\n", "path": "setup.py"}]} | 891 | 112 |
gh_patches_debug_18221 | rasdani/github-patches | git_diff | PrefectHQ__prefect-2686 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Google Imports are Tied Together
## Description
*A clear description of the bug*
I'm using the new `GCSResult` and I'm getting an import error when I don't also specify `google-cloud-bigquery` as a dependency since their imports occur in the same file, I think?
```
Unexpected error: ImportError("cannot import name 'bigquery' from 'google.cloud' (unknown location)")
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/prefect/engine/runner.py", line 48, in inner
new_state = method(self, state, *args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/prefect/engine/task_runner.py", line 986, in get_task_run_state
result = self.result.write(value, filename="output", **prefect.context)
File "/usr/local/lib/python3.7/site-packages/prefect/engine/results/gcs_result.py", line 73, in write
self.gcs_bucket.blob(new.location).upload_from_string(binary_data)
File "/usr/local/lib/python3.7/site-packages/prefect/engine/results/gcs_result.py", line 35, in gcs_bucket
from prefect.utilities.gcp import get_storage_client
File "/usr/local/lib/python3.7/site-packages/prefect/utilities/gcp.py", line 6, in <module>
from google.cloud import bigquery, storage
ImportError: cannot import name 'bigquery' from 'google.cloud' (unknown location)
```
https://cloud.prefect.io/prefect/flow-run/6704aa4e-ba9b-40ed-a4f8-386920839a8e?logId=75b1fc01-0ee8-4061-ab8b-5481e6123a79
On a cool note, changing to `python_dependencies=["prefect[google]"]` did work 🎉
## Expected Behavior
*What did you expect to happen instead?*
I'd like to be able to specify one import in isolation (in this case `google-cloud-storage`)
## Reproduction
*A minimal example that exhibits the behavior.*
```
from prefect import task, Flow
from prefect.tasks.notifications.slack_task import SlackTask
from prefect.schedules import CronSchedule
from prefect.environments.storage import Docker
from prefect.engine.results import GCSResult
import pendulum
import datetime
@task(name="Get Week Message", max_retries=5, retry_delay=datetime.timedelta(seconds=5))
def get_week_message():
prefects_birthday = pendulum.date(2018, 1, 17)
current_week = prefects_birthday.diff(pendulum.now()).in_weeks()
return f"Hello, Jeremiah! It is week {current_week}."
send_message = SlackTask(
name="Slack Jeremiah",
max_retries=5,
retry_delay=datetime.timedelta(seconds=5),
webhook_secret="SLACK_WEBHOOK",
)
schedule = CronSchedule(cron="50 11 * * MON", start_date=pendulum.now(tz="US/Eastern"))
storage = Docker(
base_image="prefecthq/prefect:latest-python3.7",
registry_url=URL,
python_dependencies=["google-cloud-storage"],
files={
FILE_LOCATION: FILENAME
},
env_vars={"GOOGLE_APPLICATION_CREDENTIALS": FILENAME},
)
gcs_result = GCSResult(bucket="what_week_is_it_results")
with Flow(
name="What Week is It?", schedule=schedule, storage=storage, result=gcs_result
) as flow:
week_message = get_week_message()
result = send_message(message=week_message)
```
## Environment
*Any additional information about your environment*
*Optionally run `prefect diagnostics` from the command line and paste the information here*
```
{
"config_overrides": {
"cloud": {
"auth_token": true,
"use_local_secrets": true
},
"context": {
"secrets": false
},
"home_dir": true
},
"env_vars": [],
"system_information": {
"platform": "Darwin-19.4.0-x86_64-i386-64bit",
"prefect_version": "0.11.2",
"python_version": "3.7.7"
}
}
```
</issue>
<code>
[start of src/prefect/utilities/gcp.py]
1 """
2 Utility functions for interacting with Google Cloud.
3 """
4 import prefect
5
6 from google.cloud import bigquery, storage
7 from google.oauth2.service_account import Credentials
8
9
10 def get_google_client(submodule, credentials: dict = None, project: str = None):
11 """
12 Utility function for loading Google Client objects from a given set of credentials.
13
14 Args:
15 - submodule: a Python submodule with a Client attribute
16 - credentials (dict, optional): a dictionary of Google credentials used to initialize the Client; if
17 not provided, will attempt to load the Client using ambient environment settings
18 - project (str, optional): the Google project to point the Client to; if not provided, Client defaults
19 will be used
20
21 Returns:
22 - Client: an initialized and authenticated Google Client
23 """
24 Client = getattr(submodule, "Client")
25 credentials = credentials or prefect.context.get("secrets", {}).get(
26 "GCP_CREDENTIALS"
27 )
28 if credentials is not None:
29 credentials = Credentials.from_service_account_info(credentials)
30 project = project or credentials.project_id
31 client = Client(project=project, credentials=credentials)
32 else:
33 client = Client(project=project)
34 return client
35
36
37 def get_storage_client(credentials: dict = None, project: str = None):
38 """
39 Utility function for instantiating a Google Storage Client from a given set of credentials.
40
41 Args:
42 - credentials (dict, optional): a dictionary of Google credentials used to initialize the Client; if
43 not provided, will attempt to load the Client using ambient environment settings
44 - project (str, optional): the Google project to point the Client to; if not provided, Client defaults
45 will be used
46
47 Returns:
48 - Client: an initialized and authenticated Google Client
49 """
50 return get_google_client(storage, credentials=credentials, project=project)
51
52
53 def get_bigquery_client(credentials: dict = None, project: str = None):
54 """
55 Utility function for instantiating a Google BigQuery Client from a given set of credentials.
56
57 Args:
58 - credentials (dict, optional): a dictionary of Google credentials used to initialize the Client; if
59 not provided, will attempt to load the Client using ambient environment settings
60 - project (str, optional): the Google project to point the Client to; if not provided, Client defaults
61 will be used
62
63 Returns:
64 - Client: an initialized and authenticated Google Client
65 """
66 return get_google_client(bigquery, credentials=credentials, project=project)
67
[end of src/prefect/utilities/gcp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/prefect/utilities/gcp.py b/src/prefect/utilities/gcp.py
--- a/src/prefect/utilities/gcp.py
+++ b/src/prefect/utilities/gcp.py
@@ -3,7 +3,6 @@
"""
import prefect
-from google.cloud import bigquery, storage
from google.oauth2.service_account import Credentials
@@ -47,6 +46,8 @@
Returns:
- Client: an initialized and authenticated Google Client
"""
+ from google.cloud import storage
+
return get_google_client(storage, credentials=credentials, project=project)
@@ -63,4 +64,6 @@
Returns:
- Client: an initialized and authenticated Google Client
"""
+ from google.cloud import bigquery
+
return get_google_client(bigquery, credentials=credentials, project=project)
| {"golden_diff": "diff --git a/src/prefect/utilities/gcp.py b/src/prefect/utilities/gcp.py\n--- a/src/prefect/utilities/gcp.py\n+++ b/src/prefect/utilities/gcp.py\n@@ -3,7 +3,6 @@\n \"\"\"\n import prefect\n \n-from google.cloud import bigquery, storage\n from google.oauth2.service_account import Credentials\n \n \n@@ -47,6 +46,8 @@\n Returns:\n - Client: an initialized and authenticated Google Client\n \"\"\"\n+ from google.cloud import storage\n+\n return get_google_client(storage, credentials=credentials, project=project)\n \n \n@@ -63,4 +64,6 @@\n Returns:\n - Client: an initialized and authenticated Google Client\n \"\"\"\n+ from google.cloud import bigquery\n+\n return get_google_client(bigquery, credentials=credentials, project=project)\n", "issue": "Google Imports are Tied Together\n## Description\r\n*A clear description of the bug*\r\nI\u2019m using the new `GCSResult` and I\u2019m getting an import error when I don\u2019t also specify `google-cloud-bigquery` as a dependency since they\u2019re imports occur in the same file, I think?\r\n\r\n```\r\nUnexpected error: ImportError(\"cannot import name 'bigquery' from 'google.cloud' (unknown location)\")\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.7/site-packages/prefect/engine/runner.py\", line 48, in inner\r\n new_state = method(self, state, *args, **kwargs)\r\n File \"/usr/local/lib/python3.7/site-packages/prefect/engine/task_runner.py\", line 986, in get_task_run_state\r\n result = self.result.write(value, filename=\"output\", **prefect.context)\r\n File \"/usr/local/lib/python3.7/site-packages/prefect/engine/results/gcs_result.py\", line 73, in write\r\n self.gcs_bucket.blob(new.location).upload_from_string(binary_data)\r\n File \"/usr/local/lib/python3.7/site-packages/prefect/engine/results/gcs_result.py\", line 35, in gcs_bucket\r\n from prefect.utilities.gcp import get_storage_client\r\n File \"/usr/local/lib/python3.7/site-packages/prefect/utilities/gcp.py\", line 6, in <module>\r\n from google.cloud import bigquery, storage\r\nImportError: cannot import name 'bigquery' from 'google.cloud' (unknown location)\r\n```\r\nhttps://cloud.prefect.io/prefect/flow-run/6704aa4e-ba9b-40ed-a4f8-386920839a8e?logId=75b1fc01-0ee8-4061-ab8b-5481e6123a79\r\n\r\nOn a cool note, changing to `python_dependencies=[\"prefect[google]\"]` did work \ud83c\udf89 \r\n\r\n## Expected Behavior\r\n*What did you expect to happen instead?*\r\nI'd like to be able to specify one import in insolation (in this case `google-cloud-storage`) \r\n\r\n\r\n## Reproduction\r\n*A minimal example that exhibits the behavior.*\r\n```\r\nfrom prefect import task, Flow\r\nfrom prefect.tasks.notifications.slack_task import SlackTask\r\nfrom prefect.schedules import CronSchedule\r\nfrom prefect.environments.storage import Docker\r\nfrom prefect.engine.results import GCSResult\r\n\r\nimport pendulum\r\nimport datetime\r\n\r\n\r\n@task(name=\"Get Week Message\", max_retries=5, retry_delay=datetime.timedelta(seconds=5))\r\ndef get_week_message():\r\n prefects_birthday = pendulum.date(2018, 1, 17)\r\n current_week = prefects_birthday.diff(pendulum.now()).in_weeks()\r\n return f\"Hello, Jeremiah! 
It is week {current_week}.\"\r\n\r\n\r\nsend_message = SlackTask(\r\n name=\"Slack Jeremiah\",\r\n max_retries=5,\r\n retry_delay=datetime.timedelta(seconds=5),\r\n webhook_secret=\"SLACK_WEBHOOK\",\r\n)\r\n\r\nschedule = CronSchedule(cron=\"50 11 * * MON\", start_date=pendulum.now(tz=\"US/Eastern\"))\r\n\r\nstorage = Docker(\r\n base_image=\"prefecthq/prefect:latest-python3.7\",\r\n registry_url=URL,\r\n python_dependencies=[\"google-cloud-storage\"],\r\n files={\r\n FILE_LOCATION: FILENAME\r\n },\r\n env_vars={\"GOOGLE_APPLICATION_CREDENTIALS\": FILENAME},\r\n)\r\n\r\ngcs_result = GCSResult(bucket=\"what_week_is_it_results\")\r\n\r\nwith Flow(\r\n name=\"What Week is It?\", schedule=schedule, storage=storage, result=gcs_result\r\n) as flow:\r\n week_message = get_week_message()\r\n result = send_message(message=week_message)\r\n```\r\n\r\n\r\n\r\n## Environment\r\n*Any additional information about your environment*\r\n\r\n\r\n*Optionally run `prefect diagnostics` from the command line and paste the information here*\r\n```\r\n{\r\n \"config_overrides\": {\r\n \"cloud\": {\r\n \"auth_token\": true,\r\n \"use_local_secrets\": true\r\n },\r\n \"context\": {\r\n \"secrets\": false\r\n },\r\n \"home_dir\": true\r\n },\r\n \"env_vars\": [],\r\n \"system_information\": {\r\n \"platform\": \"Darwin-19.4.0-x86_64-i386-64bit\",\r\n \"prefect_version\": \"0.11.2\",\r\n \"python_version\": \"3.7.7\"\r\n }\r\n}\r\n```\n", "before_files": [{"content": "\"\"\"\nUtility functions for interacting with Google Cloud.\n\"\"\"\nimport prefect\n\nfrom google.cloud import bigquery, storage\nfrom google.oauth2.service_account import Credentials\n\n\ndef get_google_client(submodule, credentials: dict = None, project: str = None):\n \"\"\"\n Utility function for loading Google Client objects from a given set of credentials.\n\n Args:\n - submodule: a Python submodule with a Client attribute\n - credentials (dict, optional): a dictionary of Google credentials used to initialize the Client; if\n not provided, will attempt to load the Client using ambient environment settings\n - project (str, optional): the Google project to point the Client to; if not provided, Client defaults\n will be used\n\n Returns:\n - Client: an initialized and authenticated Google Client\n \"\"\"\n Client = getattr(submodule, \"Client\")\n credentials = credentials or prefect.context.get(\"secrets\", {}).get(\n \"GCP_CREDENTIALS\"\n )\n if credentials is not None:\n credentials = Credentials.from_service_account_info(credentials)\n project = project or credentials.project_id\n client = Client(project=project, credentials=credentials)\n else:\n client = Client(project=project)\n return client\n\n\ndef get_storage_client(credentials: dict = None, project: str = None):\n \"\"\"\n Utility function for instantiating a Google Storage Client from a given set of credentials.\n\n Args:\n - credentials (dict, optional): a dictionary of Google credentials used to initialize the Client; if\n not provided, will attempt to load the Client using ambient environment settings\n - project (str, optional): the Google project to point the Client to; if not provided, Client defaults\n will be used\n\n Returns:\n - Client: an initialized and authenticated Google Client\n \"\"\"\n return get_google_client(storage, credentials=credentials, project=project)\n\n\ndef get_bigquery_client(credentials: dict = None, project: str = None):\n \"\"\"\n Utility function for instantiating a Google BigQuery Client from a given set of credentials.\n\n Args:\n - credentials 
(dict, optional): a dictionary of Google credentials used to initialize the Client; if\n not provided, will attempt to load the Client using ambient environment settings\n - project (str, optional): the Google project to point the Client to; if not provided, Client defaults\n will be used\n\n Returns:\n - Client: an initialized and authenticated Google Client\n \"\"\"\n return get_google_client(bigquery, credentials=credentials, project=project)\n", "path": "src/prefect/utilities/gcp.py"}]} | 2,186 | 188 |
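
The golden diff for this record resolves the coupling by moving each `google.cloud` import from module scope into the helper that actually needs it. The snippet below is only a stand-in illustration of that deferred-import pattern; it uses stdlib modules rather than Prefect's real helpers, to show why one missing optional dependency then no longer breaks the other code path at import time.

```python
# Stand-in modules (json, sqlite3) and hypothetical helper names; not Prefect's API.
def get_json_client():
    import json  # resolved only when this helper is called
    return json


def get_sqlite_client():
    import sqlite3  # an environment missing this module still imports the file fine
    return sqlite3


if __name__ == "__main__":
    # Calling one helper never triggers the other helper's import.
    print(get_json_client().dumps({"deferred": True}))
```
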
gh_patches_debug_12930 | rasdani/github-patches | git_diff | spyder-ide__spyder-5128 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error when trying to load third-party plugins
## Description
**What steps will reproduce the problem?**
1. Start Spyder.
2.
3.
**What is the expected output? What do you see instead?**
It shows right away after starting Spyder.
**Please provide any additional information below**
A message box shows this:
module = _import_module_from_path(module_name, plugin_path)
module = spec.loader.load_module(module_name)
AttributeError: 'NoneType' object has no attribute 'load_module'

Spyder functions fine, no problem after closing the message box.
## Version and main components
* Spyder Version: 3.2.0
* Python Version: 3.5.3
* Qt Versions: 5.6.2, PyQt5 5.6 on Darwin
## Dependencies
```
pyflakes >=0.6.0 : 1.5.0 (OK)
pycodestyle >=2.3: 2.3.1 (OK)
pygments >=2.0 : 2.2.0 (OK)
pandas >=0.13.1 : 0.19.2 (OK)
numpy >=1.7 : 1.12.1 (OK)
sphinx >=0.6.6 : 1.5.4 (OK)
rope >=0.9.4 : 0.10.5 (OK)
jedi >=0.9.0 : 0.10.2 (OK)
psutil >=0.3 : 5.2.1 (OK)
nbconvert >=4.0 : 5.1.1 (OK)
sympy >=0.7.3 : None (NOK)
cython >=0.21 : None (NOK)
qtconsole >=4.2.0: 4.3.0 (OK)
IPython >=4.0 : 6.1.0 (OK)
pylint >=0.25 : 1.6.4 (OK)
```
</issue>
<code>
[start of spyder/otherplugins.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright © Spyder Project Contributors
4 # Licensed under the terms of the MIT License
5 # (see spyder/__init__.py for details)
6
7 """
8 Spyder third-party plugins configuration management
9 """
10
11 import os
12 import os.path as osp
13 import sys
14 import traceback
15
16 # Local imports
17 from spyder.config.base import get_conf_path
18 from spyder.py3compat import PY2
19
20 if PY2:
21 import imp
22 else:
23 import importlib
24
25
26 USER_PLUGIN_DIR = "plugins"
27 PLUGIN_PREFIX = "spyder_"
28 IO_PREFIX = PLUGIN_PREFIX + "io_"
29
30
31 def get_spyderplugins_mods(io=False):
32 """Import modules from plugins package and return the list"""
33 # Create user directory
34 user_plugin_path = osp.join(get_conf_path(), USER_PLUGIN_DIR)
35 if not osp.isdir(user_plugin_path):
36 os.makedirs(user_plugin_path)
37
38 modlist, modnames = [], []
39
40 # The user plugins directory is given the priority when looking for modules
41 for plugin_path in [user_plugin_path] + sys.path:
42 _get_spyderplugins(plugin_path, io, modnames, modlist)
43 return modlist
44
45
46 def _get_spyderplugins(plugin_path, is_io, modnames, modlist):
47 """Scan the directory `plugin_path` for plugin packages and loads them."""
48 if not osp.isdir(plugin_path):
49 return
50
51 for name in os.listdir(plugin_path):
52 # This is needed in order to register the spyder_io_hdf5 plugin.
53 # See issue 4487
54 # Is this a Spyder plugin?
55 if not name.startswith(PLUGIN_PREFIX):
56 continue
57 # Ensure right type of plugin
58 if is_io != name.startswith(IO_PREFIX):
59 continue
60
61 # Import the plugin
62 _import_plugin(name, plugin_path, modnames, modlist)
63
64
65 def _import_plugin(module_name, plugin_path, modnames, modlist):
66 """Import the plugin `module_name` from `plugin_path`, add it to `modlist`
67 and adds its name to `modnames`.
68 """
69 if module_name in modnames:
70 return
71 try:
72 # First add a mock module with the LOCALEPATH attribute so that the
73 # helper method can find the locale on import
74 mock = _ModuleMock()
75 mock.LOCALEPATH = osp.join(plugin_path, module_name, 'locale')
76 sys.modules[module_name] = mock
77
78 if osp.isdir(osp.join(plugin_path, module_name)):
79 module = _import_module_from_path(module_name, plugin_path)
80 else:
81 module = None
82
83 # Then restore the actual loaded module instead of the mock
84 if module:
85 sys.modules[module_name] = module
86 modlist.append(module)
87 modnames.append(module_name)
88 except Exception:
89 sys.stderr.write("ERROR: 3rd party plugin import failed for "
90 "`{0}`\n".format(module_name))
91 traceback.print_exc(file=sys.stderr)
92
93
94 def _import_module_from_path(module_name, plugin_path):
95 """Imports `module_name` from `plugin_path`.
96
97 Return None if no module is found.
98 """
99 module = None
100 if PY2:
101 info = imp.find_module(module_name, [plugin_path])
102 if info:
103 module = imp.load_module(module_name, *info)
104 elif sys.version_info[0:2] <= (3, 3):
105 loader = importlib.machinery.PathFinder.find_module(
106 module_name,
107 [plugin_path])
108 if loader:
109 module = loader.load_module(module_name)
110 else: # Python 3.4+
111 spec = importlib.machinery.PathFinder.find_spec(
112 module_name,
113 [plugin_path])
114 if spec:
115 module = spec.loader.load_module(module_name)
116 return module
117
118
119 class _ModuleMock():
120 """This mock module is added to sys.modules on plugin load to add the
121 location of the LOCALEDATA so that the module loads succesfully.
122 Once loaded the module is replaced by the actual loaded module object.
123 """
124 pass
125
[end of spyder/otherplugins.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/spyder/otherplugins.py b/spyder/otherplugins.py
--- a/spyder/otherplugins.py
+++ b/spyder/otherplugins.py
@@ -54,10 +54,16 @@
# Is this a Spyder plugin?
if not name.startswith(PLUGIN_PREFIX):
continue
+
# Ensure right type of plugin
if is_io != name.startswith(IO_PREFIX):
continue
+ # Skip names that end in certain suffixes
+ forbidden_suffixes = ['dist-info', 'egg.info', 'egg-link']
+ if any([name.endswith(s) for s in forbidden_suffixes]):
+ continue
+
# Import the plugin
_import_plugin(name, plugin_path, modnames, modlist)
| {"golden_diff": "diff --git a/spyder/otherplugins.py b/spyder/otherplugins.py\n--- a/spyder/otherplugins.py\n+++ b/spyder/otherplugins.py\n@@ -54,10 +54,16 @@\n # Is this a Spyder plugin?\r\n if not name.startswith(PLUGIN_PREFIX):\r\n continue\r\n+\r\n # Ensure right type of plugin\r\n if is_io != name.startswith(IO_PREFIX):\r\n continue\r\n \r\n+ # Skip names that end in certain suffixes\r\n+ forbidden_suffixes = ['dist-info', 'egg.info', 'egg-link']\r\n+ if any([name.endswith(s) for s in forbidden_suffixes]):\r\n+ continue\r\n+\r\n # Import the plugin\r\n _import_plugin(name, plugin_path, modnames, modlist)\n", "issue": "Error when trying to load third-party plugins\n## Description\r\n\r\n**What steps will reproduce the problem?**\r\n\r\n1. Start sypder.\r\n2. \r\n3. \r\n\r\n**What is the expected output? What do you see instead?**\r\nIt shows right away after starting Spyder.\r\n\r\n**Please provide any additional information below**\r\nA message box show this:\r\n module = _import_module_from_path(module_name, plugin_path)\r\n module = spec.loader.load_module(module_name)\r\nAttributeError: 'NoneType' object has no attribute 'load_module'\r\n\r\n\r\nSpyder functions fine, no problem after close the message box.\r\n## Version and main components\r\n\r\n* Spyder Version: 3.2.0 \r\n* Python Version: 3.5.3\r\n* Qt Versions: 5.6.2, PyQt5 5.6 on Darwin\r\n\r\n## Dependencies\r\n```\r\npyflakes >=0.6.0 : 1.5.0 (OK)\r\npycodestyle >=2.3: 2.3.1 (OK)\r\npygments >=2.0 : 2.2.0 (OK)\r\npandas >=0.13.1 : 0.19.2 (OK)\r\nnumpy >=1.7 : 1.12.1 (OK)\r\nsphinx >=0.6.6 : 1.5.4 (OK)\r\nrope >=0.9.4 : 0.10.5 (OK)\r\njedi >=0.9.0 : 0.10.2 (OK)\r\npsutil >=0.3 : 5.2.1 (OK)\r\nnbconvert >=4.0 : 5.1.1 (OK)\r\nsympy >=0.7.3 : None (NOK)\r\ncython >=0.21 : None (NOK)\r\nqtconsole >=4.2.0: 4.3.0 (OK)\r\nIPython >=4.0 : 6.1.0 (OK)\r\npylint >=0.25 : 1.6.4 (OK)\r\n\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\r\n#\r\n# Copyright \u00a9 Spyder Project Contributors\r\n# Licensed under the terms of the MIT License\r\n# (see spyder/__init__.py for details)\r\n\r\n\"\"\"\r\nSpyder third-party plugins configuration management\r\n\"\"\"\r\n\r\nimport os\r\nimport os.path as osp\r\nimport sys\r\nimport traceback\r\n\r\n# Local imports\r\nfrom spyder.config.base import get_conf_path\r\nfrom spyder.py3compat import PY2\r\n\r\nif PY2:\r\n import imp\r\nelse:\r\n import importlib\r\n\r\n\r\nUSER_PLUGIN_DIR = \"plugins\"\r\nPLUGIN_PREFIX = \"spyder_\"\r\nIO_PREFIX = PLUGIN_PREFIX + \"io_\"\r\n\r\n\r\ndef get_spyderplugins_mods(io=False):\r\n \"\"\"Import modules from plugins package and return the list\"\"\"\r\n # Create user directory\r\n user_plugin_path = osp.join(get_conf_path(), USER_PLUGIN_DIR)\r\n if not osp.isdir(user_plugin_path):\r\n os.makedirs(user_plugin_path)\r\n\r\n modlist, modnames = [], []\r\n\r\n # The user plugins directory is given the priority when looking for modules\r\n for plugin_path in [user_plugin_path] + sys.path:\r\n _get_spyderplugins(plugin_path, io, modnames, modlist)\r\n return modlist\r\n\r\n\r\ndef _get_spyderplugins(plugin_path, is_io, modnames, modlist):\r\n \"\"\"Scan the directory `plugin_path` for plugin packages and loads them.\"\"\"\r\n if not osp.isdir(plugin_path):\r\n return\r\n\r\n for name in os.listdir(plugin_path):\r\n # This is needed in order to register the spyder_io_hdf5 plugin.\r\n # See issue 4487\r\n # Is this a Spyder plugin?\r\n if not name.startswith(PLUGIN_PREFIX):\r\n continue\r\n # Ensure right type of plugin\r\n if is_io != 
name.startswith(IO_PREFIX):\r\n continue\r\n\r\n # Import the plugin\r\n _import_plugin(name, plugin_path, modnames, modlist)\r\n\r\n\r\ndef _import_plugin(module_name, plugin_path, modnames, modlist):\r\n \"\"\"Import the plugin `module_name` from `plugin_path`, add it to `modlist`\r\n and adds its name to `modnames`.\r\n \"\"\"\r\n if module_name in modnames:\r\n return\r\n try:\r\n # First add a mock module with the LOCALEPATH attribute so that the\r\n # helper method can find the locale on import\r\n mock = _ModuleMock()\r\n mock.LOCALEPATH = osp.join(plugin_path, module_name, 'locale')\r\n sys.modules[module_name] = mock\r\n\r\n if osp.isdir(osp.join(plugin_path, module_name)):\r\n module = _import_module_from_path(module_name, plugin_path)\r\n else:\r\n module = None\r\n\r\n # Then restore the actual loaded module instead of the mock\r\n if module:\r\n sys.modules[module_name] = module\r\n modlist.append(module)\r\n modnames.append(module_name)\r\n except Exception:\r\n sys.stderr.write(\"ERROR: 3rd party plugin import failed for \"\r\n \"`{0}`\\n\".format(module_name))\r\n traceback.print_exc(file=sys.stderr)\r\n\r\n\r\ndef _import_module_from_path(module_name, plugin_path):\r\n \"\"\"Imports `module_name` from `plugin_path`.\r\n\r\n Return None if no module is found.\r\n \"\"\"\r\n module = None\r\n if PY2:\r\n info = imp.find_module(module_name, [plugin_path])\r\n if info:\r\n module = imp.load_module(module_name, *info)\r\n elif sys.version_info[0:2] <= (3, 3):\r\n loader = importlib.machinery.PathFinder.find_module(\r\n module_name,\r\n [plugin_path])\r\n if loader:\r\n module = loader.load_module(module_name)\r\n else: # Python 3.4+\r\n spec = importlib.machinery.PathFinder.find_spec(\r\n module_name,\r\n [plugin_path])\r\n if spec:\r\n module = spec.loader.load_module(module_name)\r\n return module\r\n\r\n\r\nclass _ModuleMock():\r\n \"\"\"This mock module is added to sys.modules on plugin load to add the\r\n location of the LOCALEDATA so that the module loads succesfully.\r\n Once loaded the module is replaced by the actual loaded module object.\r\n \"\"\"\r\n pass\r\n", "path": "spyder/otherplugins.py"}]} | 2,224 | 172 |
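
The fix in this record's diff skips directory entries that merely share the `spyder_` prefix, filtering out `dist-info`, `egg.info`, and `egg-link` names before attempting an import. A small stand-alone sketch of that filter (hypothetical helper name, not Spyder's API):

```python
# Mirrors the suffix filter added by the patch; names here are illustrative.
FORBIDDEN_SUFFIXES = ("dist-info", "egg.info", "egg-link")


def looks_like_plugin(name, prefix="spyder_"):
    if not name.startswith(prefix):
        return False
    # Packaging metadata folders are not importable plugin packages.
    return not any(name.endswith(suffix) for suffix in FORBIDDEN_SUFFIXES)


if __name__ == "__main__":
    print(looks_like_plugin("spyder_io_hdf5"))            # True
    print(looks_like_plugin("spyder_foo-1.0.dist-info"))  # False
```
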
gh_patches_debug_7825 | rasdani/github-patches | git_diff | spack__spack-4606 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: mxml / adios link error
This seems to be related to the pull request https://github.com/LLNL/spack/pull/3696 that I abandoned a few months back, but which has now come back to haunt me.
If I use the current spack setup for adios and mxml, I receive the following error when I try to link against the adios library:
ld: /home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a(mxml-attr.o):
relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
Interestingly enough, the adios library is always being built as a `.a`, regardless of +shared, ~shared, whatever.
If I avoid the external mxml library for ADIOS - relying instead on adios' own internal copy - I don't see this issue.
I'll post a commit and pull-request and see if someone can tell me why.
BUG: mxml / adios link error
This seems to be related to the pull request https://github.com/LLNL/spack/pull/3696 that I abandoned a few months back, but which has now come back to haunt me.
If I use the current spack setup for adios and mxml, I receive the following error when I try to link against the adios library:
ld: /home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a(mxml-attr.o):
relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC
/home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a: error adding symbols: Bad value
collect2: error: ld returned 1 exit status
Interestingly enough, the adios library is always being built as a `.a`, regardless of +shared, ~shared, whatever.
If I avoid the external mxml library for ADIOS - relying instead on adios' own internal copy - I don't see this issue.
I'll post a commit and pull-request and see if someone can tell me why.
</issue>
<code>
[start of var/spack/repos/builtin/packages/mxml/package.py]
1 ##############################################################################
2 # Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.
3 # Produced at the Lawrence Livermore National Laboratory.
4 #
5 # This file is part of Spack.
6 # Created by Todd Gamblin, [email protected], All rights reserved.
7 # LLNL-CODE-647188
8 #
9 # For details, see https://github.com/llnl/spack
10 # Please also see the NOTICE and LICENSE files for our notice and the LGPL.
11 #
12 # This program is free software; you can redistribute it and/or modify
13 # it under the terms of the GNU Lesser General Public License (as
14 # published by the Free Software Foundation) version 2.1, February 1999.
15 #
16 # This program is distributed in the hope that it will be useful, but
17 # WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF
18 # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and
19 # conditions of the GNU Lesser General Public License for more details.
20 #
21 # You should have received a copy of the GNU Lesser General Public
22 # License along with this program; if not, write to the Free Software
23 # Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
24 ##############################################################################
25 from spack import *
26
27
28 class Mxml(AutotoolsPackage):
29 """Mini-XML is a small XML library that you can use to read and write XML
30 and XML-like data files in your application without requiring large
31 non-standard libraries.
32 """
33
34 homepage = "http://michaelrsweet.github.io/mxml/"
35 url = "https://github.com/michaelrsweet/mxml/releases/download/release-2.10/mxml-2.10.tar.gz"
36
37 version('2.10', '8804c961a24500a95690ef287d150abe')
38 version('2.9', 'e21cad0f7aacd18f942aa0568a8dee19')
39 version('2.8', 'd85ee6d30de053581242c4a86e79a5d2')
40 version('2.7', '76f2ae49bf0f5745d5cb5d9507774dc9')
41 version('2.6', '68977789ae64985dddbd1a1a1652642e')
42 version('2.5', 'f706377fba630b39fa02fd63642b17e5')
43
44 def url_for_version(self, version):
45 if version <= Version('2.7'):
46 return 'https://github.com/michaelrsweet/mxml/archive/release-{0}.tar.gz'.format(version)
47 else:
48 return 'https://github.com/michaelrsweet/mxml/releases/download/release-{0}/mxml-{0}.tar.gz'.format(version)
49
50 def configure_args(self):
51 # Default is non-shared, but avoid any future surprises
52 return ['--disable-shared']
53
[end of var/spack/repos/builtin/packages/mxml/package.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/var/spack/repos/builtin/packages/mxml/package.py b/var/spack/repos/builtin/packages/mxml/package.py
--- a/var/spack/repos/builtin/packages/mxml/package.py
+++ b/var/spack/repos/builtin/packages/mxml/package.py
@@ -48,5 +48,9 @@
return 'https://github.com/michaelrsweet/mxml/releases/download/release-{0}/mxml-{0}.tar.gz'.format(version)
def configure_args(self):
- # Default is non-shared, but avoid any future surprises
- return ['--disable-shared']
+ return [
+ # ADIOS build with -fPIC, so we need it too (avoid linkage issue)
+ 'CFLAGS=-fPIC',
+ # Default is non-shared, but avoid any future surprises
+ '--disable-shared',
+ ]
| {"golden_diff": "diff --git a/var/spack/repos/builtin/packages/mxml/package.py b/var/spack/repos/builtin/packages/mxml/package.py\n--- a/var/spack/repos/builtin/packages/mxml/package.py\n+++ b/var/spack/repos/builtin/packages/mxml/package.py\n@@ -48,5 +48,9 @@\n return 'https://github.com/michaelrsweet/mxml/releases/download/release-{0}/mxml-{0}.tar.gz'.format(version)\n \n def configure_args(self):\n- # Default is non-shared, but avoid any future surprises\n- return ['--disable-shared']\n+ return [\n+ # ADIOS build with -fPIC, so we need it too (avoid linkage issue)\n+ 'CFLAGS=-fPIC',\n+ # Default is non-shared, but avoid any future surprises\n+ '--disable-shared',\n+ ]\n", "issue": "BUG: mxml / adios link error\nThis seems to be related to the pull request https://github.com/LLNL/spack/pull/3696 that I abandoned a few months back, but which has now come back to haunt me.\r\nIf I use the current spack setup for adios and mxml, I receive the following error when I try to link against the adios library:\r\n\r\n ld: /home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a(mxml-attr.o):\r\n relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\r\n /home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a: error adding symbols: Bad value\r\ncollect2: error: ld returned 1 exit status\r\n\r\nInterestingly enough, the adios library is always being built as a `.a`, regardless of +shared, ~shared, whatever.\r\n\r\nIf I avoid the external mxml library for ADIOS - relying instead on adios' own internal copy - I don't see this issue.\r\nI'll post a commit and pull-request and see if someone can tell me why.\nBUG: mxml / adios link error\nThis seems to be related to the pull request https://github.com/LLNL/spack/pull/3696 that I abandoned a few months back, but which has now come back to haunt me.\r\nIf I use the current spack setup for adios and mxml, I receive the following error when I try to link against the adios library:\r\n\r\n ld: /home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a(mxml-attr.o):\r\n relocation R_X86_64_32 against `.rodata.str1.1' can not be used when making a shared object; recompile with -fPIC\r\n /home/mol/Software/spack/opt/spack/linux-opensuse42-x86_64/gcc-4.8/mxml-2.10-zxpuyumt66b7ehrlidmcax7li5ekhr6x/lib64/libmxml.a: error adding symbols: Bad value\r\ncollect2: error: ld returned 1 exit status\r\n\r\nInterestingly enough, the adios library is always being built as a `.a`, regardless of +shared, ~shared, whatever.\r\n\r\nIf I avoid the external mxml library for ADIOS - relying instead on adios' own internal copy - I don't see this issue.\r\nI'll post a commit and pull-request and see if someone can tell me why.\n", "before_files": [{"content": "##############################################################################\n# Copyright (c) 2013-2016, Lawrence Livermore National Security, LLC.\n# Produced at the Lawrence Livermore National Laboratory.\n#\n# This file is part of Spack.\n# Created by Todd Gamblin, [email protected], All rights reserved.\n# LLNL-CODE-647188\n#\n# For details, see https://github.com/llnl/spack\n# Please also see the NOTICE and LICENSE files for our notice and the LGPL.\n#\n# This program is free software; you can redistribute it and/or modify\n# it under the terms of the GNU 
Lesser General Public License (as\n# published by the Free Software Foundation) version 2.1, February 1999.\n#\n# This program is distributed in the hope that it will be useful, but\n# WITHOUT ANY WARRANTY; without even the IMPLIED WARRANTY OF\n# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the terms and\n# conditions of the GNU Lesser General Public License for more details.\n#\n# You should have received a copy of the GNU Lesser General Public\n# License along with this program; if not, write to the Free Software\n# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA\n##############################################################################\nfrom spack import *\n\n\nclass Mxml(AutotoolsPackage):\n \"\"\"Mini-XML is a small XML library that you can use to read and write XML\n and XML-like data files in your application without requiring large\n non-standard libraries.\n \"\"\"\n\n homepage = \"http://michaelrsweet.github.io/mxml/\"\n url = \"https://github.com/michaelrsweet/mxml/releases/download/release-2.10/mxml-2.10.tar.gz\"\n\n version('2.10', '8804c961a24500a95690ef287d150abe')\n version('2.9', 'e21cad0f7aacd18f942aa0568a8dee19')\n version('2.8', 'd85ee6d30de053581242c4a86e79a5d2')\n version('2.7', '76f2ae49bf0f5745d5cb5d9507774dc9')\n version('2.6', '68977789ae64985dddbd1a1a1652642e')\n version('2.5', 'f706377fba630b39fa02fd63642b17e5')\n\n def url_for_version(self, version):\n if version <= Version('2.7'):\n return 'https://github.com/michaelrsweet/mxml/archive/release-{0}.tar.gz'.format(version)\n else:\n return 'https://github.com/michaelrsweet/mxml/releases/download/release-{0}/mxml-{0}.tar.gz'.format(version)\n\n def configure_args(self):\n # Default is non-shared, but avoid any future surprises\n return ['--disable-shared']\n", "path": "var/spack/repos/builtin/packages/mxml/package.py"}]} | 2,061 | 184 |
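
The linker message quoted in this record's issue (`relocation R_X86_64_32 ... recompile with -fPIC`) is the usual symptom of folding a non-PIC static archive into a shared object, which is why the diff adds `CFLAGS=-fPIC` to mxml's configure arguments. As a rough, illustrative spot check (the path and the string heuristic are assumptions, not part of the Spack package):

```python
# Crude diagnostic sketch: list relocations in a static archive with readelf
# and flag absolute 32-bit relocations, the kind named in the linker error.
import subprocess


def has_abs32_relocations(archive_path):
    out = subprocess.run(
        ["readelf", "-r", archive_path],
        capture_output=True, text=True, check=False,
    ).stdout
    return "R_X86_64_32 " in out  # rough string match, enough for a spot check


if __name__ == "__main__":
    print(has_abs32_relocations("libmxml.a"))  # path is illustrative
```
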
gh_patches_debug_29724 | rasdani/github-patches | git_diff | pallets__click-545 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bash completion for MultiCommands returns subcommands twice until it stops.
Python 2.7.10
broken in click >= 6
ok in click (5.1)
```
manage hello hello world world
```
</issue>
<code>
[start of click/_bashcomplete.py]
1 import os
2 import re
3 from .utils import echo
4 from .parser import split_arg_string
5 from .core import MultiCommand, Option
6
7
8 COMPLETION_SCRIPT = '''
9 %(complete_func)s() {
10 COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\
11 COMP_CWORD=$COMP_CWORD \\
12 %(autocomplete_var)s=complete $1 ) )
13 return 0
14 }
15
16 complete -F %(complete_func)s -o default %(script_names)s
17 '''
18
19 _invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')
20
21
22 def get_completion_script(prog_name, complete_var):
23 cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))
24 return (COMPLETION_SCRIPT % {
25 'complete_func': '_%s_completion' % cf_name,
26 'script_names': prog_name,
27 'autocomplete_var': complete_var,
28 }).strip() + ';'
29
30
31 def resolve_ctx(cli, prog_name, args):
32 ctx = cli.make_context(prog_name, args, resilient_parsing=True)
33 while ctx.args and isinstance(ctx.command, MultiCommand):
34 cmd = ctx.command.get_command(ctx, ctx.args[0])
35 if cmd is None:
36 return None
37 ctx = cmd.make_context(ctx.args[0], ctx.args[1:], parent=ctx,
38 resilient_parsing=True)
39 return ctx
40
41
42 def do_complete(cli, prog_name):
43 cwords = split_arg_string(os.environ['COMP_WORDS'])
44 cword = int(os.environ['COMP_CWORD'])
45 args = cwords[1:cword]
46 try:
47 incomplete = cwords[cword]
48 except IndexError:
49 incomplete = ''
50
51 ctx = resolve_ctx(cli, prog_name, args)
52 if ctx is None:
53 return True
54
55 choices = []
56 if incomplete and not incomplete[:1].isalnum():
57 for param in ctx.command.params:
58 if not isinstance(param, Option):
59 continue
60 choices.extend(param.opts)
61 choices.extend(param.secondary_opts)
62 elif isinstance(ctx.command, MultiCommand):
63 choices.extend(ctx.command.list_commands(ctx))
64
65 for item in choices:
66 if item.startswith(incomplete):
67 echo(item)
68
69 return True
70
71
72 def bashcomplete(cli, prog_name, complete_var, complete_instr):
73 if complete_instr == 'source':
74 echo(get_completion_script(prog_name, complete_var))
75 return True
76 elif complete_instr == 'complete':
77 return do_complete(cli, prog_name)
78 return False
79
[end of click/_bashcomplete.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py
--- a/click/_bashcomplete.py
+++ b/click/_bashcomplete.py
@@ -30,27 +30,19 @@
def resolve_ctx(cli, prog_name, args):
ctx = cli.make_context(prog_name, args, resilient_parsing=True)
- while ctx.args and isinstance(ctx.command, MultiCommand):
- cmd = ctx.command.get_command(ctx, ctx.args[0])
+ while ctx.args + ctx.protected_args and isinstance(ctx.command, MultiCommand):
+ a = ctx.args + ctx.protected_args
+ cmd = ctx.command.get_command(ctx, a[0])
if cmd is None:
return None
- ctx = cmd.make_context(ctx.args[0], ctx.args[1:], parent=ctx,
- resilient_parsing=True)
+ ctx = cmd.make_context(a[0], a[1:], parent=ctx, resilient_parsing=True)
return ctx
-def do_complete(cli, prog_name):
- cwords = split_arg_string(os.environ['COMP_WORDS'])
- cword = int(os.environ['COMP_CWORD'])
- args = cwords[1:cword]
- try:
- incomplete = cwords[cword]
- except IndexError:
- incomplete = ''
-
+def get_choices(cli, prog_name, args, incomplete):
ctx = resolve_ctx(cli, prog_name, args)
if ctx is None:
- return True
+ return
choices = []
if incomplete and not incomplete[:1].isalnum():
@@ -64,7 +56,20 @@
for item in choices:
if item.startswith(incomplete):
- echo(item)
+ yield item
+
+
+def do_complete(cli, prog_name):
+ cwords = split_arg_string(os.environ['COMP_WORDS'])
+ cword = int(os.environ['COMP_CWORD'])
+ args = cwords[1:cword]
+ try:
+ incomplete = cwords[cword]
+ except IndexError:
+ incomplete = ''
+
+ for item in get_choices(cli, prog_name, args, incomplete):
+ echo(item)
return True
| {"golden_diff": "diff --git a/click/_bashcomplete.py b/click/_bashcomplete.py\n--- a/click/_bashcomplete.py\n+++ b/click/_bashcomplete.py\n@@ -30,27 +30,19 @@\n \n def resolve_ctx(cli, prog_name, args):\n ctx = cli.make_context(prog_name, args, resilient_parsing=True)\n- while ctx.args and isinstance(ctx.command, MultiCommand):\n- cmd = ctx.command.get_command(ctx, ctx.args[0])\n+ while ctx.args + ctx.protected_args and isinstance(ctx.command, MultiCommand):\n+ a = ctx.args + ctx.protected_args\n+ cmd = ctx.command.get_command(ctx, a[0])\n if cmd is None:\n return None\n- ctx = cmd.make_context(ctx.args[0], ctx.args[1:], parent=ctx,\n- resilient_parsing=True)\n+ ctx = cmd.make_context(a[0], a[1:], parent=ctx, resilient_parsing=True)\n return ctx\n \n \n-def do_complete(cli, prog_name):\n- cwords = split_arg_string(os.environ['COMP_WORDS'])\n- cword = int(os.environ['COMP_CWORD'])\n- args = cwords[1:cword]\n- try:\n- incomplete = cwords[cword]\n- except IndexError:\n- incomplete = ''\n-\n+def get_choices(cli, prog_name, args, incomplete):\n ctx = resolve_ctx(cli, prog_name, args)\n if ctx is None:\n- return True\n+ return\n \n choices = []\n if incomplete and not incomplete[:1].isalnum():\n@@ -64,7 +56,20 @@\n \n for item in choices:\n if item.startswith(incomplete):\n- echo(item)\n+ yield item\n+\n+\n+def do_complete(cli, prog_name):\n+ cwords = split_arg_string(os.environ['COMP_WORDS'])\n+ cword = int(os.environ['COMP_CWORD'])\n+ args = cwords[1:cword]\n+ try:\n+ incomplete = cwords[cword]\n+ except IndexError:\n+ incomplete = ''\n+\n+ for item in get_choices(cli, prog_name, args, incomplete):\n+ echo(item)\n \n return True\n", "issue": "Bash completion for MultiCommands returns subcommands twice until it stops.\nPython 2.7.10\nbroken in click >= 6\nok in click (5.1)\n\n```\nmanage hello hello world world\n```\n\n", "before_files": [{"content": "import os\nimport re\nfrom .utils import echo\nfrom .parser import split_arg_string\nfrom .core import MultiCommand, Option\n\n\nCOMPLETION_SCRIPT = '''\n%(complete_func)s() {\n COMPREPLY=( $( env COMP_WORDS=\"${COMP_WORDS[*]}\" \\\\\n COMP_CWORD=$COMP_CWORD \\\\\n %(autocomplete_var)s=complete $1 ) )\n return 0\n}\n\ncomplete -F %(complete_func)s -o default %(script_names)s\n'''\n\n_invalid_ident_char_re = re.compile(r'[^a-zA-Z0-9_]')\n\n\ndef get_completion_script(prog_name, complete_var):\n cf_name = _invalid_ident_char_re.sub('', prog_name.replace('-', '_'))\n return (COMPLETION_SCRIPT % {\n 'complete_func': '_%s_completion' % cf_name,\n 'script_names': prog_name,\n 'autocomplete_var': complete_var,\n }).strip() + ';'\n\n\ndef resolve_ctx(cli, prog_name, args):\n ctx = cli.make_context(prog_name, args, resilient_parsing=True)\n while ctx.args and isinstance(ctx.command, MultiCommand):\n cmd = ctx.command.get_command(ctx, ctx.args[0])\n if cmd is None:\n return None\n ctx = cmd.make_context(ctx.args[0], ctx.args[1:], parent=ctx,\n resilient_parsing=True)\n return ctx\n\n\ndef do_complete(cli, prog_name):\n cwords = split_arg_string(os.environ['COMP_WORDS'])\n cword = int(os.environ['COMP_CWORD'])\n args = cwords[1:cword]\n try:\n incomplete = cwords[cword]\n except IndexError:\n incomplete = ''\n\n ctx = resolve_ctx(cli, prog_name, args)\n if ctx is None:\n return True\n\n choices = []\n if incomplete and not incomplete[:1].isalnum():\n for param in ctx.command.params:\n if not isinstance(param, Option):\n continue\n choices.extend(param.opts)\n choices.extend(param.secondary_opts)\n elif isinstance(ctx.command, MultiCommand):\n 
choices.extend(ctx.command.list_commands(ctx))\n\n for item in choices:\n if item.startswith(incomplete):\n echo(item)\n\n return True\n\n\ndef bashcomplete(cli, prog_name, complete_var, complete_instr):\n if complete_instr == 'source':\n echo(get_completion_script(prog_name, complete_var))\n return True\n elif complete_instr == 'complete':\n return do_complete(cli, prog_name)\n return False\n", "path": "click/_bashcomplete.py"}]} | 1,259 | 478 |
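The golden diff above splits choice generation out of `do_complete` into a `get_choices` generator and walks `ctx.args + ctx.protected_args` when resolving nested commands, which is what stops subcommands from being offered twice. A minimal sketch of how the refactored helper behaves, assuming the click 6.x era private module patched above; the `manage`/`hello`/`world` CLI is invented for illustration:

```python
# Sketch only: exercises click's private _bashcomplete module as patched above.
import click
from click._bashcomplete import get_choices

@click.group()
def manage():
    pass

@manage.group()
def hello():
    pass

@hello.command()
def world():
    pass

# Completing "manage " offers the group's commands once each.
print(list(get_choices(manage, "manage", [], "")))         # ['hello']
# Completing "manage hello " resolves the nested group and offers 'world' once.
print(list(get_choices(manage, "manage", ["hello"], "")))  # ['world']
```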
gh_patches_debug_14200 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-898 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Client Address is not displayed for flows from dumpfiles
I have two clients connecting to mitmproxy now, and I'd like to see the client IP for each request in the "Detail" tab.
Maybe show the server's IP too, since it can differ when connections come from different local IPs (e.g. 127.0.0.1 and 192.168.1.101).
</issue>
<code>
[start of libmproxy/console/flowdetailview.py]
1 from __future__ import absolute_import
2 import urwid
3 from . import common, searchable
4 from .. import utils
5
6
7 def maybe_timestamp(base, attr):
8 if base and getattr(base, attr):
9 return utils.format_timestamp_with_milli(getattr(base, attr))
10 else:
11 return "active"
12 pass
13
14
15 def flowdetails(state, flow):
16 text = []
17
18 cc = flow.client_conn
19 sc = flow.server_conn
20 req = flow.request
21 resp = flow.response
22
23 if sc:
24 text.append(urwid.Text([("head", "Server Connection:")]))
25 parts = [
26 ["Address", "%s:%s" % sc.address()],
27 ]
28
29 text.extend(
30 common.format_keyvals(parts, key="key", val="text", indent=4)
31 )
32
33 c = sc.cert
34 if c:
35 text.append(urwid.Text([("head", "Server Certificate:")]))
36 parts = [
37 ["Type", "%s, %s bits" % c.keyinfo],
38 ["SHA1 digest", c.digest("sha1")],
39 ["Valid to", str(c.notafter)],
40 ["Valid from", str(c.notbefore)],
41 ["Serial", str(c.serial)],
42 [
43 "Subject",
44 urwid.BoxAdapter(
45 urwid.ListBox(
46 common.format_keyvals(
47 c.subject,
48 key="highlight",
49 val="text"
50 )
51 ),
52 len(c.subject)
53 )
54 ],
55 [
56 "Issuer",
57 urwid.BoxAdapter(
58 urwid.ListBox(
59 common.format_keyvals(
60 c.issuer, key="highlight", val="text"
61 )
62 ),
63 len(c.issuer)
64 )
65 ]
66 ]
67
68 if c.altnames:
69 parts.append(
70 [
71 "Alt names",
72 ", ".join(c.altnames)
73 ]
74 )
75 text.extend(
76 common.format_keyvals(parts, key="key", val="text", indent=4)
77 )
78
79 if cc:
80 text.append(urwid.Text([("head", "Client Connection:")]))
81
82 parts = [
83 ["Address", "%s:%s" % cc.address()],
84 # ["Requests", "%s"%cc.requestcount],
85 ]
86
87 text.extend(
88 common.format_keyvals(parts, key="key", val="text", indent=4)
89 )
90
91 parts = []
92
93 parts.append(
94 [
95 "Client conn. established",
96 maybe_timestamp(cc, "timestamp_start")
97 ]
98 )
99 parts.append(
100 [
101 "Server conn. initiated",
102 maybe_timestamp(sc, "timestamp_start")
103 ]
104 )
105 parts.append(
106 [
107 "Server conn. TCP handshake",
108 maybe_timestamp(sc, "timestamp_tcp_setup")
109 ]
110 )
111 if sc.ssl_established:
112 parts.append(
113 [
114 "Server conn. SSL handshake",
115 maybe_timestamp(sc, "timestamp_ssl_setup")
116 ]
117 )
118 parts.append(
119 [
120 "Client conn. SSL handshake",
121 maybe_timestamp(cc, "timestamp_ssl_setup")
122 ]
123 )
124 parts.append(
125 [
126 "First request byte",
127 maybe_timestamp(req, "timestamp_start")
128 ]
129 )
130 parts.append(
131 [
132 "Request complete",
133 maybe_timestamp(req, "timestamp_end")
134 ]
135 )
136 parts.append(
137 [
138 "First response byte",
139 maybe_timestamp(resp, "timestamp_start")
140 ]
141 )
142 parts.append(
143 [
144 "Response complete",
145 maybe_timestamp(resp, "timestamp_end")
146 ]
147 )
148
149 # sort operations by timestamp
150 parts = sorted(parts, key=lambda p: p[1])
151
152 text.append(urwid.Text([("head", "Timing:")]))
153 text.extend(common.format_keyvals(parts, key="key", val="text", indent=4))
154 return searchable.Searchable(state, text)
155
[end of libmproxy/console/flowdetailview.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/libmproxy/console/flowdetailview.py b/libmproxy/console/flowdetailview.py
--- a/libmproxy/console/flowdetailview.py
+++ b/libmproxy/console/flowdetailview.py
@@ -20,7 +20,7 @@
req = flow.request
resp = flow.response
- if sc:
+ if sc is not None:
text.append(urwid.Text([("head", "Server Connection:")]))
parts = [
["Address", "%s:%s" % sc.address()],
@@ -76,7 +76,7 @@
common.format_keyvals(parts, key="key", val="text", indent=4)
)
- if cc:
+ if cc is not None:
text.append(urwid.Text([("head", "Client Connection:")]))
parts = [
| {"golden_diff": "diff --git a/libmproxy/console/flowdetailview.py b/libmproxy/console/flowdetailview.py\n--- a/libmproxy/console/flowdetailview.py\n+++ b/libmproxy/console/flowdetailview.py\n@@ -20,7 +20,7 @@\n req = flow.request\n resp = flow.response\n \n- if sc:\n+ if sc is not None:\n text.append(urwid.Text([(\"head\", \"Server Connection:\")]))\n parts = [\n [\"Address\", \"%s:%s\" % sc.address()],\n@@ -76,7 +76,7 @@\n common.format_keyvals(parts, key=\"key\", val=\"text\", indent=4)\n )\n \n- if cc:\n+ if cc is not None:\n text.append(urwid.Text([(\"head\", \"Client Connection:\")]))\n \n parts = [\n", "issue": "Client Address is not displayed for flows from dumpfiles\nI have two clients connecting to mitmproxy now, and I'd like to see the client IP for each request in the \"Detail\" tab.\n\nMaybe show the server's IP too, since it can differ when connections come from different local IPs (e.g. 127.0.0.1 and 192.168.1.101).\n\n", "before_files": [{"content": "from __future__ import absolute_import\nimport urwid\nfrom . import common, searchable\nfrom .. import utils\n\n\ndef maybe_timestamp(base, attr):\n if base and getattr(base, attr):\n return utils.format_timestamp_with_milli(getattr(base, attr))\n else:\n return \"active\"\n pass\n\n\ndef flowdetails(state, flow):\n text = []\n\n cc = flow.client_conn\n sc = flow.server_conn\n req = flow.request\n resp = flow.response\n\n if sc:\n text.append(urwid.Text([(\"head\", \"Server Connection:\")]))\n parts = [\n [\"Address\", \"%s:%s\" % sc.address()],\n ]\n\n text.extend(\n common.format_keyvals(parts, key=\"key\", val=\"text\", indent=4)\n )\n\n c = sc.cert\n if c:\n text.append(urwid.Text([(\"head\", \"Server Certificate:\")]))\n parts = [\n [\"Type\", \"%s, %s bits\" % c.keyinfo],\n [\"SHA1 digest\", c.digest(\"sha1\")],\n [\"Valid to\", str(c.notafter)],\n [\"Valid from\", str(c.notbefore)],\n [\"Serial\", str(c.serial)],\n [\n \"Subject\",\n urwid.BoxAdapter(\n urwid.ListBox(\n common.format_keyvals(\n c.subject,\n key=\"highlight\",\n val=\"text\"\n )\n ),\n len(c.subject)\n )\n ],\n [\n \"Issuer\",\n urwid.BoxAdapter(\n urwid.ListBox(\n common.format_keyvals(\n c.issuer, key=\"highlight\", val=\"text\"\n )\n ),\n len(c.issuer)\n )\n ]\n ]\n\n if c.altnames:\n parts.append(\n [\n \"Alt names\",\n \", \".join(c.altnames)\n ]\n )\n text.extend(\n common.format_keyvals(parts, key=\"key\", val=\"text\", indent=4)\n )\n\n if cc:\n text.append(urwid.Text([(\"head\", \"Client Connection:\")]))\n\n parts = [\n [\"Address\", \"%s:%s\" % cc.address()],\n # [\"Requests\", \"%s\"%cc.requestcount],\n ]\n\n text.extend(\n common.format_keyvals(parts, key=\"key\", val=\"text\", indent=4)\n )\n\n parts = []\n\n parts.append(\n [\n \"Client conn. established\",\n maybe_timestamp(cc, \"timestamp_start\")\n ]\n )\n parts.append(\n [\n \"Server conn. initiated\",\n maybe_timestamp(sc, \"timestamp_start\")\n ]\n )\n parts.append(\n [\n \"Server conn. TCP handshake\",\n maybe_timestamp(sc, \"timestamp_tcp_setup\")\n ]\n )\n if sc.ssl_established:\n parts.append(\n [\n \"Server conn. SSL handshake\",\n maybe_timestamp(sc, \"timestamp_ssl_setup\")\n ]\n )\n parts.append(\n [\n \"Client conn. 
SSL handshake\",\n maybe_timestamp(cc, \"timestamp_ssl_setup\")\n ]\n )\n parts.append(\n [\n \"First request byte\",\n maybe_timestamp(req, \"timestamp_start\")\n ]\n )\n parts.append(\n [\n \"Request complete\",\n maybe_timestamp(req, \"timestamp_end\")\n ]\n )\n parts.append(\n [\n \"First response byte\",\n maybe_timestamp(resp, \"timestamp_start\")\n ]\n )\n parts.append(\n [\n \"Response complete\",\n maybe_timestamp(resp, \"timestamp_end\")\n ]\n )\n\n # sort operations by timestamp\n parts = sorted(parts, key=lambda p: p[1])\n\n text.append(urwid.Text([(\"head\", \"Timing:\")]))\n text.extend(common.format_keyvals(parts, key=\"key\", val=\"text\", indent=4))\n return searchable.Searchable(state, text)\n", "path": "libmproxy/console/flowdetailview.py"}]} | 1,802 | 184 |
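The fix in the record above is the classic `if obj:` versus `if obj is not None:` distinction: connection objects restored from a dump file still carry an address but evaluate as falsy, so the detail view skipped them. A generic sketch of that pitfall; `DummyConnection` is invented and is not mitmproxy's real class:

```python
# Sketch of the truthiness pitfall worked around in the diff above.
class DummyConnection:
    def __init__(self, address):
        self.address = address

    def __bool__(self):        # a closed/replayed connection reports False
        return False

    __nonzero__ = __bool__     # Python 2 spelling, as in mitmproxy of that era

cc = DummyConnection(("127.0.0.1", 54321))

if cc:                         # skipped: the object is falsy
    print("truthiness check:", cc.address)

if cc is not None:             # runs: the object exists, so show its details
    print("identity check:", cc.address)
```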
gh_patches_debug_16006 | rasdani/github-patches | git_diff | openvinotoolkit__datumaro-275 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CLI help usage is misleading
The usage recommendation in the CLI help message is misleading.
**Steps:**
```
datum -h
```
**Expected result:**
Correct form for usage recommendation
**Current result:**
```
usage: datumaro [-h] [--version] [--loglevel LOGLEVEL]
```
```
Run 'datumaro COMMAND --help
```
**Notes:**
`datumaro` is not the correct form, and running it results in "command not found". The same applies to command-specific help messages.
</issue>
<code>
[start of datumaro/cli/__main__.py]
1
2 # Copyright (C) 2019-2021 Intel Corporation
3 #
4 # SPDX-License-Identifier: MIT
5
6 import argparse
7 import logging as log
8 import sys
9
10 from . import contexts, commands
11 from .util import CliException, add_subparser
12 from ..version import VERSION
13
14
15 _log_levels = {
16 'debug': log.DEBUG,
17 'info': log.INFO,
18 'warning': log.WARNING,
19 'error': log.ERROR,
20 'critical': log.CRITICAL
21 }
22
23 def loglevel(name):
24 return _log_levels[name]
25
26 class _LogManager:
27 @classmethod
28 def init_logger(cls, args=None):
29 # Define minimalistic parser only to obtain loglevel
30 parser = argparse.ArgumentParser(add_help=False)
31 cls._define_loglevel_option(parser)
32 args, _ = parser.parse_known_args(args)
33
34 log.basicConfig(format='%(asctime)s %(levelname)s: %(message)s',
35 level=args.loglevel)
36
37 @staticmethod
38 def _define_loglevel_option(parser):
39 parser.add_argument('--loglevel', type=loglevel, default='info',
40 help="Logging level (options: %s; default: %s)" % \
41 (', '.join(_log_levels.keys()), "%(default)s"))
42 return parser
43
44
45 def _make_subcommands_help(commands, help_line_start=0):
46 desc = ""
47 for command_name, _, command_help in commands:
48 desc += (" %-" + str(max(0, help_line_start - 2 - 1)) + "s%s\n") % \
49 (command_name, command_help)
50 return desc
51
52 def make_parser():
53 parser = argparse.ArgumentParser(prog="datumaro",
54 description="Dataset Framework",
55 formatter_class=argparse.RawDescriptionHelpFormatter)
56
57 parser.add_argument('--version', action='version', version=VERSION)
58 _LogManager._define_loglevel_option(parser)
59
60 known_contexts = [
61 ('project', contexts.project, "Actions with project (deprecated)"),
62 ('source', contexts.source, "Actions with data sources"),
63 ('model', contexts.model, "Actions with models"),
64 ]
65 known_commands = [
66 ('create', commands.create, "Create project"),
67 ('import', commands.import_, "Create project from existing dataset"),
68 ('add', commands.add, "Add data source to project"),
69 ('remove', commands.remove, "Remove data source from project"),
70 ('export', commands.export, "Export project in some format"),
71 ('filter', commands.filter, "Filter project"),
72 ('transform', commands.transform, "Transform project"),
73 ('merge', commands.merge, "Merge projects"),
74 ('convert', commands.convert, "Convert dataset into another format"),
75 ('diff', commands.diff, "Compare projects with intersection"),
76 ('ediff', commands.ediff, "Compare projects for equality"),
77 ('stats', commands.stats, "Compute project statistics"),
78 ('info', commands.info, "Print project info"),
79 ('explain', commands.explain, "Run Explainable AI algorithm for model"),
80 ('validate', commands.validate, "Validate project")
81 ]
82
83 # Argparse doesn't support subparser groups:
84 # https://stackoverflow.com/questions/32017020/grouping-argparse-subparser-arguments
85 help_line_start = max((len(e[0]) for e in known_contexts + known_commands),
86 default=0)
87 help_line_start = max((2 + help_line_start) // 4 + 1, 6) * 4 # align to tabs
88 subcommands_desc = ""
89 if known_contexts:
90 subcommands_desc += "Contexts:\n"
91 subcommands_desc += _make_subcommands_help(known_contexts,
92 help_line_start)
93 if known_commands:
94 if subcommands_desc:
95 subcommands_desc += "\n"
96 subcommands_desc += "Commands:\n"
97 subcommands_desc += _make_subcommands_help(known_commands,
98 help_line_start)
99 if subcommands_desc:
100 subcommands_desc += \
101 "\nRun '%s COMMAND --help' for more information on a command." % \
102 parser.prog
103
104 subcommands = parser.add_subparsers(title=subcommands_desc,
105 description="", help=argparse.SUPPRESS)
106 for command_name, command, _ in known_contexts + known_commands:
107 add_subparser(subcommands, command_name, command.build_parser)
108
109 return parser
110
111
112 def main(args=None):
113 _LogManager.init_logger(args)
114
115 parser = make_parser()
116 args = parser.parse_args(args)
117
118 if 'command' not in args:
119 parser.print_help()
120 return 1
121
122 try:
123 return args.command(args)
124 except CliException as e:
125 log.error(e)
126 return 1
127 except Exception as e:
128 log.error(e)
129 raise
130
131
132 if __name__ == '__main__':
133 sys.exit(main())
[end of datumaro/cli/__main__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/datumaro/cli/__main__.py b/datumaro/cli/__main__.py
--- a/datumaro/cli/__main__.py
+++ b/datumaro/cli/__main__.py
@@ -5,6 +5,7 @@
import argparse
import logging as log
+import os.path as osp
import sys
from . import contexts, commands
@@ -50,9 +51,11 @@
return desc
def make_parser():
- parser = argparse.ArgumentParser(prog="datumaro",
+ parser = argparse.ArgumentParser(
description="Dataset Framework",
formatter_class=argparse.RawDescriptionHelpFormatter)
+ if parser.prog == osp.basename(__file__): # python -m datumaro ...
+ parser.prog = 'datumaro'
parser.add_argument('--version', action='version', version=VERSION)
_LogManager._define_loglevel_option(parser)
| {"golden_diff": "diff --git a/datumaro/cli/__main__.py b/datumaro/cli/__main__.py\n--- a/datumaro/cli/__main__.py\n+++ b/datumaro/cli/__main__.py\n@@ -5,6 +5,7 @@\n \n import argparse\n import logging as log\n+import os.path as osp\n import sys\n \n from . import contexts, commands\n@@ -50,9 +51,11 @@\n return desc\n \n def make_parser():\n- parser = argparse.ArgumentParser(prog=\"datumaro\",\n+ parser = argparse.ArgumentParser(\n description=\"Dataset Framework\",\n formatter_class=argparse.RawDescriptionHelpFormatter)\n+ if parser.prog == osp.basename(__file__): # python -m datumaro ...\n+ parser.prog = 'datumaro'\n \n parser.add_argument('--version', action='version', version=VERSION)\n _LogManager._define_loglevel_option(parser)\n", "issue": "CLI help usage is misleading\nRecommendation of usage in cli help message is misleading.\r\n\r\n**Steps:**\r\n```\r\ndatum -h\r\n```\r\n**Expected result:**\r\nCorrect form for usage recommendation\r\n\r\n**Current result:**\r\n```\r\nusage: datumaro [-h] [--version] [--loglevel LOGLEVEL]\r\n```\r\n``` Run 'datumaro COMMAND --help```\r\n\r\n**Notes:**\r\ndatumaro is not correct form, and results in \"command not found\". The same is command specific help messages. \r\n\n", "before_files": [{"content": "\n# Copyright (C) 2019-2021 Intel Corporation\n#\n# SPDX-License-Identifier: MIT\n\nimport argparse\nimport logging as log\nimport sys\n\nfrom . import contexts, commands\nfrom .util import CliException, add_subparser\nfrom ..version import VERSION\n\n\n_log_levels = {\n 'debug': log.DEBUG,\n 'info': log.INFO,\n 'warning': log.WARNING,\n 'error': log.ERROR,\n 'critical': log.CRITICAL\n}\n\ndef loglevel(name):\n return _log_levels[name]\n\nclass _LogManager:\n @classmethod\n def init_logger(cls, args=None):\n # Define minimalistic parser only to obtain loglevel\n parser = argparse.ArgumentParser(add_help=False)\n cls._define_loglevel_option(parser)\n args, _ = parser.parse_known_args(args)\n\n log.basicConfig(format='%(asctime)s %(levelname)s: %(message)s',\n level=args.loglevel)\n\n @staticmethod\n def _define_loglevel_option(parser):\n parser.add_argument('--loglevel', type=loglevel, default='info',\n help=\"Logging level (options: %s; default: %s)\" % \\\n (', '.join(_log_levels.keys()), \"%(default)s\"))\n return parser\n\n\ndef _make_subcommands_help(commands, help_line_start=0):\n desc = \"\"\n for command_name, _, command_help in commands:\n desc += (\" %-\" + str(max(0, help_line_start - 2 - 1)) + \"s%s\\n\") % \\\n (command_name, command_help)\n return desc\n\ndef make_parser():\n parser = argparse.ArgumentParser(prog=\"datumaro\",\n description=\"Dataset Framework\",\n formatter_class=argparse.RawDescriptionHelpFormatter)\n\n parser.add_argument('--version', action='version', version=VERSION)\n _LogManager._define_loglevel_option(parser)\n\n known_contexts = [\n ('project', contexts.project, \"Actions with project (deprecated)\"),\n ('source', contexts.source, \"Actions with data sources\"),\n ('model', contexts.model, \"Actions with models\"),\n ]\n known_commands = [\n ('create', commands.create, \"Create project\"),\n ('import', commands.import_, \"Create project from existing dataset\"),\n ('add', commands.add, \"Add data source to project\"),\n ('remove', commands.remove, \"Remove data source from project\"),\n ('export', commands.export, \"Export project in some format\"),\n ('filter', commands.filter, \"Filter project\"),\n ('transform', commands.transform, \"Transform project\"),\n ('merge', commands.merge, \"Merge projects\"),\n 
('convert', commands.convert, \"Convert dataset into another format\"),\n ('diff', commands.diff, \"Compare projects with intersection\"),\n ('ediff', commands.ediff, \"Compare projects for equality\"),\n ('stats', commands.stats, \"Compute project statistics\"),\n ('info', commands.info, \"Print project info\"),\n ('explain', commands.explain, \"Run Explainable AI algorithm for model\"),\n ('validate', commands.validate, \"Validate project\")\n ]\n\n # Argparse doesn't support subparser groups:\n # https://stackoverflow.com/questions/32017020/grouping-argparse-subparser-arguments\n help_line_start = max((len(e[0]) for e in known_contexts + known_commands),\n default=0)\n help_line_start = max((2 + help_line_start) // 4 + 1, 6) * 4 # align to tabs\n subcommands_desc = \"\"\n if known_contexts:\n subcommands_desc += \"Contexts:\\n\"\n subcommands_desc += _make_subcommands_help(known_contexts,\n help_line_start)\n if known_commands:\n if subcommands_desc:\n subcommands_desc += \"\\n\"\n subcommands_desc += \"Commands:\\n\"\n subcommands_desc += _make_subcommands_help(known_commands,\n help_line_start)\n if subcommands_desc:\n subcommands_desc += \\\n \"\\nRun '%s COMMAND --help' for more information on a command.\" % \\\n parser.prog\n\n subcommands = parser.add_subparsers(title=subcommands_desc,\n description=\"\", help=argparse.SUPPRESS)\n for command_name, command, _ in known_contexts + known_commands:\n add_subparser(subcommands, command_name, command.build_parser)\n\n return parser\n\n\ndef main(args=None):\n _LogManager.init_logger(args)\n\n parser = make_parser()\n args = parser.parse_args(args)\n\n if 'command' not in args:\n parser.print_help()\n return 1\n\n try:\n return args.command(args)\n except CliException as e:\n log.error(e)\n return 1\n except Exception as e:\n log.error(e)\n raise\n\n\nif __name__ == '__main__':\n sys.exit(main())", "path": "datumaro/cli/__main__.py"}]} | 1,977 | 193 |
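The patch above drops the hard-coded `prog="datumaro"` and leans on argparse's default, which is the basename of `sys.argv[0]`, so an entry point installed as `datum` reports `usage: datum ...`; only `python -m datumaro` needs the explicit fallback. A small sketch of that default behaviour (standard library only):

```python
# Sketch of the argparse behaviour the fix relies on.
import argparse
import os.path as osp
import sys

parser = argparse.ArgumentParser(description="Dataset Framework")

# prog defaults to the basename of sys.argv[0], e.g. "datum" for the
# installed console script, so the usage line matches the invoked name.
assert parser.prog == osp.basename(sys.argv[0])
print(parser.format_usage())

# The patched __main__.py only overrides prog for "python -m datumaro",
# where argv[0] is the package's __main__.py rather than a script name.
```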
gh_patches_debug_15173 | rasdani/github-patches | git_diff | searxng__searxng-131 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
searx_extra/update/update_languages.py : crash on peertube engine
The URL https://peer.tube/api/v1/videos/languages doesn't exist anymore:
https://github.com/searxng/searxng/blob/ee83c99d2b40efaaf4f3e6635e7a43f7731cb128/searx/engines/peertube.py#L24-L25
See also https://framagit.org/framasoft/peertube/search-index/-/tree/master/server/controllers/api
----
See https://framagit.org/framasoft/peertube/search-index/-/blob/master/client/src/views/Search.vue#L644-667
</issue>
<code>
[start of searx/engines/peertube.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 peertube (Videos)
4 """
5
6 from json import loads
7 from datetime import datetime
8 from urllib.parse import urlencode
9 from searx.utils import html_to_text
10
11 # about
12 about = {
13 "website": 'https://joinpeertube.org',
14 "wikidata_id": 'Q50938515',
15 "official_api_documentation": 'https://docs.joinpeertube.org/api-rest-reference.html',
16 "use_official_api": True,
17 "require_api_key": False,
18 "results": 'JSON',
19 }
20
21 # engine dependent config
22 categories = ["videos"]
23 paging = True
24 base_url = "https://peer.tube"
25 supported_languages_url = base_url + "/api/v1/videos/languages"
26
27
28 # do search-request
29 def request(query, params):
30 sanitized_url = base_url.rstrip("/")
31 pageno = (params["pageno"] - 1) * 15
32 search_url = sanitized_url + "/api/v1/search/videos/?pageno={pageno}&{query}"
33 query_dict = {"search": query}
34 language = params["language"].split("-")[0]
35 # pylint: disable=undefined-variable
36 if "all" != language and language in supported_languages:
37 query_dict["languageOneOf"] = language
38 params["url"] = search_url.format(
39 query=urlencode(query_dict), pageno=pageno
40 )
41 return params
42
43
44 def _get_offset_from_pageno(pageno):
45 return (pageno - 1) * 15 + 1
46
47
48 # get response from search-request
49 def response(resp):
50 sanitized_url = base_url.rstrip("/")
51 results = []
52
53 search_res = loads(resp.text)
54
55 embedded_url = (
56 '<iframe width="560" height="315" sandbox="allow-same-origin allow-scripts allow-popups" '
57 + 'src="'
58 + sanitized_url
59 + '{embed_path}" frameborder="0" allowfullscreen></iframe>'
60 )
61 # return empty array if there are no results
62 if "data" not in search_res:
63 return []
64
65 # parse results
66 for res in search_res["data"]:
67 title = res["name"]
68 url = sanitized_url + "/videos/watch/" + res["uuid"]
69 description = res["description"]
70 if description:
71 content = html_to_text(res["description"])
72 else:
73 content = ""
74 thumbnail = sanitized_url + res["thumbnailPath"]
75 publishedDate = datetime.strptime(res["publishedAt"], "%Y-%m-%dT%H:%M:%S.%fZ")
76 embedded = embedded_url.format(embed_path=res["embedPath"])
77
78 results.append(
79 {
80 "template": "videos.html",
81 "url": url,
82 "title": title,
83 "content": content,
84 "publishedDate": publishedDate,
85 "embedded": embedded,
86 "thumbnail": thumbnail,
87 }
88 )
89
90 # return results
91 return results
92
93
94 def _fetch_supported_languages(resp):
95 peertube_languages = list(loads(resp.text).keys())
96 return peertube_languages
97
[end of searx/engines/peertube.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/searx/engines/peertube.py b/searx/engines/peertube.py
--- a/searx/engines/peertube.py
+++ b/searx/engines/peertube.py
@@ -22,7 +22,9 @@
categories = ["videos"]
paging = True
base_url = "https://peer.tube"
-supported_languages_url = base_url + "/api/v1/videos/languages"
+supported_languages_url = (
+ 'https://framagit.org/framasoft/peertube/search-index/-/raw/master/client/src/views/Search.vue'
+)
# do search-request
@@ -92,5 +94,9 @@
def _fetch_supported_languages(resp):
- peertube_languages = list(loads(resp.text).keys())
+ import re
+
+ # https://docs.python.org/3/howto/regex.html#greedy-versus-non-greedy
+ videolanguages = re.search(r"videoLanguages \(\) \{(.*?)\]", resp.text, re.DOTALL)
+ peertube_languages = [m.group(1) for m in re.finditer(r"\{ id: '([a-z]+)', label:", videolanguages.group(1))]
return peertube_languages
| {"golden_diff": "diff --git a/searx/engines/peertube.py b/searx/engines/peertube.py\n--- a/searx/engines/peertube.py\n+++ b/searx/engines/peertube.py\n@@ -22,7 +22,9 @@\n categories = [\"videos\"]\n paging = True\n base_url = \"https://peer.tube\"\n-supported_languages_url = base_url + \"/api/v1/videos/languages\"\n+supported_languages_url = (\n+ 'https://framagit.org/framasoft/peertube/search-index/-/raw/master/client/src/views/Search.vue'\n+)\n \n \n # do search-request\n@@ -92,5 +94,9 @@\n \n \n def _fetch_supported_languages(resp):\n- peertube_languages = list(loads(resp.text).keys())\n+ import re\n+\n+ # https://docs.python.org/3/howto/regex.html#greedy-versus-non-greedy\n+ videolanguages = re.search(r\"videoLanguages \\(\\) \\{(.*?)\\]\", resp.text, re.DOTALL)\n+ peertube_languages = [m.group(1) for m in re.finditer(r\"\\{ id: '([a-z]+)', label:\", videolanguages.group(1))]\n return peertube_languages\n", "issue": "searx_extra/update/update_languages.py : crash on peertube engine\nThe URL https://peer.tube/api/v1/videos/languages doesn't exist anymore:\r\nhttps://github.com/searxng/searxng/blob/ee83c99d2b40efaaf4f3e6635e7a43f7731cb128/searx/engines/peertube.py#L24-L25\r\n\r\nSee also https://framagit.org/framasoft/peertube/search-index/-/tree/master/server/controllers/api\r\n\r\n----\r\n\r\nSee https://framagit.org/framasoft/peertube/search-index/-/blob/master/client/src/views/Search.vue#L644-667\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n peertube (Videos)\n\"\"\"\n\nfrom json import loads\nfrom datetime import datetime\nfrom urllib.parse import urlencode\nfrom searx.utils import html_to_text\n\n# about\nabout = {\n \"website\": 'https://joinpeertube.org',\n \"wikidata_id\": 'Q50938515',\n \"official_api_documentation\": 'https://docs.joinpeertube.org/api-rest-reference.html',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\n# engine dependent config\ncategories = [\"videos\"]\npaging = True\nbase_url = \"https://peer.tube\"\nsupported_languages_url = base_url + \"/api/v1/videos/languages\"\n\n\n# do search-request\ndef request(query, params):\n sanitized_url = base_url.rstrip(\"/\")\n pageno = (params[\"pageno\"] - 1) * 15\n search_url = sanitized_url + \"/api/v1/search/videos/?pageno={pageno}&{query}\"\n query_dict = {\"search\": query}\n language = params[\"language\"].split(\"-\")[0]\n # pylint: disable=undefined-variable\n if \"all\" != language and language in supported_languages:\n query_dict[\"languageOneOf\"] = language\n params[\"url\"] = search_url.format(\n query=urlencode(query_dict), pageno=pageno\n )\n return params\n\n\ndef _get_offset_from_pageno(pageno):\n return (pageno - 1) * 15 + 1\n\n\n# get response from search-request\ndef response(resp):\n sanitized_url = base_url.rstrip(\"/\")\n results = []\n\n search_res = loads(resp.text)\n\n embedded_url = (\n '<iframe width=\"560\" height=\"315\" sandbox=\"allow-same-origin allow-scripts allow-popups\" '\n + 'src=\"'\n + sanitized_url\n + '{embed_path}\" frameborder=\"0\" allowfullscreen></iframe>'\n )\n # return empty array if there are no results\n if \"data\" not in search_res:\n return []\n\n # parse results\n for res in search_res[\"data\"]:\n title = res[\"name\"]\n url = sanitized_url + \"/videos/watch/\" + res[\"uuid\"]\n description = res[\"description\"]\n if description:\n content = html_to_text(res[\"description\"])\n else:\n content = \"\"\n thumbnail = sanitized_url + res[\"thumbnailPath\"]\n publishedDate = 
datetime.strptime(res[\"publishedAt\"], \"%Y-%m-%dT%H:%M:%S.%fZ\")\n embedded = embedded_url.format(embed_path=res[\"embedPath\"])\n\n results.append(\n {\n \"template\": \"videos.html\",\n \"url\": url,\n \"title\": title,\n \"content\": content,\n \"publishedDate\": publishedDate,\n \"embedded\": embedded,\n \"thumbnail\": thumbnail,\n }\n )\n\n # return results\n return results\n\n\ndef _fetch_supported_languages(resp):\n peertube_languages = list(loads(resp.text).keys())\n return peertube_languages\n", "path": "searx/engines/peertube.py"}]} | 1,579 | 283 |
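The new `supported_languages_url` in the diff above points at the search-index frontend's `Search.vue`, and the languages are pulled out with two regular expressions. A sketch of that extraction against a mocked snippet (the Vue text below is illustrative, not the real upstream file):

```python
# Sketch of the regex extraction used by the patched _fetch_supported_languages.
import re

mock_vue = """
  videoLanguages () {
    return [
      { id: 'en', label: 'English' },
      { id: 'fr', label: 'French' },
      { id: 'de', label: 'German' },
    ]
  },
"""

videolanguages = re.search(r"videoLanguages \(\) \{(.*?)\]", mock_vue, re.DOTALL)
peertube_languages = [
    m.group(1)
    for m in re.finditer(r"\{ id: '([a-z]+)', label:", videolanguages.group(1))
]
print(peertube_languages)  # ['en', 'fr', 'de']
```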
gh_patches_debug_12234 | rasdani/github-patches | git_diff | ivy-llc__ivy-15998 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
multiply
</issue>
<code>
[start of ivy/functional/frontends/paddle/tensor/math.py]
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
10 @to_ivy_arrays_and_back
11 def sin(x, name=None):
12 return ivy.sin(x)
13
14
15 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
16 @to_ivy_arrays_and_back
17 def cos(x, name=None):
18 return ivy.cos(x)
19
20
21 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
22 @to_ivy_arrays_and_back
23 def acos(x, name=None):
24 return ivy.acos(x)
25
26
27 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
28 @to_ivy_arrays_and_back
29 def cosh(x, name=None):
30 return ivy.cosh(x)
31
32
33 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
34 @to_ivy_arrays_and_back
35 def tanh(x, name=None):
36 return ivy.tanh(x)
37
38
39 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
40 @to_ivy_arrays_and_back
41 def acosh(x, name=None):
42 return ivy.acosh(x)
43
44
45 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
46 @to_ivy_arrays_and_back
47 def asin(x, name=None):
48 return ivy.asin(x)
49
50
51 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
52 @to_ivy_arrays_and_back
53 def log_softmax(x, name=None):
54 return ivy.log_softmax(x)
55
56
57 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
58 @to_ivy_arrays_and_back
59 def log(x, name=None):
60 return ivy.log(x)
61
62
63 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
64 @to_ivy_arrays_and_back
65 def divide(x, y, name=None):
66 return ivy.divide(x, y)
67
68
69 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
70 @to_ivy_arrays_and_back
71 def sqrt(x, name=None):
72 return ivy.sqrt(x)
73
74
75 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
76 @to_ivy_arrays_and_back
77 def atanh(x, name=None):
78 return ivy.atanh(x)
79
80
81 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
82 @to_ivy_arrays_and_back
83 def atan(x, name=None):
84 return ivy.atan(x)
85
86
87 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
88 @to_ivy_arrays_and_back
89 def round(x, name=None):
90 return ivy.round(x)
91
92
93 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
94 @to_ivy_arrays_and_back
95 def ceil(x, name=None):
96 return ivy.ceil(x)
97
[end of ivy/functional/frontends/paddle/tensor/math.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py
--- a/ivy/functional/frontends/paddle/tensor/math.py
+++ b/ivy/functional/frontends/paddle/tensor/math.py
@@ -66,6 +66,24 @@
return ivy.divide(x, y)
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def multiply(x, y, name=None):
+ return ivy.multiply(x, y)
+
+
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def add(x, y, name=None):
+ return ivy.add(x, y)
+
+
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def subtract(x, y, name=None):
+ return ivy.subtract(x, y)
+
+
@with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
@to_ivy_arrays_and_back
def sqrt(x, name=None):
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py\n--- a/ivy/functional/frontends/paddle/tensor/math.py\n+++ b/ivy/functional/frontends/paddle/tensor/math.py\n@@ -66,6 +66,24 @@\n return ivy.divide(x, y)\n \n \n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def multiply(x, y, name=None):\n+ return ivy.multiply(x, y)\n+\n+\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def add(x, y, name=None):\n+ return ivy.add(x, y)\n+\n+\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def subtract(x, y, name=None):\n+ return ivy.subtract(x, y)\n+\n+\n @with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n @to_ivy_arrays_and_back\n def sqrt(x, name=None):\n", "issue": "multiply\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log_softmax(x, name=None):\n return ivy.log_softmax(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef log(x, name=None):\n return ivy.log(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef divide(x, y, name=None):\n return ivy.divide(x, y)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sqrt(x, name=None):\n return ivy.sqrt(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atanh(x, name=None):\n return ivy.atanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef atan(x, name=None):\n return ivy.atan(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef round(x, name=None):\n return 
ivy.round(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef ceil(x, name=None):\n return ivy.ceil(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}]} | 1,617 | 316 |
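The golden diff above adds `multiply` together with `add` and `subtract` to the Paddle frontend, each one a thin wrapper that delegates to the matching `ivy` function. A hedged usage sketch, not verified against a specific ivy release; it assumes a NumPy backend is available and uses the module path edited in this record:

```python
# Sketch only: calling the frontend functions added above.
import ivy
from ivy.functional.frontends.paddle.tensor.math import add, multiply, subtract

ivy.set_backend("numpy")

x = ivy.array([1.0, 2.0, 3.0])
y = ivy.array([10.0, 20.0, 30.0])

print(multiply(x, y))  # elementwise product: 10, 40, 90
print(add(x, y))       # elementwise sum: 11, 22, 33
print(subtract(x, y))  # elementwise difference: -9, -18, -27
```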
gh_patches_debug_5001 | rasdani/github-patches | git_diff | liqd__a4-product-1113 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
poll-comment export seems to be broken
```
Environment:
Request Method: GET
Request URL: http://localhost:8004/liqd-orga/dashboard/modules/umfrage/poll/export/
Django Version: 2.2.6
Python Version: 3.7.3
Installed Applications:
('django.contrib.sites',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.sitemaps',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.humanize',
'widget_tweaks',
'rest_framework',
'allauth',
'allauth.account',
'allauth.socialaccount',
'rules.apps.AutodiscoverRulesConfig',
'easy_thumbnails',
'ckeditor',
'ckeditor_uploader',
'capture_tag',
'background_task',
'wagtail.contrib.forms',
'wagtail.contrib.redirects',
'wagtail.contrib.settings',
'wagtail.contrib.styleguide',
'wagtail.embeds',
'wagtail.sites',
'wagtail.users',
'wagtail.snippets',
'wagtail.documents',
'wagtail.images',
'wagtail.search',
'wagtail.admin',
'wagtail.core',
'modelcluster',
'taggit',
'apps.cms.pages',
'apps.cms.settings',
'apps.cms.contacts',
'apps.cms.news',
'apps.cms.use_cases',
'apps.cms.images',
'adhocracy4.actions',
'adhocracy4.administrative_districts',
'adhocracy4.categories',
'adhocracy4.ckeditor',
'adhocracy4.comments',
'adhocracy4.dashboard',
'adhocracy4.filters',
'adhocracy4.follows',
'adhocracy4.forms',
'adhocracy4.images',
'adhocracy4.labels',
'adhocracy4.maps',
'adhocracy4.modules',
'adhocracy4.organisations',
'adhocracy4.phases',
'adhocracy4.projects',
'adhocracy4.ratings',
'adhocracy4.reports',
'adhocracy4.rules',
'apps.actions',
'apps.contrib',
'apps.likes',
'apps.maps',
'apps.moderatorfeedback',
'apps.moderatorremark',
'apps.newsletters',
'apps.notifications',
'apps.organisations',
'apps.partners',
'apps.questions',
'apps.users',
'apps.account',
'apps.dashboard',
'apps.embed',
'apps.exports',
'apps.offlineevents',
'apps.projects',
'apps.activities',
'apps.budgeting',
'apps.documents',
'apps.ideas',
'apps.mapideas',
'apps.polls',
'allauth.socialaccount.providers.facebook',
'allauth.socialaccount.providers.github',
'allauth.socialaccount.providers.google',
'allauth.socialaccount.providers.twitter')
Installed Middleware:
('django.middleware.security.SecurityMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'django_cloudflare_push.middleware.push_middleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'apps.embed.middleware.AjaxPathMiddleware',
'wagtail.core.middleware.SiteMiddleware',
'wagtail.contrib.redirects.middleware.RedirectMiddleware')
Traceback:
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/exception.py" in inner
34. response = get_response(request)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
115. response = self.process_exception_by_middleware(e, request)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/base.py" in _get_response
113. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py" in view
71. return self.dispatch(request, *args, **kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/contrib/auth/mixins.py" in dispatch
85. return super().dispatch(request, *args, **kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py" in dispatch
97. return handler(request, *args, **kwargs)
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py" in get
158. context = self.get_context_data(**kwargs)
File "/home/katharina/a4-product/apps/polls/views.py" in get_context_data
84. kwargs={'module_slug': self.module.slug})
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/urls/base.py" in reverse
90. return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))
File "/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/urls/resolvers.py" in _reverse_with_prefix
673. raise NoReverseMatch(msg)
Exception Type: NoReverseMatch at /liqd-orga/dashboard/modules/umfrage/poll/export/
Exception Value: Reverse for 'poll-comment-export' with keyword arguments '{'module_slug': 'umfrage'}' not found. 1 pattern(s) tried: ['(?P<organisation_slug>[-\\w_]+)/dashboard/modules/(?P<module_slug>[-\\w_]+)/poll/export/comments/$']
```
</issue>
<code>
[start of apps/polls/views.py]
1 from django.http import Http404
2 from django.shortcuts import get_object_or_404
3 from django.shortcuts import render_to_response
4 from django.urls import reverse
5 from django.views import generic
6
7 from adhocracy4.dashboard import mixins as dashboard_mixins
8 from adhocracy4.projects.mixins import DisplayProjectOrModuleMixin
9 from adhocracy4.projects.mixins import ProjectMixin
10 from adhocracy4.rules import mixins as rules_mixins
11 from apps.exports.views import DashboardExportView
12
13 from . import models
14
15
16 class PollDetailView(ProjectMixin,
17 rules_mixins.PermissionRequiredMixin,
18 generic.DetailView,
19 DisplayProjectOrModuleMixin):
20 model = models.Poll
21 permission_required = 'a4_candy_polls.view_poll'
22
23 def get(self, request, *args, **kwargs):
24 try:
25 self.object = self.get_object()
26 context = self.get_context_data(object=self.object)
27 return self.render_to_response(context)
28
29 except Http404:
30 self.object = None
31 context = self.get_context_data(object=None, request=self.request,)
32 return render_to_response(
33 'a4_candy_polls/poll_404.html',
34 context=context,
35 status=404
36 )
37
38 def get_object(self):
39 return get_object_or_404(models.Poll, module=self.module)
40
41 def get_context_data(self, **kwargs):
42 context = super().get_context_data(**kwargs)
43 if self.object:
44 context['question_list'] = \
45 self.object.questions.annotate_vote_count().all()
46 return context
47
48 def get_permission_object(self):
49 return self.module
50
51
52 class PollDashboardView(ProjectMixin,
53 dashboard_mixins.DashboardBaseMixin,
54 dashboard_mixins.DashboardComponentMixin,
55 generic.TemplateView):
56 template_name = 'a4_candy_polls/poll_dashboard.html'
57 permission_required = 'a4_candy_projects.change_project'
58
59 def get_context_data(self, **kwargs):
60 context = super().get_context_data(**kwargs)
61 context['poll'] = self.get_or_create_poll()
62 return context
63
64 def get_or_create_poll(self):
65 try:
66 obj = models.Poll.objects.get(module=self.module)
67 except models.Poll.DoesNotExist:
68 obj = models.Poll(module=self.module,
69 creator=self.request.user)
70 obj.save()
71 return obj
72
73 def get_permission_object(self):
74 return self.project
75
76
77 class PollDashboardExportView(DashboardExportView):
78 template_name = 'a4_candy_exports/export_dashboard.html'
79
80 def get_context_data(self, **kwargs):
81 context = super().get_context_data(**kwargs)
82 context['comment_export'] = reverse(
83 'a4dashboard:poll-comment-export',
84 kwargs={'module_slug': self.module.slug})
85 return context
86
[end of apps/polls/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/polls/views.py b/apps/polls/views.py
--- a/apps/polls/views.py
+++ b/apps/polls/views.py
@@ -81,5 +81,8 @@
context = super().get_context_data(**kwargs)
context['comment_export'] = reverse(
'a4dashboard:poll-comment-export',
- kwargs={'module_slug': self.module.slug})
+ kwargs={
+ 'organisation_slug': self.module.project.organisation.slug,
+ 'module_slug': self.module.slug
+ })
return context
| {"golden_diff": "diff --git a/apps/polls/views.py b/apps/polls/views.py\n--- a/apps/polls/views.py\n+++ b/apps/polls/views.py\n@@ -81,5 +81,8 @@\n context = super().get_context_data(**kwargs)\n context['comment_export'] = reverse(\n 'a4dashboard:poll-comment-export',\n- kwargs={'module_slug': self.module.slug})\n+ kwargs={\n+ 'organisation_slug': self.module.project.organisation.slug,\n+ 'module_slug': self.module.slug\n+ })\n return context\n", "issue": "poll-comment export seems to be broken\n```\r\nEnvironment:\r\n\r\n\r\nRequest Method: GET\r\nRequest URL: http://localhost:8004/liqd-orga/dashboard/modules/umfrage/poll/export/\r\n\r\nDjango Version: 2.2.6\r\nPython Version: 3.7.3\r\nInstalled Applications:\r\n('django.contrib.sites',\r\n 'django.contrib.admin',\r\n 'django.contrib.auth',\r\n 'django.contrib.contenttypes',\r\n 'django.contrib.sessions',\r\n 'django.contrib.sitemaps',\r\n 'django.contrib.messages',\r\n 'django.contrib.staticfiles',\r\n 'django.contrib.humanize',\r\n 'widget_tweaks',\r\n 'rest_framework',\r\n 'allauth',\r\n 'allauth.account',\r\n 'allauth.socialaccount',\r\n 'rules.apps.AutodiscoverRulesConfig',\r\n 'easy_thumbnails',\r\n 'ckeditor',\r\n 'ckeditor_uploader',\r\n 'capture_tag',\r\n 'background_task',\r\n 'wagtail.contrib.forms',\r\n 'wagtail.contrib.redirects',\r\n 'wagtail.contrib.settings',\r\n 'wagtail.contrib.styleguide',\r\n 'wagtail.embeds',\r\n 'wagtail.sites',\r\n 'wagtail.users',\r\n 'wagtail.snippets',\r\n 'wagtail.documents',\r\n 'wagtail.images',\r\n 'wagtail.search',\r\n 'wagtail.admin',\r\n 'wagtail.core',\r\n 'modelcluster',\r\n 'taggit',\r\n 'apps.cms.pages',\r\n 'apps.cms.settings',\r\n 'apps.cms.contacts',\r\n 'apps.cms.news',\r\n 'apps.cms.use_cases',\r\n 'apps.cms.images',\r\n 'adhocracy4.actions',\r\n 'adhocracy4.administrative_districts',\r\n 'adhocracy4.categories',\r\n 'adhocracy4.ckeditor',\r\n 'adhocracy4.comments',\r\n 'adhocracy4.dashboard',\r\n 'adhocracy4.filters',\r\n 'adhocracy4.follows',\r\n 'adhocracy4.forms',\r\n 'adhocracy4.images',\r\n 'adhocracy4.labels',\r\n 'adhocracy4.maps',\r\n 'adhocracy4.modules',\r\n 'adhocracy4.organisations',\r\n 'adhocracy4.phases',\r\n 'adhocracy4.projects',\r\n 'adhocracy4.ratings',\r\n 'adhocracy4.reports',\r\n 'adhocracy4.rules',\r\n 'apps.actions',\r\n 'apps.contrib',\r\n 'apps.likes',\r\n 'apps.maps',\r\n 'apps.moderatorfeedback',\r\n 'apps.moderatorremark',\r\n 'apps.newsletters',\r\n 'apps.notifications',\r\n 'apps.organisations',\r\n 'apps.partners',\r\n 'apps.questions',\r\n 'apps.users',\r\n 'apps.account',\r\n 'apps.dashboard',\r\n 'apps.embed',\r\n 'apps.exports',\r\n 'apps.offlineevents',\r\n 'apps.projects',\r\n 'apps.activities',\r\n 'apps.budgeting',\r\n 'apps.documents',\r\n 'apps.ideas',\r\n 'apps.mapideas',\r\n 'apps.polls',\r\n 'allauth.socialaccount.providers.facebook',\r\n 'allauth.socialaccount.providers.github',\r\n 'allauth.socialaccount.providers.google',\r\n 'allauth.socialaccount.providers.twitter')\r\nInstalled Middleware:\r\n('django.middleware.security.SecurityMiddleware',\r\n 'whitenoise.middleware.WhiteNoiseMiddleware',\r\n 'django.middleware.csrf.CsrfViewMiddleware',\r\n 'django.middleware.clickjacking.XFrameOptionsMiddleware',\r\n 'django_cloudflare_push.middleware.push_middleware',\r\n 'django.contrib.sessions.middleware.SessionMiddleware',\r\n 'django.middleware.locale.LocaleMiddleware',\r\n 'django.middleware.common.CommonMiddleware',\r\n 'django.contrib.auth.middleware.AuthenticationMiddleware',\r\n 
'django.contrib.messages.middleware.MessageMiddleware',\r\n 'apps.embed.middleware.AjaxPathMiddleware',\r\n 'wagtail.core.middleware.SiteMiddleware',\r\n 'wagtail.contrib.redirects.middleware.RedirectMiddleware')\r\n\r\n\r\n\r\nTraceback:\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/exception.py\" in inner\r\n 34. response = get_response(request)\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/base.py\" in _get_response\r\n 115. response = self.process_exception_by_middleware(e, request)\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/core/handlers/base.py\" in _get_response\r\n 113. response = wrapped_callback(request, *callback_args, **callback_kwargs)\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py\" in view\r\n 71. return self.dispatch(request, *args, **kwargs)\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/contrib/auth/mixins.py\" in dispatch\r\n 85. return super().dispatch(request, *args, **kwargs)\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py\" in dispatch\r\n 97. return handler(request, *args, **kwargs)\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/views/generic/base.py\" in get\r\n 158. context = self.get_context_data(**kwargs)\r\n\r\nFile \"/home/katharina/a4-product/apps/polls/views.py\" in get_context_data\r\n 84. kwargs={'module_slug': self.module.slug})\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/urls/base.py\" in reverse\r\n 90. return iri_to_uri(resolver._reverse_with_prefix(view, prefix, *args, **kwargs))\r\n\r\nFile \"/home/katharina/a4-product/venv/lib/python3.7/site-packages/django/urls/resolvers.py\" in _reverse_with_prefix\r\n 673. raise NoReverseMatch(msg)\r\n\r\nException Type: NoReverseMatch at /liqd-orga/dashboard/modules/umfrage/poll/export/\r\nException Value: Reverse for 'poll-comment-export' with keyword arguments '{'module_slug': 'umfrage'}' not found. 1 pattern(s) tried: ['(?P<organisation_slug>[-\\\\w_]+)/dashboard/modules/(?P<module_slug>[-\\\\w_]+)/poll/export/comments/$']\r\n```\n", "before_files": [{"content": "from django.http import Http404\nfrom django.shortcuts import get_object_or_404\nfrom django.shortcuts import render_to_response\nfrom django.urls import reverse\nfrom django.views import generic\n\nfrom adhocracy4.dashboard import mixins as dashboard_mixins\nfrom adhocracy4.projects.mixins import DisplayProjectOrModuleMixin\nfrom adhocracy4.projects.mixins import ProjectMixin\nfrom adhocracy4.rules import mixins as rules_mixins\nfrom apps.exports.views import DashboardExportView\n\nfrom . 
import models\n\n\nclass PollDetailView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.DetailView,\n DisplayProjectOrModuleMixin):\n model = models.Poll\n permission_required = 'a4_candy_polls.view_poll'\n\n def get(self, request, *args, **kwargs):\n try:\n self.object = self.get_object()\n context = self.get_context_data(object=self.object)\n return self.render_to_response(context)\n\n except Http404:\n self.object = None\n context = self.get_context_data(object=None, request=self.request,)\n return render_to_response(\n 'a4_candy_polls/poll_404.html',\n context=context,\n status=404\n )\n\n def get_object(self):\n return get_object_or_404(models.Poll, module=self.module)\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n if self.object:\n context['question_list'] = \\\n self.object.questions.annotate_vote_count().all()\n return context\n\n def get_permission_object(self):\n return self.module\n\n\nclass PollDashboardView(ProjectMixin,\n dashboard_mixins.DashboardBaseMixin,\n dashboard_mixins.DashboardComponentMixin,\n generic.TemplateView):\n template_name = 'a4_candy_polls/poll_dashboard.html'\n permission_required = 'a4_candy_projects.change_project'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['poll'] = self.get_or_create_poll()\n return context\n\n def get_or_create_poll(self):\n try:\n obj = models.Poll.objects.get(module=self.module)\n except models.Poll.DoesNotExist:\n obj = models.Poll(module=self.module,\n creator=self.request.user)\n obj.save()\n return obj\n\n def get_permission_object(self):\n return self.project\n\n\nclass PollDashboardExportView(DashboardExportView):\n template_name = 'a4_candy_exports/export_dashboard.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['comment_export'] = reverse(\n 'a4dashboard:poll-comment-export',\n kwargs={'module_slug': self.module.slug})\n return context\n", "path": "apps/polls/views.py"}]} | 2,560 | 119 |
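The one-line cause in the record above is a `reverse()` call that omits a kwarg required by the URL pattern, and the fix simply supplies `organisation_slug` as well. A self-contained Django sketch of that failure mode, using a single-file URLconf; the slugs and view are invented for illustration:

```python
# Sketch of the NoReverseMatch fixed above (single-file Django URLconf).
import django
from django.conf import settings

settings.configure(ROOT_URLCONF=__name__)
django.setup()

from django.urls import NoReverseMatch, path, reverse


def dummy_view(request, organisation_slug, module_slug):
    return None


urlpatterns = [
    path(
        "<slug:organisation_slug>/dashboard/modules/<slug:module_slug>"
        "/poll/export/comments/",
        dummy_view,
        name="poll-comment-export",
    ),
]

try:
    # Missing organisation_slug, as in PollDashboardExportView before the fix.
    reverse("poll-comment-export", kwargs={"module_slug": "umfrage"})
except NoReverseMatch:
    print("missing organisation_slug -> NoReverseMatch")

# With both kwargs the pattern resolves.
print(reverse("poll-comment-export",
              kwargs={"organisation_slug": "liqd-orga", "module_slug": "umfrage"}))
```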
gh_patches_debug_59756 | rasdani/github-patches | git_diff | pyca__cryptography-3819 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Signer/Verifier deprecation warning has wrong stacklevel
Seeing this with Cryptography 2.0:
```
.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
_warn_sign_verify_deprecated()
.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:382: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
_warn_sign_verify_deprecated()
```
I see a few open issues related to deprecations (e.g. #3794), but I'm not sure if any of them cover this particular message.
</issue>
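The behaviour reported above follows from how `warnings.warn` resolves `stacklevel`: 1 points at the `warn()` call itself, 2 at the caller of the function containing it, and 3 at that caller's caller. Because the library routes the warning through a shared helper (`_warn_sign_verify_deprecated`), `stacklevel=2` attributes the message to the helper's call site inside `rsa.py`, which is exactly what the report shows; the accompanying fix bumps it to 3 so the user's own line is named instead. The sketch below is a minimal stand-in with illustrative names (`_warn_deprecated`, `library_signer`, `user_code`), not the real cryptography internals:

```python
import warnings


def _warn_deprecated(stacklevel):
    # stacklevel=1 -> this warnings.warn() line
    # stacklevel=2 -> the call inside library_signer()
    # stacklevel=3 -> the line in user_code() that called library_signer()
    warnings.warn("signer is deprecated", DeprecationWarning, stacklevel=stacklevel)


def library_signer(stacklevel):
    # Stands in for a library method that delegates to a shared warning helper.
    _warn_deprecated(stacklevel)


def user_code():
    library_signer(stacklevel=2)  # reported against the call inside library_signer
    library_signer(stacklevel=3)  # reported against this line


if __name__ == "__main__":
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        user_code()
    for w in caught:
        print(w.filename, w.lineno)  # same message, two different attributed lines
```

Running it shows the same deprecation message attributed to two different lines, mirroring why the notice appeared to come from `rsa.py` rather than from the calling application.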
<code>
[start of src/cryptography/hazmat/backends/openssl/utils.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 from __future__ import absolute_import, division, print_function
6
7 import warnings
8
9 from cryptography import utils
10 from cryptography.hazmat.primitives import hashes
11 from cryptography.hazmat.primitives.asymmetric.utils import Prehashed
12
13
14 def _calculate_digest_and_algorithm(backend, data, algorithm):
15 if not isinstance(algorithm, Prehashed):
16 hash_ctx = hashes.Hash(algorithm, backend)
17 hash_ctx.update(data)
18 data = hash_ctx.finalize()
19 else:
20 algorithm = algorithm._algorithm
21
22 if len(data) != algorithm.digest_size:
23 raise ValueError(
24 "The provided data must be the same length as the hash "
25 "algorithm's digest size."
26 )
27
28 return (data, algorithm)
29
30
31 def _check_not_prehashed(signature_algorithm):
32 if isinstance(signature_algorithm, Prehashed):
33 raise TypeError(
34 "Prehashed is only supported in the sign and verify methods. "
35 "It cannot be used with signer or verifier."
36 )
37
38
39 def _warn_sign_verify_deprecated():
40 warnings.warn(
41 "signer and verifier have been deprecated. Please use sign "
42 "and verify instead.",
43 utils.PersistentlyDeprecated,
44 stacklevel=2
45 )
46
[end of src/cryptography/hazmat/backends/openssl/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py
--- a/src/cryptography/hazmat/backends/openssl/utils.py
+++ b/src/cryptography/hazmat/backends/openssl/utils.py
@@ -41,5 +41,5 @@
"signer and verifier have been deprecated. Please use sign "
"and verify instead.",
utils.PersistentlyDeprecated,
- stacklevel=2
+ stacklevel=3
)
| {"golden_diff": "diff --git a/src/cryptography/hazmat/backends/openssl/utils.py b/src/cryptography/hazmat/backends/openssl/utils.py\n--- a/src/cryptography/hazmat/backends/openssl/utils.py\n+++ b/src/cryptography/hazmat/backends/openssl/utils.py\n@@ -41,5 +41,5 @@\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n- stacklevel=2\n+ stacklevel=3\n )\n", "issue": "Signer/Verifier deprecation warning has wrong stacklevel\nSeeing this with Cryptography 2.0:\r\n\r\n```\r\n.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:477: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.\r\n _warn_sign_verify_deprecated()\r\n.../python3.5/site-packages/cryptography/hazmat/backends/openssl/rsa.py:382: DeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.\r\n _warn_sign_verify_deprecated()\r\n```\r\n\r\nI see a few open issues related to deprecations (e.g. #3794), but I'm not sure if any of them cover this particular message.\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport warnings\n\nfrom cryptography import utils\nfrom cryptography.hazmat.primitives import hashes\nfrom cryptography.hazmat.primitives.asymmetric.utils import Prehashed\n\n\ndef _calculate_digest_and_algorithm(backend, data, algorithm):\n if not isinstance(algorithm, Prehashed):\n hash_ctx = hashes.Hash(algorithm, backend)\n hash_ctx.update(data)\n data = hash_ctx.finalize()\n else:\n algorithm = algorithm._algorithm\n\n if len(data) != algorithm.digest_size:\n raise ValueError(\n \"The provided data must be the same length as the hash \"\n \"algorithm's digest size.\"\n )\n\n return (data, algorithm)\n\n\ndef _check_not_prehashed(signature_algorithm):\n if isinstance(signature_algorithm, Prehashed):\n raise TypeError(\n \"Prehashed is only supported in the sign and verify methods. \"\n \"It cannot be used with signer or verifier.\"\n )\n\n\ndef _warn_sign_verify_deprecated():\n warnings.warn(\n \"signer and verifier have been deprecated. Please use sign \"\n \"and verify instead.\",\n utils.PersistentlyDeprecated,\n stacklevel=2\n )\n", "path": "src/cryptography/hazmat/backends/openssl/utils.py"}]} | 1,095 | 115 |
gh_patches_debug_40797 | rasdani/github-patches | git_diff | PaddlePaddle__models-5121 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
work around hung
The fix will follow closely.
</issue>
<code>
[start of PaddleNLP/benchmark/transformer/reader.py]
1 # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import sys
17 import os
18 import io
19 import itertools
20 from functools import partial
21
22 import numpy as np
23 from paddle.io import BatchSampler, DataLoader, Dataset
24 from paddlenlp.data import Pad
25 from paddlenlp.datasets import WMT14ende
26 from paddlenlp.data.sampler import SamplerHelper
27
28
29 def min_max_filer(data, max_len, min_len=0):
30 # 1 for special tokens.
31 data_min_len = min(len(data[0]), len(data[1])) + 1
32 data_max_len = max(len(data[0]), len(data[1])) + 1
33 return (data_min_len >= min_len) and (data_max_len <= max_len)
34
35
36 def create_data_loader(args):
37 root = None if args.root == "None" else args.root
38 (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)
39 args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)
40 transform_func = WMT14ende.get_default_transform_func(root=root)
41 datasets = [
42 WMT14ende.get_datasets(
43 mode=m, transform_func=transform_func) for m in ["train", "dev"]
44 ]
45
46 def _max_token_fn(current_idx, current_batch_size, tokens_sofar,
47 data_source):
48 return max(tokens_sofar,
49 len(data_source[current_idx][0]) + 1,
50 len(data_source[current_idx][1]) + 1)
51
52 def _key(size_so_far, minibatch_len):
53 return size_so_far * minibatch_len
54
55 data_loaders = [(None)] * 2
56 for i, dataset in enumerate(datasets):
57 m = dataset.mode
58 dataset = dataset.filter(
59 partial(
60 min_max_filer, max_len=args.max_length))
61 sampler = SamplerHelper(dataset)
62
63 src_key = (lambda x, data_source: len(data_source[x][0]) + 1)
64 if args.sort_type == SortType.GLOBAL:
65 buffer_size = -1
66 trg_key = (lambda x, data_source: len(data_source[x][1]) + 1)
67 # Sort twice
68 sampler = sampler.sort(
69 key=trg_key, buffer_size=buffer_size).sort(
70 key=src_key, buffer_size=buffer_size)
71 else:
72 sampler = sampler.shuffle()
73 if args.sort_type == SortType.POOL:
74 buffer_size = args.pool_size
75 sampler = sampler.sort(key=src_key, buffer_size=buffer_size)
76
77 batch_sampler = sampler.batch(
78 batch_size=args.batch_size,
79 drop_last=False,
80 batch_size_fn=_max_token_fn,
81 key=_key)
82
83 if m == "train":
84 batch_sampler = batch_sampler.shard()
85
86 data_loader = DataLoader(
87 dataset=dataset,
88 batch_sampler=batch_sampler,
89 collate_fn=partial(
90 prepare_train_input,
91 bos_idx=args.bos_idx,
92 eos_idx=args.eos_idx,
93 pad_idx=args.bos_idx),
94 num_workers=0,
95 return_list=True)
96 data_loaders[i] = (data_loader)
97 return data_loaders
98
99
100 def create_infer_loader(args):
101 root = None if args.root == "None" else args.root
102 (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)
103 args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)
104 transform_func = WMT14ende.get_default_transform_func(root=root)
105 dataset = WMT14ende.get_datasets(
106 mode="test", transform_func=transform_func).filter(
107 partial(
108 min_max_filer, max_len=args.max_length))
109
110 batch_sampler = SamplerHelper(dataset).batch(
111 batch_size=args.infer_batch_size, drop_last=False)
112
113 data_loader = DataLoader(
114 dataset=dataset,
115 batch_sampler=batch_sampler,
116 collate_fn=partial(
117 prepare_infer_input,
118 bos_idx=args.bos_idx,
119 eos_idx=args.eos_idx,
120 pad_idx=args.bos_idx),
121 num_workers=0,
122 return_list=True)
123 return data_loader, trg_vocab.to_tokens
124
125
126 def prepare_train_input(insts, bos_idx, eos_idx, pad_idx):
127 """
128 Put all padded data needed by training into a list.
129 """
130 word_pad = Pad(pad_idx)
131 src_word = word_pad([inst[0] + [eos_idx] for inst in insts])
132 trg_word = word_pad([[bos_idx] + inst[1] for inst in insts])
133 lbl_word = np.expand_dims(
134 word_pad([inst[1] + [eos_idx] for inst in insts]), axis=2)
135
136 data_inputs = [src_word, trg_word, lbl_word]
137
138 return data_inputs
139
140
141 def prepare_infer_input(insts, bos_idx, eos_idx, pad_idx):
142 """
143 Put all padded data needed by beam search decoder into a list.
144 """
145 word_pad = Pad(pad_idx)
146 src_word = word_pad([inst[0] + [eos_idx] for inst in insts])
147
148 return [src_word, ]
149
150
151 class SortType(object):
152 GLOBAL = 'global'
153 POOL = 'pool'
154 NONE = "none"
155
[end of PaddleNLP/benchmark/transformer/reader.py]
[start of PaddleNLP/examples/machine_translation/transformer/reader.py]
1 # Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import glob
16 import sys
17 import os
18 import io
19 import itertools
20 from functools import partial
21
22 import numpy as np
23 from paddle.io import BatchSampler, DataLoader, Dataset
24 from paddlenlp.data import Pad
25 from paddlenlp.datasets import WMT14ende
26 from paddlenlp.data.sampler import SamplerHelper
27
28
29 def min_max_filer(data, max_len, min_len=0):
30 # 1 for special tokens.
31 data_min_len = min(len(data[0]), len(data[1])) + 1
32 data_max_len = max(len(data[0]), len(data[1])) + 1
33 return (data_min_len >= min_len) and (data_max_len <= max_len)
34
35
36 def create_data_loader(args):
37 root = None if args.root == "None" else args.root
38 (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)
39 args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)
40 transform_func = WMT14ende.get_default_transform_func(root=root)
41 datasets = [
42 WMT14ende.get_datasets(
43 mode=m, transform_func=transform_func) for m in ["train", "dev"]
44 ]
45
46 def _max_token_fn(current_idx, current_batch_size, tokens_sofar,
47 data_source):
48 return max(tokens_sofar,
49 len(data_source[current_idx][0]) + 1,
50 len(data_source[current_idx][1]) + 1)
51
52 def _key(size_so_far, minibatch_len):
53 return size_so_far * minibatch_len
54
55 data_loaders = [(None)] * 2
56 for i, dataset in enumerate(datasets):
57 m = dataset.mode
58 dataset = dataset.filter(
59 partial(
60 min_max_filer, max_len=args.max_length))
61 sampler = SamplerHelper(dataset)
62
63 src_key = (lambda x, data_source: len(data_source[x][0]) + 1)
64 if args.sort_type == SortType.GLOBAL:
65 buffer_size = -1
66 trg_key = (lambda x, data_source: len(data_source[x][1]) + 1)
67 # Sort twice
68 sampler = sampler.sort(
69 key=trg_key, buffer_size=buffer_size).sort(
70 key=src_key, buffer_size=buffer_size)
71 else:
72 sampler = sampler.shuffle()
73 if args.sort_type == SortType.POOL:
74 buffer_size = args.pool_size
75 sampler = sampler.sort(key=src_key, buffer_size=buffer_size)
76
77 batch_sampler = sampler.batch(
78 batch_size=args.batch_size,
79 drop_last=False,
80 batch_size_fn=_max_token_fn,
81 key=_key)
82
83 if m == "train":
84 batch_sampler = batch_sampler.shard()
85
86 data_loader = DataLoader(
87 dataset=dataset,
88 batch_sampler=batch_sampler,
89 collate_fn=partial(
90 prepare_train_input,
91 bos_idx=args.bos_idx,
92 eos_idx=args.eos_idx,
93 pad_idx=args.bos_idx),
94 num_workers=0,
95 return_list=True)
96 data_loaders[i] = (data_loader)
97 return data_loaders
98
99
100 def create_infer_loader(args):
101 root = None if args.root == "None" else args.root
102 (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)
103 args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)
104 transform_func = WMT14ende.get_default_transform_func(root=root)
105 dataset = WMT14ende.get_datasets(
106 mode="test", transform_func=transform_func).filter(
107 partial(
108 min_max_filer, max_len=args.max_length))
109
110 batch_sampler = SamplerHelper(dataset).batch(
111 batch_size=args.infer_batch_size, drop_last=False)
112
113 data_loader = DataLoader(
114 dataset=dataset,
115 batch_sampler=batch_sampler,
116 collate_fn=partial(
117 prepare_infer_input,
118 bos_idx=args.bos_idx,
119 eos_idx=args.eos_idx,
120 pad_idx=args.bos_idx),
121 num_workers=0,
122 return_list=True)
123 return data_loader, trg_vocab.to_tokens
124
125
126 def prepare_train_input(insts, bos_idx, eos_idx, pad_idx):
127 """
128 Put all padded data needed by training into a list.
129 """
130 word_pad = Pad(pad_idx)
131 src_word = word_pad([inst[0] + [eos_idx] for inst in insts])
132 trg_word = word_pad([[bos_idx] + inst[1] for inst in insts])
133 lbl_word = np.expand_dims(
134 word_pad([inst[1] + [eos_idx] for inst in insts]), axis=2)
135
136 data_inputs = [src_word, trg_word, lbl_word]
137
138 return data_inputs
139
140
141 def prepare_infer_input(insts, bos_idx, eos_idx, pad_idx):
142 """
143 Put all padded data needed by beam search decoder into a list.
144 """
145 word_pad = Pad(pad_idx)
146 src_word = word_pad([inst[0] + [eos_idx] for inst in insts])
147
148 return [src_word, ]
149
150
151 class SortType(object):
152 GLOBAL = 'global'
153 POOL = 'pool'
154 NONE = "none"
155
[end of PaddleNLP/examples/machine_translation/transformer/reader.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/PaddleNLP/benchmark/transformer/reader.py b/PaddleNLP/benchmark/transformer/reader.py
--- a/PaddleNLP/benchmark/transformer/reader.py
+++ b/PaddleNLP/benchmark/transformer/reader.py
@@ -43,6 +43,12 @@
mode=m, transform_func=transform_func) for m in ["train", "dev"]
]
+ if args.shuffle or args.shuffle_batch:
+ if args.shuffle_seed == "None" or args.shuffle_seed is None:
+ shuffle_seed = 0
+ else:
+ shuffle_seed = args.shuffle_seed
+
def _max_token_fn(current_idx, current_batch_size, tokens_sofar,
data_source):
return max(tokens_sofar,
@@ -69,7 +75,8 @@
key=trg_key, buffer_size=buffer_size).sort(
key=src_key, buffer_size=buffer_size)
else:
- sampler = sampler.shuffle()
+ if args.shuffle:
+ sampler = sampler.shuffle(seed=shuffle_seed)
if args.sort_type == SortType.POOL:
buffer_size = args.pool_size
sampler = sampler.sort(key=src_key, buffer_size=buffer_size)
@@ -83,6 +90,9 @@
if m == "train":
batch_sampler = batch_sampler.shard()
+ if args.shuffle_batch:
+ batch_sampler.shuffle(seed=shuffle_seed)
+
data_loader = DataLoader(
dataset=dataset,
batch_sampler=batch_sampler,
diff --git a/PaddleNLP/examples/machine_translation/transformer/reader.py b/PaddleNLP/examples/machine_translation/transformer/reader.py
--- a/PaddleNLP/examples/machine_translation/transformer/reader.py
+++ b/PaddleNLP/examples/machine_translation/transformer/reader.py
@@ -43,6 +43,12 @@
mode=m, transform_func=transform_func) for m in ["train", "dev"]
]
+ if args.shuffle or args.shuffle_batch:
+ if args.shuffle_seed == "None" or args.shuffle_seed is None:
+ shuffle_seed = 0
+ else:
+ shuffle_seed = args.shuffle_seed
+
def _max_token_fn(current_idx, current_batch_size, tokens_sofar,
data_source):
return max(tokens_sofar,
@@ -69,7 +75,8 @@
key=trg_key, buffer_size=buffer_size).sort(
key=src_key, buffer_size=buffer_size)
else:
- sampler = sampler.shuffle()
+ if args.shuffle:
+ sampler = sampler.shuffle(seed=shuffle_seed)
if args.sort_type == SortType.POOL:
buffer_size = args.pool_size
sampler = sampler.sort(key=src_key, buffer_size=buffer_size)
@@ -83,6 +90,9 @@
if m == "train":
batch_sampler = batch_sampler.shard()
+ if args.shuffle_batch:
+ batch_sampler.shuffle(seed=shuffle_seed)
+
data_loader = DataLoader(
dataset=dataset,
batch_sampler=batch_sampler,
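Both hunks above thread a single `shuffle_seed` into the readers so that the sample-level shuffle and the batch-level shuffle are deterministic and identical on every process. A plausible reading of the "hung" report, given the max-token batching (`batch_size_fn=_max_token_fn`) used in these readers, is that unseeded per-rank shuffles can produce a different number of batches on each rank after sharding, leaving some ranks blocked forever at a collective step. The toy sketch below only illustrates that effect; it is not PaddleNLP code:

```python
import random


def bucket_by_tokens(order, lengths, max_tokens=12):
    # Greedy batching where cost ~ (max length in batch) * (batch size),
    # so the number of batches depends on the visiting order.
    batches, current, widest = [], [], 0
    for idx in order:
        if current and max(widest, lengths[idx]) * (len(current) + 1) > max_tokens:
            batches.append(current)
            current, widest = [idx], lengths[idx]
        else:
            current.append(idx)
            widest = max(widest, lengths[idx])
    if current:
        batches.append(current)
    return batches


lengths = [3, 7, 2, 9, 4, 6, 5, 8]

# Unseeded: two "ranks" shuffle independently and may disagree on batch count.
a = bucket_by_tokens(random.sample(range(8), 8), lengths)
b = bucket_by_tokens(random.sample(range(8), 8), lengths)
print(len(a), len(b))  # can differ, so ranks fall out of lockstep

# Seeded identically, as the patch does: every rank builds the same batches.
a = bucket_by_tokens(random.Random(0).sample(range(8), 8), lengths)
b = bucket_by_tokens(random.Random(0).sample(range(8), 8), lengths)
assert a == b
```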
| {"golden_diff": "diff --git a/PaddleNLP/benchmark/transformer/reader.py b/PaddleNLP/benchmark/transformer/reader.py\n--- a/PaddleNLP/benchmark/transformer/reader.py\n+++ b/PaddleNLP/benchmark/transformer/reader.py\n@@ -43,6 +43,12 @@\n mode=m, transform_func=transform_func) for m in [\"train\", \"dev\"]\n ]\n \n+ if args.shuffle or args.shuffle_batch:\n+ if args.shuffle_seed == \"None\" or args.shuffle_seed is None:\n+ shuffle_seed = 0\n+ else:\n+ shuffle_seed = args.shuffle_seed\n+\n def _max_token_fn(current_idx, current_batch_size, tokens_sofar,\n data_source):\n return max(tokens_sofar,\n@@ -69,7 +75,8 @@\n key=trg_key, buffer_size=buffer_size).sort(\n key=src_key, buffer_size=buffer_size)\n else:\n- sampler = sampler.shuffle()\n+ if args.shuffle:\n+ sampler = sampler.shuffle(seed=shuffle_seed)\n if args.sort_type == SortType.POOL:\n buffer_size = args.pool_size\n sampler = sampler.sort(key=src_key, buffer_size=buffer_size)\n@@ -83,6 +90,9 @@\n if m == \"train\":\n batch_sampler = batch_sampler.shard()\n \n+ if args.shuffle_batch:\n+ batch_sampler.shuffle(seed=shuffle_seed)\n+\n data_loader = DataLoader(\n dataset=dataset,\n batch_sampler=batch_sampler,\ndiff --git a/PaddleNLP/examples/machine_translation/transformer/reader.py b/PaddleNLP/examples/machine_translation/transformer/reader.py\n--- a/PaddleNLP/examples/machine_translation/transformer/reader.py\n+++ b/PaddleNLP/examples/machine_translation/transformer/reader.py\n@@ -43,6 +43,12 @@\n mode=m, transform_func=transform_func) for m in [\"train\", \"dev\"]\n ]\n \n+ if args.shuffle or args.shuffle_batch:\n+ if args.shuffle_seed == \"None\" or args.shuffle_seed is None:\n+ shuffle_seed = 0\n+ else:\n+ shuffle_seed = args.shuffle_seed\n+\n def _max_token_fn(current_idx, current_batch_size, tokens_sofar,\n data_source):\n return max(tokens_sofar,\n@@ -69,7 +75,8 @@\n key=trg_key, buffer_size=buffer_size).sort(\n key=src_key, buffer_size=buffer_size)\n else:\n- sampler = sampler.shuffle()\n+ if args.shuffle:\n+ sampler = sampler.shuffle(seed=shuffle_seed)\n if args.sort_type == SortType.POOL:\n buffer_size = args.pool_size\n sampler = sampler.sort(key=src_key, buffer_size=buffer_size)\n@@ -83,6 +90,9 @@\n if m == \"train\":\n batch_sampler = batch_sampler.shard()\n \n+ if args.shuffle_batch:\n+ batch_sampler.shuffle(seed=shuffle_seed)\n+\n data_loader = DataLoader(\n dataset=dataset,\n batch_sampler=batch_sampler,\n", "issue": "work around hung\nThe fix will follow closely. \n", "before_files": [{"content": "# Copyright (c) 2020 PaddlePaddle Authors. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport glob\nimport sys\nimport os\nimport io\nimport itertools\nfrom functools import partial\n\nimport numpy as np\nfrom paddle.io import BatchSampler, DataLoader, Dataset\nfrom paddlenlp.data import Pad\nfrom paddlenlp.datasets import WMT14ende\nfrom paddlenlp.data.sampler import SamplerHelper\n\n\ndef min_max_filer(data, max_len, min_len=0):\n # 1 for special tokens.\n data_min_len = min(len(data[0]), len(data[1])) + 1\n data_max_len = max(len(data[0]), len(data[1])) + 1\n return (data_min_len >= min_len) and (data_max_len <= max_len)\n\n\ndef create_data_loader(args):\n root = None if args.root == \"None\" else args.root\n (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)\n args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)\n transform_func = WMT14ende.get_default_transform_func(root=root)\n datasets = [\n WMT14ende.get_datasets(\n mode=m, transform_func=transform_func) for m in [\"train\", \"dev\"]\n ]\n\n def _max_token_fn(current_idx, current_batch_size, tokens_sofar,\n data_source):\n return max(tokens_sofar,\n len(data_source[current_idx][0]) + 1,\n len(data_source[current_idx][1]) + 1)\n\n def _key(size_so_far, minibatch_len):\n return size_so_far * minibatch_len\n\n data_loaders = [(None)] * 2\n for i, dataset in enumerate(datasets):\n m = dataset.mode\n dataset = dataset.filter(\n partial(\n min_max_filer, max_len=args.max_length))\n sampler = SamplerHelper(dataset)\n\n src_key = (lambda x, data_source: len(data_source[x][0]) + 1)\n if args.sort_type == SortType.GLOBAL:\n buffer_size = -1\n trg_key = (lambda x, data_source: len(data_source[x][1]) + 1)\n # Sort twice\n sampler = sampler.sort(\n key=trg_key, buffer_size=buffer_size).sort(\n key=src_key, buffer_size=buffer_size)\n else:\n sampler = sampler.shuffle()\n if args.sort_type == SortType.POOL:\n buffer_size = args.pool_size\n sampler = sampler.sort(key=src_key, buffer_size=buffer_size)\n\n batch_sampler = sampler.batch(\n batch_size=args.batch_size,\n drop_last=False,\n batch_size_fn=_max_token_fn,\n key=_key)\n\n if m == \"train\":\n batch_sampler = batch_sampler.shard()\n\n data_loader = DataLoader(\n dataset=dataset,\n batch_sampler=batch_sampler,\n collate_fn=partial(\n prepare_train_input,\n bos_idx=args.bos_idx,\n eos_idx=args.eos_idx,\n pad_idx=args.bos_idx),\n num_workers=0,\n return_list=True)\n data_loaders[i] = (data_loader)\n return data_loaders\n\n\ndef create_infer_loader(args):\n root = None if args.root == \"None\" else args.root\n (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)\n args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)\n transform_func = WMT14ende.get_default_transform_func(root=root)\n dataset = WMT14ende.get_datasets(\n mode=\"test\", transform_func=transform_func).filter(\n partial(\n min_max_filer, max_len=args.max_length))\n\n batch_sampler = SamplerHelper(dataset).batch(\n batch_size=args.infer_batch_size, drop_last=False)\n\n data_loader = DataLoader(\n 
dataset=dataset,\n batch_sampler=batch_sampler,\n collate_fn=partial(\n prepare_infer_input,\n bos_idx=args.bos_idx,\n eos_idx=args.eos_idx,\n pad_idx=args.bos_idx),\n num_workers=0,\n return_list=True)\n return data_loader, trg_vocab.to_tokens\n\n\ndef prepare_train_input(insts, bos_idx, eos_idx, pad_idx):\n \"\"\"\n Put all padded data needed by training into a list.\n \"\"\"\n word_pad = Pad(pad_idx)\n src_word = word_pad([inst[0] + [eos_idx] for inst in insts])\n trg_word = word_pad([[bos_idx] + inst[1] for inst in insts])\n lbl_word = np.expand_dims(\n word_pad([inst[1] + [eos_idx] for inst in insts]), axis=2)\n\n data_inputs = [src_word, trg_word, lbl_word]\n\n return data_inputs\n\n\ndef prepare_infer_input(insts, bos_idx, eos_idx, pad_idx):\n \"\"\"\n Put all padded data needed by beam search decoder into a list.\n \"\"\"\n word_pad = Pad(pad_idx)\n src_word = word_pad([inst[0] + [eos_idx] for inst in insts])\n\n return [src_word, ]\n\n\nclass SortType(object):\n GLOBAL = 'global'\n POOL = 'pool'\n NONE = \"none\"\n", "path": "PaddleNLP/benchmark/transformer/reader.py"}, {"content": "# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport glob\nimport sys\nimport os\nimport io\nimport itertools\nfrom functools import partial\n\nimport numpy as np\nfrom paddle.io import BatchSampler, DataLoader, Dataset\nfrom paddlenlp.data import Pad\nfrom paddlenlp.datasets import WMT14ende\nfrom paddlenlp.data.sampler import SamplerHelper\n\n\ndef min_max_filer(data, max_len, min_len=0):\n # 1 for special tokens.\n data_min_len = min(len(data[0]), len(data[1])) + 1\n data_max_len = max(len(data[0]), len(data[1])) + 1\n return (data_min_len >= min_len) and (data_max_len <= max_len)\n\n\ndef create_data_loader(args):\n root = None if args.root == \"None\" else args.root\n (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)\n args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)\n transform_func = WMT14ende.get_default_transform_func(root=root)\n datasets = [\n WMT14ende.get_datasets(\n mode=m, transform_func=transform_func) for m in [\"train\", \"dev\"]\n ]\n\n def _max_token_fn(current_idx, current_batch_size, tokens_sofar,\n data_source):\n return max(tokens_sofar,\n len(data_source[current_idx][0]) + 1,\n len(data_source[current_idx][1]) + 1)\n\n def _key(size_so_far, minibatch_len):\n return size_so_far * minibatch_len\n\n data_loaders = [(None)] * 2\n for i, dataset in enumerate(datasets):\n m = dataset.mode\n dataset = dataset.filter(\n partial(\n min_max_filer, max_len=args.max_length))\n sampler = SamplerHelper(dataset)\n\n src_key = (lambda x, data_source: len(data_source[x][0]) + 1)\n if args.sort_type == SortType.GLOBAL:\n buffer_size = -1\n trg_key = (lambda x, data_source: len(data_source[x][1]) + 1)\n # Sort twice\n sampler = sampler.sort(\n key=trg_key, buffer_size=buffer_size).sort(\n key=src_key, buffer_size=buffer_size)\n else:\n sampler = sampler.shuffle()\n if args.sort_type == 
SortType.POOL:\n buffer_size = args.pool_size\n sampler = sampler.sort(key=src_key, buffer_size=buffer_size)\n\n batch_sampler = sampler.batch(\n batch_size=args.batch_size,\n drop_last=False,\n batch_size_fn=_max_token_fn,\n key=_key)\n\n if m == \"train\":\n batch_sampler = batch_sampler.shard()\n\n data_loader = DataLoader(\n dataset=dataset,\n batch_sampler=batch_sampler,\n collate_fn=partial(\n prepare_train_input,\n bos_idx=args.bos_idx,\n eos_idx=args.eos_idx,\n pad_idx=args.bos_idx),\n num_workers=0,\n return_list=True)\n data_loaders[i] = (data_loader)\n return data_loaders\n\n\ndef create_infer_loader(args):\n root = None if args.root == \"None\" else args.root\n (src_vocab, trg_vocab) = WMT14ende.get_vocab(root=root)\n args.src_vocab_size, args.trg_vocab_size = len(src_vocab), len(trg_vocab)\n transform_func = WMT14ende.get_default_transform_func(root=root)\n dataset = WMT14ende.get_datasets(\n mode=\"test\", transform_func=transform_func).filter(\n partial(\n min_max_filer, max_len=args.max_length))\n\n batch_sampler = SamplerHelper(dataset).batch(\n batch_size=args.infer_batch_size, drop_last=False)\n\n data_loader = DataLoader(\n dataset=dataset,\n batch_sampler=batch_sampler,\n collate_fn=partial(\n prepare_infer_input,\n bos_idx=args.bos_idx,\n eos_idx=args.eos_idx,\n pad_idx=args.bos_idx),\n num_workers=0,\n return_list=True)\n return data_loader, trg_vocab.to_tokens\n\n\ndef prepare_train_input(insts, bos_idx, eos_idx, pad_idx):\n \"\"\"\n Put all padded data needed by training into a list.\n \"\"\"\n word_pad = Pad(pad_idx)\n src_word = word_pad([inst[0] + [eos_idx] for inst in insts])\n trg_word = word_pad([[bos_idx] + inst[1] for inst in insts])\n lbl_word = np.expand_dims(\n word_pad([inst[1] + [eos_idx] for inst in insts]), axis=2)\n\n data_inputs = [src_word, trg_word, lbl_word]\n\n return data_inputs\n\n\ndef prepare_infer_input(insts, bos_idx, eos_idx, pad_idx):\n \"\"\"\n Put all padded data needed by beam search decoder into a list.\n \"\"\"\n word_pad = Pad(pad_idx)\n src_word = word_pad([inst[0] + [eos_idx] for inst in insts])\n\n return [src_word, ]\n\n\nclass SortType(object):\n GLOBAL = 'global'\n POOL = 'pool'\n NONE = \"none\"\n", "path": "PaddleNLP/examples/machine_translation/transformer/reader.py"}]} | 3,865 | 686 |
gh_patches_debug_23200 | rasdani/github-patches | git_diff | RedHatInsights__insights-core-3248 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update pydoc for Messages (/var/log/messages)
Correct the examples: https://github.com/RedHatInsights/insights-core/blob/master/insights/parsers/messages.py#L32
</issue>
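The corrected examples (visible in the patch later in this entry) drop the `Messages.filters.append` and `shared[Messages]` lines, pin `len(msgs.lines)` to a concrete value, and replace echoing a dict literal with a `wrapper_msgs[0] == result` comparison that prints `True`. The comparison form is presumably preferred because a doctest that prints a dict repr depends on key ordering, which older Python versions do not guarantee, while an equality check is order-independent. A minimal, self-contained doctest in that style (illustrative only, not the insights-core parser):

```python
def parse_record(line):
    """
    Split a "key=value key=value" record into a dict.

    >>> expected = {'hostname': 'lxc-rhel68-sat56', 'procname': 'wrapper'}
    >>> parse_record('hostname=lxc-rhel68-sat56 procname=wrapper') == expected
    True
    """
    return dict(part.split("=", 1) for part in line.split())


if __name__ == "__main__":
    import doctest
    doctest.testmod()
```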
<code>
[start of insights/parsers/messages.py]
1 """
2 Messages file ``/var/log/messages``
3 ===================================
4 """
5
6 from .. import Syslog, parser
7 from insights.specs import Specs
8
9
10 @parser(Specs.messages)
11 class Messages(Syslog):
12 """
13 Read the ``/var/log/messages`` file.
14
15 .. note::
16 Please refer to its super-class :class:`insights.core.Syslog` for more
17 details.
18
19 Sample log lines::
20
21 May 18 15:13:34 lxc-rhel68-sat56 jabberd/sm[11057]: session started: [email protected]/superclient
22 May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon
23 May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: Launching a JVM...
24 May 18 15:24:28 lxc-rhel68-sat56 yum[11597]: Installed: lynx-2.8.6-27.el6.x86_64
25 May 18 15:36:19 lxc-rhel68-sat56 yum[11954]: Updated: sos-3.2-40.el6.noarch
26
27 .. note::
28 Because /var/log/messages timestamps by default have no year,
29 the year of the logs will be inferred from the year in your timestamp.
30 This will also work around December/January crossovers.
31
32 Examples:
33 >>> Messages.filters.append('wrapper')
34 >>> Messages.token_scan('daemon_start', 'Wrapper Started as Daemon')
35 >>> msgs = shared[Messages]
36 >>> len(msgs.lines)
37 >>> wrapper_msgs = msgs.get('wrapper') # Can only rely on lines filtered being present
38 >>> wrapper_msgs[0]
39 {'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56',
40 'procname': wrapper[11375]', 'message': '--> Wrapper Started as Daemon',
41 'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'
42 }
43 >>> msgs.daemon_start # Token set if matching lines present in logs
44 True
45 """
46 pass
47
[end of insights/parsers/messages.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/insights/parsers/messages.py b/insights/parsers/messages.py
--- a/insights/parsers/messages.py
+++ b/insights/parsers/messages.py
@@ -30,17 +30,14 @@
This will also work around December/January crossovers.
Examples:
- >>> Messages.filters.append('wrapper')
>>> Messages.token_scan('daemon_start', 'Wrapper Started as Daemon')
- >>> msgs = shared[Messages]
+ >>> type(msgs)
+ <class 'insights.parsers.messages.Messages'>
>>> len(msgs.lines)
+ 9
>>> wrapper_msgs = msgs.get('wrapper') # Can only rely on lines filtered being present
- >>> wrapper_msgs[0]
- {'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56',
- 'procname': wrapper[11375]', 'message': '--> Wrapper Started as Daemon',
- 'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'
- }
- >>> msgs.daemon_start # Token set if matching lines present in logs
+ >>> result = {'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon', 'message': '--> Wrapper Started as Daemon', 'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56', 'procname': 'wrapper[11375]'}
+ >>> wrapper_msgs[0] == result
True
"""
pass
| {"golden_diff": "diff --git a/insights/parsers/messages.py b/insights/parsers/messages.py\n--- a/insights/parsers/messages.py\n+++ b/insights/parsers/messages.py\n@@ -30,17 +30,14 @@\n This will also work around December/January crossovers.\n \n Examples:\n- >>> Messages.filters.append('wrapper')\n >>> Messages.token_scan('daemon_start', 'Wrapper Started as Daemon')\n- >>> msgs = shared[Messages]\n+ >>> type(msgs)\n+ <class 'insights.parsers.messages.Messages'>\n >>> len(msgs.lines)\n+ 9\n >>> wrapper_msgs = msgs.get('wrapper') # Can only rely on lines filtered being present\n- >>> wrapper_msgs[0]\n- {'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56',\n- 'procname': wrapper[11375]', 'message': '--> Wrapper Started as Daemon',\n- 'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'\n- }\n- >>> msgs.daemon_start # Token set if matching lines present in logs\n+ >>> result = {'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon', 'message': '--> Wrapper Started as Daemon', 'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56', 'procname': 'wrapper[11375]'}\n+ >>> wrapper_msgs[0] == result\n True\n \"\"\"\n pass\n", "issue": "Update pydoc for Messages (/var/log/messages)\nCorrect the examples: https://github.com/RedHatInsights/insights-core/blob/master/insights/parsers/messages.py#L32\n", "before_files": [{"content": "\"\"\"\nMessages file ``/var/log/messages``\n===================================\n\"\"\"\n\nfrom .. import Syslog, parser\nfrom insights.specs import Specs\n\n\n@parser(Specs.messages)\nclass Messages(Syslog):\n \"\"\"\n Read the ``/var/log/messages`` file.\n\n .. note::\n Please refer to its super-class :class:`insights.core.Syslog` for more\n details.\n\n Sample log lines::\n\n May 18 15:13:34 lxc-rhel68-sat56 jabberd/sm[11057]: session started: [email protected]/superclient\n May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon\n May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: Launching a JVM...\n May 18 15:24:28 lxc-rhel68-sat56 yum[11597]: Installed: lynx-2.8.6-27.el6.x86_64\n May 18 15:36:19 lxc-rhel68-sat56 yum[11954]: Updated: sos-3.2-40.el6.noarch\n\n .. note::\n Because /var/log/messages timestamps by default have no year,\n the year of the logs will be inferred from the year in your timestamp.\n This will also work around December/January crossovers.\n\n Examples:\n >>> Messages.filters.append('wrapper')\n >>> Messages.token_scan('daemon_start', 'Wrapper Started as Daemon')\n >>> msgs = shared[Messages]\n >>> len(msgs.lines)\n >>> wrapper_msgs = msgs.get('wrapper') # Can only rely on lines filtered being present\n >>> wrapper_msgs[0]\n {'timestamp': 'May 18 15:13:36', 'hostname': 'lxc-rhel68-sat56',\n 'procname': wrapper[11375]', 'message': '--> Wrapper Started as Daemon',\n 'raw_message': 'May 18 15:13:36 lxc-rhel68-sat56 wrapper[11375]: --> Wrapper Started as Daemon'\n }\n >>> msgs.daemon_start # Token set if matching lines present in logs\n True\n \"\"\"\n pass\n", "path": "insights/parsers/messages.py"}]} | 1,233 | 408 |
gh_patches_debug_11114 | rasdani/github-patches | git_diff | freedomofpress__securedrop-1890 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tails 3.0 does not restore the original filename when uncompressing a file
# Bug
Tails 3.0 does not preserve the filename when extracting an archive.
## Description
When extracting an archive, Tails 3.0 will rename the file inside the archive to _data_.
## Steps to Reproduce
Download a SecureDrop submission, decrypt the submission and extract the archive.
## Expected Behavior
Tails should preserve the filename and leave you with, say, _code_name.pdf_.
## Actual Behavior
Tails 3.0 leaves you with a file called _data_ instead.
## Comments
I suspect this is an issue with Tails 3.0 and not the way SecureDrop creates the archive in the first place. Filing here just in case.
</issue>
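The comment in the patch later in this entry ties the renamed-to-`data` behaviour to the "automatic-decompression" feature of GNOME Files (Nautilus) in Tails 3.0, so the SecureDrop Tails setup script simply switches that preference off with `dconf` instead of changing how the archive is built. A stand-alone sketch of that call follows; the key path is copied from the patch, while the function name and the use of `check_call` are illustrative:

```python
import subprocess


def disable_automatic_decompression():
    """Tell GNOME Files (Nautilus) not to auto-extract archives on open.

    Assumes dconf is available at /usr/bin/dconf, as it is on Tails;
    check_call raises CalledProcessError if the write fails.
    """
    subprocess.check_call([
        "/usr/bin/dconf", "write",
        "/org/gnome/nautilus/preferences/automatic-decompression",
        "false",
    ])


if __name__ == "__main__":
    disable_automatic_decompression()
```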
<code>
[start of install_files/ansible-base/roles/tails-config/files/securedrop_init.py]
1 #!/usr/bin/python
2
3 import os
4 import sys
5 import subprocess
6
7
8 # check for root
9 if os.geteuid() != 0:
10 sys.exit('You need to run this as root')
11
12 # paths
13 path_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'
14 path_torrc_backup = '/etc/tor/torrc.bak'
15 path_torrc = '/etc/tor/torrc'
16
17 # load torrc_additions
18 if os.path.isfile(path_torrc_additions):
19 torrc_additions = open(path_torrc_additions).read()
20 else:
21 sys.exit('Error opening {0} for reading'.format(path_torrc_additions))
22
23 # load torrc
24 if os.path.isfile(path_torrc_backup):
25 torrc = open(path_torrc_backup).read()
26 else:
27 if os.path.isfile(path_torrc):
28 torrc = open(path_torrc).read()
29 else:
30 sys.exit('Error opening {0} for reading'.format(path_torrc))
31
32 # save a backup
33 open(path_torrc_backup, 'w').write(torrc)
34
35 # append the additions
36 open(path_torrc, 'w').write(torrc + torrc_additions)
37
38 # reload tor
39 try:
40 subprocess.check_call(['systemctl', 'reload', '[email protected]'])
41 except subprocess.CalledProcessError:
42 sys.exit('Error reloading Tor')
43
44 # notify the user
45 subprocess.call(['tails-notify-user',
46 'SecureDrop successfully auto-configured!',
47 'You can now access the Journalist Interface.\nIf you are an admin, you can now SSH to the servers.'])
48
[end of install_files/ansible-base/roles/tails-config/files/securedrop_init.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py
--- a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py
+++ b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py
@@ -41,6 +41,13 @@
except subprocess.CalledProcessError:
sys.exit('Error reloading Tor')
+# Turn off "automatic-decompression" in Nautilus to ensure the original
+# submission filename is restored (see
+# https://github.com/freedomofpress/securedrop/issues/1862#issuecomment-311519750).
+subprocess.call(['/usr/bin/dconf', 'write',
+ '/org/gnome/nautilus/preferences/automatic-decompression',
+ 'false'])
+
# notify the user
subprocess.call(['tails-notify-user',
'SecureDrop successfully auto-configured!',
| {"golden_diff": "diff --git a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py\n--- a/install_files/ansible-base/roles/tails-config/files/securedrop_init.py\n+++ b/install_files/ansible-base/roles/tails-config/files/securedrop_init.py\n@@ -41,6 +41,13 @@\n except subprocess.CalledProcessError:\n sys.exit('Error reloading Tor')\n \n+# Turn off \"automatic-decompression\" in Nautilus to ensure the original\n+# submission filename is restored (see\n+# https://github.com/freedomofpress/securedrop/issues/1862#issuecomment-311519750).\n+subprocess.call(['/usr/bin/dconf', 'write',\n+ '/org/gnome/nautilus/preferences/automatic-decompression',\n+ 'false'])\n+\n # notify the user\n subprocess.call(['tails-notify-user',\n 'SecureDrop successfully auto-configured!',\n", "issue": "Tails 3.0 does not restore the original filename when uncompressing a file\n# Bug\r\n\r\nTails 3.0 does not preserve the filename when extracting an archive.\r\n\r\n## Description\r\n\r\nWhen extracting an archive, Tails 3.0 will rename the file inside the archive to _data_.\r\n\r\n## Steps to Reproduce\r\n\r\nDownload a SecureDrop submission, decrypt the submission and extract the archive.\r\n\r\n## Expected Behavior\r\n\r\nTails should preserve the filename and leave you with, say, _code_name.pdf_.\r\n\r\n## Actual Behavior\r\n\r\nTails 3.0 leaves you with a file called _data_ instead.\r\n\r\n## Comments\r\n\r\nI suspect this is an issue with Tails 3.0 and not the way SecureDrop creates the archive in the first place. Filing here just in case.\n", "before_files": [{"content": "#!/usr/bin/python\n\nimport os\nimport sys\nimport subprocess\n\n\n# check for root\nif os.geteuid() != 0:\n sys.exit('You need to run this as root')\n\n# paths\npath_torrc_additions = '/home/amnesia/Persistent/.securedrop/torrc_additions'\npath_torrc_backup = '/etc/tor/torrc.bak'\npath_torrc = '/etc/tor/torrc'\n\n# load torrc_additions\nif os.path.isfile(path_torrc_additions):\n torrc_additions = open(path_torrc_additions).read()\nelse:\n sys.exit('Error opening {0} for reading'.format(path_torrc_additions))\n\n# load torrc\nif os.path.isfile(path_torrc_backup):\n torrc = open(path_torrc_backup).read()\nelse:\n if os.path.isfile(path_torrc):\n torrc = open(path_torrc).read()\n else:\n sys.exit('Error opening {0} for reading'.format(path_torrc))\n\n # save a backup\n open(path_torrc_backup, 'w').write(torrc)\n\n# append the additions\nopen(path_torrc, 'w').write(torrc + torrc_additions)\n\n# reload tor\ntry:\n subprocess.check_call(['systemctl', 'reload', '[email protected]'])\nexcept subprocess.CalledProcessError:\n sys.exit('Error reloading Tor')\n\n# notify the user\nsubprocess.call(['tails-notify-user',\n 'SecureDrop successfully auto-configured!',\n 'You can now access the Journalist Interface.\\nIf you are an admin, you can now SSH to the servers.'])\n", "path": "install_files/ansible-base/roles/tails-config/files/securedrop_init.py"}]} | 1,177 | 218 |
gh_patches_debug_9358 | rasdani/github-patches | git_diff | mkdocs__mkdocs-1565 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"Files not present in docs directory" message shown for external links
Hi!
### STR
1. `mkvirtualenv mkdocs -p python 3.6`
2. `pip install mkdocs==1.0b1`
3. Update [an existing project's mkdocs.yml](https://github.com/mozilla-neutrino/neutrino-dev/blob/cab8e9e0f50c83fbb5ae0f6fb0e98b286beef049/mkdocs.yml) to use `nav` and take advantage of external link support. eg:
```yaml
docs_dir: 'docs'
site_dir: 'build'
strict: true
theme:
name: 'material'
# ...
nav:
- Introduction: './index.md'
# <snip>
- v8 Documentation: 'https://neutrinojs.org/'
- v7 Documentation: 'https://release-v7.neutrinojs.org/'
- v6 Documentation: 'https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v6/docs'
- v5 Documentation: 'https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v5/docs'
- v4 Documentation: 'https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v4/docs'
```
4. `mkdocs serve`
### Expected
Console output doesn't output warnings that give the impression something is broken.
### Actual
```
(mkdocs) neutrino-dev $ mkdocs serve
INFO - Building documentation...
INFO - Cleaning site directory
INFO - The following paths are included in the "nav" configuration, but do not exist in the docs directory:
- https://neutrinojs.org/
- https://release-v7.neutrinojs.org/
- https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v6/docs
- https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v5/docs
- https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v4/docs
[I 180723 17:15:03 server:292] Serving on http://127.0.0.1:8000
...
```
For links that are clearly external links (ie contain a full URL), it would be good to not output this message (even at log level INFO), since it's likely to be confusing for end users working on our documentation. (Perhaps log level DEBUG is the most appropriate?)
Many thanks :-)
</issue>
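The fix shown later in this entry takes the parenthetical suggestion above and simply demotes the message from INFO to DEBUG rather than trying to classify links. The alternative the reporter hints at, warning only for nav entries that are not clearly external, could be sketched with a scheme-and-host check; this is an illustration, not mkdocs' actual implementation:

```python
from urllib.parse import urlsplit


def looks_external(url):
    # "https://neutrinojs.org/" has a scheme and a host; "./index.md" has neither.
    parts = urlsplit(url)
    return bool(parts.scheme and parts.netloc)


nav_entries = ["./index.md", "https://neutrinojs.org/", "missing-page.md"]
suspicious = [u for u in nav_entries if not looks_external(u)]
print(suspicious)  # only these would warrant a "not in the docs directory" notice
```

Either way the external entries still render as links in the nav; the question raised here is only how loudly the build log reports them.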
<code>
[start of mkdocs/structure/nav.py]
1 # coding: utf-8
2
3 from __future__ import unicode_literals
4 import logging
5
6 from mkdocs.structure.pages import Page
7 from mkdocs.utils import string_types, nest_paths
8
9 log = logging.getLogger(__name__)
10
11
12 class Navigation(object):
13 def __init__(self, items, pages):
14 self.items = items # Nested List with full navigation of Sections, Pages, and Links.
15 self.pages = pages # Flat List of subset of Pages in nav, in order.
16
17 self.homepage = None
18 for page in pages:
19 if page.is_homepage:
20 self.homepage = page
21 break
22
23 def __repr__(self):
24 return '\n'.join([item._indent_print() for item in self])
25
26 def __iter__(self):
27 return iter(self.items)
28
29 def __len__(self):
30 return len(self.items)
31
32
33 class Section(object):
34 def __init__(self, title, children):
35 self.title = title
36 self.children = children
37
38 self.parent = None
39 self.active = False
40
41 self.is_section = True
42 self.is_page = False
43 self.is_link = False
44
45 def __repr__(self):
46 return "Section(title='{0}')".format(self.title)
47
48 def _get_active(self):
49 """ Return active status of section. """
50 return self.__active
51
52 def _set_active(self, value):
53 """ Set active status of section and ancestors. """
54 self.__active = bool(value)
55 if self.parent is not None:
56 self.parent.active = bool(value)
57
58 active = property(_get_active, _set_active)
59
60 @property
61 def ancestors(self):
62 if self.parent is None:
63 return []
64 return [self.parent] + self.parent.ancestors
65
66 def _indent_print(self, depth=0):
67 ret = ['{}{}'.format(' ' * depth, repr(self))]
68 for item in self.children:
69 ret.append(item._indent_print(depth + 1))
70 return '\n'.join(ret)
71
72
73 class Link(object):
74 def __init__(self, title, url):
75 self.title = title
76 self.url = url
77 self.parent = None
78
79 # These should never change but are included for consistency with sections and pages.
80 self.children = None
81 self.active = False
82 self.is_section = False
83 self.is_page = False
84 self.is_link = True
85
86 def __repr__(self):
87 title = "'{}'".format(self.title) if (self.title is not None) else '[blank]'
88 return "Link(title={}, url='{}')".format(title, self.url)
89
90 @property
91 def ancestors(self):
92 if self.parent is None:
93 return []
94 return [self.parent] + self.parent.ancestors
95
96 def _indent_print(self, depth=0):
97 return '{}{}'.format(' ' * depth, repr(self))
98
99
100 def get_navigation(files, config):
101 """ Build site navigation from config and files."""
102 nav_config = config['nav'] or nest_paths(f.src_path for f in files.documentation_pages())
103 items = _data_to_navigation(nav_config, files, config)
104 if not isinstance(items, list):
105 items = [items]
106
107 # Get only the pages from the navigation, ignoring any sections and links.
108 pages = _get_by_type(items, Page)
109
110 # Include next, previous and parent links.
111 _add_previous_and_next_links(pages)
112 _add_parent_links(items)
113
114 missing_from_config = [file for file in files.documentation_pages() if file.page is None]
115 if missing_from_config:
116 log.info(
117 'The following pages exist in the docs directory, but are not '
118 'included in the "nav" configuration:\n - {}'.format(
119 '\n - '.join([file.src_path for file in missing_from_config]))
120 )
121 # Any documentation files not found in the nav should still have an associated page.
122 # However, these page objects are only accessable from File instances as `file.page`.
123 for file in missing_from_config:
124 Page(None, file, config)
125
126 links = _get_by_type(items, Link)
127 if links:
128 # Assume all links are external.
129 # TODO: warn or error on internal links?
130 log.info(
131 'The following paths are included in the "nav" configuration, '
132 'but do not exist in the docs directory:\n - {}'.format(
133 '\n - '.join([link.url for link in links]))
134 )
135 return Navigation(items, pages)
136
137
138 def _data_to_navigation(data, files, config):
139 if isinstance(data, dict):
140 return [
141 _data_to_navigation((key, value), files, config)
142 if isinstance(value, string_types) else
143 Section(title=key, children=_data_to_navigation(value, files, config))
144 for key, value in data.items()
145 ]
146 elif isinstance(data, list):
147 return [
148 _data_to_navigation(item, files, config)[0]
149 if isinstance(item, dict) and len(item) == 1 else
150 _data_to_navigation(item, files, config)
151 for item in data
152 ]
153 title, path = data if isinstance(data, tuple) else (None, data)
154 file = files.get_file_from_path(path)
155 if file:
156 return Page(title, file, config)
157 return Link(title, path)
158
159
160 def _get_by_type(nav, T):
161 ret = []
162 for item in nav:
163 if isinstance(item, T):
164 ret.append(item)
165 elif item.children:
166 ret.extend(_get_by_type(item.children, T))
167 return ret
168
169
170 def _add_parent_links(nav):
171 for item in nav:
172 if item.is_section:
173 for child in item.children:
174 child.parent = item
175 _add_parent_links(item.children)
176
177
178 def _add_previous_and_next_links(pages):
179 bookended = [None] + pages + [None]
180 zipped = zip(bookended[:-2], bookended[1:-1], bookended[2:])
181 for page0, page1, page2 in zipped:
182 page1.previous_page, page1.next_page = page0, page2
183
[end of mkdocs/structure/nav.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/structure/nav.py b/mkdocs/structure/nav.py
--- a/mkdocs/structure/nav.py
+++ b/mkdocs/structure/nav.py
@@ -126,8 +126,7 @@
links = _get_by_type(items, Link)
if links:
# Assume all links are external.
- # TODO: warn or error on internal links?
- log.info(
+ log.debug(
'The following paths are included in the "nav" configuration, '
'but do not exist in the docs directory:\n - {}'.format(
'\n - '.join([link.url for link in links]))
| {"golden_diff": "diff --git a/mkdocs/structure/nav.py b/mkdocs/structure/nav.py\n--- a/mkdocs/structure/nav.py\n+++ b/mkdocs/structure/nav.py\n@@ -126,8 +126,7 @@\n links = _get_by_type(items, Link)\n if links:\n # Assume all links are external.\n- # TODO: warn or error on internal links?\n- log.info(\n+ log.debug(\n 'The following paths are included in the \"nav\" configuration, '\n 'but do not exist in the docs directory:\\n - {}'.format(\n '\\n - '.join([link.url for link in links]))\n", "issue": "\"Files not present in docs directory\" message shown for external links\nHi!\r\n\r\n### STR\r\n1. `mkvirtualenv mkdocs -p python 3.6`\r\n2. `pip install mkdocs==1.0b1`\r\n3. Update [an existing project's mkdocs.yml](https://github.com/mozilla-neutrino/neutrino-dev/blob/cab8e9e0f50c83fbb5ae0f6fb0e98b286beef049/mkdocs.yml) to use `nav` and take advantage of external link support. eg:\r\n\r\n```yaml\r\ndocs_dir: 'docs'\r\nsite_dir: 'build'\r\nstrict: true\r\ntheme:\r\n name: 'material'\r\n# ...\r\n\r\nnav:\r\n - Introduction: './index.md'\r\n # <snip>\r\n - v8 Documentation: 'https://neutrinojs.org/'\r\n - v7 Documentation: 'https://release-v7.neutrinojs.org/'\r\n - v6 Documentation: 'https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v6/docs'\r\n - v5 Documentation: 'https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v5/docs'\r\n - v4 Documentation: 'https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v4/docs'\r\n```\r\n\r\n4. `mkdocs serve`\r\n\r\n### Expected\r\n\r\nConsole output doesn't output warnings that give the impression something is broken.\r\n\r\n### Actual\r\n\r\n```\r\n(mkdocs) neutrino-dev $ mkdocs serve\r\nINFO - Building documentation...\r\nINFO - Cleaning site directory\r\nINFO - The following paths are included in the \"nav\" configuration, but do not exist in the docs directory:\r\n - https://neutrinojs.org/\r\n - https://release-v7.neutrinojs.org/\r\n - https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v6/docs\r\n - https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v5/docs\r\n - https://github.com/mozilla-neutrino/neutrino-dev/tree/release/v4/docs\r\n[I 180723 17:15:03 server:292] Serving on http://127.0.0.1:8000\r\n...\r\n```\r\n\r\nFor links that are clearly external links (ie contain a full URL), it would be good to not output this message (even at log level INFO), since it's likely to be confusing for end users working on our documentation. 
(Perhaps log level DEBUG is the most appropriate?)\r\n\r\nMany thanks :-)\n", "before_files": [{"content": "# coding: utf-8\n\nfrom __future__ import unicode_literals\nimport logging\n\nfrom mkdocs.structure.pages import Page\nfrom mkdocs.utils import string_types, nest_paths\n\nlog = logging.getLogger(__name__)\n\n\nclass Navigation(object):\n def __init__(self, items, pages):\n self.items = items # Nested List with full navigation of Sections, Pages, and Links.\n self.pages = pages # Flat List of subset of Pages in nav, in order.\n\n self.homepage = None\n for page in pages:\n if page.is_homepage:\n self.homepage = page\n break\n\n def __repr__(self):\n return '\\n'.join([item._indent_print() for item in self])\n\n def __iter__(self):\n return iter(self.items)\n\n def __len__(self):\n return len(self.items)\n\n\nclass Section(object):\n def __init__(self, title, children):\n self.title = title\n self.children = children\n\n self.parent = None\n self.active = False\n\n self.is_section = True\n self.is_page = False\n self.is_link = False\n\n def __repr__(self):\n return \"Section(title='{0}')\".format(self.title)\n\n def _get_active(self):\n \"\"\" Return active status of section. \"\"\"\n return self.__active\n\n def _set_active(self, value):\n \"\"\" Set active status of section and ancestors. \"\"\"\n self.__active = bool(value)\n if self.parent is not None:\n self.parent.active = bool(value)\n\n active = property(_get_active, _set_active)\n\n @property\n def ancestors(self):\n if self.parent is None:\n return []\n return [self.parent] + self.parent.ancestors\n\n def _indent_print(self, depth=0):\n ret = ['{}{}'.format(' ' * depth, repr(self))]\n for item in self.children:\n ret.append(item._indent_print(depth + 1))\n return '\\n'.join(ret)\n\n\nclass Link(object):\n def __init__(self, title, url):\n self.title = title\n self.url = url\n self.parent = None\n\n # These should never change but are included for consistency with sections and pages.\n self.children = None\n self.active = False\n self.is_section = False\n self.is_page = False\n self.is_link = True\n\n def __repr__(self):\n title = \"'{}'\".format(self.title) if (self.title is not None) else '[blank]'\n return \"Link(title={}, url='{}')\".format(title, self.url)\n\n @property\n def ancestors(self):\n if self.parent is None:\n return []\n return [self.parent] + self.parent.ancestors\n\n def _indent_print(self, depth=0):\n return '{}{}'.format(' ' * depth, repr(self))\n\n\ndef get_navigation(files, config):\n \"\"\" Build site navigation from config and files.\"\"\"\n nav_config = config['nav'] or nest_paths(f.src_path for f in files.documentation_pages())\n items = _data_to_navigation(nav_config, files, config)\n if not isinstance(items, list):\n items = [items]\n\n # Get only the pages from the navigation, ignoring any sections and links.\n pages = _get_by_type(items, Page)\n\n # Include next, previous and parent links.\n _add_previous_and_next_links(pages)\n _add_parent_links(items)\n\n missing_from_config = [file for file in files.documentation_pages() if file.page is None]\n if missing_from_config:\n log.info(\n 'The following pages exist in the docs directory, but are not '\n 'included in the \"nav\" configuration:\\n - {}'.format(\n '\\n - '.join([file.src_path for file in missing_from_config]))\n )\n # Any documentation files not found in the nav should still have an associated page.\n # However, these page objects are only accessable from File instances as `file.page`.\n for file in missing_from_config:\n Page(None, 
file, config)\n\n links = _get_by_type(items, Link)\n if links:\n # Assume all links are external.\n # TODO: warn or error on internal links?\n log.info(\n 'The following paths are included in the \"nav\" configuration, '\n 'but do not exist in the docs directory:\\n - {}'.format(\n '\\n - '.join([link.url for link in links]))\n )\n return Navigation(items, pages)\n\n\ndef _data_to_navigation(data, files, config):\n if isinstance(data, dict):\n return [\n _data_to_navigation((key, value), files, config)\n if isinstance(value, string_types) else\n Section(title=key, children=_data_to_navigation(value, files, config))\n for key, value in data.items()\n ]\n elif isinstance(data, list):\n return [\n _data_to_navigation(item, files, config)[0]\n if isinstance(item, dict) and len(item) == 1 else\n _data_to_navigation(item, files, config)\n for item in data\n ]\n title, path = data if isinstance(data, tuple) else (None, data)\n file = files.get_file_from_path(path)\n if file:\n return Page(title, file, config)\n return Link(title, path)\n\n\ndef _get_by_type(nav, T):\n ret = []\n for item in nav:\n if isinstance(item, T):\n ret.append(item)\n elif item.children:\n ret.extend(_get_by_type(item.children, T))\n return ret\n\n\ndef _add_parent_links(nav):\n for item in nav:\n if item.is_section:\n for child in item.children:\n child.parent = item\n _add_parent_links(item.children)\n\n\ndef _add_previous_and_next_links(pages):\n bookended = [None] + pages + [None]\n zipped = zip(bookended[:-2], bookended[1:-1], bookended[2:])\n for page0, page1, page2 in zipped:\n page1.previous_page, page1.next_page = page0, page2\n", "path": "mkdocs/structure/nav.py"}]} | 2,882 | 146 |
gh_patches_debug_21778 | rasdani/github-patches | git_diff | saleor__saleor-2814 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Return a variant price instead of the priceOverride
### What I'm trying to achieve
In my impression, `priceOverride` is an internal functionality which shouldn't be available in the API.
In addition, when I want to show a variant price I need to always introduce logic like:
```
let variantPrice = variants[0].priceOverride || product.price
```
### Describe a proposed solution
I propose renaming the attribute `priceOverride` to `price`, which falls back to the product price when the override is empty.
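
A minimal sketch of what that could look like on the variant type, assuming graphene and the existing `price_override` / `product.price` fields (abbreviated to the new field and resolver, not a final implementation):
```python
import graphene

class ProductVariant(CountableDjangoObjectType):  # existing type, other fields omitted
    price = graphene.Field(Money, description='Price of the product variant.')

    def resolve_price(self, info):
        # Fall back to the product's base price when no override is set.
        if self.price_override is not None:
            return self.price_override
        return self.product.price
```
With this in place, clients could simply query `variant.price` instead of re-implementing the fallback on the frontend.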
</issue>
<code>
[start of saleor/graphql/product/types.py]
1 import graphene
2 from graphene import relay
3 from graphene_django.filter import DjangoFilterConnectionField
4 from graphql.error import GraphQLError
5
6 from ...product import models
7 from ...product.templatetags.product_images import get_thumbnail
8 from ...product.utils import products_with_details
9 from ...product.utils.availability import get_availability
10 from ...product.utils.costs import (
11 get_margin_for_variant, get_product_costs_data)
12 from ..core.decorators import permission_required
13 from ..core.types.common import CountableDjangoObjectType
14 from ..core.types.money import (
15 Money, MoneyRange, TaxedMoney, TaxedMoneyRange, TaxRateType)
16 from ..utils import get_database_id
17 from .filters import ProductFilterSet
18
19
20 def resolve_attribute_list(attributes):
21 keys = list(attributes.keys())
22 values = list(attributes.values())
23
24 attributes_map = {
25 att.pk: att for att in models.ProductAttribute.objects.filter(
26 pk__in=keys)}
27 values_map = {
28 val.pk: val for val in models.AttributeChoiceValue.objects.filter(
29 pk__in=values)}
30
31 attributes_list = [SelectedAttribute(
32 attribute=attributes_map.get(int(k)),
33 value=values_map.get(int(v)))
34 for k, v in attributes.items()]
35 return attributes_list
36
37
38 class ProductAttributeValue(CountableDjangoObjectType):
39 name = graphene.String(description='Visible name for display purposes.')
40 slug = graphene.String(
41 description='Internal representation of an attribute name.')
42
43 class Meta:
44 description = 'Represents a value of an attribute.'
45 exclude_fields = ['attribute']
46 interfaces = [relay.Node]
47 model = models.AttributeChoiceValue
48
49
50 class ProductAttribute(CountableDjangoObjectType):
51 name = graphene.String(description='Visible name for display purposes.')
52 slug = graphene.String(
53 description='Internal representation of an attribute name.')
54 values = graphene.List(
55 ProductAttributeValue, description='List of attribute\'s values.')
56
57 class Meta:
58 description = """Custom attribute of a product. Attributes can be
59 assigned to products and variants at the product type level."""
60 exclude_fields = ['product_types', 'product_variant_types']
61 interfaces = [relay.Node]
62 filter_fields = ['id', 'slug']
63 model = models.ProductAttribute
64
65 def resolve_values(self, info):
66 return self.values.all()
67
68
69 class Margin(graphene.ObjectType):
70 start = graphene.Int()
71 stop = graphene.Int()
72
73
74 class SelectedAttribute(graphene.ObjectType):
75 attribute = graphene.Field(
76 ProductAttribute,
77 default_value=None, description='Name of an attribute')
78 value = graphene.Field(
79 ProductAttributeValue,
80 default_value=None, description='Value of an attribute.')
81
82 class Meta:
83 description = 'Represents a custom product attribute.'
84
85
86 class ProductVariant(CountableDjangoObjectType):
87 stock_quantity = graphene.Int(
88 required=True, description='Quantity of a product available for sale.')
89 price_override = graphene.Field(
90 Money,
91 description="""Override the base price of a product if necessary.
92 A value of `null` indicates that the default product price is used.""")
93 attributes = graphene.List(
94 SelectedAttribute,
95 description='List of attributes assigned to this variant.')
96 cost_price = graphene.Field(
97 Money, description='Cost price of the variant.')
98 margin = graphene.Int(description='Gross margin percentage value.')
99
100 class Meta:
101 description = """Represents a version of a product such as different
102 size or color."""
103 exclude_fields = ['variant_images']
104 interfaces = [relay.Node]
105 model = models.ProductVariant
106 filter_fields = ['id']
107
108 def resolve_stock_quantity(self, info):
109 return self.quantity_available
110
111 def resolve_attributes(self, info):
112 return resolve_attribute_list(self.attributes)
113
114 def resolve_margin(self, info):
115 return get_margin_for_variant(self)
116
117
118 class ProductAvailability(graphene.ObjectType):
119 available = graphene.Boolean()
120 on_sale = graphene.Boolean()
121 discount = graphene.Field(TaxedMoney)
122 discount_local_currency = graphene.Field(TaxedMoney)
123 price_range = graphene.Field(TaxedMoneyRange)
124 price_range_undiscounted = graphene.Field(TaxedMoneyRange)
125 price_range_local_currency = graphene.Field(TaxedMoneyRange)
126
127 class Meta:
128 description = 'Represents availability of a product in the storefront.'
129
130
131 class Image(graphene.ObjectType):
132 url = graphene.String(
133 required=True,
134 description='The URL of the image.',
135 size=graphene.Int(description='Size of the image'))
136
137 class Meta:
138 description = 'Represents an image.'
139
140 def resolve_url(self, info, size=None):
141 if size:
142 return get_thumbnail(self, size, method='thumbnail')
143 return self.url
144
145
146 class Product(CountableDjangoObjectType):
147 url = graphene.String(
148 description='The storefront URL for the product.', required=True)
149 thumbnail_url = graphene.String(
150 description='The URL of a main thumbnail for a product.',
151 size=graphene.Argument(graphene.Int, description='Size of thumbnail'))
152 availability = graphene.Field(
153 ProductAvailability,
154 description="""Informs about product's availability in the storefront,
155 current price and discounts.""")
156 price = graphene.Field(
157 Money,
158 description="""The product's base price (without any discounts
159 applied).""")
160 attributes = graphene.List(
161 SelectedAttribute,
162 description='List of product attributes assigned to this product.')
163 purchase_cost = graphene.Field(MoneyRange)
164 margin = graphene.Field(Margin)
165 image_by_id = graphene.Field(
166 lambda: ProductImage,
167 id=graphene.Argument(
168 graphene.ID, description='ID of a product image.'),
169 description='Get a single product image by ID')
170
171 class Meta:
172 description = """Represents an individual item for sale in the
173 storefront."""
174 interfaces = [relay.Node]
175 model = models.Product
176
177 def resolve_thumbnail_url(self, info, *, size=None):
178 if not size:
179 size = 255
180 return get_thumbnail(self.get_first_image(), size, method='thumbnail')
181
182 def resolve_url(self, info):
183 return self.get_absolute_url()
184
185 def resolve_availability(self, info):
186 context = info.context
187 availability = get_availability(
188 self, context.discounts, context.taxes, context.currency)
189 return ProductAvailability(**availability._asdict())
190
191 def resolve_attributes(self, info):
192 return resolve_attribute_list(self.attributes)
193
194 def resolve_product_type(self, info):
195 return self.product_type
196
197 @permission_required('product.manage_products')
198 def resolve_purchase_cost(self, info):
199 purchase_cost, _ = get_product_costs_data(self)
200 return purchase_cost
201
202 @permission_required('product.manage_products')
203 def resolve_margin(self, info):
204 _, margin = get_product_costs_data(self)
205 return Margin(margin[0], margin[1])
206
207 def resolve_image_by_id(self, info, id):
208 pk = get_database_id(info, id, ProductImage)
209 try:
210 return self.images.get(pk=pk)
211 except models.ProductImage.DoesNotExist:
212 raise GraphQLError('Product image not found.')
213
214
215 class ProductType(CountableDjangoObjectType):
216 products = DjangoFilterConnectionField(
217 Product,
218 filterset_class=ProductFilterSet,
219 description='List of products of this type.')
220 tax_rate = TaxRateType(description='A type of tax rate.')
221
222 class Meta:
223 description = """Represents a type of product. It defines what
224 attributes are available to products of this type."""
225 interfaces = [relay.Node]
226 model = models.ProductType
227 filter_fields = ['id']
228
229 def resolve_products(self, info, **kwargs):
230 user = info.context.user
231 return products_with_details(
232 user=user).filter(product_type=self).distinct()
233
234
235 class Collection(CountableDjangoObjectType):
236 products = DjangoFilterConnectionField(
237 Product, filterset_class=ProductFilterSet,
238 description='List of collection products.')
239 background_image = graphene.Field(Image)
240
241 class Meta:
242 description = "Represents a collection of products."
243 exclude_fields = ['voucher_set', 'sale_set', 'menuitem_set']
244 filter_fields = {
245 'name': ['exact', 'icontains', 'istartswith']}
246 interfaces = [relay.Node]
247 model = models.Collection
248
249 def resolve_products(self, info, **kwargs):
250 user = info.context.user
251 return products_with_details(
252 user=user).filter(collections=self).distinct()
253
254
255 class Category(CountableDjangoObjectType):
256 products = DjangoFilterConnectionField(
257 Product,
258 filterset_class=ProductFilterSet,
259 description='List of products in the category.')
260 url = graphene.String(
261 description='The storefront\'s URL for the category.')
262 ancestors = DjangoFilterConnectionField(
263 lambda: Category,
264 description='List of ancestors of the category.')
265 children = DjangoFilterConnectionField(
266 lambda: Category,
267 description='List of children of the category.')
268 background_image = graphene.Field(Image)
269
270 class Meta:
271 description = """Represents a single category of products. Categories
272 allow to organize products in a tree-hierarchies which can be used for
273 navigation in the storefront."""
274 exclude_fields = [
275 'lft', 'rght', 'tree_id', 'voucher_set', 'sale_set',
276 'menuitem_set']
277 interfaces = [relay.Node]
278 filter_fields = ['id', 'name']
279 model = models.Category
280
281 def resolve_ancestors(self, info, **kwargs):
282 return self.get_ancestors().distinct()
283
284 def resolve_children(self, info, **kwargs):
285 return self.children.distinct()
286
287 def resolve_url(self, info):
288 return self.get_absolute_url()
289
290 def resolve_products(self, info, **kwargs):
291 qs = models.Product.objects.available_products()
292 qs = qs.filter(category=self)
293 return qs.distinct()
294
295
296 class ProductImage(CountableDjangoObjectType):
297 url = graphene.String(
298 required=True,
299 description='The URL of the image.',
300 size=graphene.Int(description='Size of the image'))
301
302 class Meta:
303 description = 'Represents a product image.'
304 exclude_fields = [
305 'image', 'product', 'ppoi', 'productvariant_set',
306 'variant_images']
307 interfaces = [relay.Node]
308 model = models.ProductImage
309
310 def resolve_url(self, info, *, size=None):
311 if size:
312 return get_thumbnail(self.image, size, method='thumbnail')
313 return self.image.url
314
[end of saleor/graphql/product/types.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/saleor/graphql/product/types.py b/saleor/graphql/product/types.py
--- a/saleor/graphql/product/types.py
+++ b/saleor/graphql/product/types.py
@@ -90,6 +90,7 @@
Money,
description="""Override the base price of a product if necessary.
A value of `null` indicates that the default product price is used.""")
+ price = graphene.Field(Money, description="Price of the product variant.")
attributes = graphene.List(
SelectedAttribute,
description='List of attributes assigned to this variant.')
@@ -114,6 +115,15 @@
def resolve_margin(self, info):
return get_margin_for_variant(self)
+ def resolve_price(self, info):
+ return (
+ self.price_override
+ if self.price_override is not None else self.product.price)
+
+ @permission_required('product.manage_products')
+ def resolve_price_override(self, info):
+ return self.price_override
+
class ProductAvailability(graphene.ObjectType):
available = graphene.Boolean()
| {"golden_diff": "diff --git a/saleor/graphql/product/types.py b/saleor/graphql/product/types.py\n--- a/saleor/graphql/product/types.py\n+++ b/saleor/graphql/product/types.py\n@@ -90,6 +90,7 @@\n Money,\n description=\"\"\"Override the base price of a product if necessary.\n A value of `null` indicates that the default product price is used.\"\"\")\n+ price = graphene.Field(Money, description=\"Price of the product variant.\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of attributes assigned to this variant.')\n@@ -114,6 +115,15 @@\n def resolve_margin(self, info):\n return get_margin_for_variant(self)\n \n+ def resolve_price(self, info):\n+ return (\n+ self.price_override\n+ if self.price_override is not None else self.product.price)\n+\n+ @permission_required('product.manage_products')\n+ def resolve_price_override(self, info):\n+ return self.price_override\n+\n \n class ProductAvailability(graphene.ObjectType):\n available = graphene.Boolean()\n", "issue": "Return a variant price instead of the priceOverride\n### What I'm trying to achieve\r\nIn my impression, `priceOverride` is an internal functionality which shouldn't be available in the API.\r\nIn addition, when I want to show a variant price I need to always introduce logic like:\r\n```\r\nlet variantPrice = variants[0].priceOverride || product.price\r\n```\r\n\r\n### Describe a proposed solution\r\nI propose to change the attribute `priceOverride` to `price` which shows product price when the override is empty.\r\n\n", "before_files": [{"content": "import graphene\nfrom graphene import relay\nfrom graphene_django.filter import DjangoFilterConnectionField\nfrom graphql.error import GraphQLError\n\nfrom ...product import models\nfrom ...product.templatetags.product_images import get_thumbnail\nfrom ...product.utils import products_with_details\nfrom ...product.utils.availability import get_availability\nfrom ...product.utils.costs import (\n get_margin_for_variant, get_product_costs_data)\nfrom ..core.decorators import permission_required\nfrom ..core.types.common import CountableDjangoObjectType\nfrom ..core.types.money import (\n Money, MoneyRange, TaxedMoney, TaxedMoneyRange, TaxRateType)\nfrom ..utils import get_database_id\nfrom .filters import ProductFilterSet\n\n\ndef resolve_attribute_list(attributes):\n keys = list(attributes.keys())\n values = list(attributes.values())\n\n attributes_map = {\n att.pk: att for att in models.ProductAttribute.objects.filter(\n pk__in=keys)}\n values_map = {\n val.pk: val for val in models.AttributeChoiceValue.objects.filter(\n pk__in=values)}\n\n attributes_list = [SelectedAttribute(\n attribute=attributes_map.get(int(k)),\n value=values_map.get(int(v)))\n for k, v in attributes.items()]\n return attributes_list\n\n\nclass ProductAttributeValue(CountableDjangoObjectType):\n name = graphene.String(description='Visible name for display purposes.')\n slug = graphene.String(\n description='Internal representation of an attribute name.')\n\n class Meta:\n description = 'Represents a value of an attribute.'\n exclude_fields = ['attribute']\n interfaces = [relay.Node]\n model = models.AttributeChoiceValue\n\n\nclass ProductAttribute(CountableDjangoObjectType):\n name = graphene.String(description='Visible name for display purposes.')\n slug = graphene.String(\n description='Internal representation of an attribute name.')\n values = graphene.List(\n ProductAttributeValue, description='List of attribute\\'s values.')\n\n class Meta:\n description = \"\"\"Custom attribute of 
a product. Attributes can be\n assigned to products and variants at the product type level.\"\"\"\n exclude_fields = ['product_types', 'product_variant_types']\n interfaces = [relay.Node]\n filter_fields = ['id', 'slug']\n model = models.ProductAttribute\n\n def resolve_values(self, info):\n return self.values.all()\n\n\nclass Margin(graphene.ObjectType):\n start = graphene.Int()\n stop = graphene.Int()\n\n\nclass SelectedAttribute(graphene.ObjectType):\n attribute = graphene.Field(\n ProductAttribute,\n default_value=None, description='Name of an attribute')\n value = graphene.Field(\n ProductAttributeValue,\n default_value=None, description='Value of an attribute.')\n\n class Meta:\n description = 'Represents a custom product attribute.'\n\n\nclass ProductVariant(CountableDjangoObjectType):\n stock_quantity = graphene.Int(\n required=True, description='Quantity of a product available for sale.')\n price_override = graphene.Field(\n Money,\n description=\"\"\"Override the base price of a product if necessary.\n A value of `null` indicates that the default product price is used.\"\"\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of attributes assigned to this variant.')\n cost_price = graphene.Field(\n Money, description='Cost price of the variant.')\n margin = graphene.Int(description='Gross margin percentage value.')\n\n class Meta:\n description = \"\"\"Represents a version of a product such as different\n size or color.\"\"\"\n exclude_fields = ['variant_images']\n interfaces = [relay.Node]\n model = models.ProductVariant\n filter_fields = ['id']\n\n def resolve_stock_quantity(self, info):\n return self.quantity_available\n\n def resolve_attributes(self, info):\n return resolve_attribute_list(self.attributes)\n\n def resolve_margin(self, info):\n return get_margin_for_variant(self)\n\n\nclass ProductAvailability(graphene.ObjectType):\n available = graphene.Boolean()\n on_sale = graphene.Boolean()\n discount = graphene.Field(TaxedMoney)\n discount_local_currency = graphene.Field(TaxedMoney)\n price_range = graphene.Field(TaxedMoneyRange)\n price_range_undiscounted = graphene.Field(TaxedMoneyRange)\n price_range_local_currency = graphene.Field(TaxedMoneyRange)\n\n class Meta:\n description = 'Represents availability of a product in the storefront.'\n\n\nclass Image(graphene.ObjectType):\n url = graphene.String(\n required=True,\n description='The URL of the image.',\n size=graphene.Int(description='Size of the image'))\n\n class Meta:\n description = 'Represents an image.'\n\n def resolve_url(self, info, size=None):\n if size:\n return get_thumbnail(self, size, method='thumbnail')\n return self.url\n\n\nclass Product(CountableDjangoObjectType):\n url = graphene.String(\n description='The storefront URL for the product.', required=True)\n thumbnail_url = graphene.String(\n description='The URL of a main thumbnail for a product.',\n size=graphene.Argument(graphene.Int, description='Size of thumbnail'))\n availability = graphene.Field(\n ProductAvailability,\n description=\"\"\"Informs about product's availability in the storefront,\n current price and discounts.\"\"\")\n price = graphene.Field(\n Money,\n description=\"\"\"The product's base price (without any discounts\n applied).\"\"\")\n attributes = graphene.List(\n SelectedAttribute,\n description='List of product attributes assigned to this product.')\n purchase_cost = graphene.Field(MoneyRange)\n margin = graphene.Field(Margin)\n image_by_id = graphene.Field(\n lambda: ProductImage,\n 
id=graphene.Argument(\n graphene.ID, description='ID of a product image.'),\n description='Get a single product image by ID')\n\n class Meta:\n description = \"\"\"Represents an individual item for sale in the\n storefront.\"\"\"\n interfaces = [relay.Node]\n model = models.Product\n\n def resolve_thumbnail_url(self, info, *, size=None):\n if not size:\n size = 255\n return get_thumbnail(self.get_first_image(), size, method='thumbnail')\n\n def resolve_url(self, info):\n return self.get_absolute_url()\n\n def resolve_availability(self, info):\n context = info.context\n availability = get_availability(\n self, context.discounts, context.taxes, context.currency)\n return ProductAvailability(**availability._asdict())\n\n def resolve_attributes(self, info):\n return resolve_attribute_list(self.attributes)\n\n def resolve_product_type(self, info):\n return self.product_type\n\n @permission_required('product.manage_products')\n def resolve_purchase_cost(self, info):\n purchase_cost, _ = get_product_costs_data(self)\n return purchase_cost\n\n @permission_required('product.manage_products')\n def resolve_margin(self, info):\n _, margin = get_product_costs_data(self)\n return Margin(margin[0], margin[1])\n\n def resolve_image_by_id(self, info, id):\n pk = get_database_id(info, id, ProductImage)\n try:\n return self.images.get(pk=pk)\n except models.ProductImage.DoesNotExist:\n raise GraphQLError('Product image not found.')\n\n\nclass ProductType(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product,\n filterset_class=ProductFilterSet,\n description='List of products of this type.')\n tax_rate = TaxRateType(description='A type of tax rate.')\n\n class Meta:\n description = \"\"\"Represents a type of product. It defines what\n attributes are available to products of this type.\"\"\"\n interfaces = [relay.Node]\n model = models.ProductType\n filter_fields = ['id']\n\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n user=user).filter(product_type=self).distinct()\n\n\nclass Collection(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product, filterset_class=ProductFilterSet,\n description='List of collection products.')\n background_image = graphene.Field(Image)\n\n class Meta:\n description = \"Represents a collection of products.\"\n exclude_fields = ['voucher_set', 'sale_set', 'menuitem_set']\n filter_fields = {\n 'name': ['exact', 'icontains', 'istartswith']}\n interfaces = [relay.Node]\n model = models.Collection\n\n def resolve_products(self, info, **kwargs):\n user = info.context.user\n return products_with_details(\n user=user).filter(collections=self).distinct()\n\n\nclass Category(CountableDjangoObjectType):\n products = DjangoFilterConnectionField(\n Product,\n filterset_class=ProductFilterSet,\n description='List of products in the category.')\n url = graphene.String(\n description='The storefront\\'s URL for the category.')\n ancestors = DjangoFilterConnectionField(\n lambda: Category,\n description='List of ancestors of the category.')\n children = DjangoFilterConnectionField(\n lambda: Category,\n description='List of children of the category.')\n background_image = graphene.Field(Image)\n\n class Meta:\n description = \"\"\"Represents a single category of products. 
Categories\n allow to organize products in a tree-hierarchies which can be used for\n navigation in the storefront.\"\"\"\n exclude_fields = [\n 'lft', 'rght', 'tree_id', 'voucher_set', 'sale_set',\n 'menuitem_set']\n interfaces = [relay.Node]\n filter_fields = ['id', 'name']\n model = models.Category\n\n def resolve_ancestors(self, info, **kwargs):\n return self.get_ancestors().distinct()\n\n def resolve_children(self, info, **kwargs):\n return self.children.distinct()\n\n def resolve_url(self, info):\n return self.get_absolute_url()\n\n def resolve_products(self, info, **kwargs):\n qs = models.Product.objects.available_products()\n qs = qs.filter(category=self)\n return qs.distinct()\n\n\nclass ProductImage(CountableDjangoObjectType):\n url = graphene.String(\n required=True,\n description='The URL of the image.',\n size=graphene.Int(description='Size of the image'))\n\n class Meta:\n description = 'Represents a product image.'\n exclude_fields = [\n 'image', 'product', 'ppoi', 'productvariant_set',\n 'variant_images']\n interfaces = [relay.Node]\n model = models.ProductImage\n\n def resolve_url(self, info, *, size=None):\n if size:\n return get_thumbnail(self.image, size, method='thumbnail')\n return self.image.url\n", "path": "saleor/graphql/product/types.py"}]} | 3,722 | 232 |
gh_patches_debug_13980 | rasdani/github-patches | git_diff | ocf__ocfweb-185 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
libsass>0.10.0 breaks dh-virtualenv
https://jenkins.ocf.berkeley.edu/view/ocfweb-deploy/job/ocfweb-build-deb/232/console
```
# build sass, output static files
debian/ocfweb/usr/share/python/ocfweb/bin/python setup.py build_sass
running build_sass
building 'ocfweb' sass
/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/setuptools/dist.py:294: UserWarning: The version specified ('2016.01.30.23.27-git7cd5ebbd') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.
"details." % self.metadata.version
Traceback (most recent call last):
File "setup.py", line 44, in <module>
'ocfweb-run-periodic-functions = ocfweb.bin.run_periodic_functions:main',
File "/usr/lib/python3.4/distutils/core.py", line 148, in setup
dist.run_commands()
File "/usr/lib/python3.4/distutils/dist.py", line 955, in run_commands
self.run_command(cmd)
File "/usr/lib/python3.4/distutils/dist.py", line 974, in run_command
cmd_obj.run()
File "/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/distutils.py", line 123, in run
output_style=self.output_style
File "/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/builder.py", line 177, in build
).values()
File "/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/builder.py", line 73, in build_directory
_root_css=_root_css)
File "/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/builder.py", line 63, in build_directory
include_paths=[_root_sass])
File "/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sass.py", line 644, in compile
raise CompileError(v)
sass.CompileError: b'Error: ".page-staff-hours .hour .ocf-staffhours-faces" failed to @extend ".list-inline".\n The selector ".list-inline" was not found.\n Use "@extend .list-inline !optional" if the extend should be able to fail.\n on line 37 of ocfweb/static/scss/pages/staff-hours.scss\n>> @extend .list-inline;\n ------------^\n'
debian/rules:17: recipe for target 'override_dh_virtualenv' failed
```
This reproduces locally, but only with `make builddeb`, not with the regular `make scss`.
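
For what it's worth, the compiler output above already hints at a stop-gap: marking the extend as optional so the per-file build no longer hard-fails when `.list-inline` is not in scope. A sketch of that workaround in `ocfweb/static/scss/pages/staff-hours.scss` (it only silences the error; it does not explain why the selector is missing under `build_sass` but present under `make scss`):
```scss
.page-staff-hours .hour .ocf-staffhours-faces {
  // Allow the extend to fail when Bootstrap's .list-inline
  // is not part of this file's compile scope.
  @extend .list-inline !optional;
}
```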
</issue>
<code>
[start of setup.py]
1 from setuptools import find_packages
2 from setuptools import setup
3
4 try:
5 with open('.version') as f:
6 VERSION = f.readline().strip()
7 except IOError:
8 VERSION = 'unknown'
9
10 setup(
11 name='ocfweb',
12 version=VERSION,
13 packages=find_packages(exclude=['debian', 'venv']),
14 include_package_data=True,
15 url='https://www.ocf.berkeley.edu/',
16 author='Open Computing Facility',
17 author_email='[email protected]',
18 install_requires=[
19 'cached-property',
20 'celery[redis]',
21 'django-bootstrap-form',
22 'django-ipware',
23 'django-mathfilters',
24 'django-redis',
25 'django>=1.10,<1.10.999',
26 'gunicorn',
27 'libsass<=0.10.0', # https://github.com/ocf/ocfweb/issues/128
28 'matplotlib',
29 'mistune',
30 'numpy',
31 'ocflib',
32 'pycrypto',
33 'pygments',
34 'pymysql',
35 'python-dateutil',
36 ],
37 sass_manifests={
38 'ocfweb': ('static/scss',), # XXX: must be tuple
39 },
40 entry_points={
41 'console_scripts': [
42 'ocfweb-run-periodic-functions = ocfweb.bin.run_periodic_functions:main',
43 ],
44 },
45 classifiers=[
46 'Programming Language :: Python :: 3',
47 ],
48 )
49
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,7 +24,7 @@
'django-redis',
'django>=1.10,<1.10.999',
'gunicorn',
- 'libsass<=0.10.0', # https://github.com/ocf/ocfweb/issues/128
+ 'libsass',
'matplotlib',
'mistune',
'numpy',
@@ -34,9 +34,6 @@
'pymysql',
'python-dateutil',
],
- sass_manifests={
- 'ocfweb': ('static/scss',), # XXX: must be tuple
- },
entry_points={
'console_scripts': [
'ocfweb-run-periodic-functions = ocfweb.bin.run_periodic_functions:main',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,7 +24,7 @@\n 'django-redis',\n 'django>=1.10,<1.10.999',\n 'gunicorn',\n- 'libsass<=0.10.0', # https://github.com/ocf/ocfweb/issues/128\n+ 'libsass',\n 'matplotlib',\n 'mistune',\n 'numpy',\n@@ -34,9 +34,6 @@\n 'pymysql',\n 'python-dateutil',\n ],\n- sass_manifests={\n- 'ocfweb': ('static/scss',), # XXX: must be tuple\n- },\n entry_points={\n 'console_scripts': [\n 'ocfweb-run-periodic-functions = ocfweb.bin.run_periodic_functions:main',\n", "issue": "libsass>0.10.0 breaks dh-virtuaenv\nhttps://jenkins.ocf.berkeley.edu/view/ocfweb-deploy/job/ocfweb-build-deb/232/console\n\n```\n# build sass, output static files\ndebian/ocfweb/usr/share/python/ocfweb/bin/python setup.py build_sass\nrunning build_sass\nbuilding 'ocfweb' sass\n/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/setuptools/dist.py:294: UserWarning: The version specified ('2016.01.30.23.27-git7cd5ebbd') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.\n \"details.\" % self.metadata.version\nTraceback (most recent call last):\n File \"setup.py\", line 44, in <module>\n 'ocfweb-run-periodic-functions = ocfweb.bin.run_periodic_functions:main',\n File \"/usr/lib/python3.4/distutils/core.py\", line 148, in setup\n dist.run_commands()\n File \"/usr/lib/python3.4/distutils/dist.py\", line 955, in run_commands\n self.run_command(cmd)\n File \"/usr/lib/python3.4/distutils/dist.py\", line 974, in run_command\n cmd_obj.run()\n File \"/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/distutils.py\", line 123, in run\n output_style=self.output_style\n File \"/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/builder.py\", line 177, in build\n ).values()\n File \"/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/builder.py\", line 73, in build_directory\n _root_css=_root_css)\n File \"/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sassutils/builder.py\", line 63, in build_directory\n include_paths=[_root_sass])\n File \"/opt/jenkins/slave/workspace/ocfweb-build-deb/src/debian/ocfweb/usr/share/python/ocfweb/lib/python3.4/site-packages/sass.py\", line 644, in compile\n raise CompileError(v)\nsass.CompileError: b'Error: \".page-staff-hours .hour .ocf-staffhours-faces\" failed to @extend \".list-inline\".\\n The selector \".list-inline\" was not found.\\n Use \"@extend .list-inline !optional\" if the extend should be able to fail.\\n on line 37 of ocfweb/static/scss/pages/staff-hours.scss\\n>> @extend .list-inline;\\n ------------^\\n'\ndebian/rules:17: recipe for target 'override_dh_virtualenv' failed\n```\n\nreproduces locally, but only with `make builddeb` and not the regular `make scss`\n\n", "before_files": [{"content": "from setuptools import find_packages\nfrom setuptools import setup\n\ntry:\n with open('.version') as f:\n VERSION = f.readline().strip()\nexcept IOError:\n VERSION = 'unknown'\n\nsetup(\n name='ocfweb',\n version=VERSION,\n packages=find_packages(exclude=['debian', 'venv']),\n include_package_data=True,\n url='https://www.ocf.berkeley.edu/',\n author='Open Computing Facility',\n 
author_email='[email protected]',\n install_requires=[\n 'cached-property',\n 'celery[redis]',\n 'django-bootstrap-form',\n 'django-ipware',\n 'django-mathfilters',\n 'django-redis',\n 'django>=1.10,<1.10.999',\n 'gunicorn',\n 'libsass<=0.10.0', # https://github.com/ocf/ocfweb/issues/128\n 'matplotlib',\n 'mistune',\n 'numpy',\n 'ocflib',\n 'pycrypto',\n 'pygments',\n 'pymysql',\n 'python-dateutil',\n ],\n sass_manifests={\n 'ocfweb': ('static/scss',), # XXX: must be tuple\n },\n entry_points={\n 'console_scripts': [\n 'ocfweb-run-periodic-functions = ocfweb.bin.run_periodic_functions:main',\n ],\n },\n classifiers=[\n 'Programming Language :: Python :: 3',\n ],\n)\n", "path": "setup.py"}]} | 1,717 | 199 |
gh_patches_debug_26182 | rasdani/github-patches | git_diff | scalableminds__webknossos-libs-641 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Persist log file
When running the cuber, there is usually a lot of output (overflowing terminal buffers). It would be great if all of that output were also persisted to a log file, so that it can easily be scanned and searched through afterwards.
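
A minimal sketch of what this could look like, assuming the standard-library `logging` module is already used for console output (the log path, format, and levels are placeholders, not a final design):
```python
import logging
import sys
import time
from pathlib import Path


def setup_logging(verbose: bool = False) -> None:
    log_path = Path("logs") / f"cuber_{time.strftime('%Y-%m-%d_%H%M%S')}.txt"
    log_path.parent.mkdir(parents=True, exist_ok=True)

    formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")

    console = logging.StreamHandler(sys.stdout)
    console.setLevel(logging.DEBUG if verbose else logging.INFO)
    console.setFormatter(formatter)

    file_handler = logging.FileHandler(log_path, mode="w", encoding="UTF-8")
    file_handler.setLevel(logging.DEBUG)
    file_handler.setFormatter(formatter)

    root = logging.getLogger()
    # The root logger must be at least as verbose as the most verbose handler.
    root.setLevel(logging.DEBUG)
    root.addHandler(console)
    root.addHandler(file_handler)
```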
</issue>
<code>
[start of webknossos/webknossos/utils.py]
1 import argparse
2 import calendar
3 import functools
4 import json
5 import logging
6 import os
7 import time
8 from concurrent.futures import as_completed
9 from concurrent.futures._base import Future
10 from datetime import datetime
11 from multiprocessing import cpu_count
12 from pathlib import Path
13 from typing import Any, Callable, Iterable, List, Optional, Union
14
15 import rich
16 from cluster_tools import WrappedProcessPoolExecutor, get_executor
17 from cluster_tools.schedulers.cluster_executor import ClusterExecutor
18 from rich.progress import Progress
19
20 times = {}
21
22
23 def time_start(identifier: str) -> None:
24 times[identifier] = time.time()
25 logging.debug("{} started".format(identifier))
26
27
28 def time_stop(identifier: str) -> None:
29 _time = times.pop(identifier)
30 logging.debug("{} took {:.8f}s".format(identifier, time.time() - _time))
31
32
33 def get_executor_for_args(
34 args: Optional[argparse.Namespace],
35 ) -> Union[ClusterExecutor, WrappedProcessPoolExecutor]:
36 executor = None
37 if args is None:
38 # For backwards compatibility with code from other packages
39 # we allow args to be None. In this case we are defaulting
40 # to these values:
41 jobs = cpu_count()
42 executor = get_executor("multiprocessing", max_workers=jobs)
43 logging.info("Using pool of {} workers.".format(jobs))
44 elif args.distribution_strategy == "multiprocessing":
45 # Also accept "processes" instead of job to be compatible with segmentation-tools.
46 # In the long run, the args should be unified and provided by the clustertools.
47 if "jobs" in args:
48 jobs = args.jobs
49 elif "processes" in args:
50 jobs = args.processes
51 else:
52 jobs = cpu_count()
53
54 executor = get_executor("multiprocessing", max_workers=jobs)
55 logging.info("Using pool of {} workers.".format(jobs))
56 elif args.distribution_strategy in ("slurm", "kubernetes"):
57 if args.job_resources is None:
58 resources_example = (
59 '{"mem": "1G"}'
60 if args.distribution_strategy == "slurm"
61 else '{"memory": "1G"}'
62 )
63 raise argparse.ArgumentTypeError(
64 f"Job resources (--job_resources) has to be provided when using {args.distribution_strategy} as distribution strategy. Example: --job_resources='{resources_example}'"
65 )
66
67 executor = get_executor(
68 args.distribution_strategy,
69 debug=True,
70 keep_logs=True,
71 job_resources=json.loads(args.job_resources),
72 )
73 logging.info(f"Using {args.distribution_strategy} cluster.")
74 else:
75 logging.error(
76 "Unknown distribution strategy: {}".format(args.distribution_strategy)
77 )
78
79 return executor
80
81
82 F = Callable[..., Any]
83
84
85 def named_partial(func: F, *args: Any, **kwargs: Any) -> F:
86 # Propagate __name__ and __doc__ attributes to partial function
87 partial_func = functools.partial(func, *args, **kwargs)
88 functools.update_wrapper(partial_func, func)
89 if hasattr(func, "__annotations__"):
90 # Generic types cannot be pickled in Python <= 3.6, see https://github.com/python/typing/issues/511
91 partial_func.__annotations__ = {}
92 return partial_func
93
94
95 def wait_and_ensure_success(
96 futures: List[Future], progress_desc: Optional[str] = None
97 ) -> List[Any]:
98 """Waits for all futures to complete and raises an exception
99 as soon as a future resolves with an error."""
100
101 results = []
102 if progress_desc is None:
103 for fut in as_completed(futures):
104 results.append(fut.result())
105 else:
106 with get_rich_progress() as progress:
107 task = progress.add_task(progress_desc, total=len(futures))
108 for fut in as_completed(futures):
109 results.append(fut.result())
110 progress.update(task, advance=1)
111 return results
112
113
114 def snake_to_camel_case(snake_case_name: str) -> str:
115 parts = snake_case_name.split("_")
116 return parts[0] + "".join(part.title() for part in parts[1:])
117
118
119 def get_chunks(arr: List[Any], chunk_size: int) -> Iterable[List[Any]]:
120 for i in range(0, len(arr), chunk_size):
121 yield arr[i : i + chunk_size]
122
123
124 def time_since_epoch_in_ms() -> int:
125 d = datetime.utcnow()
126 unixtime = calendar.timegm(d.utctimetuple())
127 return unixtime * 1000
128
129
130 def copy_directory_with_symlinks(
131 src_path: Path,
132 dst_path: Path,
133 ignore: Iterable[str] = tuple(),
134 make_relative: bool = False,
135 ) -> None:
136 """
137 Links all directories in src_path / dir_name to dst_path / dir_name.
138 """
139 for item in src_path.iterdir():
140 if item.name not in ignore:
141 symlink_path = dst_path / item.name
142 if make_relative:
143 rel_or_abspath = Path(os.path.relpath(item, symlink_path.parent))
144 else:
145 rel_or_abspath = item.resolve()
146 symlink_path.symlink_to(rel_or_abspath)
147
148
149 def setup_logging(args: argparse.Namespace) -> None:
150 logging.basicConfig(
151 level=(logging.DEBUG if args.verbose else logging.INFO),
152 format="%(asctime)s %(levelname)s %(message)s",
153 )
154
155
156 def add_verbose_flag(parser: argparse.ArgumentParser) -> None:
157 parser.add_argument(
158 "--verbose", help="Verbose output", dest="verbose", action="store_true"
159 )
160
161 parser.set_defaults(verbose=False)
162
163
164 def get_rich_progress() -> Progress:
165 return Progress(
166 "[progress.description]{task.description}",
167 rich.progress.BarColumn(),
168 "[progress.percentage]{task.percentage:>3.0f}%",
169 rich.progress.TimeElapsedColumn(),
170 "|",
171 rich.progress.TimeRemainingColumn(),
172 )
173
[end of webknossos/webknossos/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/webknossos/webknossos/utils.py b/webknossos/webknossos/utils.py
--- a/webknossos/webknossos/utils.py
+++ b/webknossos/webknossos/utils.py
@@ -4,6 +4,7 @@
import json
import logging
import os
+import sys
import time
from concurrent.futures import as_completed
from concurrent.futures._base import Future
@@ -147,10 +148,28 @@
def setup_logging(args: argparse.Namespace) -> None:
- logging.basicConfig(
- level=(logging.DEBUG if args.verbose else logging.INFO),
- format="%(asctime)s %(levelname)s %(message)s",
- )
+ log_path = Path(f"./logs/cuber_{time.strftime('%Y-%m-%d_%H%M%S')}.txt")
+
+ console_log_level = logging.DEBUG if args.verbose else logging.INFO
+ file_log_level = logging.DEBUG
+
+ logging_formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")
+
+ # Always set the global log level to the more verbose of console_log_level and
+ # file_log_level to allow to log with different log levels to console and files.
+ root_logger = logging.getLogger()
+ root_logger.setLevel(min(console_log_level, file_log_level))
+
+ console = logging.StreamHandler(sys.stdout)
+ console.setLevel(console_log_level)
+ console.setFormatter(logging_formatter)
+ root_logger.addHandler(console)
+
+ log_path.parent.mkdir(parents=True, exist_ok=True)
+ file_handler = logging.FileHandler(log_path, mode="w", encoding="UTF-8")
+ file_handler.setLevel(file_log_level)
+ file_handler.setFormatter(logging_formatter)
+ root_logger.addHandler(file_handler)
def add_verbose_flag(parser: argparse.ArgumentParser) -> None:
| {"golden_diff": "diff --git a/webknossos/webknossos/utils.py b/webknossos/webknossos/utils.py\n--- a/webknossos/webknossos/utils.py\n+++ b/webknossos/webknossos/utils.py\n@@ -4,6 +4,7 @@\n import json\n import logging\n import os\n+import sys\n import time\n from concurrent.futures import as_completed\n from concurrent.futures._base import Future\n@@ -147,10 +148,28 @@\n \n \n def setup_logging(args: argparse.Namespace) -> None:\n- logging.basicConfig(\n- level=(logging.DEBUG if args.verbose else logging.INFO),\n- format=\"%(asctime)s %(levelname)s %(message)s\",\n- )\n+ log_path = Path(f\"./logs/cuber_{time.strftime('%Y-%m-%d_%H%M%S')}.txt\")\n+\n+ console_log_level = logging.DEBUG if args.verbose else logging.INFO\n+ file_log_level = logging.DEBUG\n+\n+ logging_formatter = logging.Formatter(\"%(asctime)s %(levelname)s %(message)s\")\n+\n+ # Always set the global log level to the more verbose of console_log_level and\n+ # file_log_level to allow to log with different log levels to console and files.\n+ root_logger = logging.getLogger()\n+ root_logger.setLevel(min(console_log_level, file_log_level))\n+\n+ console = logging.StreamHandler(sys.stdout)\n+ console.setLevel(console_log_level)\n+ console.setFormatter(logging_formatter)\n+ root_logger.addHandler(console)\n+\n+ log_path.parent.mkdir(parents=True, exist_ok=True)\n+ file_handler = logging.FileHandler(log_path, mode=\"w\", encoding=\"UTF-8\")\n+ file_handler.setLevel(file_log_level)\n+ file_handler.setFormatter(logging_formatter)\n+ root_logger.addHandler(file_handler)\n \n \n def add_verbose_flag(parser: argparse.ArgumentParser) -> None:\n", "issue": "Persist log file\nWhen running the cuber, there usually is a lot of output (overflowing terminal buffers). It would be great if all of that output would be persisted in a log file as well to allow to easily scan and search through it, afterwards.\n", "before_files": [{"content": "import argparse\nimport calendar\nimport functools\nimport json\nimport logging\nimport os\nimport time\nfrom concurrent.futures import as_completed\nfrom concurrent.futures._base import Future\nfrom datetime import datetime\nfrom multiprocessing import cpu_count\nfrom pathlib import Path\nfrom typing import Any, Callable, Iterable, List, Optional, Union\n\nimport rich\nfrom cluster_tools import WrappedProcessPoolExecutor, get_executor\nfrom cluster_tools.schedulers.cluster_executor import ClusterExecutor\nfrom rich.progress import Progress\n\ntimes = {}\n\n\ndef time_start(identifier: str) -> None:\n times[identifier] = time.time()\n logging.debug(\"{} started\".format(identifier))\n\n\ndef time_stop(identifier: str) -> None:\n _time = times.pop(identifier)\n logging.debug(\"{} took {:.8f}s\".format(identifier, time.time() - _time))\n\n\ndef get_executor_for_args(\n args: Optional[argparse.Namespace],\n) -> Union[ClusterExecutor, WrappedProcessPoolExecutor]:\n executor = None\n if args is None:\n # For backwards compatibility with code from other packages\n # we allow args to be None. 
In this case we are defaulting\n # to these values:\n jobs = cpu_count()\n executor = get_executor(\"multiprocessing\", max_workers=jobs)\n logging.info(\"Using pool of {} workers.\".format(jobs))\n elif args.distribution_strategy == \"multiprocessing\":\n # Also accept \"processes\" instead of job to be compatible with segmentation-tools.\n # In the long run, the args should be unified and provided by the clustertools.\n if \"jobs\" in args:\n jobs = args.jobs\n elif \"processes\" in args:\n jobs = args.processes\n else:\n jobs = cpu_count()\n\n executor = get_executor(\"multiprocessing\", max_workers=jobs)\n logging.info(\"Using pool of {} workers.\".format(jobs))\n elif args.distribution_strategy in (\"slurm\", \"kubernetes\"):\n if args.job_resources is None:\n resources_example = (\n '{\"mem\": \"1G\"}'\n if args.distribution_strategy == \"slurm\"\n else '{\"memory\": \"1G\"}'\n )\n raise argparse.ArgumentTypeError(\n f\"Job resources (--job_resources) has to be provided when using {args.distribution_strategy} as distribution strategy. Example: --job_resources='{resources_example}'\"\n )\n\n executor = get_executor(\n args.distribution_strategy,\n debug=True,\n keep_logs=True,\n job_resources=json.loads(args.job_resources),\n )\n logging.info(f\"Using {args.distribution_strategy} cluster.\")\n else:\n logging.error(\n \"Unknown distribution strategy: {}\".format(args.distribution_strategy)\n )\n\n return executor\n\n\nF = Callable[..., Any]\n\n\ndef named_partial(func: F, *args: Any, **kwargs: Any) -> F:\n # Propagate __name__ and __doc__ attributes to partial function\n partial_func = functools.partial(func, *args, **kwargs)\n functools.update_wrapper(partial_func, func)\n if hasattr(func, \"__annotations__\"):\n # Generic types cannot be pickled in Python <= 3.6, see https://github.com/python/typing/issues/511\n partial_func.__annotations__ = {}\n return partial_func\n\n\ndef wait_and_ensure_success(\n futures: List[Future], progress_desc: Optional[str] = None\n) -> List[Any]:\n \"\"\"Waits for all futures to complete and raises an exception\n as soon as a future resolves with an error.\"\"\"\n\n results = []\n if progress_desc is None:\n for fut in as_completed(futures):\n results.append(fut.result())\n else:\n with get_rich_progress() as progress:\n task = progress.add_task(progress_desc, total=len(futures))\n for fut in as_completed(futures):\n results.append(fut.result())\n progress.update(task, advance=1)\n return results\n\n\ndef snake_to_camel_case(snake_case_name: str) -> str:\n parts = snake_case_name.split(\"_\")\n return parts[0] + \"\".join(part.title() for part in parts[1:])\n\n\ndef get_chunks(arr: List[Any], chunk_size: int) -> Iterable[List[Any]]:\n for i in range(0, len(arr), chunk_size):\n yield arr[i : i + chunk_size]\n\n\ndef time_since_epoch_in_ms() -> int:\n d = datetime.utcnow()\n unixtime = calendar.timegm(d.utctimetuple())\n return unixtime * 1000\n\n\ndef copy_directory_with_symlinks(\n src_path: Path,\n dst_path: Path,\n ignore: Iterable[str] = tuple(),\n make_relative: bool = False,\n) -> None:\n \"\"\"\n Links all directories in src_path / dir_name to dst_path / dir_name.\n \"\"\"\n for item in src_path.iterdir():\n if item.name not in ignore:\n symlink_path = dst_path / item.name\n if make_relative:\n rel_or_abspath = Path(os.path.relpath(item, symlink_path.parent))\n else:\n rel_or_abspath = item.resolve()\n symlink_path.symlink_to(rel_or_abspath)\n\n\ndef setup_logging(args: argparse.Namespace) -> None:\n logging.basicConfig(\n level=(logging.DEBUG if 
args.verbose else logging.INFO),\n format=\"%(asctime)s %(levelname)s %(message)s\",\n )\n\n\ndef add_verbose_flag(parser: argparse.ArgumentParser) -> None:\n parser.add_argument(\n \"--verbose\", help=\"Verbose output\", dest=\"verbose\", action=\"store_true\"\n )\n\n parser.set_defaults(verbose=False)\n\n\ndef get_rich_progress() -> Progress:\n return Progress(\n \"[progress.description]{task.description}\",\n rich.progress.BarColumn(),\n \"[progress.percentage]{task.percentage:>3.0f}%\",\n rich.progress.TimeElapsedColumn(),\n \"|\",\n rich.progress.TimeRemainingColumn(),\n )\n", "path": "webknossos/webknossos/utils.py"}]} | 2,269 | 400 |
gh_patches_debug_41281 | rasdani/github-patches | git_diff | mozilla__bugbug-310 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make NLP dependencies optional
Some dependencies are only needed in some images, so we should make them optional and only install them where necessary.
For example, spaCy is the biggest dependency, but it is currently not really used by any image.
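
One possible shape for this, assuming setuptools `extras_require` is acceptable (the core dependencies and the exact split below are illustrative, not a decision):
```python
from setuptools import find_packages, setup

setup(
    name="bugbug",
    packages=find_packages(exclude=["contrib", "docs", "tests"]),
    install_requires=["numpy", "scikit-learn"],  # illustrative core deps
    extras_require={
        # Heavy NLP stack, pulled in only with `pip install bugbug[nlp]`.
        # The spaCy model (en_core_web_sm) is typically installed separately,
        # e.g. with `python -m spacy download en_core_web_sm`.
        "nlp": ["spacy", "gensim"],
    },
)
```
Images that need NLP would then install `bugbug[nlp]`, and `bugbug/nlp.py` could guard its imports with a try/except that raises a clear error when the extra is not installed.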
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 import os
7
8 from setuptools import find_packages, setup
9
10 here = os.path.dirname(__file__)
11
12
13 def read_requirements(file_):
14 requires = []
15 links = []
16 with open(os.path.join(here, file_)) as f:
17 for line in f.readlines():
18 line = line.strip()
19
20 if line.startswith("https://"):
21 links.append(line + "-1.0.0")
22 extras = ""
23 if "[" in line:
24 extras = "[" + line.split("[")[1].split("]")[0] + "]"
25 line = line.split("#")[1].split("egg=")[1] + extras
26 elif line == "" or line.startswith("#") or line.startswith("-"):
27 continue
28 line = line.split("#")[0].strip()
29 requires.append(line)
30
31 return sorted(list(set(requires))), links
32
33
34 install_requires, dependency_links = read_requirements("requirements.txt")
35
36
37 with open(os.path.join(here, "VERSION")) as f:
38 version = f.read().strip()
39
40 setup(
41 name="bugbug",
42 version=version,
43 description="ML tools for Mozilla projects",
44 author="Marco Castelluccio",
45 author_email="[email protected]",
46 install_requires=install_requires,
47 dependency_links=dependency_links,
48 packages=find_packages(exclude=["contrib", "docs", "tests"]),
49 include_package_data=True,
50 license="MPL2",
51 entry_points={
52 "console_scripts": [
53 "bugbug-data-commits = scripts.commit_retriever:main",
54 "bugbug-data-bugzilla = scripts.bug_retriever:main",
55 "bugbug-train = scripts.trainer:main",
56 ]
57 },
58 )
59
[end of setup.py]
[start of bugbug/nlp.py]
1 # -*- coding: utf-8 -*-
2 # This Source Code Form is subject to the terms of the Mozilla Public
3 # License, v. 2.0. If a copy of the MPL was not distributed with this file,
4 # You can obtain one at http://mozilla.org/MPL/2.0/.
5
6 from collections import defaultdict
7 from functools import lru_cache
8
9 import numpy as np
10 import spacy
11 from gensim.models import KeyedVectors
12 from sklearn.base import BaseEstimator, TransformerMixin
13 from sklearn.feature_extraction.text import TfidfVectorizer
14 from spacy.tokenizer import Tokenizer
15
16 nlp = spacy.load("en_core_web_sm")
17
18
19 def spacy_token_lemmatizer(text):
20 if len(text) > nlp.max_length:
21 text = text[: nlp.max_length - 1]
22 doc = nlp(text)
23 return [token.lemma_ for token in doc]
24
25
26 class SpacyVectorizer(TfidfVectorizer):
27 def __init__(self, *args, **kwargs):
28 super().__init__(tokenizer=spacy_token_lemmatizer, *args, **kwargs)
29
30
31 @lru_cache()
32 def get_word_embeddings():
33 word_embeddings = KeyedVectors.load_word2vec_format("wiki-news-300d-1M-subword.vec")
34 word_embeddings.init_sims(replace=True)
35 return word_embeddings
36
37
38 class MeanEmbeddingTransformer(BaseEstimator, TransformerMixin):
39 def __init__(self):
40 self.model = get_word_embeddings()
41 self.dim = len(self.model["if"])
42
43 def fit(self, x, y=None):
44 return self
45
46 def transform(self, data):
47 tokenizer = Tokenizer(nlp.vocab)
48 return np.array(
49 [
50 np.mean(
51 [
52 self.model[w.text.lower()]
53 for w in words
54 if w.text.lower() in self.model
55 ]
56 or [np.zeros(self.dim)],
57 axis=0,
58 )
59 for words in tokenizer.pipe(data)
60 ]
61 )
62
63 def get_feature_names(self):
64 return np.array([f"_{i}" for i in range(self.dim)], dtype=object)
65
66
67 class TfidfMeanEmbeddingTransformer(MeanEmbeddingTransformer):
68 def __init__(self):
69 super().__init__()
70 self.word2weight = None
71
72 def fit(self, X, y=None):
73 tfidf = TfidfVectorizer(analyzer=lambda x: x)
74 tfidf.fit(X)
75
76 # If a word was never seen, it must be at least as infrequent as any of the known words.
77 # So, the default idf is the max of known idfs.
78 max_idf = max(tfidf.idf_)
79 self.word2weight = defaultdict(
80 lambda: max_idf, [(w, tfidf.idf_[i]) for w, i in tfidf.vocabulary_.items()]
81 )
82
83 return self
84
85 def transform(self, data):
86 tokenizer = Tokenizer(nlp.vocab)
87 return np.array(
88 [
89 np.mean(
90 [
91 self.model[w.text.lower()] * self.word2weight[w.text.lower()]
92 for w in words
93 if w.text.lower() in self.model
94 ]
95 or [np.zeros(self.dim)],
96 axis=0,
97 )
98 for words in tokenizer.pipe(data)
99 ]
100 )
101
[end of bugbug/nlp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bugbug/nlp.py b/bugbug/nlp.py
--- a/bugbug/nlp.py
+++ b/bugbug/nlp.py
@@ -3,17 +3,41 @@
# License, v. 2.0. If a copy of the MPL was not distributed with this file,
# You can obtain one at http://mozilla.org/MPL/2.0/.
+import sys
from collections import defaultdict
from functools import lru_cache
import numpy as np
-import spacy
-from gensim.models import KeyedVectors
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
-from spacy.tokenizer import Tokenizer
-nlp = spacy.load("en_core_web_sm")
+HAS_OPTIONAL_DEPENDENCIES = False
+
+try:
+ import spacy
+
+ from spacy.tokenizer import Tokenizer
+ from gensim.models import KeyedVectors
+
+ HAS_OPTIONAL_DEPENDENCIES = True
+except ImportError:
+ pass
+
+try:
+ if HAS_OPTIONAL_DEPENDENCIES:
+ nlp = spacy.load("en_core_web_sm")
+except OSError:
+ msg = (
+ "Spacy model is missing, install it with: "
+ f"{sys.executable} -m spacy download en_core_web_sm"
+ )
+ print(msg, file=sys.stderr)
+
+OPT_MSG_MISSING = (
+ "Optional dependencies are missing, install them with: pip install bugbug[nlp]\n"
+ "You might need also to download the models with: "
+ f"{sys.executable} -m spacy download en_core_web_sm"
+)
def spacy_token_lemmatizer(text):
@@ -25,6 +49,11 @@
class SpacyVectorizer(TfidfVectorizer):
def __init__(self, *args, **kwargs):
+
+ # Detect when the Spacy optional dependency is missing
+ if not HAS_OPTIONAL_DEPENDENCIES:
+ raise NotImplementedError(OPT_MSG_MISSING)
+
super().__init__(tokenizer=spacy_token_lemmatizer, *args, **kwargs)
@@ -37,6 +66,10 @@
class MeanEmbeddingTransformer(BaseEstimator, TransformerMixin):
def __init__(self):
+ # Detect when the Gensim optional dependency are missing
+ if not HAS_OPTIONAL_DEPENDENCIES:
+ raise NotImplementedError(OPT_MSG_MISSING)
+
self.model = get_word_embeddings()
self.dim = len(self.model["if"])
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -37,6 +37,20 @@
with open(os.path.join(here, "VERSION")) as f:
version = f.read().strip()
+# Read the extra requirements
+extras = ["nlp"]
+
+extras_require = {}
+
+for extra in extras:
+ extras_install, extra_links = read_requirements("extra-%s-requirements.txt" % extra)
+
+ # Merge the dependency links
+ dependency_links.extend(extra_links)
+
+ extras_require[extra] = extras_install
+
+
setup(
name="bugbug",
version=version,
@@ -44,6 +58,7 @@
author="Marco Castelluccio",
author_email="[email protected]",
install_requires=install_requires,
+ extras_require=extras_require,
dependency_links=dependency_links,
packages=find_packages(exclude=["contrib", "docs", "tests"]),
include_package_data=True,
@@ -55,4 +70,9 @@
"bugbug-train = scripts.trainer:main",
]
},
+ classifiers=[
+ "Programming Language :: Python :: 3.6",
+ "Programming Language :: Python :: 3 :: Only",
+ "License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)",
+ ],
)
| {"golden_diff": "diff --git a/bugbug/nlp.py b/bugbug/nlp.py\n--- a/bugbug/nlp.py\n+++ b/bugbug/nlp.py\n@@ -3,17 +3,41 @@\n # License, v. 2.0. If a copy of the MPL was not distributed with this file,\n # You can obtain one at http://mozilla.org/MPL/2.0/.\n \n+import sys\n from collections import defaultdict\n from functools import lru_cache\n \n import numpy as np\n-import spacy\n-from gensim.models import KeyedVectors\n from sklearn.base import BaseEstimator, TransformerMixin\n from sklearn.feature_extraction.text import TfidfVectorizer\n-from spacy.tokenizer import Tokenizer\n \n-nlp = spacy.load(\"en_core_web_sm\")\n+HAS_OPTIONAL_DEPENDENCIES = False\n+\n+try:\n+ import spacy\n+\n+ from spacy.tokenizer import Tokenizer\n+ from gensim.models import KeyedVectors\n+\n+ HAS_OPTIONAL_DEPENDENCIES = True\n+except ImportError:\n+ pass\n+\n+try:\n+ if HAS_OPTIONAL_DEPENDENCIES:\n+ nlp = spacy.load(\"en_core_web_sm\")\n+except OSError:\n+ msg = (\n+ \"Spacy model is missing, install it with: \"\n+ f\"{sys.executable} -m spacy download en_core_web_sm\"\n+ )\n+ print(msg, file=sys.stderr)\n+\n+OPT_MSG_MISSING = (\n+ \"Optional dependencies are missing, install them with: pip install bugbug[nlp]\\n\"\n+ \"You might need also to download the models with: \"\n+ f\"{sys.executable} -m spacy download en_core_web_sm\"\n+)\n \n \n def spacy_token_lemmatizer(text):\n@@ -25,6 +49,11 @@\n \n class SpacyVectorizer(TfidfVectorizer):\n def __init__(self, *args, **kwargs):\n+\n+ # Detect when the Spacy optional dependency is missing\n+ if not HAS_OPTIONAL_DEPENDENCIES:\n+ raise NotImplementedError(OPT_MSG_MISSING)\n+\n super().__init__(tokenizer=spacy_token_lemmatizer, *args, **kwargs)\n \n \n@@ -37,6 +66,10 @@\n \n class MeanEmbeddingTransformer(BaseEstimator, TransformerMixin):\n def __init__(self):\n+ # Detect when the Gensim optional dependency are missing\n+ if not HAS_OPTIONAL_DEPENDENCIES:\n+ raise NotImplementedError(OPT_MSG_MISSING)\n+\n self.model = get_word_embeddings()\n self.dim = len(self.model[\"if\"])\n \ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -37,6 +37,20 @@\n with open(os.path.join(here, \"VERSION\")) as f:\n version = f.read().strip()\n \n+# Read the extra requirements\n+extras = [\"nlp\"]\n+\n+extras_require = {}\n+\n+for extra in extras:\n+ extras_install, extra_links = read_requirements(\"extra-%s-requirements.txt\" % extra)\n+\n+ # Merge the dependency links\n+ dependency_links.extend(extra_links)\n+\n+ extras_require[extra] = extras_install\n+\n+\n setup(\n name=\"bugbug\",\n version=version,\n@@ -44,6 +58,7 @@\n author=\"Marco Castelluccio\",\n author_email=\"[email protected]\",\n install_requires=install_requires,\n+ extras_require=extras_require,\n dependency_links=dependency_links,\n packages=find_packages(exclude=[\"contrib\", \"docs\", \"tests\"]),\n include_package_data=True,\n@@ -55,4 +70,9 @@\n \"bugbug-train = scripts.trainer:main\",\n ]\n },\n+ classifiers=[\n+ \"Programming Language :: Python :: 3.6\",\n+ \"Programming Language :: Python :: 3 :: Only\",\n+ \"License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)\",\n+ ],\n )\n", "issue": "Make NLP dependencies optional\nSome dependencies are only needed in some images, we should make some dependencies optional so that they are only installed when necessary.\r\n\r\nFor example, Spacy is the biggest dependency, but it is currently not really used by any image.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla 
Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nimport os\n\nfrom setuptools import find_packages, setup\n\nhere = os.path.dirname(__file__)\n\n\ndef read_requirements(file_):\n requires = []\n links = []\n with open(os.path.join(here, file_)) as f:\n for line in f.readlines():\n line = line.strip()\n\n if line.startswith(\"https://\"):\n links.append(line + \"-1.0.0\")\n extras = \"\"\n if \"[\" in line:\n extras = \"[\" + line.split(\"[\")[1].split(\"]\")[0] + \"]\"\n line = line.split(\"#\")[1].split(\"egg=\")[1] + extras\n elif line == \"\" or line.startswith(\"#\") or line.startswith(\"-\"):\n continue\n line = line.split(\"#\")[0].strip()\n requires.append(line)\n\n return sorted(list(set(requires))), links\n\n\ninstall_requires, dependency_links = read_requirements(\"requirements.txt\")\n\n\nwith open(os.path.join(here, \"VERSION\")) as f:\n version = f.read().strip()\n\nsetup(\n name=\"bugbug\",\n version=version,\n description=\"ML tools for Mozilla projects\",\n author=\"Marco Castelluccio\",\n author_email=\"[email protected]\",\n install_requires=install_requires,\n dependency_links=dependency_links,\n packages=find_packages(exclude=[\"contrib\", \"docs\", \"tests\"]),\n include_package_data=True,\n license=\"MPL2\",\n entry_points={\n \"console_scripts\": [\n \"bugbug-data-commits = scripts.commit_retriever:main\",\n \"bugbug-data-bugzilla = scripts.bug_retriever:main\",\n \"bugbug-train = scripts.trainer:main\",\n ]\n },\n)\n", "path": "setup.py"}, {"content": "# -*- coding: utf-8 -*-\n# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this file,\n# You can obtain one at http://mozilla.org/MPL/2.0/.\n\nfrom collections import defaultdict\nfrom functools import lru_cache\n\nimport numpy as np\nimport spacy\nfrom gensim.models import KeyedVectors\nfrom sklearn.base import BaseEstimator, TransformerMixin\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom spacy.tokenizer import Tokenizer\n\nnlp = spacy.load(\"en_core_web_sm\")\n\n\ndef spacy_token_lemmatizer(text):\n if len(text) > nlp.max_length:\n text = text[: nlp.max_length - 1]\n doc = nlp(text)\n return [token.lemma_ for token in doc]\n\n\nclass SpacyVectorizer(TfidfVectorizer):\n def __init__(self, *args, **kwargs):\n super().__init__(tokenizer=spacy_token_lemmatizer, *args, **kwargs)\n\n\n@lru_cache()\ndef get_word_embeddings():\n word_embeddings = KeyedVectors.load_word2vec_format(\"wiki-news-300d-1M-subword.vec\")\n word_embeddings.init_sims(replace=True)\n return word_embeddings\n\n\nclass MeanEmbeddingTransformer(BaseEstimator, TransformerMixin):\n def __init__(self):\n self.model = get_word_embeddings()\n self.dim = len(self.model[\"if\"])\n\n def fit(self, x, y=None):\n return self\n\n def transform(self, data):\n tokenizer = Tokenizer(nlp.vocab)\n return np.array(\n [\n np.mean(\n [\n self.model[w.text.lower()]\n for w in words\n if w.text.lower() in self.model\n ]\n or [np.zeros(self.dim)],\n axis=0,\n )\n for words in tokenizer.pipe(data)\n ]\n )\n\n def get_feature_names(self):\n return np.array([f\"_{i}\" for i in range(self.dim)], dtype=object)\n\n\nclass TfidfMeanEmbeddingTransformer(MeanEmbeddingTransformer):\n def __init__(self):\n super().__init__()\n self.word2weight = None\n\n def fit(self, X, y=None):\n tfidf = TfidfVectorizer(analyzer=lambda x: x)\n tfidf.fit(X)\n\n # If a word was never seen, it must be at 
least as infrequent as any of the known words.\n # So, the default idf is the max of known idfs.\n max_idf = max(tfidf.idf_)\n self.word2weight = defaultdict(\n lambda: max_idf, [(w, tfidf.idf_[i]) for w, i in tfidf.vocabulary_.items()]\n )\n\n return self\n\n def transform(self, data):\n tokenizer = Tokenizer(nlp.vocab)\n return np.array(\n [\n np.mean(\n [\n self.model[w.text.lower()] * self.word2weight[w.text.lower()]\n for w in words\n if w.text.lower() in self.model\n ]\n or [np.zeros(self.dim)],\n axis=0,\n )\n for words in tokenizer.pipe(data)\n ]\n )\n", "path": "bugbug/nlp.py"}]} | 2,039 | 865 |
gh_patches_debug_11621 | rasdani/github-patches | git_diff | apache__airflow-12057 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception logging success function instead of failure
https://github.com/apache/airflow/blob/fdd9b6f65b608c516b8a062b058972d9a45ec9e3/airflow/sensors/sql_sensor.py#L97
</issue>
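The report is terse, but the golden diff below shows the slip: the "not callable" error raised for `failure` interpolates `self.success` instead of `self.failure`. A standalone illustration of why the original message is misleading (the variable values here are made up):

```python
# Made-up values, just to show what the two messages would print.
success = None
failure = "oops-not-callable"

buggy_msg = f"self.failure is present, but not callable -> {success}"
fixed_msg = f"self.failure is present, but not callable -> {failure}"

print(buggy_msg)  # ... -> None              (points at the wrong attribute)
print(fixed_msg)  # ... -> oops-not-callable (names the value that failed validation)
```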
<code>
[start of airflow/sensors/sql_sensor.py]
1 #
2 # Licensed to the Apache Software Foundation (ASF) under one
3 # or more contributor license agreements. See the NOTICE file
4 # distributed with this work for additional information
5 # regarding copyright ownership. The ASF licenses this file
6 # to you under the Apache License, Version 2.0 (the
7 # "License"); you may not use this file except in compliance
8 # with the License. You may obtain a copy of the License at
9 #
10 # http://www.apache.org/licenses/LICENSE-2.0
11 #
12 # Unless required by applicable law or agreed to in writing,
13 # software distributed under the License is distributed on an
14 # "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
15 # KIND, either express or implied. See the License for the
16 # specific language governing permissions and limitations
17 # under the License.
18
19 from typing import Iterable
20
21 from airflow.exceptions import AirflowException
22 from airflow.hooks.base_hook import BaseHook
23 from airflow.sensors.base_sensor_operator import BaseSensorOperator
24 from airflow.utils.decorators import apply_defaults
25
26
27 class SqlSensor(BaseSensorOperator):
28 """
29 Runs a sql statement repeatedly until a criteria is met. It will keep trying until
30 success or failure criteria are met, or if the first cell is not in (0, '0', '', None).
31 Optional success and failure callables are called with the first cell returned as the argument.
32 If success callable is defined the sensor will keep retrying until the criteria is met.
33 If failure callable is defined and the criteria is met the sensor will raise AirflowException.
34 Failure criteria is evaluated before success criteria. A fail_on_empty boolean can also
35 be passed to the sensor in which case it will fail if no rows have been returned
36
37 :param conn_id: The connection to run the sensor against
38 :type conn_id: str
39 :param sql: The sql to run. To pass, it needs to return at least one cell
40 that contains a non-zero / empty string value.
41 :type sql: str
42 :param parameters: The parameters to render the SQL query with (optional).
43 :type parameters: dict or iterable
44 :param success: Success criteria for the sensor is a Callable that takes first_cell
45 as the only argument, and returns a boolean (optional).
46 :type: success: Optional<Callable[[Any], bool]>
47 :param failure: Failure criteria for the sensor is a Callable that takes first_cell
48 as the only argument and return a boolean (optional).
49 :type: failure: Optional<Callable[[Any], bool]>
50 :param fail_on_empty: Explicitly fail on no rows returned.
51 :type: fail_on_empty: bool
52 """
53
54 template_fields: Iterable[str] = ('sql',)
55 template_ext: Iterable[str] = (
56 '.hql',
57 '.sql',
58 )
59 ui_color = '#7c7287'
60
61 @apply_defaults
62 def __init__(
63 self, *, conn_id, sql, parameters=None, success=None, failure=None, fail_on_empty=False, **kwargs
64 ):
65 self.conn_id = conn_id
66 self.sql = sql
67 self.parameters = parameters
68 self.success = success
69 self.failure = failure
70 self.fail_on_empty = fail_on_empty
71 super().__init__(**kwargs)
72
73 def _get_hook(self):
74 conn = BaseHook.get_connection(self.conn_id)
75
76 allowed_conn_type = {
77 'google_cloud_platform',
78 'jdbc',
79 'mssql',
80 'mysql',
81 'odbc',
82 'oracle',
83 'postgres',
84 'presto',
85 'snowflake',
86 'sqlite',
87 'vertica',
88 }
89 if conn.conn_type not in allowed_conn_type:
90 raise AirflowException(
91 "The connection type is not supported by SqlSensor. "
92 + "Supported connection types: {}".format(list(allowed_conn_type))
93 )
94 return conn.get_hook()
95
96 def poke(self, context):
97 hook = self._get_hook()
98
99 self.log.info('Poking: %s (with parameters %s)', self.sql, self.parameters)
100 records = hook.get_records(self.sql, self.parameters)
101 if not records:
102 if self.fail_on_empty:
103 raise AirflowException("No rows returned, raising as per fail_on_empty flag")
104 else:
105 return False
106 first_cell = records[0][0]
107 if self.failure is not None:
108 if callable(self.failure):
109 if self.failure(first_cell):
110 raise AirflowException(f"Failure criteria met. self.failure({first_cell}) returned True")
111 else:
112 raise AirflowException(f"self.failure is present, but not callable -> {self.success}")
113 if self.success is not None:
114 if callable(self.success):
115 return self.success(first_cell)
116 else:
117 raise AirflowException(f"self.success is present, but not callable -> {self.success}")
118 return bool(first_cell)
119
[end of airflow/sensors/sql_sensor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/airflow/sensors/sql_sensor.py b/airflow/sensors/sql_sensor.py
--- a/airflow/sensors/sql_sensor.py
+++ b/airflow/sensors/sql_sensor.py
@@ -109,7 +109,7 @@
if self.failure(first_cell):
raise AirflowException(f"Failure criteria met. self.failure({first_cell}) returned True")
else:
- raise AirflowException(f"self.failure is present, but not callable -> {self.success}")
+ raise AirflowException(f"self.failure is present, but not callable -> {self.failure}")
if self.success is not None:
if callable(self.success):
return self.success(first_cell)
| {"golden_diff": "diff --git a/airflow/sensors/sql_sensor.py b/airflow/sensors/sql_sensor.py\n--- a/airflow/sensors/sql_sensor.py\n+++ b/airflow/sensors/sql_sensor.py\n@@ -109,7 +109,7 @@\n if self.failure(first_cell):\n raise AirflowException(f\"Failure criteria met. self.failure({first_cell}) returned True\")\n else:\n- raise AirflowException(f\"self.failure is present, but not callable -> {self.success}\")\n+ raise AirflowException(f\"self.failure is present, but not callable -> {self.failure}\")\n if self.success is not None:\n if callable(self.success):\n return self.success(first_cell)\n", "issue": "Exception logging success function instead of failure\nhttps://github.com/apache/airflow/blob/fdd9b6f65b608c516b8a062b058972d9a45ec9e3/airflow/sensors/sql_sensor.py#L97\r\n\r\n\n", "before_files": [{"content": "#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements. See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership. The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied. See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nfrom typing import Iterable\n\nfrom airflow.exceptions import AirflowException\nfrom airflow.hooks.base_hook import BaseHook\nfrom airflow.sensors.base_sensor_operator import BaseSensorOperator\nfrom airflow.utils.decorators import apply_defaults\n\n\nclass SqlSensor(BaseSensorOperator):\n \"\"\"\n Runs a sql statement repeatedly until a criteria is met. It will keep trying until\n success or failure criteria are met, or if the first cell is not in (0, '0', '', None).\n Optional success and failure callables are called with the first cell returned as the argument.\n If success callable is defined the sensor will keep retrying until the criteria is met.\n If failure callable is defined and the criteria is met the sensor will raise AirflowException.\n Failure criteria is evaluated before success criteria. A fail_on_empty boolean can also\n be passed to the sensor in which case it will fail if no rows have been returned\n\n :param conn_id: The connection to run the sensor against\n :type conn_id: str\n :param sql: The sql to run. 
To pass, it needs to return at least one cell\n that contains a non-zero / empty string value.\n :type sql: str\n :param parameters: The parameters to render the SQL query with (optional).\n :type parameters: dict or iterable\n :param success: Success criteria for the sensor is a Callable that takes first_cell\n as the only argument, and returns a boolean (optional).\n :type: success: Optional<Callable[[Any], bool]>\n :param failure: Failure criteria for the sensor is a Callable that takes first_cell\n as the only argument and return a boolean (optional).\n :type: failure: Optional<Callable[[Any], bool]>\n :param fail_on_empty: Explicitly fail on no rows returned.\n :type: fail_on_empty: bool\n \"\"\"\n\n template_fields: Iterable[str] = ('sql',)\n template_ext: Iterable[str] = (\n '.hql',\n '.sql',\n )\n ui_color = '#7c7287'\n\n @apply_defaults\n def __init__(\n self, *, conn_id, sql, parameters=None, success=None, failure=None, fail_on_empty=False, **kwargs\n ):\n self.conn_id = conn_id\n self.sql = sql\n self.parameters = parameters\n self.success = success\n self.failure = failure\n self.fail_on_empty = fail_on_empty\n super().__init__(**kwargs)\n\n def _get_hook(self):\n conn = BaseHook.get_connection(self.conn_id)\n\n allowed_conn_type = {\n 'google_cloud_platform',\n 'jdbc',\n 'mssql',\n 'mysql',\n 'odbc',\n 'oracle',\n 'postgres',\n 'presto',\n 'snowflake',\n 'sqlite',\n 'vertica',\n }\n if conn.conn_type not in allowed_conn_type:\n raise AirflowException(\n \"The connection type is not supported by SqlSensor. \"\n + \"Supported connection types: {}\".format(list(allowed_conn_type))\n )\n return conn.get_hook()\n\n def poke(self, context):\n hook = self._get_hook()\n\n self.log.info('Poking: %s (with parameters %s)', self.sql, self.parameters)\n records = hook.get_records(self.sql, self.parameters)\n if not records:\n if self.fail_on_empty:\n raise AirflowException(\"No rows returned, raising as per fail_on_empty flag\")\n else:\n return False\n first_cell = records[0][0]\n if self.failure is not None:\n if callable(self.failure):\n if self.failure(first_cell):\n raise AirflowException(f\"Failure criteria met. self.failure({first_cell}) returned True\")\n else:\n raise AirflowException(f\"self.failure is present, but not callable -> {self.success}\")\n if self.success is not None:\n if callable(self.success):\n return self.success(first_cell)\n else:\n raise AirflowException(f\"self.success is present, but not callable -> {self.success}\")\n return bool(first_cell)\n", "path": "airflow/sensors/sql_sensor.py"}]} | 1,902 | 151 |
gh_patches_debug_8494 | rasdani/github-patches | git_diff | cloudtools__troposphere-1040 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Appsync ApiKey Expires needs to be an int
The `Expires` prop on the appsync `ApiKey` class needs to be an int; otherwise the decimal place causes CloudFormation to throw a `Can not deserialize value of type java.lang.Long from String` error.
</issue>
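The fix in the golden diff below swaps the bare `float` type for troposphere's `integer` validator. A minimal, self-contained sketch of the corrected resource definition, with `str` standing in for the repo's Python 2 style `basestring` so the snippet runs on Python 3:

```python
from troposphere import AWSObject
from troposphere.validators import integer


class ApiKey(AWSObject):
    resource_type = "AWS::AppSync::ApiKey"

    props = {
        "ApiId": (str, True),
        "Description": (str, False),
        # Declared with the integer validator instead of float, matching the
        # java.lang.Long that the AppSync API expects for the expiry timestamp.
        "Expires": (integer, False),
    }
```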
<code>
[start of troposphere/appsync.py]
1 # Copyright (c) 2012-2017, Mark Peek <[email protected]>
2 # All rights reserved.
3 #
4 # See LICENSE file for full license.
5
6 from . import AWSObject, AWSProperty
7 from .validators import boolean
8
9
10 class ApiKey(AWSObject):
11 resource_type = "AWS::AppSync::ApiKey"
12
13 props = {
14 'ApiId': (basestring, True),
15 'Description': (basestring, False),
16 'Expires': (float, False),
17 }
18
19
20 class DynamoDBConfig(AWSProperty):
21 props = {
22 'AwsRegion': (basestring, True),
23 'TableName': (basestring, True),
24 'UseCallerCredentials': (boolean, False),
25 }
26
27
28 class ElasticsearchConfig(AWSProperty):
29 props = {
30 'AwsRegion': (basestring, True),
31 'Endpoint': (basestring, True),
32 }
33
34
35 class LambdaConfig(AWSProperty):
36 props = {
37 'LambdaFunctionArn': (basestring, True),
38 }
39
40
41 class DataSource(AWSObject):
42 resource_type = "AWS::AppSync::DataSource"
43
44 props = {
45 'ApiId': (basestring, True),
46 'Description': (basestring, False),
47 'DynamoDBConfig': (DynamoDBConfig, False),
48 'ElasticsearchConfig': (ElasticsearchConfig, False),
49 'LambdaConfig': (LambdaConfig, False),
50 'Name': (basestring, True),
51 'ServiceRoleArn': (basestring, False),
52 'Type': (basestring, True),
53 }
54
55
56 class LogConfig(AWSProperty):
57 props = {
58 'CloudWatchLogsRoleArn': (basestring, False),
59 'FieldLogLevel': (basestring, False),
60 }
61
62
63 class OpenIDConnectConfig(AWSProperty):
64 props = {
65 'AuthTTL': (float, False),
66 'ClientId': (basestring, False),
67 'IatTTL': (float, False),
68 'Issuer': (basestring, True),
69 }
70
71
72 class UserPoolConfig(AWSProperty):
73 props = {
74 'AppIdClientRegex': (basestring, False),
75 'AwsRegion': (basestring, False),
76 'DefaultAction': (basestring, False),
77 'UserPoolId': (basestring, False),
78 }
79
80
81 class GraphQLApi(AWSObject):
82 resource_type = "AWS::AppSync::GraphQLApi"
83
84 props = {
85 'AuthenticationType': (basestring, True),
86 'LogConfig': (LogConfig, False),
87 'Name': (basestring, True),
88 'OpenIDConnectConfig': (OpenIDConnectConfig, False),
89 'UserPoolConfig': (UserPoolConfig, False),
90 }
91
92
93 class GraphQLSchema(AWSObject):
94 resource_type = "AWS::AppSync::GraphQLSchema"
95
96 props = {
97 'ApiId': (basestring, True),
98 'Definition': (basestring, False),
99 'DefinitionS3Location': (basestring, False),
100 }
101
102
103 class Resolver(AWSObject):
104 resource_type = "AWS::AppSync::Resolver"
105
106 props = {
107 'ApiId': (basestring, True),
108 'DataSourceName': (basestring, True),
109 'FieldName': (basestring, True),
110 'RequestMappingTemplate': (basestring, False),
111 'RequestMappingTemplateS3Location': (basestring, False),
112 'ResponseMappingTemplate': (basestring, False),
113 'ResponseMappingTemplateS3Location': (basestring, False),
114 'TypeName': (basestring, True),
115 }
116
[end of troposphere/appsync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/troposphere/appsync.py b/troposphere/appsync.py
--- a/troposphere/appsync.py
+++ b/troposphere/appsync.py
@@ -4,7 +4,7 @@
# See LICENSE file for full license.
from . import AWSObject, AWSProperty
-from .validators import boolean
+from .validators import boolean, integer
class ApiKey(AWSObject):
@@ -13,7 +13,7 @@
props = {
'ApiId': (basestring, True),
'Description': (basestring, False),
- 'Expires': (float, False),
+ 'Expires': (integer, False),
}
| {"golden_diff": "diff --git a/troposphere/appsync.py b/troposphere/appsync.py\n--- a/troposphere/appsync.py\n+++ b/troposphere/appsync.py\n@@ -4,7 +4,7 @@\n # See LICENSE file for full license.\n \n from . import AWSObject, AWSProperty\n-from .validators import boolean\n+from .validators import boolean, integer\n \n \n class ApiKey(AWSObject):\n@@ -13,7 +13,7 @@\n props = {\n 'ApiId': (basestring, True),\n 'Description': (basestring, False),\n- 'Expires': (float, False),\n+ 'Expires': (integer, False),\n }\n", "issue": "Appsync ApiKey Expires needs to be an int\nThe expires prop on the appsync ApiKey class needs to be an int otherwise the decimal place causes cloudformation to throw a `Can not deserialize value of type java.lang.Long from String` error. \n", "before_files": [{"content": "# Copyright (c) 2012-2017, Mark Peek <[email protected]>\n# All rights reserved.\n#\n# See LICENSE file for full license.\n\nfrom . import AWSObject, AWSProperty\nfrom .validators import boolean\n\n\nclass ApiKey(AWSObject):\n resource_type = \"AWS::AppSync::ApiKey\"\n\n props = {\n 'ApiId': (basestring, True),\n 'Description': (basestring, False),\n 'Expires': (float, False),\n }\n\n\nclass DynamoDBConfig(AWSProperty):\n props = {\n 'AwsRegion': (basestring, True),\n 'TableName': (basestring, True),\n 'UseCallerCredentials': (boolean, False),\n }\n\n\nclass ElasticsearchConfig(AWSProperty):\n props = {\n 'AwsRegion': (basestring, True),\n 'Endpoint': (basestring, True),\n }\n\n\nclass LambdaConfig(AWSProperty):\n props = {\n 'LambdaFunctionArn': (basestring, True),\n }\n\n\nclass DataSource(AWSObject):\n resource_type = \"AWS::AppSync::DataSource\"\n\n props = {\n 'ApiId': (basestring, True),\n 'Description': (basestring, False),\n 'DynamoDBConfig': (DynamoDBConfig, False),\n 'ElasticsearchConfig': (ElasticsearchConfig, False),\n 'LambdaConfig': (LambdaConfig, False),\n 'Name': (basestring, True),\n 'ServiceRoleArn': (basestring, False),\n 'Type': (basestring, True),\n }\n\n\nclass LogConfig(AWSProperty):\n props = {\n 'CloudWatchLogsRoleArn': (basestring, False),\n 'FieldLogLevel': (basestring, False),\n }\n\n\nclass OpenIDConnectConfig(AWSProperty):\n props = {\n 'AuthTTL': (float, False),\n 'ClientId': (basestring, False),\n 'IatTTL': (float, False),\n 'Issuer': (basestring, True),\n }\n\n\nclass UserPoolConfig(AWSProperty):\n props = {\n 'AppIdClientRegex': (basestring, False),\n 'AwsRegion': (basestring, False),\n 'DefaultAction': (basestring, False),\n 'UserPoolId': (basestring, False),\n }\n\n\nclass GraphQLApi(AWSObject):\n resource_type = \"AWS::AppSync::GraphQLApi\"\n\n props = {\n 'AuthenticationType': (basestring, True),\n 'LogConfig': (LogConfig, False),\n 'Name': (basestring, True),\n 'OpenIDConnectConfig': (OpenIDConnectConfig, False),\n 'UserPoolConfig': (UserPoolConfig, False),\n }\n\n\nclass GraphQLSchema(AWSObject):\n resource_type = \"AWS::AppSync::GraphQLSchema\"\n\n props = {\n 'ApiId': (basestring, True),\n 'Definition': (basestring, False),\n 'DefinitionS3Location': (basestring, False),\n }\n\n\nclass Resolver(AWSObject):\n resource_type = \"AWS::AppSync::Resolver\"\n\n props = {\n 'ApiId': (basestring, True),\n 'DataSourceName': (basestring, True),\n 'FieldName': (basestring, True),\n 'RequestMappingTemplate': (basestring, False),\n 'RequestMappingTemplateS3Location': (basestring, False),\n 'ResponseMappingTemplate': (basestring, False),\n 'ResponseMappingTemplateS3Location': (basestring, False),\n 'TypeName': (basestring, True),\n }\n", "path": "troposphere/appsync.py"}]} | 
1,607 | 146 |
gh_patches_debug_38012 | rasdani/github-patches | git_diff | mabel-dev__opteryx-1423 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
✨ improve CROSS JOIN UNNEST performance
Usage logs show that queries which use CROSS JOIN UNNEST are often amongst the slowest-running queries.
Whilst CROSS JOIN and UNNEST are relatively slow operations, opportunities to improve the performance should be investigated.
</issue>
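One concrete optimisation, visible in the golden diff for this record, is to drop null list entries and skip empty batches before the row indices are built, rather than doing work for rows that contribute nothing. A rough pyarrow sketch of that filtering step (the function name and signature are invented for illustration):

```python
import pyarrow


def prune_batch(left_block: pyarrow.RecordBatch, column_name: str):
    """Drop rows whose list column is null before the expensive unnest step."""
    column_data = left_block[column_name]
    valid_offsets = column_data.is_valid()   # remember validity before dropping nulls
    column_data = column_data.drop_null()
    if len(column_data) == 0:
        return None                          # nothing to unnest in this batch
    left_block = left_block.filter(valid_offsets)  # only pay for filtering when needed
    return left_block, column_data
```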
<code>
[start of opteryx/operators/cross_join_node.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
10 # See the License for the specific language governing permissions and
11 # limitations under the License.
12
13 """
14 Cross Join Node
15
16 This is a SQL Query Execution Plan Node.
17
18 This performs a CROSS JOIN - CROSS JOIN is not natively supported by PyArrow so this is written
19 here rather than calling the join() functions
20 """
21 from typing import Generator
22 from typing import Tuple
23
24 import numpy
25 import pyarrow
26 from orso.schema import FlatColumn
27
28 from opteryx.managers.expression import NodeType
29 from opteryx.models import Node
30 from opteryx.models import QueryProperties
31 from opteryx.operators import BasePlanNode
32
33 INTERNAL_BATCH_SIZE: int = 1000 # config
34 MAX_JOIN_SIZE: int = 500 # config
35
36
37 def _cross_join_unnest_column(
38 morsels: BasePlanNode, source: Node, target_column: FlatColumn
39 ) -> Generator[pyarrow.Table, None, None]:
40 """
41 Perform a cross join on an unnested column of pyarrow tables.
42
43 Args:
44 morsels: An iterable of `pyarrow.Table` objects to be cross joined.
45 source: The source node indicating the column.
46 target_column: The column to be unnested.
47
48 Returns:
49 A generator that yields the resulting `pyarrow.Table` objects.
50 """
51
52 # Check if the source node type is an identifier, raise error otherwise
53 if source.node_type != NodeType.IDENTIFIER:
54 raise NotImplementedError("Can only CROSS JOIN UNNEST on a column")
55
56 column_type = None
57 batch_size: int = INTERNAL_BATCH_SIZE
58
59 # Loop through each morsel from the morsels execution
60 for left_morsel in morsels.execute():
61 # Break the morsel into batches to avoid memory issues
62 for left_block in left_morsel.to_batches(max_chunksize=batch_size):
63 # Fetch the data of the column to be unnested
64 column_data = left_block[source.schema_column.identity]
65
66 # Set column_type if it hasn't been determined already
67 if column_type is None:
68 column_type = column_data.type.value_type
69
70 from opteryx.compiled import build_rows_indices_and_column
71
72 indices, new_column_data = build_rows_indices_and_column(column_data)
73
74 # If no new data was generated, skip to next iteration
75 if not new_column_data:
76 continue
77
78 new_block = left_morsel.take(indices)
79 new_block = new_block.append_column(target_column.identity, [new_column_data])
80 yield new_block
81
82 if batch_size == INTERNAL_BATCH_SIZE:
83 # we size the batches based on observations
84 batch_size = int((INTERNAL_BATCH_SIZE / new_block.nbytes) * 8 * 1024 * 1024)
85
86
87 def _cross_join_unnest_literal(
88 morsels: BasePlanNode, source: Tuple, target_column: FlatColumn
89 ) -> Generator[pyarrow.Table, None, None]:
90 joined_list_size = len(source)
91
92 # Loop through each morsel from the morsels execution
93 for left_morsel in morsels.execute():
94 # Break the morsel into batches to avoid memory issues
95 for left_block in left_morsel.to_batches(max_chunksize=INTERNAL_BATCH_SIZE):
96 left_block = pyarrow.Table.from_batches([left_block], schema=left_morsel.schema)
97 block_size = left_block.num_rows
98
99 # Repeat each row in the table n times
100 repeated_indices = numpy.repeat(numpy.arange(block_size), joined_list_size)
101 appended_table = left_block.take(repeated_indices)
102
103 # Tile the array to match the new number of rows
104 tiled_array = numpy.tile(source, block_size)
105
106 # Convert tiled_array to PyArrow array and append it to the table
107 array_column = pyarrow.array(tiled_array)
108 appended_table = appended_table.append_column(target_column.identity, array_column)
109
110 yield appended_table
111
112
113 def _cartesian_product(*arrays):
114 """
115 Cartesian product of arrays creates every combination of the elements in the arrays
116 """
117 array_count = len(arrays)
118 arr = numpy.empty([len(array) for array in arrays] + [array_count], dtype=numpy.int64)
119 for i, array in enumerate(numpy.ix_(*arrays)):
120 arr[..., i] = array
121 return numpy.hsplit(arr.reshape(-1, array_count), array_count)
122
123
124 def _cross_join(left, right):
125 """
126 A cross join is the cartesian product of two tables - this usually isn't very
127 useful, but it does allow you to the theta joins (non-equi joins)
128 """
129
130 def _chunker(seq_1, seq_2, size):
131 """
132 Chunk two equal length interables into size sized chunks
133
134 This returns a generator.
135 """
136 return (
137 (seq_1[pos : pos + size], seq_2[pos : pos + size]) for pos in range(0, len(seq_1), size)
138 )
139
140 from opteryx.third_party.pyarrow_ops import align_tables
141
142 for left_morsel in left.execute():
143 # Iterate through left table in chunks of size INTERNAL_BATCH_SIZE
144 for left_block in left_morsel.to_batches(max_chunksize=INTERNAL_BATCH_SIZE):
145 # Convert the chunk to a table to retain column names
146 left_block = pyarrow.Table.from_batches([left_block], schema=left_morsel.schema)
147
148 # Create an array of row indices for each table
149 left_array = numpy.arange(left_block.num_rows, dtype=numpy.int64)
150 right_array = numpy.arange(right.num_rows, dtype=numpy.int64)
151
152 # Calculate the cartesian product of the two arrays of row indices
153 left_align, right_align = _cartesian_product(left_array, right_array)
154
155 # Further break down the result into manageable chunks of size MAX_JOIN_SIZE
156 for left_chunk, right_chunk in _chunker(left_align, right_align, MAX_JOIN_SIZE):
157 # Align the tables using the specified chunks of row indices
158 table = align_tables(left_block, right, left_chunk.flatten(), right_chunk.flatten())
159
160 # Yield the resulting table to the caller
161 yield table
162
163
164 class CrossJoinNode(BasePlanNode):
165 """
166 Implements a SQL CROSS JOIN
167 """
168
169 def __init__(self, properties: QueryProperties, **config):
170 super().__init__(properties=properties)
171
172 self.source = config.get("column")
173
174 # do we have unnest details?
175 self._unnest_column = config.get("unnest_column")
176 self._unnest_target = config.get("unnest_target")
177
178 # handle variation in how the unnested column is represented
179 if self._unnest_column:
180 if self._unnest_column.node_type == NodeType.NESTED:
181 self._unnest_column = self._unnest_column.centre
182 # if we have a literal that's not a tuple, wrap it
183 if self._unnest_column.node_type == NodeType.LITERAL and not isinstance(
184 self._unnest_column.value, tuple
185 ):
186 self._unnest_column.value = tuple([self._unnest_column.value])
187
188 @property
189 def name(self): # pragma: no cover
190 return "Cross Join"
191
192 @property
193 def config(self): # pragma: no cover
194 return ""
195
196 def execute(self) -> Generator:
197 left_node = self._producers[0] # type:ignore
198 right_node = self._producers[1] # type:ignore
199 right_table = pyarrow.concat_tables(
200 right_node.execute(), promote_options="none"
201 ) # type:ignore
202
203 if self._unnest_column is None:
204 yield from _cross_join(left_node, right_table)
205
206 elif isinstance(self._unnest_column.value, tuple):
207 yield from _cross_join_unnest_literal(
208 morsels=left_node,
209 source=self._unnest_column.value,
210 target_column=self._unnest_target,
211 )
212 else:
213 yield from _cross_join_unnest_column(
214 morsels=left_node, source=self._unnest_column, target_column=self._unnest_target
215 )
216
[end of opteryx/operators/cross_join_node.py]
[start of opteryx/__version__.py]
1 __build__ = 271
2
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """
16 Store the version here so:
17 1) we don't load dependencies by storing it in __init__.py
18 2) we can import it in setup.py for the same reason
19 """
20 from enum import Enum # isort: skip
21
22
23 class VersionStatus(Enum):
24 ALPHA = "alpha"
25 BETA = "beta"
26 RELEASE = "release"
27
28
29 _major = 0
30 _minor = 13
31 _revision = 2
32 _status = VersionStatus.RELEASE
33
34 __author__ = "@joocer"
35 __version__ = f"{_major}.{_minor}.{_revision}" + (
36 f"-{_status.value}.{__build__}" if _status != VersionStatus.RELEASE else ""
37 )
38
[end of opteryx/__version__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opteryx/__version__.py b/opteryx/__version__.py
--- a/opteryx/__version__.py
+++ b/opteryx/__version__.py
@@ -1,4 +1,4 @@
-__build__ = 271
+__build__ = 276
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -27,9 +27,9 @@
_major = 0
-_minor = 13
-_revision = 2
-_status = VersionStatus.RELEASE
+_minor = 14
+_revision = 0
+_status = VersionStatus.ALPHA
__author__ = "@joocer"
__version__ = f"{_major}.{_minor}.{_revision}" + (
diff --git a/opteryx/operators/cross_join_node.py b/opteryx/operators/cross_join_node.py
--- a/opteryx/operators/cross_join_node.py
+++ b/opteryx/operators/cross_join_node.py
@@ -48,6 +48,7 @@
Returns:
A generator that yields the resulting `pyarrow.Table` objects.
"""
+ from opteryx.compiled import build_rows_indices_and_column
# Check if the source node type is an identifier, raise error otherwise
if source.node_type != NodeType.IDENTIFIER:
@@ -62,20 +63,26 @@
for left_block in left_morsel.to_batches(max_chunksize=batch_size):
# Fetch the data of the column to be unnested
column_data = left_block[source.schema_column.identity]
-
+ # we need the offsets before we drop the rows
+ valid_offsets = column_data.is_valid()
+ column_data = column_data.drop_null()
+ # if there's no valid records, continue to the next record
+ if len(column_data) == 0:
+ continue
+ # drop the rows here, wait until we know we need to
+ left_block = left_block.filter(valid_offsets)
# Set column_type if it hasn't been determined already
if column_type is None:
column_type = column_data.type.value_type
- from opteryx.compiled import build_rows_indices_and_column
-
- indices, new_column_data = build_rows_indices_and_column(column_data)
+ indices, new_column_data = build_rows_indices_and_column(column_data.to_numpy(False))
# If no new data was generated, skip to next iteration
- if not new_column_data:
+ if len(new_column_data) == 0:
continue
- new_block = left_morsel.take(indices)
+ new_block = left_block.take(indices)
+ new_block = pyarrow.Table.from_batches([new_block], schema=left_morsel.schema)
new_block = new_block.append_column(target_column.identity, [new_column_data])
yield new_block
| {"golden_diff": "diff --git a/opteryx/__version__.py b/opteryx/__version__.py\n--- a/opteryx/__version__.py\n+++ b/opteryx/__version__.py\n@@ -1,4 +1,4 @@\n-__build__ = 271\n+__build__ = 276\n \n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -27,9 +27,9 @@\n \n \n _major = 0\n-_minor = 13\n-_revision = 2\n-_status = VersionStatus.RELEASE\n+_minor = 14\n+_revision = 0\n+_status = VersionStatus.ALPHA\n \n __author__ = \"@joocer\"\n __version__ = f\"{_major}.{_minor}.{_revision}\" + (\ndiff --git a/opteryx/operators/cross_join_node.py b/opteryx/operators/cross_join_node.py\n--- a/opteryx/operators/cross_join_node.py\n+++ b/opteryx/operators/cross_join_node.py\n@@ -48,6 +48,7 @@\n Returns:\n A generator that yields the resulting `pyarrow.Table` objects.\n \"\"\"\n+ from opteryx.compiled import build_rows_indices_and_column\n \n # Check if the source node type is an identifier, raise error otherwise\n if source.node_type != NodeType.IDENTIFIER:\n@@ -62,20 +63,26 @@\n for left_block in left_morsel.to_batches(max_chunksize=batch_size):\n # Fetch the data of the column to be unnested\n column_data = left_block[source.schema_column.identity]\n-\n+ # we need the offsets before we drop the rows\n+ valid_offsets = column_data.is_valid()\n+ column_data = column_data.drop_null()\n+ # if there's no valid records, continue to the next record\n+ if len(column_data) == 0:\n+ continue\n+ # drop the rows here, wait until we know we need to\n+ left_block = left_block.filter(valid_offsets)\n # Set column_type if it hasn't been determined already\n if column_type is None:\n column_type = column_data.type.value_type\n \n- from opteryx.compiled import build_rows_indices_and_column\n-\n- indices, new_column_data = build_rows_indices_and_column(column_data)\n+ indices, new_column_data = build_rows_indices_and_column(column_data.to_numpy(False))\n \n # If no new data was generated, skip to next iteration\n- if not new_column_data:\n+ if len(new_column_data) == 0:\n continue\n \n- new_block = left_morsel.take(indices)\n+ new_block = left_block.take(indices)\n+ new_block = pyarrow.Table.from_batches([new_block], schema=left_morsel.schema)\n new_block = new_block.append_column(target_column.identity, [new_column_data])\n yield new_block\n", "issue": "\u2728 improve CROSS JOIN UNNEST performance \nUsage logs show queries which use CROSS JOIN UNNEST are often amongst the slowest running queries.\n\nWhilst CROSS JOIN and UNNEST are relatively slow operations, opportunities to improve the performance should be investigated.\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nCross Join Node\n\nThis is a SQL Query Execution Plan Node.\n\nThis performs a CROSS JOIN - CROSS JOIN is not natively supported by PyArrow so this is written\nhere rather than calling the join() functions\n\"\"\"\nfrom typing import Generator\nfrom typing import Tuple\n\nimport numpy\nimport pyarrow\nfrom orso.schema 
import FlatColumn\n\nfrom opteryx.managers.expression import NodeType\nfrom opteryx.models import Node\nfrom opteryx.models import QueryProperties\nfrom opteryx.operators import BasePlanNode\n\nINTERNAL_BATCH_SIZE: int = 1000 # config\nMAX_JOIN_SIZE: int = 500 # config\n\n\ndef _cross_join_unnest_column(\n morsels: BasePlanNode, source: Node, target_column: FlatColumn\n) -> Generator[pyarrow.Table, None, None]:\n \"\"\"\n Perform a cross join on an unnested column of pyarrow tables.\n\n Args:\n morsels: An iterable of `pyarrow.Table` objects to be cross joined.\n source: The source node indicating the column.\n target_column: The column to be unnested.\n\n Returns:\n A generator that yields the resulting `pyarrow.Table` objects.\n \"\"\"\n\n # Check if the source node type is an identifier, raise error otherwise\n if source.node_type != NodeType.IDENTIFIER:\n raise NotImplementedError(\"Can only CROSS JOIN UNNEST on a column\")\n\n column_type = None\n batch_size: int = INTERNAL_BATCH_SIZE\n\n # Loop through each morsel from the morsels execution\n for left_morsel in morsels.execute():\n # Break the morsel into batches to avoid memory issues\n for left_block in left_morsel.to_batches(max_chunksize=batch_size):\n # Fetch the data of the column to be unnested\n column_data = left_block[source.schema_column.identity]\n\n # Set column_type if it hasn't been determined already\n if column_type is None:\n column_type = column_data.type.value_type\n\n from opteryx.compiled import build_rows_indices_and_column\n\n indices, new_column_data = build_rows_indices_and_column(column_data)\n\n # If no new data was generated, skip to next iteration\n if not new_column_data:\n continue\n\n new_block = left_morsel.take(indices)\n new_block = new_block.append_column(target_column.identity, [new_column_data])\n yield new_block\n\n if batch_size == INTERNAL_BATCH_SIZE:\n # we size the batches based on observations\n batch_size = int((INTERNAL_BATCH_SIZE / new_block.nbytes) * 8 * 1024 * 1024)\n\n\ndef _cross_join_unnest_literal(\n morsels: BasePlanNode, source: Tuple, target_column: FlatColumn\n) -> Generator[pyarrow.Table, None, None]:\n joined_list_size = len(source)\n\n # Loop through each morsel from the morsels execution\n for left_morsel in morsels.execute():\n # Break the morsel into batches to avoid memory issues\n for left_block in left_morsel.to_batches(max_chunksize=INTERNAL_BATCH_SIZE):\n left_block = pyarrow.Table.from_batches([left_block], schema=left_morsel.schema)\n block_size = left_block.num_rows\n\n # Repeat each row in the table n times\n repeated_indices = numpy.repeat(numpy.arange(block_size), joined_list_size)\n appended_table = left_block.take(repeated_indices)\n\n # Tile the array to match the new number of rows\n tiled_array = numpy.tile(source, block_size)\n\n # Convert tiled_array to PyArrow array and append it to the table\n array_column = pyarrow.array(tiled_array)\n appended_table = appended_table.append_column(target_column.identity, array_column)\n\n yield appended_table\n\n\ndef _cartesian_product(*arrays):\n \"\"\"\n Cartesian product of arrays creates every combination of the elements in the arrays\n \"\"\"\n array_count = len(arrays)\n arr = numpy.empty([len(array) for array in arrays] + [array_count], dtype=numpy.int64)\n for i, array in enumerate(numpy.ix_(*arrays)):\n arr[..., i] = array\n return numpy.hsplit(arr.reshape(-1, array_count), array_count)\n\n\ndef _cross_join(left, right):\n \"\"\"\n A cross join is the cartesian product of two tables - this usually isn't 
very\n useful, but it does allow you to the theta joins (non-equi joins)\n \"\"\"\n\n def _chunker(seq_1, seq_2, size):\n \"\"\"\n Chunk two equal length interables into size sized chunks\n\n This returns a generator.\n \"\"\"\n return (\n (seq_1[pos : pos + size], seq_2[pos : pos + size]) for pos in range(0, len(seq_1), size)\n )\n\n from opteryx.third_party.pyarrow_ops import align_tables\n\n for left_morsel in left.execute():\n # Iterate through left table in chunks of size INTERNAL_BATCH_SIZE\n for left_block in left_morsel.to_batches(max_chunksize=INTERNAL_BATCH_SIZE):\n # Convert the chunk to a table to retain column names\n left_block = pyarrow.Table.from_batches([left_block], schema=left_morsel.schema)\n\n # Create an array of row indices for each table\n left_array = numpy.arange(left_block.num_rows, dtype=numpy.int64)\n right_array = numpy.arange(right.num_rows, dtype=numpy.int64)\n\n # Calculate the cartesian product of the two arrays of row indices\n left_align, right_align = _cartesian_product(left_array, right_array)\n\n # Further break down the result into manageable chunks of size MAX_JOIN_SIZE\n for left_chunk, right_chunk in _chunker(left_align, right_align, MAX_JOIN_SIZE):\n # Align the tables using the specified chunks of row indices\n table = align_tables(left_block, right, left_chunk.flatten(), right_chunk.flatten())\n\n # Yield the resulting table to the caller\n yield table\n\n\nclass CrossJoinNode(BasePlanNode):\n \"\"\"\n Implements a SQL CROSS JOIN\n \"\"\"\n\n def __init__(self, properties: QueryProperties, **config):\n super().__init__(properties=properties)\n\n self.source = config.get(\"column\")\n\n # do we have unnest details?\n self._unnest_column = config.get(\"unnest_column\")\n self._unnest_target = config.get(\"unnest_target\")\n\n # handle variation in how the unnested column is represented\n if self._unnest_column:\n if self._unnest_column.node_type == NodeType.NESTED:\n self._unnest_column = self._unnest_column.centre\n # if we have a literal that's not a tuple, wrap it\n if self._unnest_column.node_type == NodeType.LITERAL and not isinstance(\n self._unnest_column.value, tuple\n ):\n self._unnest_column.value = tuple([self._unnest_column.value])\n\n @property\n def name(self): # pragma: no cover\n return \"Cross Join\"\n\n @property\n def config(self): # pragma: no cover\n return \"\"\n\n def execute(self) -> Generator:\n left_node = self._producers[0] # type:ignore\n right_node = self._producers[1] # type:ignore\n right_table = pyarrow.concat_tables(\n right_node.execute(), promote_options=\"none\"\n ) # type:ignore\n\n if self._unnest_column is None:\n yield from _cross_join(left_node, right_table)\n\n elif isinstance(self._unnest_column.value, tuple):\n yield from _cross_join_unnest_literal(\n morsels=left_node,\n source=self._unnest_column.value,\n target_column=self._unnest_target,\n )\n else:\n yield from _cross_join_unnest_column(\n morsels=left_node, source=self._unnest_column, target_column=self._unnest_target\n )\n", "path": "opteryx/operators/cross_join_node.py"}, {"content": "__build__ = 271\n\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 
implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"\nStore the version here so:\n1) we don't load dependencies by storing it in __init__.py\n2) we can import it in setup.py for the same reason\n\"\"\"\nfrom enum import Enum # isort: skip\n\n\nclass VersionStatus(Enum):\n ALPHA = \"alpha\"\n BETA = \"beta\"\n RELEASE = \"release\"\n\n\n_major = 0\n_minor = 13\n_revision = 2\n_status = VersionStatus.RELEASE\n\n__author__ = \"@joocer\"\n__version__ = f\"{_major}.{_minor}.{_revision}\" + (\n f\"-{_status.value}.{__build__}\" if _status != VersionStatus.RELEASE else \"\"\n)\n", "path": "opteryx/__version__.py"}]} | 3,376 | 641 |
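The Opteryx cross-join operator embedded in the row above works by building the Cartesian product of left and right row indices and then materialising the joined rows in bounded chunks. The snippet below is a minimal, self-contained NumPy sketch of that index-pairing step only — it is illustrative, uses `repeat`/`tile` instead of the operator's `numpy.ix_`-based helper, and the function names are invented for this sketch rather than taken from Opteryx.

```python
import numpy as np

def cartesian_indices(n_left: int, n_right: int):
    # Pair every left row index with every right row index.
    left = np.repeat(np.arange(n_left, dtype=np.int64), n_right)
    right = np.tile(np.arange(n_right, dtype=np.int64), n_left)
    return left, right

def chunked(left: np.ndarray, right: np.ndarray, size: int):
    # Yield equal-length slices of the two index arrays, `size` pairs at a time,
    # so the full product never has to be turned into rows in one go.
    for pos in range(0, len(left), size):
        yield left[pos:pos + size], right[pos:pos + size]

left_idx, right_idx = cartesian_indices(3, 2)
# left_idx  -> [0 0 1 1 2 2]
# right_idx -> [0 1 0 1 0 1]
for l_chunk, r_chunk in chunked(left_idx, right_idx, 4):
    print(l_chunk, r_chunk)
```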
gh_patches_debug_7151 | rasdani/github-patches | git_diff | Lightning-AI__pytorch-lightning-1996 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Set precision=16 (using apex) would cause early stopping break
## 🐛 Bug
The current early stopping monitor initializes by checking whether the function monitor_op is equal to torch.lt.
`self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf`
https://github.com/PyTorchLightning/pytorch-lightning/blob/12138ced7c730b8b1a72f798ce74f1452ec8f4e0/pytorch_lightning/callbacks/early_stopping.py#L110
However, after initializing with apex, torch.lt appears to change, so this comparison is always false and `self.best` is initialized to -inf instead of +inf.
### To Reproduce
```
import torch
from pytorch_lightning.callbacks.early_stopping import EarlyStopping
import apex.amp as amp
es = EarlyStopping()
es.monitor_op == torch.lt
Out[6]: True
model = torch.nn.Linear(5, 5).to('cuda')
optimizers = torch.optim.Adam(model.parameters(), lr=1e-3)
amp.initialize(model, optimizers)
es.monitor_op == torch.lt
Out[22]: False
```
And this bug leads to `self.best` being initialized to -inf instead of inf
### Expected behavior
`self.best` should be initialized to inf instead of -inf.
### Environment
* CUDA:
- GPU:
- TITAN Xp
- Quadro P400
- available: True
- version: 10.1
* Packages:
- numpy: 1.18.1
- pyTorch_debug: False
- pyTorch_version: 1.4.0
- pytorch-lightning: 0.7.5
- tensorboard: 2.1.1
- tqdm: 4.43.0
* System:
- OS: Linux
- architecture:
- 64bit
-
- processor: x86_64
- python: 3.6.10
- version: #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019
### Additional context
I bumped into this bug after installing from the master branch a couple of days ago. I would guess the old version is fine, but I did not test it.
</issue>
<code>
[start of pytorch_lightning/callbacks/early_stopping.py]
1 r"""
2 Early Stopping
3 ==============
4
5 Monitor a validation metric and stop training when it stops improving.
6
7 """
8
9 import numpy as np
10 import torch
11
12 from pytorch_lightning import _logger as log
13 from pytorch_lightning.callbacks.base import Callback
14 from pytorch_lightning.utilities import rank_zero_warn
15
16 torch_inf = torch.tensor(np.Inf)
17
18
19 class EarlyStopping(Callback):
20 r"""
21
22 Args:
23 monitor: quantity to be monitored. Default: ``'val_loss'``.
24 min_delta: minimum change in the monitored quantity
25 to qualify as an improvement, i.e. an absolute
26 change of less than `min_delta`, will count as no
27 improvement. Default: ``0``.
28 patience: number of validation epochs with no improvement
29 after which training will be stopped. Default: ``0``.
30 verbose: verbosity mode. Default: ``False``.
31 mode: one of {auto, min, max}. In `min` mode,
32 training will stop when the quantity
33 monitored has stopped decreasing; in `max`
34 mode it will stop when the quantity
35 monitored has stopped increasing; in `auto`
36 mode, the direction is automatically inferred
37 from the name of the monitored quantity. Default: ``'auto'``.
38 strict: whether to crash the training if `monitor` is
39 not found in the validation metrics. Default: ``True``.
40
41 Example::
42
43 >>> from pytorch_lightning import Trainer
44 >>> from pytorch_lightning.callbacks import EarlyStopping
45 >>> early_stopping = EarlyStopping('val_loss')
46 >>> trainer = Trainer(early_stop_callback=early_stopping)
47 """
48 mode_dict = {
49 'min': torch.lt,
50 'max': torch.gt,
51 }
52
53 def __init__(self, monitor: str = 'val_loss', min_delta: float = 0.0, patience: int = 3,
54 verbose: bool = False, mode: str = 'auto', strict: bool = True):
55 super().__init__()
56 self.monitor = monitor
57 self.patience = patience
58 self.verbose = verbose
59 self.strict = strict
60 self.min_delta = min_delta
61 self.wait = 0
62 self.stopped_epoch = 0
63 self.mode = mode
64
65 if mode not in self.mode_dict:
66 if self.verbose > 0:
67 log.info(f'EarlyStopping mode {mode} is unknown, fallback to auto mode.')
68 self.mode = 'auto'
69
70 if self.mode == 'auto':
71 if self.monitor == 'acc':
72 self.mode = 'max'
73 else:
74 self.mode = 'min'
75 if self.verbose > 0:
76 log.info(f'EarlyStopping mode set to {self.mode} for monitoring {self.monitor}.')
77
78 self.min_delta *= 1 if self.monitor_op == torch.gt else -1
79
80 def _validate_condition_metric(self, logs):
81 """
82 Checks that the condition metric for early stopping is good
83 :param logs:
84 :return:
85 """
86 monitor_val = logs.get(self.monitor)
87 error_msg = (f'Early stopping conditioned on metric `{self.monitor}`'
88 f' which is not available. Either add `{self.monitor}` to the return of '
89 f' validation_epoch end or modify your EarlyStopping callback to use any of the '
90 f'following: `{"`, `".join(list(logs.keys()))}`')
91
92 if monitor_val is None:
93 if self.strict:
94 raise RuntimeError(error_msg)
95 if self.verbose > 0:
96 rank_zero_warn(error_msg, RuntimeWarning)
97
98 return False
99
100 return True
101
102 @property
103 def monitor_op(self):
104 return self.mode_dict[self.mode]
105
106 def on_train_start(self, trainer, pl_module):
107 # Allow instances to be re-used
108 self.wait = 0
109 self.stopped_epoch = 0
110 self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf
111
112 def on_validation_end(self, trainer, pl_module):
113 self._run_early_stopping_check(trainer, pl_module)
114
115 def _run_early_stopping_check(self, trainer, pl_module):
116 logs = trainer.callback_metrics
117 stop_training = False
118 if not self._validate_condition_metric(logs):
119 return stop_training
120
121 current = logs.get(self.monitor)
122 if not isinstance(current, torch.Tensor):
123 current = torch.tensor(current)
124
125 if self.monitor_op(current - self.min_delta, self.best):
126 self.best = current
127 self.wait = 0
128 else:
129 self.wait += 1
130 if self.wait >= self.patience:
131 self.stopped_epoch = trainer.current_epoch
132 stop_training = True
133 self.on_train_end(trainer, pl_module)
134
135 return stop_training
136
137 def on_train_end(self, trainer, pl_module):
138 if self.stopped_epoch > 0 and self.verbose > 0:
139 rank_zero_warn('Displayed epoch numbers by `EarlyStopping` start from "1" until v0.6.x,'
140 ' but will start from "0" in v0.8.0.', DeprecationWarning)
141 log.info(f'Epoch {self.stopped_epoch + 1:05d}: early stopping')
142
[end of pytorch_lightning/callbacks/early_stopping.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pytorch_lightning/callbacks/early_stopping.py b/pytorch_lightning/callbacks/early_stopping.py
--- a/pytorch_lightning/callbacks/early_stopping.py
+++ b/pytorch_lightning/callbacks/early_stopping.py
@@ -107,7 +107,7 @@
# Allow instances to be re-used
self.wait = 0
self.stopped_epoch = 0
- self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf
+ self.best = -torch_inf if self.monitor_op(torch.Tensor(1), torch.Tensor(2))[0].item() else torch_inf
def on_validation_end(self, trainer, pl_module):
self._run_early_stopping_check(trainer, pl_module)
| {"golden_diff": "diff --git a/pytorch_lightning/callbacks/early_stopping.py b/pytorch_lightning/callbacks/early_stopping.py\n--- a/pytorch_lightning/callbacks/early_stopping.py\n+++ b/pytorch_lightning/callbacks/early_stopping.py\n@@ -107,7 +107,7 @@\n # Allow instances to be re-used\n self.wait = 0\n self.stopped_epoch = 0\n- self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf\n+ self.best = -torch_inf if self.monitor_op(torch.Tensor(1), torch.Tensor(2))[0].item() else torch_inf\n \n def on_validation_end(self, trainer, pl_module):\n self._run_early_stopping_check(trainer, pl_module)\n", "issue": "Set precision=16 (using apex) would cause early stopping break\n## \ud83d\udc1b Bug\r\n\r\nThe current early stopping monitor initilize by comparing if the function monitor_op is equal to torch.lt.\r\n\r\n`self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf`\r\n\r\nhttps://github.com/PyTorchLightning/pytorch-lightning/blob/12138ced7c730b8b1a72f798ce74f1452ec8f4e0/pytorch_lightning/callbacks/early_stopping.py#L110\r\n\r\nHowever when intializing with the apex, it seems that the torch.lt would change and this evaluation would be always false and thus the `self.best` is intialized to -inf instead of +inf.\r\n\r\n### To Reproduce\r\n\r\n```\r\nimport torch\r\nfrom pytorch_lightning.callbacks.early_stopping import EarlyStopping\r\nimport apex.amp as amp\r\n\r\nes = EarlyStopping()\r\nes.monitor_op == torch.lt\r\nOut[6]: True\r\n\r\nmodel = torch.Linear(5, 5).to('cuda')\r\noptimizers = torch.optim.Adam(model.parameters(), lr=1e-3)\r\namp.initialize(model, optimizers)\r\n\r\nes.monitor_op == torch.lt\r\nOut[22]: False\r\n```\r\n\r\nAnd this bug leads to the initialization of `self.best` to be -inf instead of inf\r\n\r\n### Expected behavior\r\n\r\n`self.best` should be initialized to inf instead of -inf.\r\n\r\n### Environment\r\n\r\n* CUDA:\r\n - GPU:\r\n - TITAN Xp\r\n - Quadro P400\r\n - available: True\r\n - version: 10.1\r\n* Packages:\r\n - numpy: 1.18.1\r\n - pyTorch_debug: False\r\n - pyTorch_version: 1.4.0\r\n - pytorch-lightning: 0.7.5\r\n - tensorboard: 2.1.1\r\n - tqdm: 4.43.0\r\n* System:\r\n - OS: Linux\r\n - architecture:\r\n - 64bit\r\n -\r\n - processor: x86_64\r\n - python: 3.6.10\r\n - version: #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019\r\n### Additional context\r\n\r\nI bump into this bug after downloading from the master branch couple days ago. I would guess the old version is fine but did not test it.\n", "before_files": [{"content": "r\"\"\"\nEarly Stopping\n==============\n\nMonitor a validation metric and stop training when it stops improving.\n\n\"\"\"\n\nimport numpy as np\nimport torch\n\nfrom pytorch_lightning import _logger as log\nfrom pytorch_lightning.callbacks.base import Callback\nfrom pytorch_lightning.utilities import rank_zero_warn\n\ntorch_inf = torch.tensor(np.Inf)\n\n\nclass EarlyStopping(Callback):\n r\"\"\"\n\n Args:\n monitor: quantity to be monitored. Default: ``'val_loss'``.\n min_delta: minimum change in the monitored quantity\n to qualify as an improvement, i.e. an absolute\n change of less than `min_delta`, will count as no\n improvement. Default: ``0``.\n patience: number of validation epochs with no improvement\n after which training will be stopped. Default: ``0``.\n verbose: verbosity mode. Default: ``False``.\n mode: one of {auto, min, max}. 
In `min` mode,\n training will stop when the quantity\n monitored has stopped decreasing; in `max`\n mode it will stop when the quantity\n monitored has stopped increasing; in `auto`\n mode, the direction is automatically inferred\n from the name of the monitored quantity. Default: ``'auto'``.\n strict: whether to crash the training if `monitor` is\n not found in the validation metrics. Default: ``True``.\n\n Example::\n\n >>> from pytorch_lightning import Trainer\n >>> from pytorch_lightning.callbacks import EarlyStopping\n >>> early_stopping = EarlyStopping('val_loss')\n >>> trainer = Trainer(early_stop_callback=early_stopping)\n \"\"\"\n mode_dict = {\n 'min': torch.lt,\n 'max': torch.gt,\n }\n\n def __init__(self, monitor: str = 'val_loss', min_delta: float = 0.0, patience: int = 3,\n verbose: bool = False, mode: str = 'auto', strict: bool = True):\n super().__init__()\n self.monitor = monitor\n self.patience = patience\n self.verbose = verbose\n self.strict = strict\n self.min_delta = min_delta\n self.wait = 0\n self.stopped_epoch = 0\n self.mode = mode\n\n if mode not in self.mode_dict:\n if self.verbose > 0:\n log.info(f'EarlyStopping mode {mode} is unknown, fallback to auto mode.')\n self.mode = 'auto'\n\n if self.mode == 'auto':\n if self.monitor == 'acc':\n self.mode = 'max'\n else:\n self.mode = 'min'\n if self.verbose > 0:\n log.info(f'EarlyStopping mode set to {self.mode} for monitoring {self.monitor}.')\n\n self.min_delta *= 1 if self.monitor_op == torch.gt else -1\n\n def _validate_condition_metric(self, logs):\n \"\"\"\n Checks that the condition metric for early stopping is good\n :param logs:\n :return:\n \"\"\"\n monitor_val = logs.get(self.monitor)\n error_msg = (f'Early stopping conditioned on metric `{self.monitor}`'\n f' which is not available. Either add `{self.monitor}` to the return of '\n f' validation_epoch end or modify your EarlyStopping callback to use any of the '\n f'following: `{\"`, `\".join(list(logs.keys()))}`')\n\n if monitor_val is None:\n if self.strict:\n raise RuntimeError(error_msg)\n if self.verbose > 0:\n rank_zero_warn(error_msg, RuntimeWarning)\n\n return False\n\n return True\n\n @property\n def monitor_op(self):\n return self.mode_dict[self.mode]\n\n def on_train_start(self, trainer, pl_module):\n # Allow instances to be re-used\n self.wait = 0\n self.stopped_epoch = 0\n self.best = torch_inf if self.monitor_op == torch.lt else -torch_inf\n\n def on_validation_end(self, trainer, pl_module):\n self._run_early_stopping_check(trainer, pl_module)\n\n def _run_early_stopping_check(self, trainer, pl_module):\n logs = trainer.callback_metrics\n stop_training = False\n if not self._validate_condition_metric(logs):\n return stop_training\n\n current = logs.get(self.monitor)\n if not isinstance(current, torch.Tensor):\n current = torch.tensor(current)\n\n if self.monitor_op(current - self.min_delta, self.best):\n self.best = current\n self.wait = 0\n else:\n self.wait += 1\n if self.wait >= self.patience:\n self.stopped_epoch = trainer.current_epoch\n stop_training = True\n self.on_train_end(trainer, pl_module)\n\n return stop_training\n\n def on_train_end(self, trainer, pl_module):\n if self.stopped_epoch > 0 and self.verbose > 0:\n rank_zero_warn('Displayed epoch numbers by `EarlyStopping` start from \"1\" until v0.6.x,'\n ' but will start from \"0\" in v0.8.0.', DeprecationWarning)\n log.info(f'Epoch {self.stopped_epoch + 1:05d}: early stopping')\n", "path": "pytorch_lightning/callbacks/early_stopping.py"}]} | 2,549 | 171 |
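The patch above replaces the identity check `self.monitor_op == torch.lt` — which stops holding once apex wraps the comparison ops — with a functional probe: call the op on two tensors and branch on the boolean result. Below is a small standalone sketch of that probing idea; it uses explicit probe values (1.0 and 2.0) rather than the patch's `torch.Tensor(1)`/`torch.Tensor(2)`, and omits the surrounding callback machinery, so treat it as an illustration of the direction check rather than the exact Lightning code.

```python
import torch

mode_dict = {"min": torch.lt, "max": torch.gt}

def initial_best(mode: str) -> torch.Tensor:
    """Choose the starting 'best' value by asking the op which direction it prefers."""
    monitor_op = mode_dict[mode]
    torch_inf = torch.tensor(float("inf"))
    # torch.lt(1.0, 2.0) is True and torch.gt(1.0, 2.0) is False -- and this stays
    # true even if the function object has been wrapped during amp initialization.
    prefers_smaller = bool(monitor_op(torch.tensor(1.0), torch.tensor(2.0)))
    return torch_inf if prefers_smaller else -torch_inf

print(initial_best("min"))  # tensor(inf)  -> any finite metric improves on it
print(initial_best("max"))  # tensor(-inf)
```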
gh_patches_debug_23904 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-847 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
TweedieDevianceScore producing nan results when setting power=1.0
## 🐛 Bug
<!-- A clear and concise description of what the bug is. -->
### To Reproduce
```
>>> td=TweedieDevianceScore(power=1.0)
>>> targets=torch.tensor([0,1,0,1])
>>> preds=torch.tensor([0.1,0.1,0.1,0.1])
>>> td(preds,targets)
tensor(nan)
>>> td=TweedieDevianceScore(power=1.0001)
>>> td(preds,targets)
tensor(1.5038)
```
I assume this is a bug since the documentation says targets >= 0 (obviously Poisson data will often be 0).
</issue>
<code>
[start of torchmetrics/functional/regression/tweedie_deviance.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Tuple
15
16 import torch
17 from torch import Tensor
18
19 from torchmetrics.utilities.checks import _check_same_shape
20
21
22 def _tweedie_deviance_score_update(preds: Tensor, targets: Tensor, power: float = 0.0) -> Tuple[Tensor, Tensor]:
23 """Updates and returns variables required to compute Deviance Score for the given power. Checks for same shape
24 of input tensors.
25
26 Args:
27 preds: Predicted tensor
28 targets: Ground truth tensor
29 power: see :func:`tweedie_deviance_score`
30
31 Example:
32 >>> targets = torch.tensor([1.0, 2.0, 3.0, 4.0])
33 >>> preds = torch.tensor([4.0, 3.0, 2.0, 1.0])
34 >>> _tweedie_deviance_score_update(preds, targets, power=2)
35 (tensor(4.8333), tensor(4))
36 """
37 _check_same_shape(preds, targets)
38
39 zero_tensor = torch.zeros(preds.shape, device=preds.device)
40
41 if 0 < power < 1:
42 raise ValueError(f"Deviance Score is not defined for power={power}.")
43
44 if power == 0:
45 deviance_score = torch.pow(targets - preds, exponent=2)
46 elif power == 1:
47 # Poisson distribution
48 if torch.any(preds <= 0) or torch.any(targets < 0):
49 raise ValueError(
50 f"For power={power}, 'preds' has to be strictly positive and 'targets' cannot be negative."
51 )
52
53 deviance_score = 2 * (targets * torch.log(targets / preds) + preds - targets)
54 elif power == 2:
55 # Gamma distribution
56 if torch.any(preds <= 0) or torch.any(targets <= 0):
57 raise ValueError(f"For power={power}, both 'preds' and 'targets' have to be strictly positive.")
58
59 deviance_score = 2 * (torch.log(preds / targets) + (targets / preds) - 1)
60 else:
61 if power < 0:
62 if torch.any(preds <= 0):
63 raise ValueError(f"For power={power}, 'preds' has to be strictly positive.")
64 elif 1 < power < 2:
65 if torch.any(preds <= 0) or torch.any(targets < 0):
66 raise ValueError(
67 f"For power={power}, 'targets' has to be strictly positive and 'preds' cannot be negative."
68 )
69 else:
70 if torch.any(preds <= 0) or torch.any(targets <= 0):
71 raise ValueError(f"For power={power}, both 'preds' and 'targets' have to be strictly positive.")
72
73 term_1 = torch.pow(torch.max(targets, zero_tensor), 2 - power) / ((1 - power) * (2 - power))
74 term_2 = targets * torch.pow(preds, 1 - power) / (1 - power)
75 term_3 = torch.pow(preds, 2 - power) / (2 - power)
76 deviance_score = 2 * (term_1 - term_2 + term_3)
77
78 sum_deviance_score = torch.sum(deviance_score)
79 num_observations = torch.tensor(torch.numel(deviance_score), device=preds.device)
80
81 return sum_deviance_score, num_observations
82
83
84 def _tweedie_deviance_score_compute(sum_deviance_score: Tensor, num_observations: Tensor) -> Tensor:
85 """Computes Deviance Score.
86
87 Args:
88 sum_deviance_score: Sum of deviance scores accumalated until now.
89 num_observations: Number of observations encountered until now.
90
91 Example:
92 >>> targets = torch.tensor([1.0, 2.0, 3.0, 4.0])
93 >>> preds = torch.tensor([4.0, 3.0, 2.0, 1.0])
94 >>> sum_deviance_score, num_observations = _tweedie_deviance_score_update(preds, targets, power=2)
95 >>> _tweedie_deviance_score_compute(sum_deviance_score, num_observations)
96 tensor(1.2083)
97 """
98
99 return sum_deviance_score / num_observations
100
101
102 def tweedie_deviance_score(preds: Tensor, targets: Tensor, power: float = 0.0) -> Tensor:
103 r"""
104 Computes the `Tweedie Deviance Score`_ between targets and predictions:
105
106 .. math::
107 deviance\_score(\hat{y},y) =
108 \begin{cases}
109 (\hat{y} - y)^2, & \text{for }power=0\\
110 2 * (y * log(\frac{y}{\hat{y}}) + \hat{y} - y), & \text{for }power=1\\
111 2 * (log(\frac{\hat{y}}{y}) + \frac{y}{\hat{y}} - 1), & \text{for }power=2\\
112 2 * (\frac{(max(y,0))^{2}}{(1 - power)(2 - power)} - \frac{y(\hat{y})^{1 - power}}{1 - power} + \frac{(\hat{y})
113 ^{2 - power}}{2 - power}), & \text{otherwise}
114 \end{cases}
115
116 where :math:`y` is a tensor of targets values, and :math:`\hat{y}` is a tensor of predictions.
117
118 Args:
119 preds: Predicted tensor with shape ``(N,...)``
120 targets: Ground truth tensor with shape ``(N,...)``
121 power:
122 - power < 0 : Extreme stable distribution. (Requires: preds > 0.)
123 - power = 0 : Normal distribution. (Requires: targets and preds can be any real numbers.)
124 - power = 1 : Poisson distribution. (Requires: targets >= 0 and y_pred > 0.)
125 - 1 < p < 2 : Compound Poisson distribution. (Requires: targets >= 0 and preds > 0.)
126 - power = 2 : Gamma distribution. (Requires: targets > 0 and preds > 0.)
127 - power = 3 : Inverse Gaussian distribution. (Requires: targets > 0 and preds > 0.)
128 - otherwise : Positive stable distribution. (Requires: targets > 0 and preds > 0.)
129
130 Example:
131 >>> from torchmetrics.functional import tweedie_deviance_score
132 >>> targets = torch.tensor([1.0, 2.0, 3.0, 4.0])
133 >>> preds = torch.tensor([4.0, 3.0, 2.0, 1.0])
134 >>> tweedie_deviance_score(preds, targets, power=2)
135 tensor(1.2083)
136
137 """
138 sum_deviance_score, num_observations = _tweedie_deviance_score_update(preds, targets, power=power)
139 return _tweedie_deviance_score_compute(sum_deviance_score, num_observations)
140
[end of torchmetrics/functional/regression/tweedie_deviance.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchmetrics/functional/regression/tweedie_deviance.py b/torchmetrics/functional/regression/tweedie_deviance.py
--- a/torchmetrics/functional/regression/tweedie_deviance.py
+++ b/torchmetrics/functional/regression/tweedie_deviance.py
@@ -19,6 +19,13 @@
from torchmetrics.utilities.checks import _check_same_shape
+def xlogy(x: Tensor, y: Tensor) -> Tensor:
+ """Computes x * log(y). Returns 0 if x=0"""
+ res = x * torch.log(y)
+ res[x == 0] = 0.0
+ return res
+
+
def _tweedie_deviance_score_update(preds: Tensor, targets: Tensor, power: float = 0.0) -> Tuple[Tensor, Tensor]:
"""Updates and returns variables required to compute Deviance Score for the given power. Checks for same shape
of input tensors.
@@ -50,7 +57,7 @@
f"For power={power}, 'preds' has to be strictly positive and 'targets' cannot be negative."
)
- deviance_score = 2 * (targets * torch.log(targets / preds) + preds - targets)
+ deviance_score = 2 * (xlogy(targets, targets / preds) + preds - targets)
elif power == 2:
# Gamma distribution
if torch.any(preds <= 0) or torch.any(targets <= 0):
| {"golden_diff": "diff --git a/torchmetrics/functional/regression/tweedie_deviance.py b/torchmetrics/functional/regression/tweedie_deviance.py\n--- a/torchmetrics/functional/regression/tweedie_deviance.py\n+++ b/torchmetrics/functional/regression/tweedie_deviance.py\n@@ -19,6 +19,13 @@\n from torchmetrics.utilities.checks import _check_same_shape\n \n \n+def xlogy(x: Tensor, y: Tensor) -> Tensor:\n+ \"\"\"Computes x * log(y). Returns 0 if x=0\"\"\"\n+ res = x * torch.log(y)\n+ res[x == 0] = 0.0\n+ return res\n+\n+\n def _tweedie_deviance_score_update(preds: Tensor, targets: Tensor, power: float = 0.0) -> Tuple[Tensor, Tensor]:\n \"\"\"Updates and returns variables required to compute Deviance Score for the given power. Checks for same shape\n of input tensors.\n@@ -50,7 +57,7 @@\n f\"For power={power}, 'preds' has to be strictly positive and 'targets' cannot be negative.\"\n )\n \n- deviance_score = 2 * (targets * torch.log(targets / preds) + preds - targets)\n+ deviance_score = 2 * (xlogy(targets, targets / preds) + preds - targets)\n elif power == 2:\n # Gamma distribution\n if torch.any(preds <= 0) or torch.any(targets <= 0):\n", "issue": "TweedieDevianceScore producing nan results when setting power=1.0\n## \ud83d\udc1b Bug\r\n\r\n<!-- A clear and concise description of what the bug is. -->\r\n\r\n### To Reproduce\r\n\r\n```>>> td=TweedieDevianceScore(power=1.0)\r\n>>> targets=torch.tensor([0,1,0,1])\r\n>>> preds=torch.tensor([0.1,0.1,0.1,0.1])\r\n>>> td(preds,targets)\r\ntensor(nan)\r\n>>> td=TweedieDevianceScore(power=1.0001)\r\n>>> td(preds,targets)\r\ntensor(1.5038)```\r\n\r\nI assume this is a bug since the documentation says targets >= 0 (obviously Poisson data will often be 0). \n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Tuple\n\nimport torch\nfrom torch import Tensor\n\nfrom torchmetrics.utilities.checks import _check_same_shape\n\n\ndef _tweedie_deviance_score_update(preds: Tensor, targets: Tensor, power: float = 0.0) -> Tuple[Tensor, Tensor]:\n \"\"\"Updates and returns variables required to compute Deviance Score for the given power. 
Checks for same shape\n of input tensors.\n\n Args:\n preds: Predicted tensor\n targets: Ground truth tensor\n power: see :func:`tweedie_deviance_score`\n\n Example:\n >>> targets = torch.tensor([1.0, 2.0, 3.0, 4.0])\n >>> preds = torch.tensor([4.0, 3.0, 2.0, 1.0])\n >>> _tweedie_deviance_score_update(preds, targets, power=2)\n (tensor(4.8333), tensor(4))\n \"\"\"\n _check_same_shape(preds, targets)\n\n zero_tensor = torch.zeros(preds.shape, device=preds.device)\n\n if 0 < power < 1:\n raise ValueError(f\"Deviance Score is not defined for power={power}.\")\n\n if power == 0:\n deviance_score = torch.pow(targets - preds, exponent=2)\n elif power == 1:\n # Poisson distribution\n if torch.any(preds <= 0) or torch.any(targets < 0):\n raise ValueError(\n f\"For power={power}, 'preds' has to be strictly positive and 'targets' cannot be negative.\"\n )\n\n deviance_score = 2 * (targets * torch.log(targets / preds) + preds - targets)\n elif power == 2:\n # Gamma distribution\n if torch.any(preds <= 0) or torch.any(targets <= 0):\n raise ValueError(f\"For power={power}, both 'preds' and 'targets' have to be strictly positive.\")\n\n deviance_score = 2 * (torch.log(preds / targets) + (targets / preds) - 1)\n else:\n if power < 0:\n if torch.any(preds <= 0):\n raise ValueError(f\"For power={power}, 'preds' has to be strictly positive.\")\n elif 1 < power < 2:\n if torch.any(preds <= 0) or torch.any(targets < 0):\n raise ValueError(\n f\"For power={power}, 'targets' has to be strictly positive and 'preds' cannot be negative.\"\n )\n else:\n if torch.any(preds <= 0) or torch.any(targets <= 0):\n raise ValueError(f\"For power={power}, both 'preds' and 'targets' have to be strictly positive.\")\n\n term_1 = torch.pow(torch.max(targets, zero_tensor), 2 - power) / ((1 - power) * (2 - power))\n term_2 = targets * torch.pow(preds, 1 - power) / (1 - power)\n term_3 = torch.pow(preds, 2 - power) / (2 - power)\n deviance_score = 2 * (term_1 - term_2 + term_3)\n\n sum_deviance_score = torch.sum(deviance_score)\n num_observations = torch.tensor(torch.numel(deviance_score), device=preds.device)\n\n return sum_deviance_score, num_observations\n\n\ndef _tweedie_deviance_score_compute(sum_deviance_score: Tensor, num_observations: Tensor) -> Tensor:\n \"\"\"Computes Deviance Score.\n\n Args:\n sum_deviance_score: Sum of deviance scores accumalated until now.\n num_observations: Number of observations encountered until now.\n\n Example:\n >>> targets = torch.tensor([1.0, 2.0, 3.0, 4.0])\n >>> preds = torch.tensor([4.0, 3.0, 2.0, 1.0])\n >>> sum_deviance_score, num_observations = _tweedie_deviance_score_update(preds, targets, power=2)\n >>> _tweedie_deviance_score_compute(sum_deviance_score, num_observations)\n tensor(1.2083)\n \"\"\"\n\n return sum_deviance_score / num_observations\n\n\ndef tweedie_deviance_score(preds: Tensor, targets: Tensor, power: float = 0.0) -> Tensor:\n r\"\"\"\n Computes the `Tweedie Deviance Score`_ between targets and predictions:\n\n .. 
math::\n deviance\\_score(\\hat{y},y) =\n \\begin{cases}\n (\\hat{y} - y)^2, & \\text{for }power=0\\\\\n 2 * (y * log(\\frac{y}{\\hat{y}}) + \\hat{y} - y), & \\text{for }power=1\\\\\n 2 * (log(\\frac{\\hat{y}}{y}) + \\frac{y}{\\hat{y}} - 1), & \\text{for }power=2\\\\\n 2 * (\\frac{(max(y,0))^{2}}{(1 - power)(2 - power)} - \\frac{y(\\hat{y})^{1 - power}}{1 - power} + \\frac{(\\hat{y})\n ^{2 - power}}{2 - power}), & \\text{otherwise}\n \\end{cases}\n\n where :math:`y` is a tensor of targets values, and :math:`\\hat{y}` is a tensor of predictions.\n\n Args:\n preds: Predicted tensor with shape ``(N,...)``\n targets: Ground truth tensor with shape ``(N,...)``\n power:\n - power < 0 : Extreme stable distribution. (Requires: preds > 0.)\n - power = 0 : Normal distribution. (Requires: targets and preds can be any real numbers.)\n - power = 1 : Poisson distribution. (Requires: targets >= 0 and y_pred > 0.)\n - 1 < p < 2 : Compound Poisson distribution. (Requires: targets >= 0 and preds > 0.)\n - power = 2 : Gamma distribution. (Requires: targets > 0 and preds > 0.)\n - power = 3 : Inverse Gaussian distribution. (Requires: targets > 0 and preds > 0.)\n - otherwise : Positive stable distribution. (Requires: targets > 0 and preds > 0.)\n\n Example:\n >>> from torchmetrics.functional import tweedie_deviance_score\n >>> targets = torch.tensor([1.0, 2.0, 3.0, 4.0])\n >>> preds = torch.tensor([4.0, 3.0, 2.0, 1.0])\n >>> tweedie_deviance_score(preds, targets, power=2)\n tensor(1.2083)\n\n \"\"\"\n sum_deviance_score, num_observations = _tweedie_deviance_score_update(preds, targets, power=power)\n return _tweedie_deviance_score_compute(sum_deviance_score, num_observations)\n", "path": "torchmetrics/functional/regression/tweedie_deviance.py"}]} | 2,715 | 330 |
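The fix above makes the `power=1` (Poisson) branch safe at `targets == 0` by computing `x * log(y)` through an `xlogy`-style helper that returns 0 wherever `x == 0`, instead of letting `0 * log(0/ŷ)` produce NaN. The sketch below mirrors the shape of the patch but is illustrative only — it is not the torchmetrics API, and recent PyTorch releases expose an equivalent `torch.xlogy` built-in.

```python
import torch

def xlogy(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # x * log(y), with the result forced to 0 wherever x == 0
    # (the raw product there is 0 * (-inf) == nan).
    res = x * torch.log(y)
    res[x == 0] = 0.0
    return res

targets = torch.tensor([0.0, 1.0, 0.0, 1.0])
preds = torch.tensor([0.1, 0.1, 0.1, 0.1])

naive = 2 * (targets * torch.log(targets / preds) + preds - targets)
fixed = 2 * (xlogy(targets, targets / preds) + preds - targets)

print(naive.mean())  # tensor(nan)  -- the zero targets poison the mean
print(fixed.mean())  # finite (~1.50), close to the 1.5038 the issue gets with power=1.0001
```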
gh_patches_debug_2684 | rasdani/github-patches | git_diff | CTFd__CTFd-1918 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Users in admin scoreboard show user position instead of team position
In teams mode on the admin panel, users are shown with their individual position on the scoreboard instead of their team's position. We should be showing both.
</issue>
<code>
[start of CTFd/admin/users.py]
1 from flask import render_template, request, url_for
2 from sqlalchemy.sql import not_
3
4 from CTFd.admin import admin
5 from CTFd.models import Challenges, Tracking, Users
6 from CTFd.utils import get_config
7 from CTFd.utils.decorators import admins_only
8 from CTFd.utils.modes import TEAMS_MODE
9
10
11 @admin.route("/admin/users")
12 @admins_only
13 def users_listing():
14 q = request.args.get("q")
15 field = request.args.get("field")
16 page = abs(request.args.get("page", 1, type=int))
17 filters = []
18 users = []
19
20 if q:
21 # The field exists as an exposed column
22 if Users.__mapper__.has_property(field):
23 filters.append(getattr(Users, field).like("%{}%".format(q)))
24
25 if q and field == "ip":
26 users = (
27 Users.query.join(Tracking, Users.id == Tracking.user_id)
28 .filter(Tracking.ip.like("%{}%".format(q)))
29 .order_by(Users.id.asc())
30 .paginate(page=page, per_page=50)
31 )
32 else:
33 users = (
34 Users.query.filter(*filters)
35 .order_by(Users.id.asc())
36 .paginate(page=page, per_page=50)
37 )
38
39 args = dict(request.args)
40 args.pop("page", 1)
41
42 return render_template(
43 "admin/users/users.html",
44 users=users,
45 prev_page=url_for(request.endpoint, page=users.prev_num, **args),
46 next_page=url_for(request.endpoint, page=users.next_num, **args),
47 q=q,
48 field=field,
49 )
50
51
52 @admin.route("/admin/users/new")
53 @admins_only
54 def users_new():
55 return render_template("admin/users/new.html")
56
57
58 @admin.route("/admin/users/<int:user_id>")
59 @admins_only
60 def users_detail(user_id):
61 # Get user object
62 user = Users.query.filter_by(id=user_id).first_or_404()
63
64 # Get the user's solves
65 solves = user.get_solves(admin=True)
66
67 # Get challenges that the user is missing
68 if get_config("user_mode") == TEAMS_MODE:
69 if user.team:
70 all_solves = user.team.get_solves(admin=True)
71 else:
72 all_solves = user.get_solves(admin=True)
73 else:
74 all_solves = user.get_solves(admin=True)
75
76 solve_ids = [s.challenge_id for s in all_solves]
77 missing = Challenges.query.filter(not_(Challenges.id.in_(solve_ids))).all()
78
79 # Get IP addresses that the User has used
80 addrs = (
81 Tracking.query.filter_by(user_id=user_id).order_by(Tracking.date.desc()).all()
82 )
83
84 # Get Fails
85 fails = user.get_fails(admin=True)
86
87 # Get Awards
88 awards = user.get_awards(admin=True)
89
90 # Get user properties
91 score = user.get_score(admin=True)
92 place = user.get_place(admin=True)
93
94 return render_template(
95 "admin/users/user.html",
96 solves=solves,
97 user=user,
98 addrs=addrs,
99 score=score,
100 missing=missing,
101 place=place,
102 fails=fails,
103 awards=awards,
104 )
105
[end of CTFd/admin/users.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/CTFd/admin/users.py b/CTFd/admin/users.py
--- a/CTFd/admin/users.py
+++ b/CTFd/admin/users.py
@@ -88,8 +88,8 @@
awards = user.get_awards(admin=True)
# Get user properties
- score = user.get_score(admin=True)
- place = user.get_place(admin=True)
+ score = user.account.get_score(admin=True)
+ place = user.account.get_place(admin=True)
return render_template(
"admin/users/user.html",
| {"golden_diff": "diff --git a/CTFd/admin/users.py b/CTFd/admin/users.py\n--- a/CTFd/admin/users.py\n+++ b/CTFd/admin/users.py\n@@ -88,8 +88,8 @@\n awards = user.get_awards(admin=True)\n \n # Get user properties\n- score = user.get_score(admin=True)\n- place = user.get_place(admin=True)\n+ score = user.account.get_score(admin=True)\n+ place = user.account.get_place(admin=True)\n \n return render_template(\n \"admin/users/user.html\",\n", "issue": "Users in admin scoreboard show user position instead of team position\nIn teams mode on the admin panel, users are shown with their user position on the scoreboard instead of their teams position. We should be showing both. \n", "before_files": [{"content": "from flask import render_template, request, url_for\nfrom sqlalchemy.sql import not_\n\nfrom CTFd.admin import admin\nfrom CTFd.models import Challenges, Tracking, Users\nfrom CTFd.utils import get_config\nfrom CTFd.utils.decorators import admins_only\nfrom CTFd.utils.modes import TEAMS_MODE\n\n\[email protected](\"/admin/users\")\n@admins_only\ndef users_listing():\n q = request.args.get(\"q\")\n field = request.args.get(\"field\")\n page = abs(request.args.get(\"page\", 1, type=int))\n filters = []\n users = []\n\n if q:\n # The field exists as an exposed column\n if Users.__mapper__.has_property(field):\n filters.append(getattr(Users, field).like(\"%{}%\".format(q)))\n\n if q and field == \"ip\":\n users = (\n Users.query.join(Tracking, Users.id == Tracking.user_id)\n .filter(Tracking.ip.like(\"%{}%\".format(q)))\n .order_by(Users.id.asc())\n .paginate(page=page, per_page=50)\n )\n else:\n users = (\n Users.query.filter(*filters)\n .order_by(Users.id.asc())\n .paginate(page=page, per_page=50)\n )\n\n args = dict(request.args)\n args.pop(\"page\", 1)\n\n return render_template(\n \"admin/users/users.html\",\n users=users,\n prev_page=url_for(request.endpoint, page=users.prev_num, **args),\n next_page=url_for(request.endpoint, page=users.next_num, **args),\n q=q,\n field=field,\n )\n\n\[email protected](\"/admin/users/new\")\n@admins_only\ndef users_new():\n return render_template(\"admin/users/new.html\")\n\n\[email protected](\"/admin/users/<int:user_id>\")\n@admins_only\ndef users_detail(user_id):\n # Get user object\n user = Users.query.filter_by(id=user_id).first_or_404()\n\n # Get the user's solves\n solves = user.get_solves(admin=True)\n\n # Get challenges that the user is missing\n if get_config(\"user_mode\") == TEAMS_MODE:\n if user.team:\n all_solves = user.team.get_solves(admin=True)\n else:\n all_solves = user.get_solves(admin=True)\n else:\n all_solves = user.get_solves(admin=True)\n\n solve_ids = [s.challenge_id for s in all_solves]\n missing = Challenges.query.filter(not_(Challenges.id.in_(solve_ids))).all()\n\n # Get IP addresses that the User has used\n addrs = (\n Tracking.query.filter_by(user_id=user_id).order_by(Tracking.date.desc()).all()\n )\n\n # Get Fails\n fails = user.get_fails(admin=True)\n\n # Get Awards\n awards = user.get_awards(admin=True)\n\n # Get user properties\n score = user.get_score(admin=True)\n place = user.get_place(admin=True)\n\n return render_template(\n \"admin/users/user.html\",\n solves=solves,\n user=user,\n addrs=addrs,\n score=score,\n missing=missing,\n place=place,\n fails=fails,\n awards=awards,\n )\n", "path": "CTFd/admin/users.py"}]} | 1,495 | 120 |
gh_patches_debug_37122 | rasdani/github-patches | git_diff | graspologic-org__graspologic-559 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
singular_values_ attribute needed for MultipleASE class
## Expected Behavior
MultipleASE should have an attribute that stores singular values for ease of generating scree plots, consistent with the other embedding classes. Specifically, it should output the singular values of the concatenated adjacency spectral embeddings, consistent with Figure 10 of the [paper](https://arxiv.org/pdf/1906.10026.pdf) where it is introduced.
## Actual Behavior
MultipleASE has no attribute for singular values.
## Template Code
N/A
## Full Traceback
N/A
## Your Environment
* Python version: 3.7.6
* graspologic version: 0.1.0
## Proposed Labels
* Question
</issue>
<code>
[start of graspologic/embed/mase.py]
1 # Copyright (c) Microsoft Corporation and contributors.
2 # Licensed under the MIT License.
3
4 import numpy as np
5 from sklearn.utils.validation import check_is_fitted
6
7 from ..utils import import_graph, is_almost_symmetric
8 from .base import BaseEmbedMulti
9 from .svd import select_dimension, selectSVD
10
11
12 class MultipleASE(BaseEmbedMulti):
13 r"""
14 Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input
15 graphs with matched vertex sets.
16
17 For a population of undirected graphs, MASE assumes that the population of graphs
18 is sampled from :math:`VR^{(i)}V^T` where :math:`V \in \mathbb{R}^{n\times d}` and
19 :math:`R^{(i)} \in \mathbb{R}^{d\times d}`. Score matrices, :math:`R^{(i)}`, are
20 allowed to vary for each graph, but are symmetric. All graphs share a common a
21 latent position matrix :math:`V`.
22
23 For a population of directed graphs, MASE assumes that the population is sampled
24 from :math:`UR^{(i)}V^T` where :math:`U \in \mathbb{R}^{n\times d_1}`,
25 :math:`V \in \mathbb{R}^{n\times d_2}`, and
26 :math:`R^{(i)} \in \mathbb{R}^{d_1\times d_2}`. In this case, score matrices
27 :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a
28 common latent position matrices :math:`U` and :math:`V`.
29
30 Parameters
31 ----------
32 n_components : int or None, default = None
33 Desired dimensionality of output data. If "full",
34 ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be
35 ``< min(X.shape)``. If None, then optimal dimensions will be chosen by
36 :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.
37
38 n_elbows : int, optional, default: 2
39 If ``n_components`` is None, then compute the optimal embedding dimension using
40 :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.
41
42 algorithm : {'randomized' (default), 'full', 'truncated'}, optional
43 SVD solver to use:
44
45 - 'randomized'
46 Computes randomized svd using
47 :func:`sklearn.utils.extmath.randomized_svd`
48 - 'full'
49 Computes full svd using :func:`scipy.linalg.svd`
50 - 'truncated'
51 Computes truncated svd using :func:`scipy.sparse.linalg.svds`
52
53 n_iter : int, optional (default = 5)
54 Number of iterations for randomized SVD solver. Not used by 'full' or
55 'truncated'. The default is larger than the default in randomized_svd
56 to handle sparse matrices that may have large slowly decaying spectrum.
57
58 scaled : bool, optional (default=True)
59 Whether to scale individual eigenvectors with eigenvalues in first embedding
60 stage.
61
62 diag_aug : bool, optional (default = True)
63 Whether to replace the main diagonal of each adjacency matrices with
64 a vector corresponding to the degree (or sum of edge weights for a
65 weighted network) before embedding.
66
67 concat : bool, optional (default False)
68 If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions
69 along axis 1.
70
71
72 Attributes
73 ----------
74 n_graphs_ : int
75 Number of graphs
76
77 n_vertices_ : int
78 Number of vertices in each graph
79
80 latent_left_ : array, shape (n_samples, n_components)
81 Estimated left latent positions of the graph.
82
83 latent_right_ : array, shape (n_samples, n_components), or None
84 Estimated right latent positions of the graph. Only computed when the an input
85 graph is directed, or adjacency matrix is assymetric. Otherwise, None.
86
87 scores_ : array, shape (n_samples, n_components, n_components)
88 Estimated :math:`\hat{R}` matrices for each input graph.
89
90
91 Notes
92 -----
93 When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal
94 to ``n_components`` of :attr:`latent_right_`.
95 """
96
97 def __init__(
98 self,
99 n_components=None,
100 n_elbows=2,
101 algorithm="randomized",
102 n_iter=5,
103 scaled=True,
104 diag_aug=True,
105 concat=False,
106 ):
107 if not isinstance(scaled, bool):
108 msg = "scaled must be a boolean, not {}".format(scaled)
109 raise TypeError(msg)
110
111 super().__init__(
112 n_components=n_components,
113 n_elbows=n_elbows,
114 algorithm=algorithm,
115 n_iter=n_iter,
116 diag_aug=diag_aug,
117 concat=concat,
118 )
119 self.scaled = scaled
120
121 def _reduce_dim(self, graphs):
122 # first embed into log2(n_vertices) for each graph
123 n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))
124
125 # embed individual graphs
126 embeddings = [
127 selectSVD(
128 graph,
129 n_components=n_components,
130 algorithm=self.algorithm,
131 n_iter=self.n_iter,
132 )
133 for graph in graphs
134 ]
135 Us, Ds, Vs = zip(*embeddings)
136
137 # Choose the best embedding dimension for each graphs
138 if self.n_components is None:
139 embedding_dimensions = []
140 for D in Ds:
141 elbows, _ = select_dimension(D, n_elbows=self.n_elbows)
142 embedding_dimensions.append(elbows[-1])
143
144 # Choose the max of all of best embedding dimension of all graphs
145 best_dimension = int(np.ceil(np.max(embedding_dimensions)))
146 else:
147 best_dimension = self.n_components
148
149 if not self.scaled:
150 Us = np.hstack([U[:, :best_dimension] for U in Us])
151 Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])
152 else:
153 # Equivalent to ASE
154 Us = np.hstack(
155 [
156 U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))
157 for U, D in zip(Us, Ds)
158 ]
159 )
160 Vs = np.hstack(
161 [
162 V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))
163 for V, D in zip(Vs, Ds)
164 ]
165 )
166
167 # Second SVD for vertices
168 # The notation is slightly different than the paper
169 Uhat, _, _ = selectSVD(
170 Us,
171 n_components=self.n_components,
172 n_elbows=self.n_elbows,
173 algorithm=self.algorithm,
174 n_iter=self.n_iter,
175 )
176
177 Vhat, _, _ = selectSVD(
178 Vs,
179 n_components=self.n_components,
180 n_elbows=self.n_elbows,
181 algorithm=self.algorithm,
182 n_iter=self.n_iter,
183 )
184 return Uhat, Vhat
185
186 def fit(self, graphs, y=None):
187 """
188 Fit the model with graphs.
189
190 Parameters
191 ----------
192 graphs : list of nx.Graph or ndarray, or ndarray
193 If list of nx.Graph, each Graph must contain same number of nodes.
194 If list of ndarray, each array must have shape (n_vertices, n_vertices).
195 If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).
196
197 Returns
198 -------
199 self : object
200 Returns an instance of self.
201 """
202 graphs = self._check_input_graphs(graphs)
203
204 # Check if undirected
205 undirected = all(is_almost_symmetric(g) for g in graphs)
206
207 # Diag augment
208 if self.diag_aug:
209 graphs = self._diag_aug(graphs)
210
211 # embed
212 Uhat, Vhat = self._reduce_dim(graphs)
213 self.latent_left_ = Uhat
214 if not undirected:
215 self.latent_right_ = Vhat
216 self.scores_ = Uhat.T @ graphs @ Vhat
217 else:
218 self.latent_right_ = None
219 self.scores_ = Uhat.T @ graphs @ Uhat
220
221 return self
222
223 def fit_transform(self, graphs, y=None):
224 """
225 Fit the model with graphs and apply the embedding on graphs.
226 n_components is either automatically determined or based on user input.
227
228 Parameters
229 ----------
230 graphs : list of nx.Graph or ndarray, or ndarray
231 If list of nx.Graph, each Graph must contain same number of nodes.
232 If list of ndarray, each array must have shape (n_vertices, n_vertices).
233 If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).
234
235 Returns
236 -------
237 out : np.ndarray or length 2 tuple of np.ndarray.
238 If input graphs were symmetric shape (n_vertices, n_components).
239 If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).
240 The first corresponds to the left latent positions, and the second to the right latent positions.
241 When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.
242 """
243 return self._fit_transform(graphs)
244
[end of graspologic/embed/mase.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/graspologic/embed/mase.py b/graspologic/embed/mase.py
--- a/graspologic/embed/mase.py
+++ b/graspologic/embed/mase.py
@@ -87,6 +87,13 @@
scores_ : array, shape (n_samples, n_components, n_components)
Estimated :math:`\hat{R}` matrices for each input graph.
+ singular_values_ : array, shape (n_components) OR length 2 tuple of arrays
+ If input graph is undirected, equal to the singular values of the concatenated
+ adjacency spectral embeddings. If input graph is directed, :attr:`singular_values_`
+ is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to
+ the singular values of the concatenated left adjacency spectral embeddings,
+ and :attr:`singular_values_[1]` corresponds to
+ the singular values of the concatenated right adjacency spectral embeddings.
Notes
-----
@@ -166,7 +173,7 @@
# Second SVD for vertices
# The notation is slightly different than the paper
- Uhat, _, _ = selectSVD(
+ Uhat, sing_vals_left, _ = selectSVD(
Us,
n_components=self.n_components,
n_elbows=self.n_elbows,
@@ -174,14 +181,14 @@
n_iter=self.n_iter,
)
- Vhat, _, _ = selectSVD(
+ Vhat, sing_vals_right, _ = selectSVD(
Vs,
n_components=self.n_components,
n_elbows=self.n_elbows,
algorithm=self.algorithm,
n_iter=self.n_iter,
)
- return Uhat, Vhat
+ return Uhat, Vhat, sing_vals_left, sing_vals_right
def fit(self, graphs, y=None):
"""
@@ -209,14 +216,16 @@
graphs = self._diag_aug(graphs)
# embed
- Uhat, Vhat = self._reduce_dim(graphs)
+ Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)
self.latent_left_ = Uhat
if not undirected:
self.latent_right_ = Vhat
self.scores_ = Uhat.T @ graphs @ Vhat
+ self.singular_values_ = (sing_vals_left, sing_vals_right)
else:
self.latent_right_ = None
self.scores_ = Uhat.T @ graphs @ Uhat
+ self.singular_values_ = sing_vals_left
return self
| {"golden_diff": "diff --git a/graspologic/embed/mase.py b/graspologic/embed/mase.py\n--- a/graspologic/embed/mase.py\n+++ b/graspologic/embed/mase.py\n@@ -87,6 +87,13 @@\n scores_ : array, shape (n_samples, n_components, n_components)\n Estimated :math:`\\hat{R}` matrices for each input graph.\n \n+ singular_values_ : array, shape (n_components) OR length 2 tuple of arrays\n+ If input graph is undirected, equal to the singular values of the concatenated\n+ adjacency spectral embeddings. If input graph is directed, :attr:`singular_values_`\n+ is a tuple of length 2, where :attr:`singular_values_[0]` corresponds to\n+ the singular values of the concatenated left adjacency spectral embeddings,\n+ and :attr:`singular_values_[1]` corresponds to\n+ the singular values of the concatenated right adjacency spectral embeddings.\n \n Notes\n -----\n@@ -166,7 +173,7 @@\n \n # Second SVD for vertices\n # The notation is slightly different than the paper\n- Uhat, _, _ = selectSVD(\n+ Uhat, sing_vals_left, _ = selectSVD(\n Us,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n@@ -174,14 +181,14 @@\n n_iter=self.n_iter,\n )\n \n- Vhat, _, _ = selectSVD(\n+ Vhat, sing_vals_right, _ = selectSVD(\n Vs,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n- return Uhat, Vhat\n+ return Uhat, Vhat, sing_vals_left, sing_vals_right\n \n def fit(self, graphs, y=None):\n \"\"\"\n@@ -209,14 +216,16 @@\n graphs = self._diag_aug(graphs)\n \n # embed\n- Uhat, Vhat = self._reduce_dim(graphs)\n+ Uhat, Vhat, sing_vals_left, sing_vals_right = self._reduce_dim(graphs)\n self.latent_left_ = Uhat\n if not undirected:\n self.latent_right_ = Vhat\n self.scores_ = Uhat.T @ graphs @ Vhat\n+ self.singular_values_ = (sing_vals_left, sing_vals_right)\n else:\n self.latent_right_ = None\n self.scores_ = Uhat.T @ graphs @ Uhat\n+ self.singular_values_ = sing_vals_left\n \n return self\n", "issue": "singular_values_ attribute needed for MultipleASE class\n## Expected Behavior\r\nMultipleASE should have an attribute that stores singular values for ease of generating scree plots, consistent with the other embedding classes. Specifically, it should output the singular values of the concatenated adjacency spectral embeddings, consistent with Figure 10 of the [paper](https://arxiv.org/pdf/1906.10026.pdf) where it is introduced.\r\n\r\n## Actual Behavior\r\nMultipleASE has no attribute for singular values.\r\n\r\n## Template Code\r\nN/A\r\n\r\n## Full Traceback\r\nN/A\r\n\r\n## Your Environment\r\n* Python version: 3.7.6\r\n* graspologic version: 0.1.0\r\n\r\n## Proposed Labels\r\n * Question\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation and contributors.\n# Licensed under the MIT License.\n\nimport numpy as np\nfrom sklearn.utils.validation import check_is_fitted\n\nfrom ..utils import import_graph, is_almost_symmetric\nfrom .base import BaseEmbedMulti\nfrom .svd import select_dimension, selectSVD\n\n\nclass MultipleASE(BaseEmbedMulti):\n r\"\"\"\n Multiple Adjacency Spectral Embedding (MASE) embeds arbitrary number of input\n graphs with matched vertex sets.\n\n For a population of undirected graphs, MASE assumes that the population of graphs\n is sampled from :math:`VR^{(i)}V^T` where :math:`V \\in \\mathbb{R}^{n\\times d}` and\n :math:`R^{(i)} \\in \\mathbb{R}^{d\\times d}`. Score matrices, :math:`R^{(i)}`, are\n allowed to vary for each graph, but are symmetric. 
All graphs share a common a\n latent position matrix :math:`V`.\n\n For a population of directed graphs, MASE assumes that the population is sampled\n from :math:`UR^{(i)}V^T` where :math:`U \\in \\mathbb{R}^{n\\times d_1}`,\n :math:`V \\in \\mathbb{R}^{n\\times d_2}`, and\n :math:`R^{(i)} \\in \\mathbb{R}^{d_1\\times d_2}`. In this case, score matrices\n :math:`R^{(i)}` can be assymetric and non-square, but all graphs still share a\n common latent position matrices :math:`U` and :math:`V`.\n\n Parameters\n ----------\n n_components : int or None, default = None\n Desired dimensionality of output data. If \"full\",\n ``n_components`` must be ``<= min(X.shape)``. Otherwise, ``n_components`` must be\n ``< min(X.shape)``. If None, then optimal dimensions will be chosen by\n :func:`~graspologic.embed.select_dimension` using ``n_elbows`` argument.\n\n n_elbows : int, optional, default: 2\n If ``n_components`` is None, then compute the optimal embedding dimension using\n :func:`~graspologic.embed.select_dimension`. Otherwise, ignored.\n\n algorithm : {'randomized' (default), 'full', 'truncated'}, optional\n SVD solver to use:\n\n - 'randomized'\n Computes randomized svd using\n :func:`sklearn.utils.extmath.randomized_svd`\n - 'full'\n Computes full svd using :func:`scipy.linalg.svd`\n - 'truncated'\n Computes truncated svd using :func:`scipy.sparse.linalg.svds`\n\n n_iter : int, optional (default = 5)\n Number of iterations for randomized SVD solver. Not used by 'full' or\n 'truncated'. The default is larger than the default in randomized_svd\n to handle sparse matrices that may have large slowly decaying spectrum.\n\n scaled : bool, optional (default=True)\n Whether to scale individual eigenvectors with eigenvalues in first embedding\n stage.\n\n diag_aug : bool, optional (default = True)\n Whether to replace the main diagonal of each adjacency matrices with\n a vector corresponding to the degree (or sum of edge weights for a\n weighted network) before embedding.\n\n concat : bool, optional (default False)\n If graph(s) are directed, whether to concatenate each graph's left and right (out and in) latent positions\n along axis 1.\n\n\n Attributes\n ----------\n n_graphs_ : int\n Number of graphs\n\n n_vertices_ : int\n Number of vertices in each graph\n\n latent_left_ : array, shape (n_samples, n_components)\n Estimated left latent positions of the graph.\n\n latent_right_ : array, shape (n_samples, n_components), or None\n Estimated right latent positions of the graph. Only computed when the an input\n graph is directed, or adjacency matrix is assymetric. 
Otherwise, None.\n\n scores_ : array, shape (n_samples, n_components, n_components)\n Estimated :math:`\\hat{R}` matrices for each input graph.\n\n\n Notes\n -----\n When an input graph is directed, ``n_components`` of :attr:`latent_left_` may not be equal\n to ``n_components`` of :attr:`latent_right_`.\n \"\"\"\n\n def __init__(\n self,\n n_components=None,\n n_elbows=2,\n algorithm=\"randomized\",\n n_iter=5,\n scaled=True,\n diag_aug=True,\n concat=False,\n ):\n if not isinstance(scaled, bool):\n msg = \"scaled must be a boolean, not {}\".format(scaled)\n raise TypeError(msg)\n\n super().__init__(\n n_components=n_components,\n n_elbows=n_elbows,\n algorithm=algorithm,\n n_iter=n_iter,\n diag_aug=diag_aug,\n concat=concat,\n )\n self.scaled = scaled\n\n def _reduce_dim(self, graphs):\n # first embed into log2(n_vertices) for each graph\n n_components = int(np.ceil(np.log2(np.min(self.n_vertices_))))\n\n # embed individual graphs\n embeddings = [\n selectSVD(\n graph,\n n_components=n_components,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n for graph in graphs\n ]\n Us, Ds, Vs = zip(*embeddings)\n\n # Choose the best embedding dimension for each graphs\n if self.n_components is None:\n embedding_dimensions = []\n for D in Ds:\n elbows, _ = select_dimension(D, n_elbows=self.n_elbows)\n embedding_dimensions.append(elbows[-1])\n\n # Choose the max of all of best embedding dimension of all graphs\n best_dimension = int(np.ceil(np.max(embedding_dimensions)))\n else:\n best_dimension = self.n_components\n\n if not self.scaled:\n Us = np.hstack([U[:, :best_dimension] for U in Us])\n Vs = np.hstack([V.T[:, :best_dimension] for V in Vs])\n else:\n # Equivalent to ASE\n Us = np.hstack(\n [\n U[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for U, D in zip(Us, Ds)\n ]\n )\n Vs = np.hstack(\n [\n V.T[:, :best_dimension] @ np.diag(np.sqrt(D[:best_dimension]))\n for V, D in zip(Vs, Ds)\n ]\n )\n\n # Second SVD for vertices\n # The notation is slightly different than the paper\n Uhat, _, _ = selectSVD(\n Us,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n\n Vhat, _, _ = selectSVD(\n Vs,\n n_components=self.n_components,\n n_elbows=self.n_elbows,\n algorithm=self.algorithm,\n n_iter=self.n_iter,\n )\n return Uhat, Vhat\n\n def fit(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n self : object\n Returns an instance of self.\n \"\"\"\n graphs = self._check_input_graphs(graphs)\n\n # Check if undirected\n undirected = all(is_almost_symmetric(g) for g in graphs)\n\n # Diag augment\n if self.diag_aug:\n graphs = self._diag_aug(graphs)\n\n # embed\n Uhat, Vhat = self._reduce_dim(graphs)\n self.latent_left_ = Uhat\n if not undirected:\n self.latent_right_ = Vhat\n self.scores_ = Uhat.T @ graphs @ Vhat\n else:\n self.latent_right_ = None\n self.scores_ = Uhat.T @ graphs @ Uhat\n\n return self\n\n def fit_transform(self, graphs, y=None):\n \"\"\"\n Fit the model with graphs and apply the embedding on graphs.\n n_components is either automatically determined or based on user input.\n\n Parameters\n ----------\n graphs : list of nx.Graph or ndarray, or ndarray\n If list of nx.Graph, each Graph 
must contain same number of nodes.\n If list of ndarray, each array must have shape (n_vertices, n_vertices).\n If ndarray, then array must have shape (n_graphs, n_vertices, n_vertices).\n\n Returns\n -------\n out : np.ndarray or length 2 tuple of np.ndarray.\n If input graphs were symmetric shape (n_vertices, n_components).\n If graphs were directed and ``concat`` is False, returns tuple of two arrays (same shape as above).\n The first corresponds to the left latent positions, and the second to the right latent positions.\n When ``concat`` is True left and right (out and in) latent positions are concatenated along axis 1.\n \"\"\"\n return self._fit_transform(graphs)\n", "path": "graspologic/embed/mase.py"}]} | 3,379 | 589 |
gh_patches_debug_423 | rasdani/github-patches | git_diff | napalm-automation__napalm-1681 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
IOS-XR get_lldp_neighbor_details does not support 'N/A' capabilities
### Description of Issue/Question
The LLDP_CAPAB_TRANFORM_TABLE constant does not include the 'N/A' value that can appear under system capabilities, and there is no check for this value either.
```
<SystemCapabilities>
N/A
</SystemCapabilities>
```
*Note*: Please check https://guides.github.com/features/mastering-markdown/
to see how to properly format your request.
### Did you follow the steps from https://github.com/napalm-automation/napalm#faq
(Place an ``x`` between the square brackets where applicable)
- [x] Yes
- [ ] No
### Setup
### napalm version
(Paste verbatim output from `pip freeze | grep napalm` between quotes below)
```
root@salt-master1:/# python3 -m pip freeze | grep napalm
napalm==3.2.0
```
### Network operating system version
(Paste verbatim output from `show version` - or equivalent - between quotes below)
```
RP/0/RP0/CPU0:hbr1.12345.999#show version
Wed Sep 23 18:41:02.500 CEST
Cisco IOS XR Software, Version 7.1.2 LNT
Copyright (c) 2013-2020 by Cisco Systems, Inc.
Build Information:
Built By : ahoang
Built On : Sat Aug 29 20:04:27 UTC 2020
Build Host : iox-lnx-069
Workspace : /auto/srcarchive13/prod/7.1.2/ncs540l/ws
Version : 7.1.2
Label : 7.1.2
cisco NCS540L (C3708 @ 1.70GHz)
System uptime is 1 week, 1 day, 10 hours, 2 minutes
```
### Steps to Reproduce the Issue
### Error Traceback
(Paste the complete traceback of the exception between quotes below)
```
================= Traceback =================
Traceback (most recent call last):
File "/usr/local/bin/napalm", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py", line 308, in main
run_tests(args)
File "/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py", line 291, in run_tests
call_getter(device, args.method, **method_kwargs)
File "/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py", line 27, in wrapper
r = func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py", line 255, in call_getter
r = func(**kwargs)
File "/usr/local/lib/python3.8/dist-packages/napalm/iosxr/iosxr.py", line 852, in get_lldp_neighbors_detail
"remote_system_capab": napalm.base.helpers.transform_lldp_capab(
File "/usr/local/lib/python3.8/dist-packages/napalm/base/helpers.py", line 463, in transform_lldp_capab
[constants.LLDP_CAPAB_TRANFORM_TABLE[c.strip()] for c in capabilities]
File "/usr/local/lib/python3.8/dist-packages/napalm/base/helpers.py", line 463, in <listcomp>
[constants.LLDP_CAPAB_TRANFORM_TABLE[c.strip()] for c in capabilities]
KeyError: 'n/a'
```
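
For illustration, here is a minimal standalone sketch of the failure and one possible fix. The table is copied from `napalm/base/constants.py`; `transform_lldp_capab` below is a simplified stand-in for the real helper in `napalm.base.helpers`, not the actual implementation:

```python
# Standalone sketch -- the table is copied from napalm/base/constants.py,
# and transform_lldp_capab is a simplified stand-in for the real helper.
LLDP_CAPAB_TRANFORM_TABLE = {
    "o": "other",
    "p": "repeater",
    "b": "bridge",
    "w": "wlan-access-point",
    "r": "router",
    "t": "telephone",
    "c": "docsis-cable-device",
    "s": "station",
}


def transform_lldp_capab(capabilities):
    # Same lookup style as in the traceback above: unknown codes raise KeyError.
    return [LLDP_CAPAB_TRANFORM_TABLE[c.strip()] for c in capabilities]


try:
    transform_lldp_capab(["n/a"])  # what IOS-XR's "N/A" capability ends up as
except KeyError as exc:
    print("lookup failed:", exc)   # lookup failed: 'n/a'

# One possible fix: teach the table about the value (or fall back to "other").
LLDP_CAPAB_TRANFORM_TABLE["n/a"] = "other"
print(transform_lldp_capab(["n/a"]))  # ['other']
```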
</issue>
<code>
[start of napalm/base/constants.py]
1 """Constants to be used across NAPALM drivers."""
2
3 CONFIG_LOCK = True # must be changed soon!
4 TIMEOUT = 60 # seconds
5
6 INTERFACE_NULL_SPEED = -1
7
8 ACTION_TYPE_METHODS = ("ping", "traceroute")
9
10 BGP_NEIGHBOR_NULL_COUNTER = -1
11
12 SNMP_AUTHORIZATION_MODE_MAP = {"read-only": "ro", "read-write": "rw"}
13
14 ROUTE_COMMON_PROTOCOL_FIELDS = [
15 "destination",
16 "prefix_length",
17 "protocol",
18 "current_active",
19 "last_active",
20 "age",
21 "next_hop",
22 "outgoing_interface",
23 "selected_next_hop",
24 "preference",
25 "inactive_reason",
26 "routing_table",
27 ] # identifies the list of fileds common for all protocols
28
29 ROUTE_PROTOCOL_SPECIFIC_FIELDS = {
30 "bgp": [
31 "local_as",
32 "remote_as",
33 "as_path",
34 "communities",
35 "local_preference",
36 "preference2",
37 "remote_address",
38 "metric",
39 "metric2",
40 ],
41 "isis": ["level", "metric", "local_as"],
42 "static": [], # nothing specific to static routes
43 }
44
45 TRACEROUTE_TTL = 255
46 TRACEROUTE_SOURCE = ""
47 TRACEROUTE_TIMEOUT = 2
48 TRACEROUTE_NULL_HOST_NAME = "*"
49 TRACEROUTE_NULL_IP_ADDRESS = "*"
50 TRACEROUTE_VRF = ""
51
52 OPTICS_NULL_LEVEL = "-Inf"
53
54 PING_SOURCE = ""
55 PING_TTL = 255
56 PING_TIMEOUT = 2
57 PING_SIZE = 100
58 PING_COUNT = 5
59 PING_VRF = ""
60 PING_SOURCE_INTERFACE = ""
61
62 NETMIKO_MAP = {
63 "ios": "cisco_ios",
64 "nxos": "cisco_nxos",
65 "nxos_ssh": "cisco_nxos",
66 "iosxr": "cisco_iosxr",
67 "eos": "arista_eos",
68 "junos": "juniper_junos",
69 }
70 LLDP_CAPAB_TRANFORM_TABLE = {
71 "o": "other",
72 "p": "repeater",
73 "b": "bridge",
74 "w": "wlan-access-point",
75 "r": "router",
76 "t": "telephone",
77 "c": "docsis-cable-device",
78 "s": "station",
79 }
80
81 CISCO_SANITIZE_FILTERS = {
82 r"^(snmp-server community).*$": r"\1 <removed>",
83 r"^(snmp-server host \S+( vrf \S+)?( version (1|2c|3))?)\s+\S+((\s+\S*)*)\s*$": r"\1 <removed> \5", # noqa
84 r"^(snmp-server user \S+( \S+)? auth md5) \S+ (priv) \S+ (localizedkey( engineID \S+)?)\s*$": r"\1 <removed> \3 <removed> \4\5", # noqa
85 r"^(username .+ (password|secret) \d) .+$": r"\1 <removed>",
86 r"^(enable (password|secret)( level \d+)? \d) .+$": r"\1 <removed>",
87 r"^(\s+(?:password|secret)) (?:\d{1,2} )?\S+$": r"\1 <removed>",
88 r"^(.*wpa-psk ascii \d) (\S+)$": r"\1 <removed>",
89 r"^(.*key 7) (\d.+)$": r"\1 <removed>",
90 r"^(tacacs-server (.+ )?key) .+$": r"\1 <removed>",
91 r"^(crypto isakmp key) (\S+) (.*)$": r"\1 <removed> \3",
92 r"^(\s+ip ospf message-digest-key \d+ md5) .+$": r"\1 <removed>",
93 r"^(\s+ip ospf authentication-key) .+$": r"\1 <removed>",
94 r"^(\s+neighbor \S+ password) .+$": r"\1 <removed>",
95 r"^(\s+vrrp \d+ authentication text) .+$": r"\1 <removed>",
96 r"^(\s+standby \d+ authentication) .{1,8}$": r"\1 <removed>",
97 r"^(\s+standby \d+ authentication md5 key-string) .+?( timeout \d+)?$": r"\1 <removed> \2",
98 r"^(\s+key-string) .+$": r"\1 <removed>",
99 r"^((tacacs|radius) server [^\n]+\n(\s+[^\n]+\n)*\s+key) [^\n]+$": r"\1 <removed>",
100 r"^(\s+ppp (chap|pap) password \d) .+$": r"\1 <removed>",
101 }
102
[end of napalm/base/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/napalm/base/constants.py b/napalm/base/constants.py
--- a/napalm/base/constants.py
+++ b/napalm/base/constants.py
@@ -69,6 +69,7 @@
}
LLDP_CAPAB_TRANFORM_TABLE = {
"o": "other",
+ "n/a": "other",
"p": "repeater",
"b": "bridge",
"w": "wlan-access-point",
| {"golden_diff": "diff --git a/napalm/base/constants.py b/napalm/base/constants.py\n--- a/napalm/base/constants.py\n+++ b/napalm/base/constants.py\n@@ -69,6 +69,7 @@\n }\n LLDP_CAPAB_TRANFORM_TABLE = {\n \"o\": \"other\",\n+ \"n/a\": \"other\",\n \"p\": \"repeater\",\n \"b\": \"bridge\",\n \"w\": \"wlan-access-point\",\n", "issue": "IOS-XR get_lldp_neighbor_details does not support 'N/A' capabilities\n### Description of Issue/Question\r\n\r\nThe constants LLDP_CAPAB_TRANFORM_TABLE does not include 'N/A' of system capabilities. No checks for this value either.\r\n```\r\n<SystemCapabilities>\r\n N/A\r\n</SystemCapabilities>\r\n```\r\n\r\n\r\n*Note*: Please check https://guides.github.com/features/mastering-markdown/\r\nto see how to properly format your request.\r\n\r\n### Did you follow the steps from https://github.com/napalm-automation/napalm#faq\r\n(Place an ``x`` between the square brackets where applicable)\r\n\r\n- [x] Yes\r\n- [] No\r\n\r\n\r\n### Setup\r\n\r\n### napalm version\r\n(Paste verbatim output from `pip freeze | grep napalm` between quotes below)\r\n\r\n```\r\nroot@salt-master1:/# python3 -m pip freeze | grep napalm\r\nnapalm==3.2.0\r\n```\r\n\r\n### Network operating system version\r\n(Paste verbatim output from `show version` - or equivalent - between quotes below)\r\n\r\n```\r\nRP/0/RP0/CPU0:hbr1.12345.999#show version\r\nWed Sep 23 18:41:02.500 CEST\r\nCisco IOS XR Software, Version 7.1.2 LNT\r\nCopyright (c) 2013-2020 by Cisco Systems, Inc.\r\n\r\nBuild Information:\r\n Built By : ahoang\r\n Built On : Sat Aug 29 20:04:27 UTC 2020\r\n Build Host : iox-lnx-069\r\n Workspace : /auto/srcarchive13/prod/7.1.2/ncs540l/ws\r\n Version : 7.1.2\r\n Label : 7.1.2\r\n\r\ncisco NCS540L (C3708 @ 1.70GHz)\r\nSystem uptime is 1 week, 1 day, 10 hours, 2 minutes\r\n\r\n```\r\n\r\n### Steps to Reproduce the Issue\r\n\r\n### Error Traceback\r\n(Paste the complete traceback of the exception between quotes below)\r\n\r\n```\r\n================= Traceback =================\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/napalm\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py\", line 308, in main\r\n run_tests(args)\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py\", line 291, in run_tests\r\n call_getter(device, args.method, **method_kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py\", line 27, in wrapper\r\n r = func(*args, **kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/base/clitools/cl_napalm.py\", line 255, in call_getter\r\n r = func(**kwargs)\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/iosxr/iosxr.py\", line 852, in get_lldp_neighbors_detail\r\n \"remote_system_capab\": napalm.base.helpers.transform_lldp_capab(\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/base/helpers.py\", line 463, in transform_lldp_capab\r\n [constants.LLDP_CAPAB_TRANFORM_TABLE[c.strip()] for c in capabilities]\r\n File \"/usr/local/lib/python3.8/dist-packages/napalm/base/helpers.py\", line 463, in <listcomp>\r\n [constants.LLDP_CAPAB_TRANFORM_TABLE[c.strip()] for c in capabilities]\r\nKeyError: 'n/a'\r\n```\r\n\n", "before_files": [{"content": "\"\"\"Constants to be used across NAPALM drivers.\"\"\"\n\nCONFIG_LOCK = True # must be changed soon!\nTIMEOUT = 60 # seconds\n\nINTERFACE_NULL_SPEED = -1\n\nACTION_TYPE_METHODS = (\"ping\", \"traceroute\")\n\nBGP_NEIGHBOR_NULL_COUNTER = 
-1\n\nSNMP_AUTHORIZATION_MODE_MAP = {\"read-only\": \"ro\", \"read-write\": \"rw\"}\n\nROUTE_COMMON_PROTOCOL_FIELDS = [\n \"destination\",\n \"prefix_length\",\n \"protocol\",\n \"current_active\",\n \"last_active\",\n \"age\",\n \"next_hop\",\n \"outgoing_interface\",\n \"selected_next_hop\",\n \"preference\",\n \"inactive_reason\",\n \"routing_table\",\n] # identifies the list of fileds common for all protocols\n\nROUTE_PROTOCOL_SPECIFIC_FIELDS = {\n \"bgp\": [\n \"local_as\",\n \"remote_as\",\n \"as_path\",\n \"communities\",\n \"local_preference\",\n \"preference2\",\n \"remote_address\",\n \"metric\",\n \"metric2\",\n ],\n \"isis\": [\"level\", \"metric\", \"local_as\"],\n \"static\": [], # nothing specific to static routes\n}\n\nTRACEROUTE_TTL = 255\nTRACEROUTE_SOURCE = \"\"\nTRACEROUTE_TIMEOUT = 2\nTRACEROUTE_NULL_HOST_NAME = \"*\"\nTRACEROUTE_NULL_IP_ADDRESS = \"*\"\nTRACEROUTE_VRF = \"\"\n\nOPTICS_NULL_LEVEL = \"-Inf\"\n\nPING_SOURCE = \"\"\nPING_TTL = 255\nPING_TIMEOUT = 2\nPING_SIZE = 100\nPING_COUNT = 5\nPING_VRF = \"\"\nPING_SOURCE_INTERFACE = \"\"\n\nNETMIKO_MAP = {\n \"ios\": \"cisco_ios\",\n \"nxos\": \"cisco_nxos\",\n \"nxos_ssh\": \"cisco_nxos\",\n \"iosxr\": \"cisco_iosxr\",\n \"eos\": \"arista_eos\",\n \"junos\": \"juniper_junos\",\n}\nLLDP_CAPAB_TRANFORM_TABLE = {\n \"o\": \"other\",\n \"p\": \"repeater\",\n \"b\": \"bridge\",\n \"w\": \"wlan-access-point\",\n \"r\": \"router\",\n \"t\": \"telephone\",\n \"c\": \"docsis-cable-device\",\n \"s\": \"station\",\n}\n\nCISCO_SANITIZE_FILTERS = {\n r\"^(snmp-server community).*$\": r\"\\1 <removed>\",\n r\"^(snmp-server host \\S+( vrf \\S+)?( version (1|2c|3))?)\\s+\\S+((\\s+\\S*)*)\\s*$\": r\"\\1 <removed> \\5\", # noqa\n r\"^(snmp-server user \\S+( \\S+)? auth md5) \\S+ (priv) \\S+ (localizedkey( engineID \\S+)?)\\s*$\": r\"\\1 <removed> \\3 <removed> \\4\\5\", # noqa\n r\"^(username .+ (password|secret) \\d) .+$\": r\"\\1 <removed>\",\n r\"^(enable (password|secret)( level \\d+)? \\d) .+$\": r\"\\1 <removed>\",\n r\"^(\\s+(?:password|secret)) (?:\\d{1,2} )?\\S+$\": r\"\\1 <removed>\",\n r\"^(.*wpa-psk ascii \\d) (\\S+)$\": r\"\\1 <removed>\",\n r\"^(.*key 7) (\\d.+)$\": r\"\\1 <removed>\",\n r\"^(tacacs-server (.+ )?key) .+$\": r\"\\1 <removed>\",\n r\"^(crypto isakmp key) (\\S+) (.*)$\": r\"\\1 <removed> \\3\",\n r\"^(\\s+ip ospf message-digest-key \\d+ md5) .+$\": r\"\\1 <removed>\",\n r\"^(\\s+ip ospf authentication-key) .+$\": r\"\\1 <removed>\",\n r\"^(\\s+neighbor \\S+ password) .+$\": r\"\\1 <removed>\",\n r\"^(\\s+vrrp \\d+ authentication text) .+$\": r\"\\1 <removed>\",\n r\"^(\\s+standby \\d+ authentication) .{1,8}$\": r\"\\1 <removed>\",\n r\"^(\\s+standby \\d+ authentication md5 key-string) .+?( timeout \\d+)?$\": r\"\\1 <removed> \\2\",\n r\"^(\\s+key-string) .+$\": r\"\\1 <removed>\",\n r\"^((tacacs|radius) server [^\\n]+\\n(\\s+[^\\n]+\\n)*\\s+key) [^\\n]+$\": r\"\\1 <removed>\",\n r\"^(\\s+ppp (chap|pap) password \\d) .+$\": r\"\\1 <removed>\",\n}\n", "path": "napalm/base/constants.py"}]} | 2,666 | 99 |
gh_patches_debug_5120 | rasdani/github-patches | git_diff | translate__pootle-6751 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pootle Install
I am a Pootle novice. I installed it successfully on macOS, but when I try to deploy it to the production environment, the following installation error occurs.
# Steps to reproduce:
1. Create a virtualenv
2. Install Pootle requirements
3. Install Pootle, using `pip install --process-dependency-links Pootle`
# Results (Expected/Actual):
```Python
Collecting Pootle
Using cached Pootle-2.8.1.tar.bz2
Complete output from command python setup.py egg_info:
Pootle requires a non-PyPI dependency, when using pip ensure you use the --process-dependency-links option.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-build-_2TvXs/Pootle/setup.py", line 453, in <module>
long_description=parse_long_description(README_FILE, tag=True),
File "/tmp/pip-build-_2TvXs/Pootle/setup.py", line 404, in parse_long_description
replace_urls()
File "/tmp/pip-build-_2TvXs/Pootle/setup.py", line 332, in replace_urls
branch = version.get_git_branch()
File "/tmp/pip-build-_2TvXs/Pootle/pootle/core/utils/version.py", line 231, in get_git_branch
'HEAD']).strip()
AttributeError: 'NoneType' object has no attribute 'strip'
```
# Environment (i.e. 'pootle --version', DB, OS, Browser):
python version: 2.7.12
system: ubuntu 16.04
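
For illustration, a standalone sketch of the failure mode and the guard that avoids it, simplified from `_shell_command()`/`get_git_branch()` in `pootle/core/utils/version.py` (this is not the actual module; the released code calls `.strip()` before checking for `None`, which is what crashes):

```python
# Minimal sketch, simplified from pootle/core/utils/version.py.
import subprocess


def _shell_command(command):
    # Returns None when the binary is missing (FileNotFoundError is a
    # subclass of OSError), exactly the case hit on a host without git.
    try:
        proc = subprocess.Popen(
            command,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
            universal_newlines=True,
        )
    except OSError:
        return None
    return proc.communicate()[0]


def get_git_branch():
    branch = _shell_command(["/usr/bin/git", "symbolic-ref", "-q", "HEAD"])
    # The released code does _shell_command([...]).strip(), so a None return
    # crashes with AttributeError before any check runs; checking first avoids it.
    if not branch:
        return None
    return "/".join(branch.strip().split("/")[2:])


print(get_git_branch())  # None when git is unavailable, instead of a crash
```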
</issue>
<code>
[start of pootle/core/utils/version.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 # Some functions are taken from or modelled on the version management in
10 # Django. Those are:
11 # Copyright (c) Django Software Foundation and individual contributors. All
12 # rights reserved.
13
14 from __future__ import print_function
15
16 import datetime
17 import os
18 import subprocess
19
20 try:
21 from django.utils.lru_cache import lru_cache
22 except ImportError:
23 # Required for Python 2.7 support and when backported Django version is
24 # unavailable
25 def lru_cache():
26 def fake(func):
27 return func
28 return fake
29
30 from pootle.constants import VERSION
31
32
33 CANDIDATE_MARKERS = ('alpha', 'beta', 'rc', 'final')
34
35
36 def get_version(version=None):
37 """Returns a PEP 440-compliant version number from VERSION.
38
39 The following examples show a progression from development through
40 pre-release to release and the resultant versions generated:
41
42 >>> get_version((2, 7, 1, 'alpha', 0))
43 '2.7.1.dev20150530132219'
44 >>> get_version((2, 7, 1, 'alpha', 1))
45 '2.7.1a1'
46 >>> get_version((2, 7, 1, 'beta', 1))
47 '2.7.1b1'
48 >>> get_version((2, 7, 1, 'rc', 2))
49 '2.7.1rc2'
50 >>> get_version((2, 7, 1, 'final', 0))
51 '2.7.1'
52 """
53 version = get_complete_version(version)
54
55 # Now build the two parts of the version number:
56 # main = X.Y[.Z]
57 # sub = .devN - for pre-alpha releases
58 # | {a|b|rc}N - for alpha, beta and rc releases
59
60 main = get_main_version(version)
61 candidate_pos = _get_candidate_pos(version)
62 candidate = version[candidate_pos]
63 candidate_extra = version[candidate_pos+1]
64
65 sub = ''
66 if _is_development_candidate(version):
67 git_changeset = get_git_changeset()
68 if git_changeset:
69 sub = '.dev%s' % git_changeset
70 else:
71 sub = '.dev0'
72
73 elif candidate != 'final':
74 mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'rc'}
75 sub = mapping[candidate] + str(candidate_extra)
76
77 return str(main + sub)
78
79
80 def _is_development_candidate(version):
81 """Is this a pre-alpha release
82
83 >>> _is_development_candidate((2, 1, 0, 'alpha', 0))
84 True
85 >>> _is_development_candidate((2, 1, 0, 'beta', 1))
86 False
87 """
88 candidate_pos = _get_candidate_pos(version)
89 candidate = version[candidate_pos]
90 candidate_extra = version[candidate_pos+1]
91 return candidate == 'alpha' and candidate_extra == 0
92
93
94 def _get_candidate_pos(version):
95 """Returns the position of the candidate marker.
96
97 >>> _get_candidate_pos((1, 2, 0, 'alpha', 0))
98 3
99 """
100 return [i for i, part in enumerate(version)
101 if part in CANDIDATE_MARKERS][0]
102
103
104 def _get_candidate(version):
105 """Returns the candidate. One of alpha, beta, rc or final.
106
107 >>> _get_candidate((0, 1, 2, 'rc', 1))
108 'rc'
109 """
110 return version[_get_candidate_pos(version)]
111
112
113 def _get_version_string(parts):
114 """Returns an X.Y.Z version from the list of version parts.
115
116 >>> _get_version_string((1, 1, 0))
117 '1.1.0'
118 >>> _get_version_string((1, 1, 0, 1))
119 '1.1.0.1'
120 """
121 return '.'.join(str(x) for x in parts)
122
123
124 def get_main_version(version=None):
125 """Returns main version (X.Y[.Z]) from VERSION.
126
127 >>> get_main_version((1, 2, 3, 'alpha', 1))
128 '1.2.3'
129 """
130 version = get_complete_version(version)
131 candidate_pos = _get_candidate_pos(version)
132 return _get_version_string(version[:candidate_pos])
133
134
135 def get_major_minor_version(version=None):
136 """Returns X.Y from VERSION.
137
138 >>> get_major_minor_version((1, 2, 3, 'final', 0))
139 '1.2'
140 """
141 version = get_complete_version(version)
142 return _get_version_string(version[:2])
143
144
145 def get_complete_version(version=None):
146 """Returns a tuple of the Pootle version. Or the supplied ``version``
147
148 >>> get_complete_version((1, 2, 3, 'alpha', 0))
149 (1, 2, 3, 'alpha', 0)
150 """
151 if version is not None:
152 return version
153
154 return VERSION
155
156
157 def get_docs_version(version=None, positions=2):
158 """Return the version used in documentation.
159
160 >>> get_docs_version((1, 2, 1, 'alpha', 0))
161 'dev'
162 >>> get_docs_version((1, 2, 1, 'rc', 2))
163 '1.2'
164 """
165 version = get_complete_version(version)
166 candidate_pos = _get_candidate_pos(version)
167 if positions > candidate_pos:
168 positions = candidate_pos
169 if _is_development_candidate(version):
170 return 'dev'
171 return _get_version_string(version[:positions])
172
173
174 def get_rtd_version(version=None):
175 """Return the docs version string reported in the RTD site."""
176 version_str = get_docs_version(version=version, positions=2)
177 return (
178 'latest'
179 if version_str == 'dev'
180 else 'stable-%s.x' % (version_str, )
181 )
182
183
184 def _shell_command(command):
185 """Return the first result of a shell ``command``"""
186 repo_dir = os.path.dirname(os.path.abspath(__file__))
187
188 try:
189 command_subprocess = subprocess.Popen(
190 command,
191 stdout=subprocess.PIPE,
192 stderr=subprocess.PIPE,
193 cwd=repo_dir,
194 universal_newlines=True
195 )
196 except OSError:
197 return None
198
199 return command_subprocess.communicate()[0]
200
201
202 @lru_cache()
203 def get_git_changeset():
204 """Returns a numeric identifier of the latest git changeset.
205
206 The result is the UTC timestamp of the changeset in YYYYMMDDHHMMSS format.
207 This value isn't guaranteed to be unique, but collisions are very unlikely,
208 so it's sufficient for generating the development version numbers.
209
210 >>> get_git_changeset()
211 '20150530132219'
212 """
213 timestamp = _shell_command(
214 ['/usr/bin/git', 'log', '--pretty=format:%ct', '--quiet', '-1', 'HEAD']
215 )
216 try:
217 timestamp = datetime.datetime.utcfromtimestamp(int(timestamp))
218 except ValueError:
219 return None
220 return timestamp.strftime('%Y%m%d%H%M%S')
221
222
223 @lru_cache()
224 def get_git_branch():
225 """Returns the current git branch.
226
227 >>> get_git_branch()
228 'feature/proper_version'
229 """
230 branch = _shell_command(['/usr/bin/git', 'symbolic-ref', '-q',
231 'HEAD']).strip()
232 if not branch:
233 return None
234 return "/".join(branch.split("/")[2:])
235
236
237 @lru_cache()
238 def get_git_hash():
239 """Returns the current git commit hash or None.
240
241 >>> get_git_hash()
242 'ad768e8'
243 """
244 git_hash = _shell_command(
245 ['/usr/bin/git', 'rev-parse', '--verify', '--short', 'HEAD']
246 )
247 if git_hash:
248 return git_hash.strip()
249 return None
250
251
252 if __name__ == "__main__":
253 from sys import argv
254 if len(argv) == 2:
255 if argv[1] == "main":
256 print(get_main_version())
257 elif argv[1] == "major_minor":
258 print(get_major_minor_version())
259 elif argv[1] == "docs":
260 print(get_docs_version())
261 else:
262 print(get_version())
263
264
265 def is_prerelease(version=None):
266 """Is this a final release or not"""
267
268 return _get_candidate(get_complete_version(version)) != 'final'
269
[end of pootle/core/utils/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/core/utils/version.py b/pootle/core/utils/version.py
--- a/pootle/core/utils/version.py
+++ b/pootle/core/utils/version.py
@@ -228,10 +228,10 @@
'feature/proper_version'
"""
branch = _shell_command(['/usr/bin/git', 'symbolic-ref', '-q',
- 'HEAD']).strip()
+ 'HEAD'])
if not branch:
return None
- return "/".join(branch.split("/")[2:])
+ return "/".join(branch.strip().split("/")[2:])
@lru_cache()
| {"golden_diff": "diff --git a/pootle/core/utils/version.py b/pootle/core/utils/version.py\n--- a/pootle/core/utils/version.py\n+++ b/pootle/core/utils/version.py\n@@ -228,10 +228,10 @@\n 'feature/proper_version'\n \"\"\"\n branch = _shell_command(['/usr/bin/git', 'symbolic-ref', '-q',\n- 'HEAD']).strip()\n+ 'HEAD'])\n if not branch:\n return None\n- return \"/\".join(branch.split(\"/\")[2:])\n+ return \"/\".join(branch.strip().split(\"/\")[2:])\n \n \n @lru_cache()\n", "issue": "Pootle Install \nI am a Pootle novice, I successfully installed it on the mac, but when i want to deploy to the production environment, the following installation error occurs.\r\n\r\n# Steps to reproduce:\r\n\r\n1. I create a virtualenv env\r\n2. Install Pootle requirements\r\n3. Install Pootle, using `pip install --process-dependency-links Pootle`\r\n\r\n# Results (Expected/Actual):\r\n\r\n```Python\r\nCollecting Pootle\r\n Using cached Pootle-2.8.1.tar.bz2\r\n Complete output from command python setup.py egg_info:\r\n Pootle requires a non-PyPI dependency, when using pip ensure you use the --process-dependency-links option.\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/tmp/pip-build-_2TvXs/Pootle/setup.py\", line 453, in <module>\r\n long_description=parse_long_description(README_FILE, tag=True),\r\n File \"/tmp/pip-build-_2TvXs/Pootle/setup.py\", line 404, in parse_long_description\r\n replace_urls()\r\n File \"/tmp/pip-build-_2TvXs/Pootle/setup.py\", line 332, in replace_urls\r\n branch = version.get_git_branch()\r\n File \"/tmp/pip-build-_2TvXs/Pootle/pootle/core/utils/version.py\", line 231, in get_git_branch\r\n 'HEAD']).strip()\r\n AttributeError: 'NoneType' object has no attribute 'strip'\r\n```\r\n\r\n\r\n# Environment (i.e. 'pootle --version', DB, OS, Browser):\r\n\r\npython version: 2.7.12\r\nsystem: ubuntu 16.04\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n# Some functions are taken from or modelled on the version management in\n# Django. Those are:\n# Copyright (c) Django Software Foundation and individual contributors. 
All\n# rights reserved.\n\nfrom __future__ import print_function\n\nimport datetime\nimport os\nimport subprocess\n\ntry:\n from django.utils.lru_cache import lru_cache\nexcept ImportError:\n # Required for Python 2.7 support and when backported Django version is\n # unavailable\n def lru_cache():\n def fake(func):\n return func\n return fake\n\nfrom pootle.constants import VERSION\n\n\nCANDIDATE_MARKERS = ('alpha', 'beta', 'rc', 'final')\n\n\ndef get_version(version=None):\n \"\"\"Returns a PEP 440-compliant version number from VERSION.\n\n The following examples show a progression from development through\n pre-release to release and the resultant versions generated:\n\n >>> get_version((2, 7, 1, 'alpha', 0))\n '2.7.1.dev20150530132219'\n >>> get_version((2, 7, 1, 'alpha', 1))\n '2.7.1a1'\n >>> get_version((2, 7, 1, 'beta', 1))\n '2.7.1b1'\n >>> get_version((2, 7, 1, 'rc', 2))\n '2.7.1rc2'\n >>> get_version((2, 7, 1, 'final', 0))\n '2.7.1'\n \"\"\"\n version = get_complete_version(version)\n\n # Now build the two parts of the version number:\n # main = X.Y[.Z]\n # sub = .devN - for pre-alpha releases\n # | {a|b|rc}N - for alpha, beta and rc releases\n\n main = get_main_version(version)\n candidate_pos = _get_candidate_pos(version)\n candidate = version[candidate_pos]\n candidate_extra = version[candidate_pos+1]\n\n sub = ''\n if _is_development_candidate(version):\n git_changeset = get_git_changeset()\n if git_changeset:\n sub = '.dev%s' % git_changeset\n else:\n sub = '.dev0'\n\n elif candidate != 'final':\n mapping = {'alpha': 'a', 'beta': 'b', 'rc': 'rc'}\n sub = mapping[candidate] + str(candidate_extra)\n\n return str(main + sub)\n\n\ndef _is_development_candidate(version):\n \"\"\"Is this a pre-alpha release\n\n >>> _is_development_candidate((2, 1, 0, 'alpha', 0))\n True\n >>> _is_development_candidate((2, 1, 0, 'beta', 1))\n False\n \"\"\"\n candidate_pos = _get_candidate_pos(version)\n candidate = version[candidate_pos]\n candidate_extra = version[candidate_pos+1]\n return candidate == 'alpha' and candidate_extra == 0\n\n\ndef _get_candidate_pos(version):\n \"\"\"Returns the position of the candidate marker.\n\n >>> _get_candidate_pos((1, 2, 0, 'alpha', 0))\n 3\n \"\"\"\n return [i for i, part in enumerate(version)\n if part in CANDIDATE_MARKERS][0]\n\n\ndef _get_candidate(version):\n \"\"\"Returns the candidate. One of alpha, beta, rc or final.\n\n >>> _get_candidate((0, 1, 2, 'rc', 1))\n 'rc'\n \"\"\"\n return version[_get_candidate_pos(version)]\n\n\ndef _get_version_string(parts):\n \"\"\"Returns an X.Y.Z version from the list of version parts.\n\n >>> _get_version_string((1, 1, 0))\n '1.1.0'\n >>> _get_version_string((1, 1, 0, 1))\n '1.1.0.1'\n \"\"\"\n return '.'.join(str(x) for x in parts)\n\n\ndef get_main_version(version=None):\n \"\"\"Returns main version (X.Y[.Z]) from VERSION.\n\n >>> get_main_version((1, 2, 3, 'alpha', 1))\n '1.2.3'\n \"\"\"\n version = get_complete_version(version)\n candidate_pos = _get_candidate_pos(version)\n return _get_version_string(version[:candidate_pos])\n\n\ndef get_major_minor_version(version=None):\n \"\"\"Returns X.Y from VERSION.\n\n >>> get_major_minor_version((1, 2, 3, 'final', 0))\n '1.2'\n \"\"\"\n version = get_complete_version(version)\n return _get_version_string(version[:2])\n\n\ndef get_complete_version(version=None):\n \"\"\"Returns a tuple of the Pootle version. 
Or the supplied ``version``\n\n >>> get_complete_version((1, 2, 3, 'alpha', 0))\n (1, 2, 3, 'alpha', 0)\n \"\"\"\n if version is not None:\n return version\n\n return VERSION\n\n\ndef get_docs_version(version=None, positions=2):\n \"\"\"Return the version used in documentation.\n\n >>> get_docs_version((1, 2, 1, 'alpha', 0))\n 'dev'\n >>> get_docs_version((1, 2, 1, 'rc', 2))\n '1.2'\n \"\"\"\n version = get_complete_version(version)\n candidate_pos = _get_candidate_pos(version)\n if positions > candidate_pos:\n positions = candidate_pos\n if _is_development_candidate(version):\n return 'dev'\n return _get_version_string(version[:positions])\n\n\ndef get_rtd_version(version=None):\n \"\"\"Return the docs version string reported in the RTD site.\"\"\"\n version_str = get_docs_version(version=version, positions=2)\n return (\n 'latest'\n if version_str == 'dev'\n else 'stable-%s.x' % (version_str, )\n )\n\n\ndef _shell_command(command):\n \"\"\"Return the first result of a shell ``command``\"\"\"\n repo_dir = os.path.dirname(os.path.abspath(__file__))\n\n try:\n command_subprocess = subprocess.Popen(\n command,\n stdout=subprocess.PIPE,\n stderr=subprocess.PIPE,\n cwd=repo_dir,\n universal_newlines=True\n )\n except OSError:\n return None\n\n return command_subprocess.communicate()[0]\n\n\n@lru_cache()\ndef get_git_changeset():\n \"\"\"Returns a numeric identifier of the latest git changeset.\n\n The result is the UTC timestamp of the changeset in YYYYMMDDHHMMSS format.\n This value isn't guaranteed to be unique, but collisions are very unlikely,\n so it's sufficient for generating the development version numbers.\n\n >>> get_git_changeset()\n '20150530132219'\n \"\"\"\n timestamp = _shell_command(\n ['/usr/bin/git', 'log', '--pretty=format:%ct', '--quiet', '-1', 'HEAD']\n )\n try:\n timestamp = datetime.datetime.utcfromtimestamp(int(timestamp))\n except ValueError:\n return None\n return timestamp.strftime('%Y%m%d%H%M%S')\n\n\n@lru_cache()\ndef get_git_branch():\n \"\"\"Returns the current git branch.\n\n >>> get_git_branch()\n 'feature/proper_version'\n \"\"\"\n branch = _shell_command(['/usr/bin/git', 'symbolic-ref', '-q',\n 'HEAD']).strip()\n if not branch:\n return None\n return \"/\".join(branch.split(\"/\")[2:])\n\n\n@lru_cache()\ndef get_git_hash():\n \"\"\"Returns the current git commit hash or None.\n\n >>> get_git_hash()\n 'ad768e8'\n \"\"\"\n git_hash = _shell_command(\n ['/usr/bin/git', 'rev-parse', '--verify', '--short', 'HEAD']\n )\n if git_hash:\n return git_hash.strip()\n return None\n\n\nif __name__ == \"__main__\":\n from sys import argv\n if len(argv) == 2:\n if argv[1] == \"main\":\n print(get_main_version())\n elif argv[1] == \"major_minor\":\n print(get_major_minor_version())\n elif argv[1] == \"docs\":\n print(get_docs_version())\n else:\n print(get_version())\n\n\ndef is_prerelease(version=None):\n \"\"\"Is this a final release or not\"\"\"\n\n return _get_candidate(get_complete_version(version)) != 'final'\n", "path": "pootle/core/utils/version.py"}]} | 3,563 | 137 |
gh_patches_debug_45049 | rasdani/github-patches | git_diff | open-telemetry__opentelemetry-python-contrib-1679 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Metrics instrumentation celery
Relevant semconv: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/semantic_conventions/instrumentation/kafka.md
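
For context, a minimal sketch of the kind of instrument this could add, using the OpenTelemetry metrics API. The instrument name and attribute keys below are illustrative placeholders, not a settled convention:

```python
# Illustrative sketch only -- instrument name and attributes are placeholders.
from timeit import default_timer

from opentelemetry.metrics import get_meter

meter = get_meter(__name__)
task_runtime = meter.create_histogram(
    name="celery.task.runtime",
    unit="s",
    description="Time it took to run a task.",
)


def record_task_runtime(task_name, worker, start_time):
    # Record elapsed wall-clock time with task/worker attributes so the
    # backend can aggregate per task and per worker.
    task_runtime.record(
        default_timer() - start_time,
        attributes={"task": task_name, "worker": worker},
    )
```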
</issue>
<code>
[start of instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py]
1 # Copyright The OpenTelemetry Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """
15 Instrument `celery`_ to trace Celery applications.
16
17 .. _celery: https://pypi.org/project/celery/
18
19 Usage
20 -----
21
22 * Start broker backend
23
24 .. code::
25
26 docker run -p 5672:5672 rabbitmq
27
28
29 * Run instrumented task
30
31 .. code:: python
32
33 from opentelemetry.instrumentation.celery import CeleryInstrumentor
34
35 from celery import Celery
36 from celery.signals import worker_process_init
37
38 @worker_process_init.connect(weak=False)
39 def init_celery_tracing(*args, **kwargs):
40 CeleryInstrumentor().instrument()
41
42 app = Celery("tasks", broker="amqp://localhost")
43
44 @app.task
45 def add(x, y):
46 return x + y
47
48 add.delay(42, 50)
49
50 Setting up tracing
51 ------------------
52
53 When tracing a celery worker process, tracing and instrumention both must be initialized after the celery worker
54 process is initialized. This is required for any tracing components that might use threading to work correctly
55 such as the BatchSpanProcessor. Celery provides a signal called ``worker_process_init`` that can be used to
56 accomplish this as shown in the example above.
57
58 API
59 ---
60 """
61
62 import logging
63 from typing import Collection, Iterable
64
65 from celery import signals # pylint: disable=no-name-in-module
66
67 from opentelemetry import trace
68 from opentelemetry.instrumentation.celery import utils
69 from opentelemetry.instrumentation.celery.package import _instruments
70 from opentelemetry.instrumentation.celery.version import __version__
71 from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
72 from opentelemetry.propagate import extract, inject
73 from opentelemetry.propagators.textmap import Getter
74 from opentelemetry.semconv.trace import SpanAttributes
75 from opentelemetry.trace.status import Status, StatusCode
76
77 logger = logging.getLogger(__name__)
78
79 # Task operations
80 _TASK_TAG_KEY = "celery.action"
81 _TASK_APPLY_ASYNC = "apply_async"
82 _TASK_RUN = "run"
83
84 _TASK_RETRY_REASON_KEY = "celery.retry.reason"
85 _TASK_REVOKED_REASON_KEY = "celery.revoked.reason"
86 _TASK_REVOKED_TERMINATED_SIGNAL_KEY = "celery.terminated.signal"
87 _TASK_NAME_KEY = "celery.task_name"
88
89
90 class CeleryGetter(Getter):
91 def get(self, carrier, key):
92 value = getattr(carrier, key, None)
93 if value is None:
94 return None
95 if isinstance(value, str) or not isinstance(value, Iterable):
96 value = (value,)
97 return value
98
99 def keys(self, carrier):
100 return []
101
102
103 celery_getter = CeleryGetter()
104
105
106 class CeleryInstrumentor(BaseInstrumentor):
107 def instrumentation_dependencies(self) -> Collection[str]:
108 return _instruments
109
110 def _instrument(self, **kwargs):
111 tracer_provider = kwargs.get("tracer_provider")
112
113 # pylint: disable=attribute-defined-outside-init
114 self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)
115
116 signals.task_prerun.connect(self._trace_prerun, weak=False)
117 signals.task_postrun.connect(self._trace_postrun, weak=False)
118 signals.before_task_publish.connect(
119 self._trace_before_publish, weak=False
120 )
121 signals.after_task_publish.connect(
122 self._trace_after_publish, weak=False
123 )
124 signals.task_failure.connect(self._trace_failure, weak=False)
125 signals.task_retry.connect(self._trace_retry, weak=False)
126
127 def _uninstrument(self, **kwargs):
128 signals.task_prerun.disconnect(self._trace_prerun)
129 signals.task_postrun.disconnect(self._trace_postrun)
130 signals.before_task_publish.disconnect(self._trace_before_publish)
131 signals.after_task_publish.disconnect(self._trace_after_publish)
132 signals.task_failure.disconnect(self._trace_failure)
133 signals.task_retry.disconnect(self._trace_retry)
134
135 def _trace_prerun(self, *args, **kwargs):
136 task = utils.retrieve_task(kwargs)
137 task_id = utils.retrieve_task_id(kwargs)
138
139 if task is None or task_id is None:
140 return
141
142 request = task.request
143 tracectx = extract(request, getter=celery_getter) or None
144
145 logger.debug("prerun signal start task_id=%s", task_id)
146
147 operation_name = f"{_TASK_RUN}/{task.name}"
148 span = self._tracer.start_span(
149 operation_name, context=tracectx, kind=trace.SpanKind.CONSUMER
150 )
151
152 activation = trace.use_span(span, end_on_exit=True)
153 activation.__enter__() # pylint: disable=E1101
154 utils.attach_span(task, task_id, (span, activation))
155
156 @staticmethod
157 def _trace_postrun(*args, **kwargs):
158 task = utils.retrieve_task(kwargs)
159 task_id = utils.retrieve_task_id(kwargs)
160
161 if task is None or task_id is None:
162 return
163
164 logger.debug("postrun signal task_id=%s", task_id)
165
166 # retrieve and finish the Span
167 span, activation = utils.retrieve_span(task, task_id)
168 if span is None:
169 logger.warning("no existing span found for task_id=%s", task_id)
170 return
171
172 # request context tags
173 if span.is_recording():
174 span.set_attribute(_TASK_TAG_KEY, _TASK_RUN)
175 utils.set_attributes_from_context(span, kwargs)
176 utils.set_attributes_from_context(span, task.request)
177 span.set_attribute(_TASK_NAME_KEY, task.name)
178
179 activation.__exit__(None, None, None)
180 utils.detach_span(task, task_id)
181
182 def _trace_before_publish(self, *args, **kwargs):
183 task = utils.retrieve_task_from_sender(kwargs)
184 task_id = utils.retrieve_task_id_from_message(kwargs)
185
186 if task_id is None:
187 return
188
189 if task is None:
190 # task is an anonymous task send using send_task or using canvas workflow
191 # Signatures() to send to a task not in the current processes dependency
192 # tree
193 task_name = kwargs.get("sender", "unknown")
194 else:
195 task_name = task.name
196 operation_name = f"{_TASK_APPLY_ASYNC}/{task_name}"
197 span = self._tracer.start_span(
198 operation_name, kind=trace.SpanKind.PRODUCER
199 )
200
201 # apply some attributes here because most of the data is not available
202 if span.is_recording():
203 span.set_attribute(_TASK_TAG_KEY, _TASK_APPLY_ASYNC)
204 span.set_attribute(SpanAttributes.MESSAGING_MESSAGE_ID, task_id)
205 span.set_attribute(_TASK_NAME_KEY, task_name)
206 utils.set_attributes_from_context(span, kwargs)
207
208 activation = trace.use_span(span, end_on_exit=True)
209 activation.__enter__() # pylint: disable=E1101
210
211 utils.attach_span(task, task_id, (span, activation), is_publish=True)
212
213 headers = kwargs.get("headers")
214 if headers:
215 inject(headers)
216
217 @staticmethod
218 def _trace_after_publish(*args, **kwargs):
219 task = utils.retrieve_task_from_sender(kwargs)
220 task_id = utils.retrieve_task_id_from_message(kwargs)
221
222 if task is None or task_id is None:
223 return
224
225 # retrieve and finish the Span
226 _, activation = utils.retrieve_span(task, task_id, is_publish=True)
227 if activation is None:
228 logger.warning("no existing span found for task_id=%s", task_id)
229 return
230
231 activation.__exit__(None, None, None) # pylint: disable=E1101
232 utils.detach_span(task, task_id, is_publish=True)
233
234 @staticmethod
235 def _trace_failure(*args, **kwargs):
236 task = utils.retrieve_task_from_sender(kwargs)
237 task_id = utils.retrieve_task_id(kwargs)
238
239 if task is None or task_id is None:
240 return
241
242 # retrieve and pass exception info to activation
243 span, _ = utils.retrieve_span(task, task_id)
244 if span is None or not span.is_recording():
245 return
246
247 status_kwargs = {"status_code": StatusCode.ERROR}
248
249 ex = kwargs.get("einfo")
250
251 if (
252 hasattr(task, "throws")
253 and ex is not None
254 and isinstance(ex.exception, task.throws)
255 ):
256 return
257
258 if ex is not None:
259 status_kwargs["description"] = str(ex)
260 span.record_exception(ex)
261 span.set_status(Status(**status_kwargs))
262
263 @staticmethod
264 def _trace_retry(*args, **kwargs):
265 task = utils.retrieve_task_from_sender(kwargs)
266 task_id = utils.retrieve_task_id_from_request(kwargs)
267 reason = utils.retrieve_reason(kwargs)
268
269 if task is None or task_id is None or reason is None:
270 return
271
272 span, _ = utils.retrieve_span(task, task_id)
273 if span is None or not span.is_recording():
274 return
275
276 # Add retry reason metadata to span
277 # Use `str(reason)` instead of `reason.message` in case we get
278 # something that isn't an `Exception`
279 span.set_attribute(_TASK_RETRY_REASON_KEY, str(reason))
280
[end of instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py b/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py
--- a/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py
+++ b/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py
@@ -60,6 +60,7 @@
"""
import logging
+from timeit import default_timer
from typing import Collection, Iterable
from celery import signals # pylint: disable=no-name-in-module
@@ -69,6 +70,7 @@
from opentelemetry.instrumentation.celery.package import _instruments
from opentelemetry.instrumentation.celery.version import __version__
from opentelemetry.instrumentation.instrumentor import BaseInstrumentor
+from opentelemetry.metrics import get_meter
from opentelemetry.propagate import extract, inject
from opentelemetry.propagators.textmap import Getter
from opentelemetry.semconv.trace import SpanAttributes
@@ -104,6 +106,11 @@
class CeleryInstrumentor(BaseInstrumentor):
+ def __init__(self):
+ super().__init__()
+ self.metrics = None
+ self.task_id_to_start_time = {}
+
def instrumentation_dependencies(self) -> Collection[str]:
return _instruments
@@ -113,6 +120,11 @@
# pylint: disable=attribute-defined-outside-init
self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)
+ meter_provider = kwargs.get("meter_provider")
+ meter = get_meter(__name__, __version__, meter_provider)
+
+ self.create_celery_metrics(meter)
+
signals.task_prerun.connect(self._trace_prerun, weak=False)
signals.task_postrun.connect(self._trace_postrun, weak=False)
signals.before_task_publish.connect(
@@ -139,6 +151,7 @@
if task is None or task_id is None:
return
+ self.update_task_duration_time(task_id)
request = task.request
tracectx = extract(request, getter=celery_getter) or None
@@ -153,8 +166,7 @@
activation.__enter__() # pylint: disable=E1101
utils.attach_span(task, task_id, (span, activation))
- @staticmethod
- def _trace_postrun(*args, **kwargs):
+ def _trace_postrun(self, *args, **kwargs):
task = utils.retrieve_task(kwargs)
task_id = utils.retrieve_task_id(kwargs)
@@ -178,6 +190,9 @@
activation.__exit__(None, None, None)
utils.detach_span(task, task_id)
+ self.update_task_duration_time(task_id)
+ labels = {"task": task.name, "worker": task.request.hostname}
+ self._record_histograms(task_id, labels)
def _trace_before_publish(self, *args, **kwargs):
task = utils.retrieve_task_from_sender(kwargs)
@@ -277,3 +292,30 @@
# Use `str(reason)` instead of `reason.message` in case we get
# something that isn't an `Exception`
span.set_attribute(_TASK_RETRY_REASON_KEY, str(reason))
+
+ def update_task_duration_time(self, task_id):
+ cur_time = default_timer()
+ task_duration_time_until_now = (
+ cur_time - self.task_id_to_start_time[task_id]
+ if task_id in self.task_id_to_start_time
+ else cur_time
+ )
+ self.task_id_to_start_time[task_id] = task_duration_time_until_now
+
+ def _record_histograms(self, task_id, metric_attributes):
+ if task_id is None:
+ return
+
+ self.metrics["flower.task.runtime.seconds"].record(
+ self.task_id_to_start_time.get(task_id),
+ attributes=metric_attributes,
+ )
+
+ def create_celery_metrics(self, meter) -> None:
+ self.metrics = {
+ "flower.task.runtime.seconds": meter.create_histogram(
+ name="flower.task.runtime.seconds",
+ unit="seconds",
+ description="The time it took to run the task.",
+ )
+ }
| {"golden_diff": "diff --git a/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py b/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py\n--- a/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py\n+++ b/instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py\n@@ -60,6 +60,7 @@\n \"\"\"\n \n import logging\n+from timeit import default_timer\n from typing import Collection, Iterable\n \n from celery import signals # pylint: disable=no-name-in-module\n@@ -69,6 +70,7 @@\n from opentelemetry.instrumentation.celery.package import _instruments\n from opentelemetry.instrumentation.celery.version import __version__\n from opentelemetry.instrumentation.instrumentor import BaseInstrumentor\n+from opentelemetry.metrics import get_meter\n from opentelemetry.propagate import extract, inject\n from opentelemetry.propagators.textmap import Getter\n from opentelemetry.semconv.trace import SpanAttributes\n@@ -104,6 +106,11 @@\n \n \n class CeleryInstrumentor(BaseInstrumentor):\n+ def __init__(self):\n+ super().__init__()\n+ self.metrics = None\n+ self.task_id_to_start_time = {}\n+\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n \n@@ -113,6 +120,11 @@\n # pylint: disable=attribute-defined-outside-init\n self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)\n \n+ meter_provider = kwargs.get(\"meter_provider\")\n+ meter = get_meter(__name__, __version__, meter_provider)\n+\n+ self.create_celery_metrics(meter)\n+\n signals.task_prerun.connect(self._trace_prerun, weak=False)\n signals.task_postrun.connect(self._trace_postrun, weak=False)\n signals.before_task_publish.connect(\n@@ -139,6 +151,7 @@\n if task is None or task_id is None:\n return\n \n+ self.update_task_duration_time(task_id)\n request = task.request\n tracectx = extract(request, getter=celery_getter) or None\n \n@@ -153,8 +166,7 @@\n activation.__enter__() # pylint: disable=E1101\n utils.attach_span(task, task_id, (span, activation))\n \n- @staticmethod\n- def _trace_postrun(*args, **kwargs):\n+ def _trace_postrun(self, *args, **kwargs):\n task = utils.retrieve_task(kwargs)\n task_id = utils.retrieve_task_id(kwargs)\n \n@@ -178,6 +190,9 @@\n \n activation.__exit__(None, None, None)\n utils.detach_span(task, task_id)\n+ self.update_task_duration_time(task_id)\n+ labels = {\"task\": task.name, \"worker\": task.request.hostname}\n+ self._record_histograms(task_id, labels)\n \n def _trace_before_publish(self, *args, **kwargs):\n task = utils.retrieve_task_from_sender(kwargs)\n@@ -277,3 +292,30 @@\n # Use `str(reason)` instead of `reason.message` in case we get\n # something that isn't an `Exception`\n span.set_attribute(_TASK_RETRY_REASON_KEY, str(reason))\n+\n+ def update_task_duration_time(self, task_id):\n+ cur_time = default_timer()\n+ task_duration_time_until_now = (\n+ cur_time - self.task_id_to_start_time[task_id]\n+ if task_id in self.task_id_to_start_time\n+ else cur_time\n+ )\n+ self.task_id_to_start_time[task_id] = task_duration_time_until_now\n+\n+ def _record_histograms(self, task_id, metric_attributes):\n+ if task_id is None:\n+ return\n+\n+ self.metrics[\"flower.task.runtime.seconds\"].record(\n+ self.task_id_to_start_time.get(task_id),\n+ attributes=metric_attributes,\n+ )\n+\n+ def create_celery_metrics(self, meter) -> None:\n+ self.metrics = {\n+ 
\"flower.task.runtime.seconds\": meter.create_histogram(\n+ name=\"flower.task.runtime.seconds\",\n+ unit=\"seconds\",\n+ description=\"The time it took to run the task.\",\n+ )\n+ }\n", "issue": "Metrics instrumentation celery\nRelevant semconv: https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/metrics/semantic_conventions/instrumentation/kafka.md\n", "before_files": [{"content": "# Copyright The OpenTelemetry Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"\nInstrument `celery`_ to trace Celery applications.\n\n.. _celery: https://pypi.org/project/celery/\n\nUsage\n-----\n\n* Start broker backend\n\n.. code::\n\n docker run -p 5672:5672 rabbitmq\n\n\n* Run instrumented task\n\n.. code:: python\n\n from opentelemetry.instrumentation.celery import CeleryInstrumentor\n\n from celery import Celery\n from celery.signals import worker_process_init\n\n @worker_process_init.connect(weak=False)\n def init_celery_tracing(*args, **kwargs):\n CeleryInstrumentor().instrument()\n\n app = Celery(\"tasks\", broker=\"amqp://localhost\")\n\n @app.task\n def add(x, y):\n return x + y\n\n add.delay(42, 50)\n\nSetting up tracing\n------------------\n\nWhen tracing a celery worker process, tracing and instrumention both must be initialized after the celery worker\nprocess is initialized. This is required for any tracing components that might use threading to work correctly\nsuch as the BatchSpanProcessor. 
Celery provides a signal called ``worker_process_init`` that can be used to\naccomplish this as shown in the example above.\n\nAPI\n---\n\"\"\"\n\nimport logging\nfrom typing import Collection, Iterable\n\nfrom celery import signals # pylint: disable=no-name-in-module\n\nfrom opentelemetry import trace\nfrom opentelemetry.instrumentation.celery import utils\nfrom opentelemetry.instrumentation.celery.package import _instruments\nfrom opentelemetry.instrumentation.celery.version import __version__\nfrom opentelemetry.instrumentation.instrumentor import BaseInstrumentor\nfrom opentelemetry.propagate import extract, inject\nfrom opentelemetry.propagators.textmap import Getter\nfrom opentelemetry.semconv.trace import SpanAttributes\nfrom opentelemetry.trace.status import Status, StatusCode\n\nlogger = logging.getLogger(__name__)\n\n# Task operations\n_TASK_TAG_KEY = \"celery.action\"\n_TASK_APPLY_ASYNC = \"apply_async\"\n_TASK_RUN = \"run\"\n\n_TASK_RETRY_REASON_KEY = \"celery.retry.reason\"\n_TASK_REVOKED_REASON_KEY = \"celery.revoked.reason\"\n_TASK_REVOKED_TERMINATED_SIGNAL_KEY = \"celery.terminated.signal\"\n_TASK_NAME_KEY = \"celery.task_name\"\n\n\nclass CeleryGetter(Getter):\n def get(self, carrier, key):\n value = getattr(carrier, key, None)\n if value is None:\n return None\n if isinstance(value, str) or not isinstance(value, Iterable):\n value = (value,)\n return value\n\n def keys(self, carrier):\n return []\n\n\ncelery_getter = CeleryGetter()\n\n\nclass CeleryInstrumentor(BaseInstrumentor):\n def instrumentation_dependencies(self) -> Collection[str]:\n return _instruments\n\n def _instrument(self, **kwargs):\n tracer_provider = kwargs.get(\"tracer_provider\")\n\n # pylint: disable=attribute-defined-outside-init\n self._tracer = trace.get_tracer(__name__, __version__, tracer_provider)\n\n signals.task_prerun.connect(self._trace_prerun, weak=False)\n signals.task_postrun.connect(self._trace_postrun, weak=False)\n signals.before_task_publish.connect(\n self._trace_before_publish, weak=False\n )\n signals.after_task_publish.connect(\n self._trace_after_publish, weak=False\n )\n signals.task_failure.connect(self._trace_failure, weak=False)\n signals.task_retry.connect(self._trace_retry, weak=False)\n\n def _uninstrument(self, **kwargs):\n signals.task_prerun.disconnect(self._trace_prerun)\n signals.task_postrun.disconnect(self._trace_postrun)\n signals.before_task_publish.disconnect(self._trace_before_publish)\n signals.after_task_publish.disconnect(self._trace_after_publish)\n signals.task_failure.disconnect(self._trace_failure)\n signals.task_retry.disconnect(self._trace_retry)\n\n def _trace_prerun(self, *args, **kwargs):\n task = utils.retrieve_task(kwargs)\n task_id = utils.retrieve_task_id(kwargs)\n\n if task is None or task_id is None:\n return\n\n request = task.request\n tracectx = extract(request, getter=celery_getter) or None\n\n logger.debug(\"prerun signal start task_id=%s\", task_id)\n\n operation_name = f\"{_TASK_RUN}/{task.name}\"\n span = self._tracer.start_span(\n operation_name, context=tracectx, kind=trace.SpanKind.CONSUMER\n )\n\n activation = trace.use_span(span, end_on_exit=True)\n activation.__enter__() # pylint: disable=E1101\n utils.attach_span(task, task_id, (span, activation))\n\n @staticmethod\n def _trace_postrun(*args, **kwargs):\n task = utils.retrieve_task(kwargs)\n task_id = utils.retrieve_task_id(kwargs)\n\n if task is None or task_id is None:\n return\n\n logger.debug(\"postrun signal task_id=%s\", task_id)\n\n # retrieve and finish the Span\n span, 
activation = utils.retrieve_span(task, task_id)\n if span is None:\n logger.warning(\"no existing span found for task_id=%s\", task_id)\n return\n\n # request context tags\n if span.is_recording():\n span.set_attribute(_TASK_TAG_KEY, _TASK_RUN)\n utils.set_attributes_from_context(span, kwargs)\n utils.set_attributes_from_context(span, task.request)\n span.set_attribute(_TASK_NAME_KEY, task.name)\n\n activation.__exit__(None, None, None)\n utils.detach_span(task, task_id)\n\n def _trace_before_publish(self, *args, **kwargs):\n task = utils.retrieve_task_from_sender(kwargs)\n task_id = utils.retrieve_task_id_from_message(kwargs)\n\n if task_id is None:\n return\n\n if task is None:\n # task is an anonymous task send using send_task or using canvas workflow\n # Signatures() to send to a task not in the current processes dependency\n # tree\n task_name = kwargs.get(\"sender\", \"unknown\")\n else:\n task_name = task.name\n operation_name = f\"{_TASK_APPLY_ASYNC}/{task_name}\"\n span = self._tracer.start_span(\n operation_name, kind=trace.SpanKind.PRODUCER\n )\n\n # apply some attributes here because most of the data is not available\n if span.is_recording():\n span.set_attribute(_TASK_TAG_KEY, _TASK_APPLY_ASYNC)\n span.set_attribute(SpanAttributes.MESSAGING_MESSAGE_ID, task_id)\n span.set_attribute(_TASK_NAME_KEY, task_name)\n utils.set_attributes_from_context(span, kwargs)\n\n activation = trace.use_span(span, end_on_exit=True)\n activation.__enter__() # pylint: disable=E1101\n\n utils.attach_span(task, task_id, (span, activation), is_publish=True)\n\n headers = kwargs.get(\"headers\")\n if headers:\n inject(headers)\n\n @staticmethod\n def _trace_after_publish(*args, **kwargs):\n task = utils.retrieve_task_from_sender(kwargs)\n task_id = utils.retrieve_task_id_from_message(kwargs)\n\n if task is None or task_id is None:\n return\n\n # retrieve and finish the Span\n _, activation = utils.retrieve_span(task, task_id, is_publish=True)\n if activation is None:\n logger.warning(\"no existing span found for task_id=%s\", task_id)\n return\n\n activation.__exit__(None, None, None) # pylint: disable=E1101\n utils.detach_span(task, task_id, is_publish=True)\n\n @staticmethod\n def _trace_failure(*args, **kwargs):\n task = utils.retrieve_task_from_sender(kwargs)\n task_id = utils.retrieve_task_id(kwargs)\n\n if task is None or task_id is None:\n return\n\n # retrieve and pass exception info to activation\n span, _ = utils.retrieve_span(task, task_id)\n if span is None or not span.is_recording():\n return\n\n status_kwargs = {\"status_code\": StatusCode.ERROR}\n\n ex = kwargs.get(\"einfo\")\n\n if (\n hasattr(task, \"throws\")\n and ex is not None\n and isinstance(ex.exception, task.throws)\n ):\n return\n\n if ex is not None:\n status_kwargs[\"description\"] = str(ex)\n span.record_exception(ex)\n span.set_status(Status(**status_kwargs))\n\n @staticmethod\n def _trace_retry(*args, **kwargs):\n task = utils.retrieve_task_from_sender(kwargs)\n task_id = utils.retrieve_task_id_from_request(kwargs)\n reason = utils.retrieve_reason(kwargs)\n\n if task is None or task_id is None or reason is None:\n return\n\n span, _ = utils.retrieve_span(task, task_id)\n if span is None or not span.is_recording():\n return\n\n # Add retry reason metadata to span\n # Use `str(reason)` instead of `reason.message` in case we get\n # something that isn't an `Exception`\n span.set_attribute(_TASK_RETRY_REASON_KEY, str(reason))\n", "path": 
"instrumentation/opentelemetry-instrumentation-celery/src/opentelemetry/instrumentation/celery/__init__.py"}]} | 3,453 | 991 |
gh_patches_debug_7313 | rasdani/github-patches | git_diff | MycroftAI__mycroft-core-1195 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Calling the pause() function does not pause the playback
This [pause() function](https://github.com/MycroftAI/mycroft-core/blob/a9d627c979b98053719200c6084f3b31a9a97c62/mycroft/skills/audioservice.py#L83) does not have the desired effect of pausing the playback.
`self.emitter.emit(Message('mycroft.sudio.service.stop'))`
This stops the playback, but the Audioservice does not expose a stop function. I could make the commit to add the stop() function if agreed upon.
</issue>
<code>
[start of mycroft/skills/audioservice.py]
1 # Copyright 2017 Mycroft AI Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 #
15 import time
16
17 from os.path import abspath
18
19 from mycroft.messagebus.message import Message
20
21
22 def ensure_uri(s):
23 """
24 Interprete paths as file:// uri's
25
26 Args:
27 s: string to be checked
28
29 Returns:
30 if s is uri, s is returned otherwise file:// is prepended
31 """
32 if '://' not in s:
33 return 'file://' + abspath(s)
34 else:
35 return s
36
37
38 class AudioService():
39 """
40 AudioService object for interacting with the audio subsystem
41
42 Args:
43 emitter: eventemitter or websocket object
44 """
45
46 def __init__(self, emitter):
47 self.emitter = emitter
48 self.emitter.on('mycroft.audio.service.track_info_reply',
49 self._track_info)
50 self.info = None
51
52 def _track_info(self, message=None):
53 """
54 Handler for catching returning track info
55 """
56 self.info = message.data
57
58 def play(self, tracks=[], utterance=''):
59 """ Start playback.
60
61 Args:
62 tracks: track uri or list of track uri's
63 utterance: forward utterance for further processing by the
64 audio service.
65 """
66 if isinstance(tracks, basestring):
67 tracks = [tracks]
68 elif not isinstance(tracks, list):
69 raise ValueError
70 tracks = [ensure_uri(t) for t in tracks]
71 self.emitter.emit(Message('mycroft.audio.service.play',
72 data={'tracks': tracks,
73 'utterance': utterance}))
74
75 def next(self):
76 """ Change to next track. """
77 self.emitter.emit(Message('mycroft.audio.service.next'))
78
79 def prev(self):
80 """ Change to previous track. """
81 self.emitter.emit(Message('mycroft.audio.service.prev'))
82
83 def pause(self):
84 """ Pause playback. """
85 self.emitter.emit(Message('mycroft.audio.service.pause'))
86
87 def resume(self):
88 """ Resume paused playback. """
89 self.emitter.emit(Message('mycroft.audio.service.resume'))
90
91 def track_info(self):
92 """ Request information of current playing track.
93
94 Returns:
95 Dict with track info.
96 """
97 self.info = None
98 self.emitter.emit(Message('mycroft.audio.service.track_info'))
99 wait = 5.0
100 while self.info is None and wait >= 0:
101 time.sleep(0.1)
102 wait -= 0.1
103
104 return self.info or {}
105
106 @property
107 def is_playing(self):
108 return self.track_info() != {}
109
[end of mycroft/skills/audioservice.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mycroft/skills/audioservice.py b/mycroft/skills/audioservice.py
--- a/mycroft/skills/audioservice.py
+++ b/mycroft/skills/audioservice.py
@@ -72,6 +72,10 @@
data={'tracks': tracks,
'utterance': utterance}))
+ def stop(self):
+ """ Stop the track. """
+ self.emitter.emit(Message('mycroft.audio.service.stop'))
+
def next(self):
""" Change to next track. """
self.emitter.emit(Message('mycroft.audio.service.next'))
| {"golden_diff": "diff --git a/mycroft/skills/audioservice.py b/mycroft/skills/audioservice.py\n--- a/mycroft/skills/audioservice.py\n+++ b/mycroft/skills/audioservice.py\n@@ -72,6 +72,10 @@\n data={'tracks': tracks,\n 'utterance': utterance}))\n \n+ def stop(self):\n+ \"\"\" Stop the track. \"\"\"\n+ self.emitter.emit(Message('mycroft.audio.service.stop'))\n+\n def next(self):\n \"\"\" Change to next track. \"\"\"\n self.emitter.emit(Message('mycroft.audio.service.next'))\n", "issue": "Calling the pause() function does not pause the playback\nThis [pause() function](https://github.com/MycroftAI/mycroft-core/blob/a9d627c979b98053719200c6084f3b31a9a97c62/mycroft/skills/audioservice.py#L83) does not have the desired effect of pausing the playback. \r\n`self.emitter.emit(Message('mycroft.sudio.service.stop'))`\r\nThis stops the playback, but the Audioservice does not expose a stop function. I could make the commit to add the stop() function if agreed upon.\n", "before_files": [{"content": "# Copyright 2017 Mycroft AI Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nimport time\n\nfrom os.path import abspath\n\nfrom mycroft.messagebus.message import Message\n\n\ndef ensure_uri(s):\n \"\"\"\n Interprete paths as file:// uri's\n\n Args:\n s: string to be checked\n\n Returns:\n if s is uri, s is returned otherwise file:// is prepended\n \"\"\"\n if '://' not in s:\n return 'file://' + abspath(s)\n else:\n return s\n\n\nclass AudioService():\n \"\"\"\n AudioService object for interacting with the audio subsystem\n\n Args:\n emitter: eventemitter or websocket object\n \"\"\"\n\n def __init__(self, emitter):\n self.emitter = emitter\n self.emitter.on('mycroft.audio.service.track_info_reply',\n self._track_info)\n self.info = None\n\n def _track_info(self, message=None):\n \"\"\"\n Handler for catching returning track info\n \"\"\"\n self.info = message.data\n\n def play(self, tracks=[], utterance=''):\n \"\"\" Start playback.\n\n Args:\n tracks: track uri or list of track uri's\n utterance: forward utterance for further processing by the\n audio service.\n \"\"\"\n if isinstance(tracks, basestring):\n tracks = [tracks]\n elif not isinstance(tracks, list):\n raise ValueError\n tracks = [ensure_uri(t) for t in tracks]\n self.emitter.emit(Message('mycroft.audio.service.play',\n data={'tracks': tracks,\n 'utterance': utterance}))\n\n def next(self):\n \"\"\" Change to next track. \"\"\"\n self.emitter.emit(Message('mycroft.audio.service.next'))\n\n def prev(self):\n \"\"\" Change to previous track. \"\"\"\n self.emitter.emit(Message('mycroft.audio.service.prev'))\n\n def pause(self):\n \"\"\" Pause playback. \"\"\"\n self.emitter.emit(Message('mycroft.audio.service.pause'))\n\n def resume(self):\n \"\"\" Resume paused playback. 
\"\"\"\n self.emitter.emit(Message('mycroft.audio.service.resume'))\n\n def track_info(self):\n \"\"\" Request information of current playing track.\n\n Returns:\n Dict with track info.\n \"\"\"\n self.info = None\n self.emitter.emit(Message('mycroft.audio.service.track_info'))\n wait = 5.0\n while self.info is None and wait >= 0:\n time.sleep(0.1)\n wait -= 0.1\n\n return self.info or {}\n\n @property\n def is_playing(self):\n return self.track_info() != {}\n", "path": "mycroft/skills/audioservice.py"}]} | 1,582 | 132 |
gh_patches_debug_62283 | rasdani/github-patches | git_diff | keras-team__autokeras-1367 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Exception in multi label classification with StructuredDataClassifier
### Bug Description
Multi label classification throws the following error when using `StructuredDataClassifier`:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-23-e7c45e9f8b2f> in <module>
----> 1 clf.predict(X)
/opt/conda/lib/python3.7/site-packages/autokeras/tasks/structured_data.py in predict(self, x, batch_size, **kwargs)
156 x = self.read_for_predict(x)
157
--> 158 return super().predict(x=x, batch_size=batch_size, **kwargs)
159
160 def evaluate(self, x, y=None, batch_size=32, **kwargs):
/opt/conda/lib/python3.7/site-packages/autokeras/auto_model.py in predict(self, x, **kwargs)
421 self._check_data_format((x, None), predict=True)
422 dataset = self._adapt(x, self.inputs)
--> 423 pipeline = self.tuner.get_best_pipeline()
424 model = self.tuner.get_best_model()
425 dataset = pipeline.transform_x(dataset)
/opt/conda/lib/python3.7/site-packages/autokeras/engine/tuner.py in get_best_pipeline(self)
65
66 def get_best_pipeline(self):
---> 67 return pipeline_module.load_pipeline(self.best_pipeline_path)
68
69 def _pipeline_path(self, trial_id):
/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in load_pipeline(filepath, custom_objects)
73 custom_objects = {}
74 with tf.keras.utils.custom_object_scope(custom_objects):
---> 75 return Pipeline.from_config(utils.load_json(filepath))
76
77
/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in from_config(cls, config)
180 for preprocessor in preprocessors
181 ]
--> 182 for preprocessors in config["outputs"]
183 ],
184 )
/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in <listcomp>(.0)
180 for preprocessor in preprocessors
181 ]
--> 182 for preprocessors in config["outputs"]
183 ],
184 )
/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in <listcomp>(.0)
178 [
179 preprocessors_module.deserialize(preprocessor)
--> 180 for preprocessor in preprocessors
181 ]
182 for preprocessors in config["outputs"]
/opt/conda/lib/python3.7/site-packages/autokeras/preprocessors/__init__.py in deserialize(config, custom_objects)
33 module_objects=globals(),
34 custom_objects=custom_objects,
---> 35 printable_module_name="preprocessors",
36 )
/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
358 list(custom_objects.items())))
359 with CustomObjectScope(custom_objects):
--> 360 return cls.from_config(cls_config)
361 else:
362 # Then `cls` may be a function returning a class.
/opt/conda/lib/python3.7/site-packages/autokeras/engine/serializable.py in from_config(cls, config)
32 config: Dict. The config of the object.
33 """
---> 34 return cls(**config)
/opt/conda/lib/python3.7/site-packages/autokeras/preprocessors/encoders.py in __init__(self, **kwargs)
125
126 def __init__(self, **kwargs):
--> 127 super().__init__(labels=[], **kwargs)
128
129 def transform(self, dataset):
TypeError: __init__() got multiple values for keyword argument 'labels'
```
### Bug Reproduction
Code for reproducing the bug:
```
from sklearn.datasets import make_multilabel_classification
import autokeras as ak
X, y = make_multilabel_classification(n_samples=1000, n_features=20, n_classes=2, n_labels=5)
clf = ak.StructuredDataClassifier(
overwrite=True,
max_trials=3,
multi_label=True,
objective='val_loss',
loss='binary_crossentropy'
)
clf.fit(X, y, epochs=3)
clf.predict(X)
```
### Expected Behavior
This should not throw an error, but return the predictions of the multi label classification
### Setup Details
Include the details about the versions of:
- OS type and version: Ubuntu 18.04.4
- Python: 3.7.6
- autokeras: 1.0.9
- keras-tuner: 1.0.2rc2
- scikit-learn: 0.23.2
- numpy: 1.18.5
- pandas: 1.1.2
- tensorflow: 2.3.0
</issue>
<code>
[start of autokeras/preprocessors/encoders.py]
1 # Copyright 2020 The AutoKeras Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import numpy as np
16 import tensorflow as tf
17
18 from autokeras.engine import preprocessor
19
20
21 class Encoder(preprocessor.TargetPreprocessor):
22 """Transform labels to encodings.
23
24 # Arguments
25 labels: A list of labels of any type. The labels to be encoded.
26 """
27
28 def __init__(self, labels, **kwargs):
29 super().__init__(**kwargs)
30 self.labels = [
31 label.decode("utf-8") if isinstance(label, bytes) else str(label)
32 for label in labels
33 ]
34
35 def get_config(self):
36 return {"labels": self.labels}
37
38 def fit(self, dataset):
39 return
40
41 def transform(self, dataset):
42 """Transform labels to integer encodings.
43
44 # Arguments
45 dataset: tf.data.Dataset. The dataset to be transformed.
46
47 # Returns
48 tf.data.Dataset. The transformed dataset.
49 """
50 keys_tensor = tf.constant(self.labels)
51 vals_tensor = tf.constant(list(range(len(self.labels))))
52 table = tf.lookup.StaticHashTable(
53 tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1
54 )
55
56 return dataset.map(lambda x: table.lookup(tf.reshape(x, [-1])))
57
58
59 class OneHotEncoder(Encoder):
60 def transform(self, dataset):
61 """Transform labels to one-hot encodings.
62
63 # Arguments
64 dataset: tf.data.Dataset. The dataset to be transformed.
65
66 # Returns
67 tf.data.Dataset. The transformed dataset.
68 """
69 dataset = super().transform(dataset)
70 eye = tf.eye(len(self.labels))
71 dataset = dataset.map(lambda x: tf.nn.embedding_lookup(eye, x))
72 return dataset
73
74 def postprocess(self, data):
75 """Transform probabilities back to labels.
76
77 # Arguments
78 data: numpy.ndarray. The output probabilities of the classification head.
79
80 # Returns
81 numpy.ndarray. The original labels.
82 """
83 return np.array(
84 list(
85 map(
86 lambda x: self.labels[x],
87 np.argmax(np.array(data), axis=1),
88 )
89 )
90 ).reshape(-1, 1)
91
92
93 class LabelEncoder(Encoder):
94 """Transform the labels to integer encodings."""
95
96 def transform(self, dataset):
97 """Transform labels to integer encodings.
98
99 # Arguments
100 dataset: tf.data.Dataset. The dataset to be transformed.
101
102 # Returns
103 tf.data.Dataset. The transformed dataset.
104 """
105 dataset = super().transform(dataset)
106 dataset = dataset.map(lambda x: tf.expand_dims(x, axis=-1))
107 return dataset
108
109 def postprocess(self, data):
110 """Transform probabilities back to labels.
111
112 # Arguments
113 data: numpy.ndarray. The output probabilities of the classification head.
114
115 # Returns
116 numpy.ndarray. The original labels.
117 """
118 return np.array(
119 list(map(lambda x: self.labels[int(round(x[0]))], np.array(data)))
120 ).reshape(-1, 1)
121
122
123 class MultiLabelEncoder(Encoder):
124 """Encoder for multi-label data."""
125
126 def __init__(self, **kwargs):
127 super().__init__(labels=[], **kwargs)
128
129 def transform(self, dataset):
130 return dataset
131
132 def postprocess(self, data):
133 """Transform probabilities to zeros and ones.
134
135 # Arguments
136 data: numpy.ndarray. The output probabilities of the classification head.
137
138 # Returns
139 numpy.ndarray. The zeros and ones predictions.
140 """
141 data[data < 0.5] = 0
142 data[data > 0.5] = 1
143 return data
144
[end of autokeras/preprocessors/encoders.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/autokeras/preprocessors/encoders.py b/autokeras/preprocessors/encoders.py
--- a/autokeras/preprocessors/encoders.py
+++ b/autokeras/preprocessors/encoders.py
@@ -124,6 +124,7 @@
"""Encoder for multi-label data."""
def __init__(self, **kwargs):
+ kwargs.pop("labels", None)
super().__init__(labels=[], **kwargs)
def transform(self, dataset):
| {"golden_diff": "diff --git a/autokeras/preprocessors/encoders.py b/autokeras/preprocessors/encoders.py\n--- a/autokeras/preprocessors/encoders.py\n+++ b/autokeras/preprocessors/encoders.py\n@@ -124,6 +124,7 @@\n \"\"\"Encoder for multi-label data.\"\"\"\n \n def __init__(self, **kwargs):\n+ kwargs.pop(\"labels\", None)\n super().__init__(labels=[], **kwargs)\n \n def transform(self, dataset):\n", "issue": "Exception in multi label classification with StructuredDataClassifier\n### Bug Description\r\nMulti label classification throws the following error when using `StructuredDataClassifier`:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError Traceback (most recent call last)\r\n<ipython-input-23-e7c45e9f8b2f> in <module>\r\n----> 1 clf.predict(X)\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/tasks/structured_data.py in predict(self, x, batch_size, **kwargs)\r\n 156 x = self.read_for_predict(x)\r\n 157 \r\n--> 158 return super().predict(x=x, batch_size=batch_size, **kwargs)\r\n 159 \r\n 160 def evaluate(self, x, y=None, batch_size=32, **kwargs):\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/auto_model.py in predict(self, x, **kwargs)\r\n 421 self._check_data_format((x, None), predict=True)\r\n 422 dataset = self._adapt(x, self.inputs)\r\n--> 423 pipeline = self.tuner.get_best_pipeline()\r\n 424 model = self.tuner.get_best_model()\r\n 425 dataset = pipeline.transform_x(dataset)\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/engine/tuner.py in get_best_pipeline(self)\r\n 65 \r\n 66 def get_best_pipeline(self):\r\n---> 67 return pipeline_module.load_pipeline(self.best_pipeline_path)\r\n 68 \r\n 69 def _pipeline_path(self, trial_id):\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in load_pipeline(filepath, custom_objects)\r\n 73 custom_objects = {}\r\n 74 with tf.keras.utils.custom_object_scope(custom_objects):\r\n---> 75 return Pipeline.from_config(utils.load_json(filepath))\r\n 76 \r\n 77 \r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in from_config(cls, config)\r\n 180 for preprocessor in preprocessors\r\n 181 ]\r\n--> 182 for preprocessors in config[\"outputs\"]\r\n 183 ],\r\n 184 )\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in <listcomp>(.0)\r\n 180 for preprocessor in preprocessors\r\n 181 ]\r\n--> 182 for preprocessors in config[\"outputs\"]\r\n 183 ],\r\n 184 )\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/pipeline.py in <listcomp>(.0)\r\n 178 [\r\n 179 preprocessors_module.deserialize(preprocessor)\r\n--> 180 for preprocessor in preprocessors\r\n 181 ]\r\n 182 for preprocessors in config[\"outputs\"]\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/preprocessors/__init__.py in deserialize(config, custom_objects)\r\n 33 module_objects=globals(),\r\n 34 custom_objects=custom_objects,\r\n---> 35 printable_module_name=\"preprocessors\",\r\n 36 )\r\n\r\n/opt/conda/lib/python3.7/site-packages/tensorflow/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)\r\n 358 list(custom_objects.items())))\r\n 359 with CustomObjectScope(custom_objects):\r\n--> 360 return cls.from_config(cls_config)\r\n 361 else:\r\n 362 # Then `cls` may be a function returning a class.\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/engine/serializable.py in from_config(cls, config)\r\n 32 config: Dict. 
The config of the object.\r\n 33 \"\"\"\r\n---> 34 return cls(**config)\r\n\r\n/opt/conda/lib/python3.7/site-packages/autokeras/preprocessors/encoders.py in __init__(self, **kwargs)\r\n 125 \r\n 126 def __init__(self, **kwargs):\r\n--> 127 super().__init__(labels=[], **kwargs)\r\n 128 \r\n 129 def transform(self, dataset):\r\n\r\nTypeError: __init__() got multiple values for keyword argument 'labels'\r\n```\r\n\r\n### Bug Reproduction\r\nCode for reproducing the bug:\r\n\r\n```\r\nfrom sklearn.datasets import make_multilabel_classification\r\nimport autokeras as ak\r\n\r\nX, y = make_multilabel_classification(n_samples=1000, n_features=20, n_classes=2, n_labels=5)\r\nclf = ak.StructuredDataClassifier(\r\n overwrite=True,\r\n max_trials=3,\r\n multi_label=True,\r\n objective='val_loss',\r\n loss='binary_crossentropy'\r\n)\r\nclf.fit(X, y, epochs=3)\r\nclf.predict(X)\r\n```\r\n\r\n### Expected Behavior\r\nThis should not throw an error, but return the predictions of the multi label classification\r\n\r\n### Setup Details\r\nInclude the details about the versions of:\r\n - OS type and version: Ubuntu 18.04.4\r\n - Python: 3.7.6\r\n - autokeras: 1.0.9\r\n - keras-tuner: 1.0.2rc2\r\n - scikit-learn: 0.23.2\r\n - numpy: 1.18.5\r\n - pandas: 1.1.2\r\n - tensorflow: 2.3.0\r\n\n", "before_files": [{"content": "# Copyright 2020 The AutoKeras Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom autokeras.engine import preprocessor\n\n\nclass Encoder(preprocessor.TargetPreprocessor):\n \"\"\"Transform labels to encodings.\n\n # Arguments\n labels: A list of labels of any type. The labels to be encoded.\n \"\"\"\n\n def __init__(self, labels, **kwargs):\n super().__init__(**kwargs)\n self.labels = [\n label.decode(\"utf-8\") if isinstance(label, bytes) else str(label)\n for label in labels\n ]\n\n def get_config(self):\n return {\"labels\": self.labels}\n\n def fit(self, dataset):\n return\n\n def transform(self, dataset):\n \"\"\"Transform labels to integer encodings.\n\n # Arguments\n dataset: tf.data.Dataset. The dataset to be transformed.\n\n # Returns\n tf.data.Dataset. The transformed dataset.\n \"\"\"\n keys_tensor = tf.constant(self.labels)\n vals_tensor = tf.constant(list(range(len(self.labels))))\n table = tf.lookup.StaticHashTable(\n tf.lookup.KeyValueTensorInitializer(keys_tensor, vals_tensor), -1\n )\n\n return dataset.map(lambda x: table.lookup(tf.reshape(x, [-1])))\n\n\nclass OneHotEncoder(Encoder):\n def transform(self, dataset):\n \"\"\"Transform labels to one-hot encodings.\n\n # Arguments\n dataset: tf.data.Dataset. The dataset to be transformed.\n\n # Returns\n tf.data.Dataset. The transformed dataset.\n \"\"\"\n dataset = super().transform(dataset)\n eye = tf.eye(len(self.labels))\n dataset = dataset.map(lambda x: tf.nn.embedding_lookup(eye, x))\n return dataset\n\n def postprocess(self, data):\n \"\"\"Transform probabilities back to labels.\n\n # Arguments\n data: numpy.ndarray. 
The output probabilities of the classification head.\n\n # Returns\n numpy.ndarray. The original labels.\n \"\"\"\n return np.array(\n list(\n map(\n lambda x: self.labels[x],\n np.argmax(np.array(data), axis=1),\n )\n )\n ).reshape(-1, 1)\n\n\nclass LabelEncoder(Encoder):\n \"\"\"Transform the labels to integer encodings.\"\"\"\n\n def transform(self, dataset):\n \"\"\"Transform labels to integer encodings.\n\n # Arguments\n dataset: tf.data.Dataset. The dataset to be transformed.\n\n # Returns\n tf.data.Dataset. The transformed dataset.\n \"\"\"\n dataset = super().transform(dataset)\n dataset = dataset.map(lambda x: tf.expand_dims(x, axis=-1))\n return dataset\n\n def postprocess(self, data):\n \"\"\"Transform probabilities back to labels.\n\n # Arguments\n data: numpy.ndarray. The output probabilities of the classification head.\n\n # Returns\n numpy.ndarray. The original labels.\n \"\"\"\n return np.array(\n list(map(lambda x: self.labels[int(round(x[0]))], np.array(data)))\n ).reshape(-1, 1)\n\n\nclass MultiLabelEncoder(Encoder):\n \"\"\"Encoder for multi-label data.\"\"\"\n\n def __init__(self, **kwargs):\n super().__init__(labels=[], **kwargs)\n\n def transform(self, dataset):\n return dataset\n\n def postprocess(self, data):\n \"\"\"Transform probabilities to zeros and ones.\n\n # Arguments\n data: numpy.ndarray. The output probabilities of the classification head.\n\n # Returns\n numpy.ndarray. The zeros and ones predictions.\n \"\"\"\n data[data < 0.5] = 0\n data[data > 0.5] = 1\n return data\n", "path": "autokeras/preprocessors/encoders.py"}]} | 2,995 | 112 |
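The root cause in the AutoKeras record above is plain Python behaviour: the inherited `get_config()` serializes a `labels` key, so `from_config` calls `cls(**config)` with `labels` already inside `**kwargs` while `__init__` also passes `labels=[]` explicitly. The standalone sketch below reproduces that failure and the one-line `kwargs.pop` fix from the golden diff; the class names are simplified stand-ins, not the real autokeras classes.

```python
# Minimal stand-ins for the autokeras classes above: passing an explicit
# keyword that is also present in **kwargs raises TypeError at call time.
class Encoder:
    def __init__(self, labels, **kwargs):
        self.labels = labels

    def get_config(self):
        return {"labels": self.labels}


class BrokenMultiLabelEncoder(Encoder):
    def __init__(self, **kwargs):
        # A deserialized config already carries "labels" -> duplicate keyword.
        super().__init__(labels=[], **kwargs)


class FixedMultiLabelEncoder(Encoder):
    def __init__(self, **kwargs):
        kwargs.pop("labels", None)  # drop the serialized value, as in the patch
        super().__init__(labels=[], **kwargs)


config = {"labels": []}  # what get_config() produces for a MultiLabelEncoder

try:
    BrokenMultiLabelEncoder(**config)
except TypeError as exc:
    print("broken:", exc)  # ...got multiple values for keyword argument 'labels'

FixedMultiLabelEncoder(**config)  # the patched constructor deserializes cleanly
```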
gh_patches_debug_14746 | rasdani/github-patches | git_diff | GoogleCloudPlatform__PerfKitBenchmarker-563 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Project id not respected when deleting disk in Cloudstack
I forgot to pass project_id as an agrugment when deleting volume.
</issue>
<code>
[start of perfkitbenchmarker/cloudstack/cloudstack_disk.py]
1 # Copyright 2014 Google Inc. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Module containing classes related to CloudStack disks.
15
16 Disks can be created, deleted, attached to VMs, and detached from VMs.
17 """
18
19 from perfkitbenchmarker import disk
20 from perfkitbenchmarker import vm_util
21 from perfkitbenchmarker import flags
22 from perfkitbenchmarker.cloudstack import util
23
24 import string
25 import logging
26
27 FLAGS = flags.FLAGS
28
29
30 class CloudStackDisk(disk.BaseDisk):
31 """Object representing a Cloudstack Disk."""
32
33
34 def __init__(self, disk_spec, name, zone_id, project_id=None):
35 super(CloudStackDisk, self).__init__(disk_spec)
36
37 self.cs = util.CsClient(
38 FLAGS.CS_API_URL,
39 FLAGS.CS_API_KEY,
40 FLAGS.CS_API_SECRET
41 )
42
43 self.attached_vm_name = None
44 self.attached_vm_id = None
45 self.name = name
46
47 self.zone_id = zone_id
48 self.project_id = project_id
49
50 self.disk_offering_id = self._GetBestOfferingId(self.disk_size)
51 assert self.disk_offering_id, "Unable get disk offering of given size"
52
53 if self.disk_spec.disk_type:
54 logging.warn("Cloudstack does not support disk types")
55
56
57 @vm_util.Retry(max_retries=3)
58 def _Create(self):
59 """Creates the disk."""
60
61
62 volume = self.cs.create_volume(self.name,
63 self.disk_offering_id,
64 self.zone_id,
65 self.project_id)
66
67 assert volume, "Unable to create volume"
68
69 self.volume_id = volume['id']
70 self.disk_type = volume['type']
71 self.actual_disk_size = int(volume['size']) / (2 ** 30) # In GB
72
73
74 def _Delete(self):
75 """Deletes the disk."""
76 vol = self.cs.get_volume(self.name)
77 if vol:
78 self.cs.delete_volume(self.volume_id)
79
80
81 def _Exists(self):
82 """Returns true if the disk exists."""
83 vol = self.cs.get_volume(self.name, self.project_id)
84 if vol:
85 return True
86 return False
87
88
89 @vm_util.Retry(max_retries=3)
90 def Attach(self, vm):
91 """Attaches the disk to a VM.
92
93 Args:
94 vm: The CloudStackVirtualMachine instance to which
95 the disk will be attached.
96
97 """
98
99 res = self.cs.attach_volume(self.volume_id, vm.id)
100 assert res, "Unable to attach volume"
101
102 self.device_id = res['deviceid']
103
104 self.device_path = "/dev/xvd" + \
105 str(string.ascii_lowercase[self.device_id])
106
107
108
109 def Detach(self):
110 """Detaches the disk from a VM."""
111
112 self.cs.detach_volume(self.volume_id)
113
114
115 def _GetBestOfferingId(self, disk_size):
116 """ Given a disk_size (in GB), try to find a disk
117 offering that is atleast as big as the requested
118 one.
119 """
120
121 disk_offerings = self.cs.list_disk_offerings()
122 sorted_do = sorted(disk_offerings, key=lambda x: x['disksize'])
123
124 for do in sorted_do:
125 if int(do['disksize']) >= disk_size:
126 return do['id']
127
128 return None
129
[end of perfkitbenchmarker/cloudstack/cloudstack_disk.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/perfkitbenchmarker/cloudstack/cloudstack_disk.py b/perfkitbenchmarker/cloudstack/cloudstack_disk.py
--- a/perfkitbenchmarker/cloudstack/cloudstack_disk.py
+++ b/perfkitbenchmarker/cloudstack/cloudstack_disk.py
@@ -50,7 +50,7 @@
self.disk_offering_id = self._GetBestOfferingId(self.disk_size)
assert self.disk_offering_id, "Unable get disk offering of given size"
- if self.disk_spec.disk_type:
+ if disk_spec.disk_type:
logging.warn("Cloudstack does not support disk types")
@@ -73,7 +73,7 @@
def _Delete(self):
"""Deletes the disk."""
- vol = self.cs.get_volume(self.name)
+ vol = self.cs.get_volume(self.name, self.project_id)
if vol:
self.cs.delete_volume(self.volume_id)
| {"golden_diff": "diff --git a/perfkitbenchmarker/cloudstack/cloudstack_disk.py b/perfkitbenchmarker/cloudstack/cloudstack_disk.py\n--- a/perfkitbenchmarker/cloudstack/cloudstack_disk.py\n+++ b/perfkitbenchmarker/cloudstack/cloudstack_disk.py\n@@ -50,7 +50,7 @@\n self.disk_offering_id = self._GetBestOfferingId(self.disk_size)\n assert self.disk_offering_id, \"Unable get disk offering of given size\"\n \n- if self.disk_spec.disk_type:\n+ if disk_spec.disk_type:\n logging.warn(\"Cloudstack does not support disk types\")\n \n \n@@ -73,7 +73,7 @@\n \n def _Delete(self):\n \"\"\"Deletes the disk.\"\"\"\n- vol = self.cs.get_volume(self.name)\n+ vol = self.cs.get_volume(self.name, self.project_id)\n if vol:\n self.cs.delete_volume(self.volume_id)\n", "issue": "Project id not respected when deleting disk in Cloudstack\nI forgot to pass project_id as an agrugment when deleting volume. \n\n", "before_files": [{"content": "# Copyright 2014 Google Inc. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Module containing classes related to CloudStack disks.\n\nDisks can be created, deleted, attached to VMs, and detached from VMs.\n\"\"\"\n\nfrom perfkitbenchmarker import disk\nfrom perfkitbenchmarker import vm_util\nfrom perfkitbenchmarker import flags\nfrom perfkitbenchmarker.cloudstack import util\n\nimport string\nimport logging\n\nFLAGS = flags.FLAGS\n\n\nclass CloudStackDisk(disk.BaseDisk):\n \"\"\"Object representing a Cloudstack Disk.\"\"\"\n\n\n def __init__(self, disk_spec, name, zone_id, project_id=None):\n super(CloudStackDisk, self).__init__(disk_spec)\n\n self.cs = util.CsClient(\n FLAGS.CS_API_URL,\n FLAGS.CS_API_KEY,\n FLAGS.CS_API_SECRET\n )\n\n self.attached_vm_name = None\n self.attached_vm_id = None\n self.name = name\n\n self.zone_id = zone_id\n self.project_id = project_id\n\n self.disk_offering_id = self._GetBestOfferingId(self.disk_size)\n assert self.disk_offering_id, \"Unable get disk offering of given size\"\n\n if self.disk_spec.disk_type:\n logging.warn(\"Cloudstack does not support disk types\")\n\n\n @vm_util.Retry(max_retries=3)\n def _Create(self):\n \"\"\"Creates the disk.\"\"\"\n\n\n volume = self.cs.create_volume(self.name,\n self.disk_offering_id,\n self.zone_id,\n self.project_id)\n\n assert volume, \"Unable to create volume\"\n\n self.volume_id = volume['id']\n self.disk_type = volume['type']\n self.actual_disk_size = int(volume['size']) / (2 ** 30) # In GB\n\n\n def _Delete(self):\n \"\"\"Deletes the disk.\"\"\"\n vol = self.cs.get_volume(self.name)\n if vol:\n self.cs.delete_volume(self.volume_id)\n\n\n def _Exists(self):\n \"\"\"Returns true if the disk exists.\"\"\"\n vol = self.cs.get_volume(self.name, self.project_id)\n if vol:\n return True\n return False\n\n\n @vm_util.Retry(max_retries=3)\n def Attach(self, vm):\n \"\"\"Attaches the disk to a VM.\n\n Args:\n vm: The CloudStackVirtualMachine instance to which\n the disk will be attached.\n\n \"\"\"\n\n res = self.cs.attach_volume(self.volume_id, vm.id)\n assert res, \"Unable to 
attach volume\"\n\n self.device_id = res['deviceid']\n\n self.device_path = \"/dev/xvd\" + \\\n str(string.ascii_lowercase[self.device_id])\n\n\n\n def Detach(self):\n \"\"\"Detaches the disk from a VM.\"\"\"\n\n self.cs.detach_volume(self.volume_id)\n\n\n def _GetBestOfferingId(self, disk_size):\n \"\"\" Given a disk_size (in GB), try to find a disk\n offering that is atleast as big as the requested\n one.\n \"\"\"\n\n disk_offerings = self.cs.list_disk_offerings()\n sorted_do = sorted(disk_offerings, key=lambda x: x['disksize'])\n\n for do in sorted_do:\n if int(do['disksize']) >= disk_size:\n return do['id']\n\n return None\n", "path": "perfkitbenchmarker/cloudstack/cloudstack_disk.py"}]} | 1,696 | 198 |
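The substance of the PerfKitBenchmarker fix above is that `_Delete` must scope its lookup to the project the volume was created under, otherwise `get_volume` finds nothing and the volume is never deleted. The toy sketch below illustrates that behaviour with a fake client; `FakeCsClient`, the volume name and the IDs are invented for illustration and do not reflect the real `util.CsClient`.

```python
# Toy model of a project-scoped volume lookup; names and IDs are invented.
class FakeCsClient:
    def __init__(self):
        # (volume name, project id) -> volume record
        self._volumes = {("pkb-scratch-vol", "proj-42"): {"id": "vol-1"}}
        self.deleted = []

    def get_volume(self, name, project_id=None):
        return self._volumes.get((name, project_id))

    def delete_volume(self, volume_id):
        self.deleted.append(volume_id)


cs = FakeCsClient()

# Old _Delete behaviour: unscoped lookup misses the project volume, no delete.
if cs.get_volume("pkb-scratch-vol"):
    cs.delete_volume("vol-1")
print(cs.deleted)  # []

# Patched behaviour: lookup scoped to the project finds it, delete proceeds.
if cs.get_volume("pkb-scratch-vol", "proj-42"):
    cs.delete_volume("vol-1")
print(cs.deleted)  # ['vol-1']
```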
gh_patches_debug_25894 | rasdani/github-patches | git_diff | sopel-irc__sopel-2545 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken plugin symlink -> `PluginError` in CLI
### Description
The `sopel-plugins` command should gracefully handle things that look like plugins but can't be loaded. It does this in some cases (like Python file plugins with syntax errors), but not all. There's room for improvement.
For example, I recently symlinked a file in `~/.sopel/plugins` so it would point to my local repository for that plugin (not yet converted to an entry-point or package) for testing. I did the `ln -s` command wrong, so the link was broken: `pluginfile.py -> ./pluginfile.py`, which is a circular reference.
### Logs
```
Traceback (most recent call last):
File "/home/dgw/.local/bin/sopel-plugins", line 33, in <module>
sys.exit(load_entry_point('sopel', 'console_scripts', 'sopel-plugins')())
File "/home/dgw/github/sopel/sopel/cli/plugins.py", line 556, in main
return handle_list(options)
File "/home/dgw/github/sopel/sopel/cli/plugins.py", line 174, in handle_list
for name, info in plugins.get_usable_plugins(settings).items()
File "/home/dgw/github/sopel/sopel/plugins/__init__.py", line 225, in get_usable_plugins
plugins_info = collections.OrderedDict(
File "/home/dgw/github/sopel/sopel/plugins/__init__.py", line 225, in <genexpr>
plugins_info = collections.OrderedDict(
File "/home/dgw/github/sopel/sopel/plugins/__init__.py", line 185, in enumerate_plugins
for plugin in all_plugins:
File "/home/dgw/github/sopel/sopel/plugins/__init__.py", line 124, in find_directory_plugins
yield handlers.PyFilePlugin(abspath)
File "/home/dgw/github/sopel/sopel/plugins/handlers.py", line 440, in __init__
raise exceptions.PluginError('Invalid Sopel plugin: %s' % filename)
sopel.plugins.exceptions.PluginError: Invalid Sopel plugin: /home/dgw/.sopel/plugins/bombbot.py
```
### Environment
- Sopel `.version`: 8b463a0e49cac7a27dc528a180243a6a5183d1f9
- Sopel installed via: `pip -e`
- Python version: 3.9.5
- Operating system: Ubuntu 20.04
- IRCd `/version`: n/a
- Relevant plugins: n/a
</issue>
<code>
[start of sopel/plugins/__init__.py]
1 """Sopel's plugins interface.
2
3 .. versionadded:: 7.0
4
5 Sopel uses what are called Plugin Handlers as an interface between the bot and
6 its plugins (formerly called "modules"). This interface is defined by the
7 :class:`~.handlers.AbstractPluginHandler` abstract class.
8
9 Plugins that can be used by Sopel are provided by :func:`~.get_usable_plugins`
10 in an :class:`ordered dict<collections.OrderedDict>`. This dict contains one
11 and only one plugin per unique name, using a specific order:
12
13 * extra directories defined in the settings
14 * homedir's ``plugins`` directory
15 * ``sopel.plugins`` entry point group
16 * ``sopel_modules``'s subpackages
17 * ``sopel.builtins``'s core plugins
18
19 (The ``coretasks`` plugin is *always* the one from ``sopel.coretasks`` and
20 cannot be overridden.)
21
22 To find all plugins (no matter their sources), the :func:`~.enumerate_plugins`
23 function can be used. For a more fine-grained search, ``find_*`` functions
24 exist for each type of plugin.
25 """
26 # Copyright 2019, Florian Strzelecki <[email protected]>
27 #
28 # Licensed under the Eiffel Forum License 2.
29 from __future__ import annotations
30
31 import collections
32 import importlib
33 import itertools
34 import os
35
36 # TODO: use stdlib importlib.metadata when possible, after dropping py3.9.
37 # Stdlib does not support `entry_points(group='filter')` until py3.10, but
38 # fallback logic is more trouble than it's worth when e.g. clean Ubuntu
39 # py3.10 envs include old versions of this backport.
40 import importlib_metadata
41
42 from . import exceptions, handlers, rules # noqa
43
44
45 def _list_plugin_filenames(directory):
46 # list plugin filenames from a directory
47 # yield 2-value tuples: (name, absolute path)
48 base = os.path.abspath(directory)
49 for filename in os.listdir(base):
50 abspath = os.path.join(base, filename)
51
52 if os.path.isdir(abspath):
53 if os.path.isfile(os.path.join(abspath, '__init__.py')):
54 yield os.path.basename(filename), abspath
55 else:
56 name, ext = os.path.splitext(filename)
57 if ext == '.py' and name != '__init__':
58 yield name, abspath
59
60
61 def find_internal_plugins():
62 """List internal plugins.
63
64 :return: yield instances of :class:`~.handlers.PyModulePlugin`
65 configured for ``sopel.builtins.*``
66
67 Internal plugins can be found under ``sopel.builtins``. This list does not
68 include the ``coretasks`` plugin.
69 """
70 builtins = importlib.util.find_spec('sopel.builtins')
71 if builtins is None or builtins.submodule_search_locations is None:
72 raise RuntimeError('Cannot resolve internal plugins')
73 plugin_list = itertools.chain.from_iterable(
74 _list_plugin_filenames(path)
75 for path in builtins.submodule_search_locations
76 )
77
78 for name, _ in set(plugin_list):
79 yield handlers.PyModulePlugin(name, 'sopel.builtins')
80
81
82 def find_sopel_modules_plugins():
83 """List plugins from ``sopel_modules.*``.
84
85 :return: yield instances of :class:`~.handlers.PyModulePlugin`
86 configured for ``sopel_modules.*``
87
88 Before entry point plugins, the only way to package a plugin was to follow
89 :pep:`382` by using the ``sopel_modules`` namespace. This function is
90 responsible to load such plugins.
91 """
92 try:
93 import sopel_modules # type: ignore[import]
94 except ImportError:
95 return
96
97 for plugin_dir in set(sopel_modules.__path__):
98 for name, _ in _list_plugin_filenames(plugin_dir):
99 yield handlers.PyModulePlugin(name, 'sopel_modules')
100
101
102 def find_entry_point_plugins(group='sopel.plugins'):
103 """List plugins from an entry point group.
104
105 :param str group: entry point group to search in (defaults to
106 ``sopel.plugins``)
107 :return: yield instances of :class:`~.handlers.EntryPointPlugin`
108 created from each entry point in the ``group``
109
110 This function finds plugins declared under an entry point group; by
111 default it looks in the ``sopel.plugins`` group.
112 """
113 for entry_point in importlib_metadata.entry_points(group=group):
114 yield handlers.EntryPointPlugin(entry_point)
115
116
117 def find_directory_plugins(directory):
118 """List plugins from a ``directory``.
119
120 :param str directory: directory path to search
121 :return: yield instances of :class:`~.handlers.PyFilePlugin`
122 found in ``directory``
123
124 This function looks for single file and folder plugins in a directory.
125 """
126 for _, abspath in _list_plugin_filenames(directory):
127 yield handlers.PyFilePlugin(abspath)
128
129
130 def enumerate_plugins(settings):
131 """Yield Sopel's plugins.
132
133 :param settings: Sopel's configuration
134 :type settings: :class:`sopel.config.Config`
135 :return: yield 2-value tuple: an instance of
136 :class:`~.handlers.AbstractPluginHandler`, and if the plugin is
137 active or not
138
139 This function uses the find functions to find all of Sopel's available
140 plugins. It uses the bot's ``settings`` to determine if the plugin is
141 enabled or disabled.
142
143 .. seealso::
144
145 The find functions used are:
146
147 * :func:`find_internal_plugins` for internal plugins
148 * :func:`find_sopel_modules_plugins` for ``sopel_modules.*`` plugins
149 * :func:`find_entry_point_plugins` for plugins exposed via packages'
150 entry points
151 * :func:`find_directory_plugins` for plugins in ``$homedir/plugins``,
152 and in extra directories as defined by ``settings.core.extra``
153
154 .. versionchanged:: 8.0
155
156 Looks in ``$homedir/plugins`` instead of the ``$homedir/modules``
157 directory, reflecting Sopel's shift away from calling them "modules".
158
159 """
160 from_internals = find_internal_plugins()
161 from_sopel_modules = find_sopel_modules_plugins()
162 from_entry_points = find_entry_point_plugins()
163 # load from directories
164 source_dirs = [
165 os.path.join(settings.homedir, 'plugins'),
166 ]
167 if settings.core.extra:
168 source_dirs = source_dirs + settings.core.extra
169
170 from_directories = [
171 find_directory_plugins(source_dir)
172 for source_dir in source_dirs
173 if os.path.isdir(source_dir)
174 ]
175
176 # Retrieve all plugins
177 all_plugins = itertools.chain(
178 from_internals,
179 from_sopel_modules,
180 from_entry_points,
181 *from_directories)
182
183 # Get plugin settings
184 enabled = settings.core.enable
185 disabled = settings.core.exclude
186
187 # Yield all found plugins with their enabled status (True/False)
188 for plugin in all_plugins:
189 name = plugin.name
190 is_enabled = name not in disabled and (not enabled or name in enabled)
191 yield plugin, is_enabled
192
193 # And always yield coretasks
194 yield handlers.PyModulePlugin('coretasks', 'sopel'), True
195
196
197 def get_usable_plugins(settings):
198 """Get usable plugins, unique per name.
199
200 :param settings: Sopel's configuration
201 :type settings: :class:`sopel.config.Config`
202 :return: an ordered dict of usable plugins
203 :rtype: collections.OrderedDict
204
205 This function provides the plugins Sopel can use to load, enable,
206 or disable, as an :class:`ordered dict<collections.OrderedDict>`. This dict
207 contains one and only one plugin per unique name, using a specific order:
208
209 * extra directories defined in the settings
210 * homedir's ``plugins`` directory
211 * ``sopel.plugins`` entry point group
212 * ``sopel_modules``'s subpackages
213 * ``sopel.builtins``'s core plugins
214
215 (The ``coretasks`` plugin is *always* the one from ``sopel.coretasks`` and
216 cannot be overridden.)
217
218 .. seealso::
219
220 The :func:`~.enumerate_plugins` function is used to generate a list
221 of all possible plugins, and its return value is used to populate
222 the :class:`ordered dict<collections.OrderedDict>`.
223
224 """
225 # Use an OrderedDict to get one and only one plugin per name
226 # based on what plugins.enumerate_plugins does, external plugins are
227 # allowed to override internal plugins
228 plugins_info = collections.OrderedDict(
229 (plugin.name, (plugin, is_enabled))
230 for plugin, is_enabled in enumerate_plugins(settings))
231 # reset coretasks's position at the end of the loading queue
232 plugins_info.move_to_end('coretasks')
233
234 return plugins_info
235
[end of sopel/plugins/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sopel/plugins/__init__.py b/sopel/plugins/__init__.py
--- a/sopel/plugins/__init__.py
+++ b/sopel/plugins/__init__.py
@@ -31,7 +31,9 @@
import collections
import importlib
import itertools
+import logging
import os
+from typing import TYPE_CHECKING, Union
# TODO: use stdlib importlib.metadata when possible, after dropping py3.9.
# Stdlib does not support `entry_points(group='filter')` until py3.10, but
@@ -41,13 +43,22 @@
from . import exceptions, handlers, rules # noqa
+if TYPE_CHECKING:
+ from collections.abc import Iterable
-def _list_plugin_filenames(directory):
+
+LOGGER = logging.getLogger(__name__)
+
+
+def _list_plugin_filenames(directory: Union[str, os.PathLike]) -> Iterable[tuple[str, str]]:
# list plugin filenames from a directory
# yield 2-value tuples: (name, absolute path)
base = os.path.abspath(directory)
for filename in os.listdir(base):
- abspath = os.path.join(base, filename)
+ abspath = os.path.realpath(os.path.join(base, filename))
+ if not os.path.exists(abspath):
+ LOGGER.warning("Plugin path does not exist, skipping: %r", abspath)
+ continue
if os.path.isdir(abspath):
if os.path.isfile(os.path.join(abspath, '__init__.py')):
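The guard introduced by the patch above can be exercised in isolation: resolve each directory entry with `realpath()` and skip anything whose target no longer exists, which is exactly what catches the circular `pluginfile.py -> ./pluginfile.py` link from the issue. The sketch below is a trimmed, standalone version of that logic (single-file branch only, POSIX assumed for `os.symlink`), not the Sopel function itself.

```python
# Standalone demo of the broken-symlink guard; simplified to the
# single-file-plugin branch and assuming a POSIX filesystem.
import logging
import os
import tempfile

logging.basicConfig(level=logging.WARNING)
LOGGER = logging.getLogger("plugins-sketch")


def list_plugin_filenames(directory):
    base = os.path.abspath(directory)
    for filename in os.listdir(base):
        abspath = os.path.realpath(os.path.join(base, filename))
        if not os.path.exists(abspath):
            LOGGER.warning("Plugin path does not exist, skipping: %r", abspath)
            continue
        name, ext = os.path.splitext(filename)
        if ext == ".py" and name != "__init__":
            yield name, abspath


with tempfile.TemporaryDirectory() as plugins_dir:
    with open(os.path.join(plugins_dir, "good.py"), "w") as handle:
        handle.write("# a plugin\n")
    # Recreate the circular link from the issue: broken.py -> ./broken.py
    os.symlink("./broken.py", os.path.join(plugins_dir, "broken.py"))

    print(list(list_plugin_filenames(plugins_dir)))  # only ('good', ...) remains
```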
| {"golden_diff": "diff --git a/sopel/plugins/__init__.py b/sopel/plugins/__init__.py\n--- a/sopel/plugins/__init__.py\n+++ b/sopel/plugins/__init__.py\n@@ -31,7 +31,9 @@\n import collections\n import importlib\n import itertools\n+import logging\n import os\n+from typing import TYPE_CHECKING, Union\n \n # TODO: use stdlib importlib.metadata when possible, after dropping py3.9.\n # Stdlib does not support `entry_points(group='filter')` until py3.10, but\n@@ -41,13 +43,22 @@\n \n from . import exceptions, handlers, rules # noqa\n \n+if TYPE_CHECKING:\n+ from collections.abc import Iterable\n \n-def _list_plugin_filenames(directory):\n+\n+LOGGER = logging.getLogger(__name__)\n+\n+\n+def _list_plugin_filenames(directory: Union[str, os.PathLike]) -> Iterable[tuple[str, str]]:\n # list plugin filenames from a directory\n # yield 2-value tuples: (name, absolute path)\n base = os.path.abspath(directory)\n for filename in os.listdir(base):\n- abspath = os.path.join(base, filename)\n+ abspath = os.path.realpath(os.path.join(base, filename))\n+ if not os.path.exists(abspath):\n+ LOGGER.warning(\"Plugin path does not exist, skipping: %r\", abspath)\n+ continue\n \n if os.path.isdir(abspath):\n if os.path.isfile(os.path.join(abspath, '__init__.py')):\n", "issue": "Broken plugin symlink -> `PluginError` in CLI\n### Description\r\nThe `sopel-plugins` command should gracefully handle things that look like plugins but can't be loaded. It does this in some cases (like Python file plugins with syntax errors), but not all. There's room for improvement.\r\n\r\nFor example, I recently symlinked a file in `~/.sopel/plugins` so it would point to my local repository for that plugin (not yet converted to an entry-point or package) for testing. I did the `ln -s` command wrong, so the link was broken: `pluginfile.py -> ./pluginfile.py`, which is a circular reference.\r\n\r\n### Logs\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/dgw/.local/bin/sopel-plugins\", line 33, in <module>\r\n sys.exit(load_entry_point('sopel', 'console_scripts', 'sopel-plugins')())\r\n File \"/home/dgw/github/sopel/sopel/cli/plugins.py\", line 556, in main\r\n return handle_list(options)\r\n File \"/home/dgw/github/sopel/sopel/cli/plugins.py\", line 174, in handle_list\r\n for name, info in plugins.get_usable_plugins(settings).items()\r\n File \"/home/dgw/github/sopel/sopel/plugins/__init__.py\", line 225, in get_usable_plugins\r\n plugins_info = collections.OrderedDict(\r\n File \"/home/dgw/github/sopel/sopel/plugins/__init__.py\", line 225, in <genexpr>\r\n plugins_info = collections.OrderedDict(\r\n File \"/home/dgw/github/sopel/sopel/plugins/__init__.py\", line 185, in enumerate_plugins\r\n for plugin in all_plugins:\r\n File \"/home/dgw/github/sopel/sopel/plugins/__init__.py\", line 124, in find_directory_plugins\r\n yield handlers.PyFilePlugin(abspath)\r\n File \"/home/dgw/github/sopel/sopel/plugins/handlers.py\", line 440, in __init__\r\n raise exceptions.PluginError('Invalid Sopel plugin: %s' % filename)\r\nsopel.plugins.exceptions.PluginError: Invalid Sopel plugin: /home/dgw/.sopel/plugins/bombbot.py\r\n```\r\n\r\n### Environment\r\n- Sopel `.version`: 8b463a0e49cac7a27dc528a180243a6a5183d1f9 \r\n- Sopel installed via: `pip -e`\r\n- Python version: 3.9.5\r\n- Operating system: Ubuntu 20.04\r\n- IRCd `/version`: n/a\r\n- Relevant plugins: n/a\n", "before_files": [{"content": "\"\"\"Sopel's plugins interface.\n\n.. 
versionadded:: 7.0\n\nSopel uses what are called Plugin Handlers as an interface between the bot and\nits plugins (formerly called \"modules\"). This interface is defined by the\n:class:`~.handlers.AbstractPluginHandler` abstract class.\n\nPlugins that can be used by Sopel are provided by :func:`~.get_usable_plugins`\nin an :class:`ordered dict<collections.OrderedDict>`. This dict contains one\nand only one plugin per unique name, using a specific order:\n\n* extra directories defined in the settings\n* homedir's ``plugins`` directory\n* ``sopel.plugins`` entry point group\n* ``sopel_modules``'s subpackages\n* ``sopel.builtins``'s core plugins\n\n(The ``coretasks`` plugin is *always* the one from ``sopel.coretasks`` and\ncannot be overridden.)\n\nTo find all plugins (no matter their sources), the :func:`~.enumerate_plugins`\nfunction can be used. For a more fine-grained search, ``find_*`` functions\nexist for each type of plugin.\n\"\"\"\n# Copyright 2019, Florian Strzelecki <[email protected]>\n#\n# Licensed under the Eiffel Forum License 2.\nfrom __future__ import annotations\n\nimport collections\nimport importlib\nimport itertools\nimport os\n\n# TODO: use stdlib importlib.metadata when possible, after dropping py3.9.\n# Stdlib does not support `entry_points(group='filter')` until py3.10, but\n# fallback logic is more trouble than it's worth when e.g. clean Ubuntu\n# py3.10 envs include old versions of this backport.\nimport importlib_metadata\n\nfrom . import exceptions, handlers, rules # noqa\n\n\ndef _list_plugin_filenames(directory):\n # list plugin filenames from a directory\n # yield 2-value tuples: (name, absolute path)\n base = os.path.abspath(directory)\n for filename in os.listdir(base):\n abspath = os.path.join(base, filename)\n\n if os.path.isdir(abspath):\n if os.path.isfile(os.path.join(abspath, '__init__.py')):\n yield os.path.basename(filename), abspath\n else:\n name, ext = os.path.splitext(filename)\n if ext == '.py' and name != '__init__':\n yield name, abspath\n\n\ndef find_internal_plugins():\n \"\"\"List internal plugins.\n\n :return: yield instances of :class:`~.handlers.PyModulePlugin`\n configured for ``sopel.builtins.*``\n\n Internal plugins can be found under ``sopel.builtins``. This list does not\n include the ``coretasks`` plugin.\n \"\"\"\n builtins = importlib.util.find_spec('sopel.builtins')\n if builtins is None or builtins.submodule_search_locations is None:\n raise RuntimeError('Cannot resolve internal plugins')\n plugin_list = itertools.chain.from_iterable(\n _list_plugin_filenames(path)\n for path in builtins.submodule_search_locations\n )\n\n for name, _ in set(plugin_list):\n yield handlers.PyModulePlugin(name, 'sopel.builtins')\n\n\ndef find_sopel_modules_plugins():\n \"\"\"List plugins from ``sopel_modules.*``.\n\n :return: yield instances of :class:`~.handlers.PyModulePlugin`\n configured for ``sopel_modules.*``\n\n Before entry point plugins, the only way to package a plugin was to follow\n :pep:`382` by using the ``sopel_modules`` namespace. 
This function is\n responsible to load such plugins.\n \"\"\"\n try:\n import sopel_modules # type: ignore[import]\n except ImportError:\n return\n\n for plugin_dir in set(sopel_modules.__path__):\n for name, _ in _list_plugin_filenames(plugin_dir):\n yield handlers.PyModulePlugin(name, 'sopel_modules')\n\n\ndef find_entry_point_plugins(group='sopel.plugins'):\n \"\"\"List plugins from an entry point group.\n\n :param str group: entry point group to search in (defaults to\n ``sopel.plugins``)\n :return: yield instances of :class:`~.handlers.EntryPointPlugin`\n created from each entry point in the ``group``\n\n This function finds plugins declared under an entry point group; by\n default it looks in the ``sopel.plugins`` group.\n \"\"\"\n for entry_point in importlib_metadata.entry_points(group=group):\n yield handlers.EntryPointPlugin(entry_point)\n\n\ndef find_directory_plugins(directory):\n \"\"\"List plugins from a ``directory``.\n\n :param str directory: directory path to search\n :return: yield instances of :class:`~.handlers.PyFilePlugin`\n found in ``directory``\n\n This function looks for single file and folder plugins in a directory.\n \"\"\"\n for _, abspath in _list_plugin_filenames(directory):\n yield handlers.PyFilePlugin(abspath)\n\n\ndef enumerate_plugins(settings):\n \"\"\"Yield Sopel's plugins.\n\n :param settings: Sopel's configuration\n :type settings: :class:`sopel.config.Config`\n :return: yield 2-value tuple: an instance of\n :class:`~.handlers.AbstractPluginHandler`, and if the plugin is\n active or not\n\n This function uses the find functions to find all of Sopel's available\n plugins. It uses the bot's ``settings`` to determine if the plugin is\n enabled or disabled.\n\n .. seealso::\n\n The find functions used are:\n\n * :func:`find_internal_plugins` for internal plugins\n * :func:`find_sopel_modules_plugins` for ``sopel_modules.*`` plugins\n * :func:`find_entry_point_plugins` for plugins exposed via packages'\n entry points\n * :func:`find_directory_plugins` for plugins in ``$homedir/plugins``,\n and in extra directories as defined by ``settings.core.extra``\n\n .. 
versionchanged:: 8.0\n\n Looks in ``$homedir/plugins`` instead of the ``$homedir/modules``\n directory, reflecting Sopel's shift away from calling them \"modules\".\n\n \"\"\"\n from_internals = find_internal_plugins()\n from_sopel_modules = find_sopel_modules_plugins()\n from_entry_points = find_entry_point_plugins()\n # load from directories\n source_dirs = [\n os.path.join(settings.homedir, 'plugins'),\n ]\n if settings.core.extra:\n source_dirs = source_dirs + settings.core.extra\n\n from_directories = [\n find_directory_plugins(source_dir)\n for source_dir in source_dirs\n if os.path.isdir(source_dir)\n ]\n\n # Retrieve all plugins\n all_plugins = itertools.chain(\n from_internals,\n from_sopel_modules,\n from_entry_points,\n *from_directories)\n\n # Get plugin settings\n enabled = settings.core.enable\n disabled = settings.core.exclude\n\n # Yield all found plugins with their enabled status (True/False)\n for plugin in all_plugins:\n name = plugin.name\n is_enabled = name not in disabled and (not enabled or name in enabled)\n yield plugin, is_enabled\n\n # And always yield coretasks\n yield handlers.PyModulePlugin('coretasks', 'sopel'), True\n\n\ndef get_usable_plugins(settings):\n \"\"\"Get usable plugins, unique per name.\n\n :param settings: Sopel's configuration\n :type settings: :class:`sopel.config.Config`\n :return: an ordered dict of usable plugins\n :rtype: collections.OrderedDict\n\n This function provides the plugins Sopel can use to load, enable,\n or disable, as an :class:`ordered dict<collections.OrderedDict>`. This dict\n contains one and only one plugin per unique name, using a specific order:\n\n * extra directories defined in the settings\n * homedir's ``plugins`` directory\n * ``sopel.plugins`` entry point group\n * ``sopel_modules``'s subpackages\n * ``sopel.builtins``'s core plugins\n\n (The ``coretasks`` plugin is *always* the one from ``sopel.coretasks`` and\n cannot be overridden.)\n\n .. seealso::\n\n The :func:`~.enumerate_plugins` function is used to generate a list\n of all possible plugins, and its return value is used to populate\n the :class:`ordered dict<collections.OrderedDict>`.\n\n \"\"\"\n # Use an OrderedDict to get one and only one plugin per name\n # based on what plugins.enumerate_plugins does, external plugins are\n # allowed to override internal plugins\n plugins_info = collections.OrderedDict(\n (plugin.name, (plugin, is_enabled))\n for plugin, is_enabled in enumerate_plugins(settings))\n # reset coretasks's position at the end of the loading queue\n plugins_info.move_to_end('coretasks')\n\n return plugins_info\n", "path": "sopel/plugins/__init__.py"}]} | 3,712 | 331 |
gh_patches_debug_39411 | rasdani/github-patches | git_diff | wemake-services__wemake-python-styleguide-924 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Forbid `Optional[Union[A, B]]`, use `Union[A, B, None]` instead
# Rule request
<!-- Thanks for coming up with a proposal! -->
## Thesis
<!-- What exactly do you propose? Please, provide code examples. -->
These two types are equal: `Optional[Union[A, B]] == Union[A, B, None]`
For consistency reasons we need to enforce the second form over the first one.
Related to #792
</issue>
<code>
[start of wemake_python_styleguide/violations/annotations.py]
1 # -*- coding: utf-8 -*-
2
3 """
4 These checks help to prevent incorrect usage of Python 3 typing annotations.
5
6 While they maybe of a great help in writing clear
7 and concise code, they still can be abused.
8
9 Once again, these rules are highly subjective. But, we love them.
10
11 .. currentmodule:: wemake_python_styleguide.violations.annotations
12
13 Summary
14 -------
15
16 .. autosummary::
17 :nosignatures:
18
19 LiteralNoneViolation
20 NestedAnnotationsViolation
21
22 Annotation checks
23 ------------------
24
25 .. autoclass:: LiteralNoneViolation
26 .. autoclass:: NestedAnnotationsViolation
27
28 """
29
30 from typing_extensions import final
31
32 from wemake_python_styleguide.violations.base import ASTViolation
33
34
35 @final
36 class LiteralNoneViolation(ASTViolation):
37 """
38 Forbids to use ``Literal[None]`` typing annotation.
39
40 Reasoning:
41 Literal[None] is just the same as None.
42 There's no need to use the first version.
43 It is not type related, it is a consistency rule.
44
45 Solution:
46 Replace ``Literal[None]`` with ``None``.
47
48 Example::
49
50 # Correct:
51 def func(empty: None):
52 '''Empty function.'''
53
54 # Wrong:
55 def func(empty: Literal[None]):
56 '''Empty function.'''
57
58 .. versionadded:: 0.13.0
59
60 """
61
62 code = 701
63 error_template = 'Found useless `Literal[None]` typing annotation'
64
65
66 @final
67 class NestedAnnotationsViolation(ASTViolation):
68 """
69 Forbids use of nested Literal and Union Annotation.
70
71 Reasoning:
72 There is no need to nest certain annotations of the same type.
73 They are exactly equivalent to the flattened version.
74 Use the flattened version for consistency.
75
76 Solution:
77 Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.
78
79 Example::
80 # Correct:
81 Literal[1, 2, 3, "foo", 5, None]
82 Union[int, str, float]
83
84 # Wrong:
85 Literal[Literal[Literal[1, 2, 3], "foo"], 5, None]
86 Union[Union[int, str], float]
87
88 .. versionadded:: 0.13.0
89
90 """
91
92 error_template = 'Found redundant nested typing annotation'
93 code = 702
94
[end of wemake_python_styleguide/violations/annotations.py]
[start of wemake_python_styleguide/visitors/ast/annotations.py]
1 # -*- coding: utf-8 -*-
2
3 import ast
4 from typing import ClassVar, FrozenSet, Optional
5
6 from typing_extensions import final
7
8 from wemake_python_styleguide.logic import source
9 from wemake_python_styleguide.types import AnyFunctionDef
10 from wemake_python_styleguide.violations.annotations import (
11 LiteralNoneViolation,
12 NestedAnnotationsViolation,
13 )
14 from wemake_python_styleguide.violations.consistency import (
15 MultilineFunctionAnnotationViolation,
16 )
17 from wemake_python_styleguide.visitors.base import BaseNodeVisitor
18 from wemake_python_styleguide.visitors.decorators import alias
19
20
21 @alias('visit_any_function', (
22 'visit_FunctionDef',
23 'visit_AsyncFunctionDef',
24 ))
25 class _GenericAnnotationVisitor(BaseNodeVisitor):
26 """Base class for all annotations visitors."""
27
28 _possible_prefixes: ClassVar[FrozenSet[str]] = frozenset((
29 'typing.',
30 'typing_extensions.',
31 ))
32
33 @final
34 def visit_AnnAssign(self, node: ast.AnnAssign) -> None:
35 """
36 Checks for assigned value annotation.
37
38 Raises:
39 LiteralNoneAnnotation
40
41 """
42 self._check_annotation(node.annotation)
43 self.generic_visit(node)
44
45 @final
46 def visit_any_function(self, node: AnyFunctionDef) -> None:
47 """
48 Checks return type annotations.
49
50 Raises:
51 LiteralNoneAnnotation
52
53 """
54 self._check_annotation(node.returns)
55 self.generic_visit(node)
56
57 @final
58 def visit_arg(self, node: ast.arg) -> None:
59 """
60 Checks arguments annotations.
61
62 Raises:
63 LiteralNoneAnnotation
64
65 """
66 self._check_annotation(node.annotation)
67 self.generic_visit(node)
68
69 @final
70 def _get_annotation(self, node: ast.AST) -> str:
71 """Smartly turns annotation node to string."""
72 full_annotation = source.node_to_string(node)
73 for prefix in self._possible_prefixes:
74 full_annotation = full_annotation.replace(prefix, '')
75 return full_annotation
76
77 def _check_annotation(self, annotation: Optional[ast.expr]) -> None:
78 """The only method that need to me implemented in child visitors."""
79
80
81 @final
82 class SemanticAnnotationVisitor(_GenericAnnotationVisitor):
83 """Ensures that nested annotations are used correctly."""
84
85 _flat_types: ClassVar[FrozenSet[str]] = frozenset((
86 'Literal',
87 'Union',
88 'Annotated',
89 ))
90
91 def _check_annotation(self, annotation: Optional[ast.expr]) -> None:
92 if not annotation:
93 return
94
95 self._check_nested_annotations(annotation)
96 self._check_literal_none(annotation)
97
98 def _check_nested_annotations(self, annotation: ast.expr) -> None:
99 annotation_string = self._get_annotation(annotation)
100 for flat_type in self._flat_types:
101 if annotation_string.count(flat_type) > 1:
102 self.add_violation(NestedAnnotationsViolation(annotation))
103
104 def _check_literal_none(self, annotation: ast.expr) -> None:
105 annotation_string = self._get_annotation(annotation)
106 if 'Literal[None]' in annotation_string:
107 self.add_violation(LiteralNoneViolation(annotation))
108
109
110 @final
111 @alias('visit_any_function', (
112 'visit_FunctionDef',
113 'visit_AsyncFunctionDef',
114 ))
115 class WrongAnnotationVisitor(BaseNodeVisitor):
116 """Ensures that annotations are used correctly."""
117
118 def visit_any_function(self, node: AnyFunctionDef) -> None:
119 """
120 Checks return type annotations.
121
122 Raises:
123 MultilineFunctionAnnotationViolation
124 LiteralNoneAnnotation
125
126 """
127 self._check_return_annotation(node)
128 self.generic_visit(node)
129
130 def visit_arg(self, node: ast.arg) -> None:
131 """
132 Checks arguments annotations.
133
134 Raises:
135 MultilineFunctionAnnotationViolation
136 LiteralNoneAnnotation
137
138 """
139 self._check_arg_annotation(node)
140 self.generic_visit(node)
141
142 def _check_arg_annotation(self, node: ast.arg) -> None:
143 for sub_node in ast.walk(node):
144 lineno = getattr(sub_node, 'lineno', None)
145 if lineno and lineno != node.lineno:
146 self.add_violation(MultilineFunctionAnnotationViolation(node))
147 return
148
149 def _check_return_annotation(self, node: AnyFunctionDef) -> None:
150 if not node.returns:
151 return
152
153 for sub_node in ast.walk(node.returns):
154 lineno = getattr(sub_node, 'lineno', None)
155 if lineno and lineno != node.returns.lineno:
156 self.add_violation(MultilineFunctionAnnotationViolation(node))
157 return
158
[end of wemake_python_styleguide/visitors/ast/annotations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wemake_python_styleguide/violations/annotations.py b/wemake_python_styleguide/violations/annotations.py
--- a/wemake_python_styleguide/violations/annotations.py
+++ b/wemake_python_styleguide/violations/annotations.py
@@ -18,12 +18,14 @@
LiteralNoneViolation
NestedAnnotationsViolation
+ UnionNestedInOptionalViolation
Annotation checks
------------------
.. autoclass:: LiteralNoneViolation
.. autoclass:: NestedAnnotationsViolation
+.. autoclass:: UnionNestedInOptionalViolation
"""
@@ -77,6 +79,7 @@
Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.
Example::
+
# Correct:
Literal[1, 2, 3, "foo", 5, None]
Union[int, str, float]
@@ -91,3 +94,31 @@
error_template = 'Found redundant nested typing annotation'
code = 702
+
+
+@final
+class UnionNestedInOptionalViolation(ASTViolation):
+ """
+ Forbids to use ``Optional[Union[int, str]]`` annotation.
+
+ Reasoning:
+ Optional[Union[int, str]] equals to Union[int, str, None].
+ Use Union[int, str, None] version for consistency.
+
+ Solution:
+ Replace ``Optional[Union[int, str]]`` with ``Union[int, str, None]``.
+
+ Example::
+
+ # Correct:
+ Union[int, str, None]
+
+ # Wrong:
+ Optional[Union[int, str]]
+
+ .. versionadded:: 0.13.0
+
+ """
+
+ error_template = 'Found typing annotation with `Union` nested in `Optional`'
+ code = 704
diff --git a/wemake_python_styleguide/visitors/ast/annotations.py b/wemake_python_styleguide/visitors/ast/annotations.py
--- a/wemake_python_styleguide/visitors/ast/annotations.py
+++ b/wemake_python_styleguide/visitors/ast/annotations.py
@@ -10,6 +10,7 @@
from wemake_python_styleguide.violations.annotations import (
LiteralNoneViolation,
NestedAnnotationsViolation,
+ UnionNestedInOptionalViolation,
)
from wemake_python_styleguide.violations.consistency import (
MultilineFunctionAnnotationViolation,
@@ -94,6 +95,7 @@
self._check_nested_annotations(annotation)
self._check_literal_none(annotation)
+ self._check_union_nested_in_optional(annotation)
def _check_nested_annotations(self, annotation: ast.expr) -> None:
annotation_string = self._get_annotation(annotation)
@@ -106,6 +108,11 @@
if 'Literal[None]' in annotation_string:
self.add_violation(LiteralNoneViolation(annotation))
+ def _check_union_nested_in_optional(self, annotation: ast.expr) -> None:
+ annotation_string = self._get_annotation(annotation)
+ if 'Optional[Union[' in annotation_string:
+ self.add_violation(UnionNestedInOptionalViolation(annotation))
+
@final
@alias('visit_any_function', (
| {"golden_diff": "diff --git a/wemake_python_styleguide/violations/annotations.py b/wemake_python_styleguide/violations/annotations.py\n--- a/wemake_python_styleguide/violations/annotations.py\n+++ b/wemake_python_styleguide/violations/annotations.py\n@@ -18,12 +18,14 @@\n \n LiteralNoneViolation\n NestedAnnotationsViolation\n+ UnionNestedInOptionalViolation\n \n Annotation checks\n ------------------\n \n .. autoclass:: LiteralNoneViolation\n .. autoclass:: NestedAnnotationsViolation\n+.. autoclass:: UnionNestedInOptionalViolation\n \n \"\"\"\n \n@@ -77,6 +79,7 @@\n Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.\n \n Example::\n+\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n Union[int, str, float]\n@@ -91,3 +94,31 @@\n \n error_template = 'Found redundant nested typing annotation'\n code = 702\n+\n+\n+@final\n+class UnionNestedInOptionalViolation(ASTViolation):\n+ \"\"\"\n+ Forbids to use ``Optional[Union[int, str]]`` annotation.\n+\n+ Reasoning:\n+ Optional[Union[int, str]] equals to Union[int, str, None].\n+ Use Union[int, str, None] version for consistency.\n+\n+ Solution:\n+ Replace ``Optional[Union[int, str]]`` with ``Union[int, str, None]``.\n+\n+ Example::\n+\n+ # Correct:\n+ Union[int, str, None]\n+\n+ # Wrong:\n+ Optional[Union[int, str]]\n+\n+ .. versionadded:: 0.13.0\n+\n+ \"\"\"\n+\n+ error_template = 'Found typing annotation with `Union` nested in `Optional`'\n+ code = 704\ndiff --git a/wemake_python_styleguide/visitors/ast/annotations.py b/wemake_python_styleguide/visitors/ast/annotations.py\n--- a/wemake_python_styleguide/visitors/ast/annotations.py\n+++ b/wemake_python_styleguide/visitors/ast/annotations.py\n@@ -10,6 +10,7 @@\n from wemake_python_styleguide.violations.annotations import (\n LiteralNoneViolation,\n NestedAnnotationsViolation,\n+ UnionNestedInOptionalViolation,\n )\n from wemake_python_styleguide.violations.consistency import (\n MultilineFunctionAnnotationViolation,\n@@ -94,6 +95,7 @@\n \n self._check_nested_annotations(annotation)\n self._check_literal_none(annotation)\n+ self._check_union_nested_in_optional(annotation)\n \n def _check_nested_annotations(self, annotation: ast.expr) -> None:\n annotation_string = self._get_annotation(annotation)\n@@ -106,6 +108,11 @@\n if 'Literal[None]' in annotation_string:\n self.add_violation(LiteralNoneViolation(annotation))\n \n+ def _check_union_nested_in_optional(self, annotation: ast.expr) -> None:\n+ annotation_string = self._get_annotation(annotation)\n+ if 'Optional[Union[' in annotation_string:\n+ self.add_violation(UnionNestedInOptionalViolation(annotation))\n+\n \n @final\n @alias('visit_any_function', (\n", "issue": "Forbid `Optional[Union[A, B]]`, use `Union[A, B, None]` instead\n# Rule request\r\n\r\n<!-- Thanks for coming up with a proposal! -->\r\n\r\n## Thesis\r\n\r\n<!-- What exactly do you propose? Please, provide code examples. -->\r\n\r\nThese two types are equal: `Optional[Union[A, B]] == Union[A, B, None]`\r\n\r\nFor consistency reasons we need to enforce the second form over the first one.\r\nRelated to #792 \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"\nThese checks help to prevent incorrect usage of Python 3 typing annotations.\n\nWhile they maybe of a great help in writing clear\nand concise code, they still can be abused.\n\nOnce again, these rules are highly subjective. But, we love them.\n\n.. currentmodule:: wemake_python_styleguide.violations.annotations\n\nSummary\n-------\n\n.. 
autosummary::\n :nosignatures:\n\n LiteralNoneViolation\n NestedAnnotationsViolation\n\nAnnotation checks\n------------------\n\n.. autoclass:: LiteralNoneViolation\n.. autoclass:: NestedAnnotationsViolation\n\n\"\"\"\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.violations.base import ASTViolation\n\n\n@final\nclass LiteralNoneViolation(ASTViolation):\n \"\"\"\n Forbids to use ``Literal[None]`` typing annotation.\n\n Reasoning:\n Literal[None] is just the same as None.\n There's no need to use the first version.\n It is not type related, it is a consistency rule.\n\n Solution:\n Replace ``Literal[None]`` with ``None``.\n\n Example::\n\n # Correct:\n def func(empty: None):\n '''Empty function.'''\n\n # Wrong:\n def func(empty: Literal[None]):\n '''Empty function.'''\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n code = 701\n error_template = 'Found useless `Literal[None]` typing annotation'\n\n\n@final\nclass NestedAnnotationsViolation(ASTViolation):\n \"\"\"\n Forbids use of nested Literal and Union Annotation.\n\n Reasoning:\n There is no need to nest certain annotations of the same type.\n They are exactly equivalent to the flattened version.\n Use the flattened version for consistency.\n\n Solution:\n Flatten consecutively nested ``typing.Literal`` and ``typing.Union``.\n\n Example::\n # Correct:\n Literal[1, 2, 3, \"foo\", 5, None]\n Union[int, str, float]\n\n # Wrong:\n Literal[Literal[Literal[1, 2, 3], \"foo\"], 5, None]\n Union[Union[int, str], float]\n\n .. versionadded:: 0.13.0\n\n \"\"\"\n\n error_template = 'Found redundant nested typing annotation'\n code = 702\n", "path": "wemake_python_styleguide/violations/annotations.py"}, {"content": "# -*- coding: utf-8 -*-\n\nimport ast\nfrom typing import ClassVar, FrozenSet, Optional\n\nfrom typing_extensions import final\n\nfrom wemake_python_styleguide.logic import source\nfrom wemake_python_styleguide.types import AnyFunctionDef\nfrom wemake_python_styleguide.violations.annotations import (\n LiteralNoneViolation,\n NestedAnnotationsViolation,\n)\nfrom wemake_python_styleguide.violations.consistency import (\n MultilineFunctionAnnotationViolation,\n)\nfrom wemake_python_styleguide.visitors.base import BaseNodeVisitor\nfrom wemake_python_styleguide.visitors.decorators import alias\n\n\n@alias('visit_any_function', (\n 'visit_FunctionDef',\n 'visit_AsyncFunctionDef',\n))\nclass _GenericAnnotationVisitor(BaseNodeVisitor):\n \"\"\"Base class for all annotations visitors.\"\"\"\n\n _possible_prefixes: ClassVar[FrozenSet[str]] = frozenset((\n 'typing.',\n 'typing_extensions.',\n ))\n\n @final\n def visit_AnnAssign(self, node: ast.AnnAssign) -> None:\n \"\"\"\n Checks for assigned value annotation.\n\n Raises:\n LiteralNoneAnnotation\n\n \"\"\"\n self._check_annotation(node.annotation)\n self.generic_visit(node)\n\n @final\n def visit_any_function(self, node: AnyFunctionDef) -> None:\n \"\"\"\n Checks return type annotations.\n\n Raises:\n LiteralNoneAnnotation\n\n \"\"\"\n self._check_annotation(node.returns)\n self.generic_visit(node)\n\n @final\n def visit_arg(self, node: ast.arg) -> None:\n \"\"\"\n Checks arguments annotations.\n\n Raises:\n LiteralNoneAnnotation\n\n \"\"\"\n self._check_annotation(node.annotation)\n self.generic_visit(node)\n\n @final\n def _get_annotation(self, node: ast.AST) -> str:\n \"\"\"Smartly turns annotation node to string.\"\"\"\n full_annotation = source.node_to_string(node)\n for prefix in self._possible_prefixes:\n full_annotation = full_annotation.replace(prefix, '')\n return 
full_annotation\n\n def _check_annotation(self, annotation: Optional[ast.expr]) -> None:\n \"\"\"The only method that need to me implemented in child visitors.\"\"\"\n\n\n@final\nclass SemanticAnnotationVisitor(_GenericAnnotationVisitor):\n \"\"\"Ensures that nested annotations are used correctly.\"\"\"\n\n _flat_types: ClassVar[FrozenSet[str]] = frozenset((\n 'Literal',\n 'Union',\n 'Annotated',\n ))\n\n def _check_annotation(self, annotation: Optional[ast.expr]) -> None:\n if not annotation:\n return\n\n self._check_nested_annotations(annotation)\n self._check_literal_none(annotation)\n\n def _check_nested_annotations(self, annotation: ast.expr) -> None:\n annotation_string = self._get_annotation(annotation)\n for flat_type in self._flat_types:\n if annotation_string.count(flat_type) > 1:\n self.add_violation(NestedAnnotationsViolation(annotation))\n\n def _check_literal_none(self, annotation: ast.expr) -> None:\n annotation_string = self._get_annotation(annotation)\n if 'Literal[None]' in annotation_string:\n self.add_violation(LiteralNoneViolation(annotation))\n\n\n@final\n@alias('visit_any_function', (\n 'visit_FunctionDef',\n 'visit_AsyncFunctionDef',\n))\nclass WrongAnnotationVisitor(BaseNodeVisitor):\n \"\"\"Ensures that annotations are used correctly.\"\"\"\n\n def visit_any_function(self, node: AnyFunctionDef) -> None:\n \"\"\"\n Checks return type annotations.\n\n Raises:\n MultilineFunctionAnnotationViolation\n LiteralNoneAnnotation\n\n \"\"\"\n self._check_return_annotation(node)\n self.generic_visit(node)\n\n def visit_arg(self, node: ast.arg) -> None:\n \"\"\"\n Checks arguments annotations.\n\n Raises:\n MultilineFunctionAnnotationViolation\n LiteralNoneAnnotation\n\n \"\"\"\n self._check_arg_annotation(node)\n self.generic_visit(node)\n\n def _check_arg_annotation(self, node: ast.arg) -> None:\n for sub_node in ast.walk(node):\n lineno = getattr(sub_node, 'lineno', None)\n if lineno and lineno != node.lineno:\n self.add_violation(MultilineFunctionAnnotationViolation(node))\n return\n\n def _check_return_annotation(self, node: AnyFunctionDef) -> None:\n if not node.returns:\n return\n\n for sub_node in ast.walk(node.returns):\n lineno = getattr(sub_node, 'lineno', None)\n if lineno and lineno != node.returns.lineno:\n self.add_violation(MultilineFunctionAnnotationViolation(node))\n return\n", "path": "wemake_python_styleguide/visitors/ast/annotations.py"}]} | 2,732 | 723 |
gh_patches_debug_9889 | rasdani/github-patches | git_diff | pulp__pulpcore-299 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[PR #5443/c7079c89 backport][3.54] Pin the pytest-redis requirement for unit tests
**This is a backport of PR #5443 as merged into main (c7079c891b87ecd9073260db5baa35826803f635).**
[noissue]
</issue>
<code>
[start of pulpcore/app/serializers/content.py]
1 import hashlib
2 from gettext import gettext as _
3
4 from django.db import transaction
5 from rest_framework import serializers
6 from rest_framework.validators import UniqueValidator
7
8 from pulpcore.app import models
9 from pulpcore.app.serializers import base, fields
10
11 UNIQUE_ALGORITHMS = ['sha256', 'sha384', 'sha512']
12
13
14 class BaseContentSerializer(base.MasterModelSerializer):
15 _href = base.DetailIdentityField()
16
17 class Meta:
18 model = models.Content
19 fields = base.MasterModelSerializer.Meta.fields
20
21
22 class NoArtifactContentSerializer(BaseContentSerializer):
23
24 class Meta:
25 model = models.Content
26 fields = BaseContentSerializer.Meta.fields
27
28
29 class SingleArtifactContentSerializer(BaseContentSerializer):
30 artifact = fields.SingleContentArtifactField(
31 help_text=_("Artifact file representing the physical content"),
32 )
33
34 relative_path = serializers.CharField(
35 help_text=_("Path where the artifact is located relative to distributions base_path"),
36 validators=[fields.relative_path_validator],
37 write_only=True,
38 )
39
40 def __init__(self, *args, **kwargs):
41 """
42 Initializer for SingleArtifactContentSerializer
43 """
44 super().__init__(*args, **kwargs)
45
46 # If the content model has its own database field 'relative_path',
47 # we should not mark the field write_only
48 if hasattr(self.Meta.model, 'relative_path') and "relative_path" in self.fields:
49 self.fields["relative_path"].write_only = False
50
51 @transaction.atomic
52 def create(self, validated_data):
53 """
54 Create the content and associate it with its Artifact.
55
56 Args:
57 validated_data (dict): Data to save to the database
58 """
59 artifact = validated_data.pop('artifact')
60 if "relative_path" in self.fields and self.fields["relative_path"].write_only:
61 relative_path = validated_data.pop('relative_path')
62 else:
63 relative_path = validated_data.get('relative_path')
64 content = self.Meta.model.objects.create(**validated_data)
65 models.ContentArtifact.objects.create(
66 artifact=artifact,
67 content=content,
68 relative_path=relative_path,
69 )
70 return content
71
72 class Meta:
73 model = models.Content
74 fields = BaseContentSerializer.Meta.fields + ('artifact', 'relative_path')
75
76
77 class MultipleArtifactContentSerializer(BaseContentSerializer):
78 artifacts = fields.ContentArtifactsField(
79 help_text=_("A dict mapping relative paths inside the Content to the corresponding"
80 "Artifact URLs. E.g.: {'relative/path': "
81 "'/artifacts/1/'"),
82 )
83
84 @transaction.atomic
85 def create(self, validated_data):
86 """
87 Create the content and associate it with all its Artifacts.
88
89 Args:
90 validated_data (dict): Data to save to the database
91 """
92 artifacts = validated_data.pop('artifacts')
93 content = self.Meta.model.objects.create(**validated_data)
94 for relative_path, artifact in artifacts.items():
95 models.ContentArtifact.objects.create(
96 artifact=artifact,
97 content=content,
98 relative_path=relative_path,
99 )
100 return content
101
102 class Meta:
103 model = models.Content
104 fields = BaseContentSerializer.Meta.fields + ('artifacts',)
105
106
107 class ContentChecksumSerializer(serializers.Serializer):
108 """
109 Provide a serializer with artifact checksum fields for single artifact content.
110
111 If you use this serializer, it's recommended that you prefetch artifacts:
112
113 Content.objects.prefetch_related("_artifacts").all()
114 """
115
116 md5 = fields.ContentArtifactChecksumField(
117 help_text=_("The MD5 checksum if available."),
118 checksum='md5',
119 )
120
121 sha1 = fields.ContentArtifactChecksumField(
122 help_text=_("The SHA-1 checksum if available."),
123 checksum='sha1',
124 )
125
126 sha224 = fields.ContentArtifactChecksumField(
127 help_text=_("The SHA-224 checksum if available."),
128 checksum='sha224',
129 )
130
131 sha256 = fields.ContentArtifactChecksumField(
132 help_text=_("The SHA-256 checksum if available."),
133 checksum='sha256',
134 )
135
136 sha384 = fields.ContentArtifactChecksumField(
137 help_text=_("The SHA-384 checksum if available."),
138 checksum='sha384',
139 )
140
141 sha512 = fields.ContentArtifactChecksumField(
142 help_text=_("The SHA-512 checksum if available."),
143 checksum='sha512',
144 )
145
146 class Meta:
147 model = models.Content
148 fields = base.ModelSerializer.Meta.fields + ('md5', 'sha1', 'sha224', 'sha256', 'sha384',
149 'sha512')
150
151
152 class ArtifactSerializer(base.ModelSerializer):
153 _href = base.IdentityField(
154 view_name='artifacts-detail',
155 )
156
157 file = serializers.FileField(
158 help_text=_("The stored file."),
159 allow_empty_file=True,
160 required=False
161 )
162
163 size = serializers.IntegerField(
164 help_text=_("The size of the file in bytes."),
165 required=False
166 )
167
168 md5 = serializers.CharField(
169 help_text=_("The MD5 checksum of the file if available."),
170 required=False,
171 allow_null=True,
172 )
173
174 sha1 = serializers.CharField(
175 help_text=_("The SHA-1 checksum of the file if available."),
176 required=False,
177 allow_null=True,
178 )
179
180 sha224 = serializers.CharField(
181 help_text=_("The SHA-224 checksum of the file if available."),
182 required=False,
183 allow_null=True,
184 )
185
186 sha256 = serializers.CharField(
187 help_text=_("The SHA-256 checksum of the file if available."),
188 required=False,
189 allow_null=True,
190 )
191
192 sha384 = serializers.CharField(
193 help_text=_("The SHA-384 checksum of the file if available."),
194 required=False,
195 allow_null=True,
196 )
197
198 sha512 = serializers.CharField(
199 help_text=_("The SHA-512 checksum of the file if available."),
200 required=False,
201 allow_null=True,
202 )
203
204 def validate(self, data):
205 """
206 Validate file by size and by all checksums provided.
207
208 Args:
209 data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their
210 values
211
212 Raises:
213 :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any
214 of the checksums don't match their actual values.
215 """
216 super().validate(data)
217
218 if 'size' in data:
219 if data['file'].size != int(data['size']):
220 raise serializers.ValidationError(_("The size did not match actual size of file."))
221 else:
222 data['size'] = data['file'].size
223
224 for algorithm in hashlib.algorithms_guaranteed:
225 if algorithm in models.Artifact.DIGEST_FIELDS:
226 digest = data['file'].hashers[algorithm].hexdigest()
227
228 if algorithm in data and digest != data[algorithm]:
229 raise serializers.ValidationError(_("The %s checksum did not match.")
230 % algorithm)
231 else:
232 data[algorithm] = digest
233 if algorithm in UNIQUE_ALGORITHMS:
234 validator = UniqueValidator(models.Artifact.objects.all(),
235 message=_("{0} checksum must be "
236 "unique.").format(algorithm))
237 validator.field_name = algorithm
238 validator.instance = None
239 validator(digest)
240 return data
241
242 class Meta:
243 model = models.Artifact
244 fields = base.ModelSerializer.Meta.fields + ('file', 'size', 'md5', 'sha1', 'sha224',
245 'sha256', 'sha384', 'sha512')
246
[end of pulpcore/app/serializers/content.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py
--- a/pulpcore/app/serializers/content.py
+++ b/pulpcore/app/serializers/content.py
@@ -57,7 +57,7 @@
validated_data (dict): Data to save to the database
"""
artifact = validated_data.pop('artifact')
- if "relative_path" in self.fields and self.fields["relative_path"].write_only:
+ if "relative_path" not in self.fields or self.fields["relative_path"].write_only:
relative_path = validated_data.pop('relative_path')
else:
relative_path = validated_data.get('relative_path')
| {"golden_diff": "diff --git a/pulpcore/app/serializers/content.py b/pulpcore/app/serializers/content.py\n--- a/pulpcore/app/serializers/content.py\n+++ b/pulpcore/app/serializers/content.py\n@@ -57,7 +57,7 @@\n validated_data (dict): Data to save to the database\n \"\"\"\n artifact = validated_data.pop('artifact')\n- if \"relative_path\" in self.fields and self.fields[\"relative_path\"].write_only:\n+ if \"relative_path\" not in self.fields or self.fields[\"relative_path\"].write_only:\n relative_path = validated_data.pop('relative_path')\n else:\n relative_path = validated_data.get('relative_path')\n", "issue": "[PR #5443/c7079c89 backport][3.54] Pin the pytest-redis requirement for unit tests\n**This is a backport of PR #5443 as merged into main (c7079c891b87ecd9073260db5baa35826803f635).**\n\n[noissue]\n", "before_files": [{"content": "import hashlib\nfrom gettext import gettext as _\n\nfrom django.db import transaction\nfrom rest_framework import serializers\nfrom rest_framework.validators import UniqueValidator\n\nfrom pulpcore.app import models\nfrom pulpcore.app.serializers import base, fields\n\nUNIQUE_ALGORITHMS = ['sha256', 'sha384', 'sha512']\n\n\nclass BaseContentSerializer(base.MasterModelSerializer):\n _href = base.DetailIdentityField()\n\n class Meta:\n model = models.Content\n fields = base.MasterModelSerializer.Meta.fields\n\n\nclass NoArtifactContentSerializer(BaseContentSerializer):\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields\n\n\nclass SingleArtifactContentSerializer(BaseContentSerializer):\n artifact = fields.SingleContentArtifactField(\n help_text=_(\"Artifact file representing the physical content\"),\n )\n\n relative_path = serializers.CharField(\n help_text=_(\"Path where the artifact is located relative to distributions base_path\"),\n validators=[fields.relative_path_validator],\n write_only=True,\n )\n\n def __init__(self, *args, **kwargs):\n \"\"\"\n Initializer for SingleArtifactContentSerializer\n \"\"\"\n super().__init__(*args, **kwargs)\n\n # If the content model has its own database field 'relative_path',\n # we should not mark the field write_only\n if hasattr(self.Meta.model, 'relative_path') and \"relative_path\" in self.fields:\n self.fields[\"relative_path\"].write_only = False\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with its Artifact.\n\n Args:\n validated_data (dict): Data to save to the database\n \"\"\"\n artifact = validated_data.pop('artifact')\n if \"relative_path\" in self.fields and self.fields[\"relative_path\"].write_only:\n relative_path = validated_data.pop('relative_path')\n else:\n relative_path = validated_data.get('relative_path')\n content = self.Meta.model.objects.create(**validated_data)\n models.ContentArtifact.objects.create(\n artifact=artifact,\n content=content,\n relative_path=relative_path,\n )\n return content\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + ('artifact', 'relative_path')\n\n\nclass MultipleArtifactContentSerializer(BaseContentSerializer):\n artifacts = fields.ContentArtifactsField(\n help_text=_(\"A dict mapping relative paths inside the Content to the corresponding\"\n \"Artifact URLs. 
E.g.: {'relative/path': \"\n \"'/artifacts/1/'\"),\n )\n\n @transaction.atomic\n def create(self, validated_data):\n \"\"\"\n Create the content and associate it with all its Artifacts.\n\n Args:\n validated_data (dict): Data to save to the database\n \"\"\"\n artifacts = validated_data.pop('artifacts')\n content = self.Meta.model.objects.create(**validated_data)\n for relative_path, artifact in artifacts.items():\n models.ContentArtifact.objects.create(\n artifact=artifact,\n content=content,\n relative_path=relative_path,\n )\n return content\n\n class Meta:\n model = models.Content\n fields = BaseContentSerializer.Meta.fields + ('artifacts',)\n\n\nclass ContentChecksumSerializer(serializers.Serializer):\n \"\"\"\n Provide a serializer with artifact checksum fields for single artifact content.\n\n If you use this serializer, it's recommended that you prefetch artifacts:\n\n Content.objects.prefetch_related(\"_artifacts\").all()\n \"\"\"\n\n md5 = fields.ContentArtifactChecksumField(\n help_text=_(\"The MD5 checksum if available.\"),\n checksum='md5',\n )\n\n sha1 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-1 checksum if available.\"),\n checksum='sha1',\n )\n\n sha224 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-224 checksum if available.\"),\n checksum='sha224',\n )\n\n sha256 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-256 checksum if available.\"),\n checksum='sha256',\n )\n\n sha384 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-384 checksum if available.\"),\n checksum='sha384',\n )\n\n sha512 = fields.ContentArtifactChecksumField(\n help_text=_(\"The SHA-512 checksum if available.\"),\n checksum='sha512',\n )\n\n class Meta:\n model = models.Content\n fields = base.ModelSerializer.Meta.fields + ('md5', 'sha1', 'sha224', 'sha256', 'sha384',\n 'sha512')\n\n\nclass ArtifactSerializer(base.ModelSerializer):\n _href = base.IdentityField(\n view_name='artifacts-detail',\n )\n\n file = serializers.FileField(\n help_text=_(\"The stored file.\"),\n allow_empty_file=True,\n required=False\n )\n\n size = serializers.IntegerField(\n help_text=_(\"The size of the file in bytes.\"),\n required=False\n )\n\n md5 = serializers.CharField(\n help_text=_(\"The MD5 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha1 = serializers.CharField(\n help_text=_(\"The SHA-1 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha224 = serializers.CharField(\n help_text=_(\"The SHA-224 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha256 = serializers.CharField(\n help_text=_(\"The SHA-256 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha384 = serializers.CharField(\n help_text=_(\"The SHA-384 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n sha512 = serializers.CharField(\n help_text=_(\"The SHA-512 checksum of the file if available.\"),\n required=False,\n allow_null=True,\n )\n\n def validate(self, data):\n \"\"\"\n Validate file by size and by all checksums provided.\n\n Args:\n data (:class:`django.http.QueryDict`): QueryDict mapping Artifact model fields to their\n values\n\n Raises:\n :class:`rest_framework.exceptions.ValidationError`: When the expected file size or any\n of the checksums don't match their actual values.\n \"\"\"\n super().validate(data)\n\n if 'size' in data:\n if data['file'].size != int(data['size']):\n raise 
serializers.ValidationError(_(\"The size did not match actual size of file.\"))\n else:\n data['size'] = data['file'].size\n\n for algorithm in hashlib.algorithms_guaranteed:\n if algorithm in models.Artifact.DIGEST_FIELDS:\n digest = data['file'].hashers[algorithm].hexdigest()\n\n if algorithm in data and digest != data[algorithm]:\n raise serializers.ValidationError(_(\"The %s checksum did not match.\")\n % algorithm)\n else:\n data[algorithm] = digest\n if algorithm in UNIQUE_ALGORITHMS:\n validator = UniqueValidator(models.Artifact.objects.all(),\n message=_(\"{0} checksum must be \"\n \"unique.\").format(algorithm))\n validator.field_name = algorithm\n validator.instance = None\n validator(digest)\n return data\n\n class Meta:\n model = models.Artifact\n fields = base.ModelSerializer.Meta.fields + ('file', 'size', 'md5', 'sha1', 'sha224',\n 'sha256', 'sha384', 'sha512')\n", "path": "pulpcore/app/serializers/content.py"}]} | 2,894 | 150 |
gh_patches_debug_21325 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3020 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Incorrect mandatory validation for qualitative indicators
The indicator measure for qualitative indicators is hidden, but the code checking for completion of sections doesn't ignore that field, and considers it mandatory.
</issue>
<code>
[start of akvo/iati/checks/fields/results.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7
8 def results(project):
9 """
10 Check if result has a type, title and at least one indicator.
11 Check if indicator has a measure and title.
12 Check if indicator baseline has year and value.
13 Check if indicator period has a start and end date, and start before end.
14
15 :param project: Project object
16 :return: All checks passed boolean, [Check results]
17 """
18 checks = []
19 all_checks_passed = True
20
21 for result in project.results.all():
22 if not result.type:
23 all_checks_passed = False
24 checks.append((u'error', u'result (id: %s) has no type specified' % str(result.pk)))
25
26 if not result.title:
27 all_checks_passed = False
28 checks.append((u'error', u'result (id: %s) has no title specified' % str(result.pk)))
29
30 if not result.indicators.all():
31 all_checks_passed = False
32 checks.append((u'error', u'result (id: %s) has no indicator(s)' % str(result.pk)))
33
34 for indicator in result.indicators.all():
35 if not indicator.measure:
36 all_checks_passed = False
37 checks.append((u'error', u'indicator (id: %s) has no measure specified' %
38 str(indicator.pk)))
39
40 if not indicator.title:
41 all_checks_passed = False
42 checks.append((u'error', u'indicator (id: %s) has no title specified' %
43 str(indicator.pk)))
44
45 if (indicator.baseline_value and not indicator.baseline_year) or \
46 (not indicator.baseline_value and indicator.baseline_year):
47 all_checks_passed = False
48 checks.append((u'error', u'indicator (id: %s) baseline has no value or year '
49 u'specified' % str(indicator.pk)))
50
51 for reference in indicator.references.all():
52 if not reference.reference:
53 all_checks_passed = False
54 checks.append((u'error', u'indicator reference (id: %s) has no code '
55 u'specified' % str(reference.pk)))
56
57 if not reference.vocabulary:
58 all_checks_passed = False
59 checks.append((u'error', u'indicator reference (id: %s) has no vocabulary '
60 u'specified' % str(reference.pk)))
61
62 if reference.vocabulary == '99' and not reference.vocabulary_uri:
63 all_checks_passed = False
64 checks.append((u'error', u'indicator reference (id: %s) has vocabulary 99 '
65 u'(reporting organisation) but no vocabulary URI '
66 u'specified' % str(reference.pk)))
67
68 for period in indicator.periods.all():
69 if not period.period_start:
70 all_checks_passed = False
71 checks.append((u'error', u'indicator period (id: %s) has no start date '
72 u'specified' % str(period.pk)))
73
74 if not period.period_end:
75 all_checks_passed = False
76 checks.append((u'error', u'indicator period (id: %s) has no end date '
77 u'specified' % str(period.pk)))
78
79 if period.period_start and period.period_end and \
80 period.period_start > period.period_end:
81 all_checks_passed = False
82 checks.append((u'error', u'indicator period (id: %s) has a start date '
83 u'later than the end date' % str(period.pk)))
84
85 if not period.target_value and (period.target_comment or
86 period.target_locations.all() or
87 period.target_dimensions.all()):
88 all_checks_passed = False
89 checks.append((u'error', u'indicator period (id: %s) has no target value, but '
90 u'does have a target comment, target location(s) or '
91 u'target dimension(s)' % str(period.pk)))
92
93 if not period.actual_value and (period.actual_comment or
94 period.actual_locations.all() or
95 period.actual_dimensions.all()):
96 all_checks_passed = False
97 checks.append((u'error', u'indicator period (id: %s) has no actual value, but '
98 u'does have an actual comment, actual location(s) or '
99 u'actual dimension(s)' % str(period.pk)))
100
101 if project.results.all() and all_checks_passed:
102 checks.append((u'success', u'has valid result(s)'))
103
104 return all_checks_passed, checks
105
[end of akvo/iati/checks/fields/results.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/akvo/iati/checks/fields/results.py b/akvo/iati/checks/fields/results.py
--- a/akvo/iati/checks/fields/results.py
+++ b/akvo/iati/checks/fields/results.py
@@ -4,6 +4,8 @@
# See more details in the license.txt file located at the root folder of the Akvo RSR module.
# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
+from akvo.rsr.models.result.utils import QUALITATIVE
+
def results(project):
"""
@@ -32,7 +34,7 @@
checks.append((u'error', u'result (id: %s) has no indicator(s)' % str(result.pk)))
for indicator in result.indicators.all():
- if not indicator.measure:
+ if indicator.type != QUALITATIVE and not indicator.measure:
all_checks_passed = False
checks.append((u'error', u'indicator (id: %s) has no measure specified' %
str(indicator.pk)))
| {"golden_diff": "diff --git a/akvo/iati/checks/fields/results.py b/akvo/iati/checks/fields/results.py\n--- a/akvo/iati/checks/fields/results.py\n+++ b/akvo/iati/checks/fields/results.py\n@@ -4,6 +4,8 @@\n # See more details in the license.txt file located at the root folder of the Akvo RSR module.\n # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n \n+from akvo.rsr.models.result.utils import QUALITATIVE\n+\n \n def results(project):\n \"\"\"\n@@ -32,7 +34,7 @@\n checks.append((u'error', u'result (id: %s) has no indicator(s)' % str(result.pk)))\n \n for indicator in result.indicators.all():\n- if not indicator.measure:\n+ if indicator.type != QUALITATIVE and not indicator.measure:\n all_checks_passed = False\n checks.append((u'error', u'indicator (id: %s) has no measure specified' %\n str(indicator.pk)))\n", "issue": "Incorrect mandatory validation for qualitative indicators\nThe indicator measure for qualitative indicators is hidden, but the code checking for completion of sections doesn't ignore that field, and considers it mandatory. \n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\n\ndef results(project):\n \"\"\"\n Check if result has a type, title and at least one indicator.\n Check if indicator has a measure and title.\n Check if indicator baseline has year and value.\n Check if indicator period has a start and end date, and start before end.\n\n :param project: Project object\n :return: All checks passed boolean, [Check results]\n \"\"\"\n checks = []\n all_checks_passed = True\n\n for result in project.results.all():\n if not result.type:\n all_checks_passed = False\n checks.append((u'error', u'result (id: %s) has no type specified' % str(result.pk)))\n\n if not result.title:\n all_checks_passed = False\n checks.append((u'error', u'result (id: %s) has no title specified' % str(result.pk)))\n\n if not result.indicators.all():\n all_checks_passed = False\n checks.append((u'error', u'result (id: %s) has no indicator(s)' % str(result.pk)))\n\n for indicator in result.indicators.all():\n if not indicator.measure:\n all_checks_passed = False\n checks.append((u'error', u'indicator (id: %s) has no measure specified' %\n str(indicator.pk)))\n\n if not indicator.title:\n all_checks_passed = False\n checks.append((u'error', u'indicator (id: %s) has no title specified' %\n str(indicator.pk)))\n\n if (indicator.baseline_value and not indicator.baseline_year) or \\\n (not indicator.baseline_value and indicator.baseline_year):\n all_checks_passed = False\n checks.append((u'error', u'indicator (id: %s) baseline has no value or year '\n u'specified' % str(indicator.pk)))\n\n for reference in indicator.references.all():\n if not reference.reference:\n all_checks_passed = False\n checks.append((u'error', u'indicator reference (id: %s) has no code '\n u'specified' % str(reference.pk)))\n\n if not reference.vocabulary:\n all_checks_passed = False\n checks.append((u'error', u'indicator reference (id: %s) has no vocabulary '\n u'specified' % str(reference.pk)))\n\n if reference.vocabulary == '99' and not reference.vocabulary_uri:\n all_checks_passed = False\n checks.append((u'error', u'indicator reference (id: %s) has vocabulary 99 '\n u'(reporting organisation) but no vocabulary URI 
'\n u'specified' % str(reference.pk)))\n\n for period in indicator.periods.all():\n if not period.period_start:\n all_checks_passed = False\n checks.append((u'error', u'indicator period (id: %s) has no start date '\n u'specified' % str(period.pk)))\n\n if not period.period_end:\n all_checks_passed = False\n checks.append((u'error', u'indicator period (id: %s) has no end date '\n u'specified' % str(period.pk)))\n\n if period.period_start and period.period_end and \\\n period.period_start > period.period_end:\n all_checks_passed = False\n checks.append((u'error', u'indicator period (id: %s) has a start date '\n u'later than the end date' % str(period.pk)))\n\n if not period.target_value and (period.target_comment or\n period.target_locations.all() or\n period.target_dimensions.all()):\n all_checks_passed = False\n checks.append((u'error', u'indicator period (id: %s) has no target value, but '\n u'does have a target comment, target location(s) or '\n u'target dimension(s)' % str(period.pk)))\n\n if not period.actual_value and (period.actual_comment or\n period.actual_locations.all() or\n period.actual_dimensions.all()):\n all_checks_passed = False\n checks.append((u'error', u'indicator period (id: %s) has no actual value, but '\n u'does have an actual comment, actual location(s) or '\n u'actual dimension(s)' % str(period.pk)))\n\n if project.results.all() and all_checks_passed:\n checks.append((u'success', u'has valid result(s)'))\n\n return all_checks_passed, checks\n", "path": "akvo/iati/checks/fields/results.py"}]} | 1,807 | 240 |
gh_patches_debug_14323 | rasdani/github-patches | git_diff | zalando__patroni-698 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Could not load library ".../libpqwalreceiver.so"
We compile and install Postgres in a non-standard location (/opt/custom/share/postgresql-9.6.2_7 in the example below) which requires that we specify the `LD_LIBRARY_PATH` env var when starting up postgres and when working with postgres utilities. When starting up Postgres, Patroni currently passes along `PATH`, `LC_ALL` and `LANG` env vars if they are set, but does not do the same with `LD_LIBRARY_PATH`. This results in the following FATAL replication error on the replica node:
```
2018-06-05 21:10:26,675 INFO: Lock owner: postgresdb-01; I am postgresdb-02
2018-06-05 21:10:26,675 INFO: does not have lock
2018-06-05 21:10:26,676 INFO: no action. i am a secondary and i am following a leader
FATAL: could not load library "/opt/custom/share/postgresql-9.6.2_7/lib/libpqwalreceiver.so": libpq.so.5: cannot open shared object file: No such file or directory
FATAL: could not load library "/opt/custom/share/postgresql-9.6.2_7/lib/libpqwalreceiver.so": libpq.so.5: cannot open shared object file: No such file or directory
```
The fix for this is to pass `LD_LIBRARY_PATH` along with the other env vars, which results in replication behaving as expected:
```
2018-06-05 22:03:08,722 INFO: Lock owner: postgresdb-01; I am postgresdb-02
2018-06-05 22:03:08,722 INFO: does not have lock
2018-06-05 22:03:08,727 DEBUG: Sending request(xid=49): SetData(path='/service/postgres_cluster/members/postgresdb-02', data='{"conn_url":"postgres://postgresdb-02:5432/postgres","api_url":"http://postgresdb-02:8008/patroni","timeline":1,"state":"running","role":"replica","xlog_location":184664496}', version=-1)
2018-06-05 22:03:08,737 DEBUG: Received response(xid=49): ZnodeStat(czxid=4294967413, mzxid=4294967497, ctime=1528235848173, mtime=1528236188727, version=28, cversion=0, aversion=0, ephemeralOwner=216194533074731009, dataLength=169, numChildren=0, pzxid=4294967413)
2018-06-05 22:03:08,741 INFO: no action. i am a secondary and i am following a leader
```
I will submit a pull request shortly that addresses this problem.
</issue>
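For orientation before the full module listing: the eventual fix (shown in the patch at the end of this entry) is a one-line widening of the tuple of environment variables that Patroni forwards when it spawns the postmaster. A minimal sketch of that pattern — only the variable names come from `PostmasterProcess.start`; everything else here is illustrative:

```python
import os

# Forward a whitelist of environment variables to the child process that
# starts the postmaster. Including LD_LIBRARY_PATH lets a Postgres built in
# a non-standard prefix find libpq.so.5 (and thus libpqwalreceiver.so).
FORWARDED = ('PATH', 'LD_LIBRARY_PATH', 'LC_ALL', 'LANG')

env = {name: os.environ[name] for name in FORWARDED if name in os.environ}

# This dict would then be handed to the subprocess call that launches
# postgres, e.g. subprocess.Popen([...], env=env).
print(env)
```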
<code>
[start of patroni/postmaster.py]
1 import logging
2 import os
3 import psutil
4 import re
5 import signal
6 import subprocess
7
8 from patroni import call_self
9
10 logger = logging.getLogger(__name__)
11
12 STOP_SIGNALS = {
13 'smart': signal.SIGTERM,
14 'fast': signal.SIGINT,
15 'immediate': signal.SIGQUIT,
16 }
17
18
19 class PostmasterProcess(psutil.Process):
20
21 def __init__(self, pid):
22 self.is_single_user = False
23 if pid < 0:
24 pid = -pid
25 self.is_single_user = True
26 super(PostmasterProcess, self).__init__(pid)
27
28 @staticmethod
29 def _read_postmaster_pidfile(data_dir):
30 """Reads and parses postmaster.pid from the data directory
31
32 :returns dictionary of values if successful, empty dictionary otherwise
33 """
34 pid_line_names = ['pid', 'data_dir', 'start_time', 'port', 'socket_dir', 'listen_addr', 'shmem_key']
35 try:
36 with open(os.path.join(data_dir, 'postmaster.pid')) as f:
37 return {name: line.rstrip('\n') for name, line in zip(pid_line_names, f)}
38 except IOError:
39 return {}
40
41 def _is_postmaster_process(self):
42 try:
43 start_time = int(self._postmaster_pid.get('start_time', 0))
44 if start_time and abs(self.create_time() - start_time) > 3:
45 logger.info('Too much difference between %s and %s', self.create_time(), start_time)
46 return False
47 except ValueError:
48 logger.warning('Garbage start time value in pid file: %r', self._postmaster_pid.get('start_time'))
49
50 # Extra safety check. The process can't be ourselves, our parent or our direct child.
51 if self.pid == os.getpid() or self.pid == os.getppid() or self.ppid() == os.getpid():
52 logger.info('Patroni (pid=%s, ppid=%s), "fake postmaster" (pid=%s, ppid=%s)',
53 os.getpid(), os.getppid(), self.pid, self.ppid())
54 return False
55
56 return True
57
58 @classmethod
59 def _from_pidfile(cls, data_dir):
60 postmaster_pid = PostmasterProcess._read_postmaster_pidfile(data_dir)
61 try:
62 pid = int(postmaster_pid.get('pid', 0))
63 if pid:
64 proc = cls(pid)
65 proc._postmaster_pid = postmaster_pid
66 return proc
67 except ValueError:
68 pass
69
70 @staticmethod
71 def from_pidfile(data_dir):
72 try:
73 proc = PostmasterProcess._from_pidfile(data_dir)
74 return proc if proc and proc._is_postmaster_process() else None
75 except psutil.NoSuchProcess:
76 return None
77
78 @classmethod
79 def from_pid(cls, pid):
80 try:
81 return cls(pid)
82 except psutil.NoSuchProcess:
83 return None
84
85 def signal_stop(self, mode):
86 """Signal postmaster process to stop
87
88 :returns None if signaled, True if process is already gone, False if error
89 """
90 if self.is_single_user:
91 logger.warning("Cannot stop server; single-user server is running (PID: {0})".format(self.pid))
92 return False
93 try:
94 self.send_signal(STOP_SIGNALS[mode])
95 except psutil.NoSuchProcess:
96 return True
97 except psutil.AccessDenied as e:
98 logger.warning("Could not send stop signal to PostgreSQL (error: {0})".format(e))
99 return False
100
101 return None
102
103 def wait_for_user_backends_to_close(self):
104 # These regexps are cross checked against versions PostgreSQL 9.1 .. 9.6
105 aux_proc_re = re.compile("(?:postgres:)( .*:)? (?:""(?:startup|logger|checkpointer|writer|wal writer|"
106 "autovacuum launcher|autovacuum worker|stats collector|wal receiver|archiver|"
107 "wal sender) process|bgworker: )")
108
109 try:
110 user_backends = []
111 user_backends_cmdlines = []
112 for child in self.children():
113 try:
114 cmdline = child.cmdline()[0]
115 if not aux_proc_re.match(cmdline):
116 user_backends.append(child)
117 user_backends_cmdlines.append(cmdline)
118 except psutil.NoSuchProcess:
119 pass
120 if user_backends:
121 logger.debug('Waiting for user backends %s to close', ', '.join(user_backends_cmdlines))
122 psutil.wait_procs(user_backends)
123 logger.debug("Backends closed")
124 except psutil.Error:
125 logger.exception('wait_for_user_backends_to_close')
126
127 @staticmethod
128 def start(pgcommand, data_dir, conf, options):
129 # Unfortunately `pg_ctl start` does not return postmaster pid to us. Without this information
130 # it is hard to know the current state of postgres startup, so we had to reimplement pg_ctl start
131 # in python. It will start postgres, wait for port to be open and wait until postgres will start
132 # accepting connections.
133 # Important!!! We can't just start postgres using subprocess.Popen, because in this case it
134 # will be our child for the rest of our live and we will have to take care of it (`waitpid`).
135 # So we will use the same approach as pg_ctl uses: start a new process, which will start postgres.
136 # This process will write postmaster pid to stdout and exit immediately. Now it's responsibility
137 # of init process to take care about postmaster.
138 # In order to make everything portable we can't use fork&exec approach here, so we will call
139 # ourselves and pass list of arguments which must be used to start postgres.
140 env = {p: os.environ[p] for p in ('PATH', 'LC_ALL', 'LANG') if p in os.environ}
141 try:
142 proc = PostmasterProcess._from_pidfile(data_dir)
143 if proc and not proc._is_postmaster_process():
144 # Upon start postmaster process performs various safety checks if there is a postmaster.pid
145 # file in the data directory. Although Patroni already detected that the running process
146 # corresponding to the postmaster.pid is not a postmaster, the new postmaster might fail
147 # to start, because it thinks that postmaster.pid is already locked.
148 # Important!!! Unlink of postmaster.pid isn't an option, because it has a lot of nasty race conditions.
149 # Luckily there is a workaround to this problem, we can pass the pid from postmaster.pid
150 # in the `PG_GRANDPARENT_PID` environment variable and postmaster will ignore it.
151 env['PG_GRANDPARENT_PID'] = str(proc.pid)
152 except psutil.NoSuchProcess:
153 pass
154
155 proc = call_self(['pg_ctl_start', pgcommand, '-D', data_dir,
156 '--config-file={}'.format(conf)] + options, close_fds=True,
157 preexec_fn=os.setsid, stdout=subprocess.PIPE, env=env)
158 pid = int(proc.stdout.readline().strip())
159 proc.wait()
160 logger.info('postmaster pid=%s', pid)
161
162 # TODO: In an extremely unlikely case, the process could have exited and the pid reassigned. The start
163 # initiation time is not accurate enough to compare to create time as start time would also likely
164 # be relatively close. We need the subprocess extract pid+start_time in a race free manner.
165 return PostmasterProcess.from_pid(pid)
166
[end of patroni/postmaster.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/patroni/postmaster.py b/patroni/postmaster.py
--- a/patroni/postmaster.py
+++ b/patroni/postmaster.py
@@ -137,7 +137,7 @@
# of init process to take care about postmaster.
# In order to make everything portable we can't use fork&exec approach here, so we will call
# ourselves and pass list of arguments which must be used to start postgres.
- env = {p: os.environ[p] for p in ('PATH', 'LC_ALL', 'LANG') if p in os.environ}
+ env = {p: os.environ[p] for p in ('PATH', 'LD_LIBRARY_PATH', 'LC_ALL', 'LANG') if p in os.environ}
try:
proc = PostmasterProcess._from_pidfile(data_dir)
if proc and not proc._is_postmaster_process():
| {"golden_diff": "diff --git a/patroni/postmaster.py b/patroni/postmaster.py\n--- a/patroni/postmaster.py\n+++ b/patroni/postmaster.py\n@@ -137,7 +137,7 @@\n # of init process to take care about postmaster.\n # In order to make everything portable we can't use fork&exec approach here, so we will call\n # ourselves and pass list of arguments which must be used to start postgres.\n- env = {p: os.environ[p] for p in ('PATH', 'LC_ALL', 'LANG') if p in os.environ}\n+ env = {p: os.environ[p] for p in ('PATH', 'LD_LIBRARY_PATH', 'LC_ALL', 'LANG') if p in os.environ}\n try:\n proc = PostmasterProcess._from_pidfile(data_dir)\n if proc and not proc._is_postmaster_process():\n", "issue": "Could not load library \".../libpqwalreceiver.so\"\nWe compile and install Postgres in a non-standard location (/opt/custom/share/postgresql-9.6.2_7 in the example below) which requires that we specify the `LD_LIBRARY_PATH` env var when starting up postgres and when working with postgres utilities. When starting up Postgres, Patroni currently passes along `PATH`, `LC_ALL` and `LANG` env vars if they are set, but does not do the same with `LD_LIBRARY_PATH`. This results in the following FATAL replication error on the replica node:\r\n```\r\n2018-06-05 21:10:26,675 INFO: Lock owner: postgresdb-01; I am postgresdb-02\r\n2018-06-05 21:10:26,675 INFO: does not have lock\r\n2018-06-05 21:10:26,676 INFO: no action. i am a secondary and i am following a leader\r\nFATAL: could not load library \"/opt/custom/share/postgresql-9.6.2_7/lib/libpqwalreceiver.so\": libpq.so.5: cannot open shared object file: No such file or directory\r\nFATAL: could not load library \"/opt/custom/share/postgresql-9.6.2_7/lib/libpqwalreceiver.so\": libpq.so.5: cannot open shared object file: No such file or directory\r\n```\r\n\r\nThe fix for this is to pass `LD_LIBRARY_PATH` along with the other env vars, which results in replication behaving as expected:\r\n```\r\n2018-06-05 22:03:08,722 INFO: Lock owner: postgresdb-01; I am postgresdb-02\r\n2018-06-05 22:03:08,722 INFO: does not have lock\r\n2018-06-05 22:03:08,727 DEBUG: Sending request(xid=49): SetData(path='/service/postgres_cluster/members/postgresdb-02', data='{\"conn_url\":\"postgres://postgresdb-02:5432/postgres\",\"api_url\":\"http://postgresdb-02:8008/patroni\",\"timeline\":1,\"state\":\"running\",\"role\":\"replica\",\"xlog_location\":184664496}', version=-1)\r\n2018-06-05 22:03:08,737 DEBUG: Received response(xid=49): ZnodeStat(czxid=4294967413, mzxid=4294967497, ctime=1528235848173, mtime=1528236188727, version=28, cversion=0, aversion=0, ephemeralOwner=216194533074731009, dataLength=169, numChildren=0, pzxid=4294967413)\r\n2018-06-05 22:03:08,741 INFO: no action. 
i am a secondary and i am following a leader\r\n```\r\n\r\nI will submit a pull request shortly that addresses this problem.\n", "before_files": [{"content": "import logging\nimport os\nimport psutil\nimport re\nimport signal\nimport subprocess\n\nfrom patroni import call_self\n\nlogger = logging.getLogger(__name__)\n\nSTOP_SIGNALS = {\n 'smart': signal.SIGTERM,\n 'fast': signal.SIGINT,\n 'immediate': signal.SIGQUIT,\n}\n\n\nclass PostmasterProcess(psutil.Process):\n\n def __init__(self, pid):\n self.is_single_user = False\n if pid < 0:\n pid = -pid\n self.is_single_user = True\n super(PostmasterProcess, self).__init__(pid)\n\n @staticmethod\n def _read_postmaster_pidfile(data_dir):\n \"\"\"Reads and parses postmaster.pid from the data directory\n\n :returns dictionary of values if successful, empty dictionary otherwise\n \"\"\"\n pid_line_names = ['pid', 'data_dir', 'start_time', 'port', 'socket_dir', 'listen_addr', 'shmem_key']\n try:\n with open(os.path.join(data_dir, 'postmaster.pid')) as f:\n return {name: line.rstrip('\\n') for name, line in zip(pid_line_names, f)}\n except IOError:\n return {}\n\n def _is_postmaster_process(self):\n try:\n start_time = int(self._postmaster_pid.get('start_time', 0))\n if start_time and abs(self.create_time() - start_time) > 3:\n logger.info('Too much difference between %s and %s', self.create_time(), start_time)\n return False\n except ValueError:\n logger.warning('Garbage start time value in pid file: %r', self._postmaster_pid.get('start_time'))\n\n # Extra safety check. The process can't be ourselves, our parent or our direct child.\n if self.pid == os.getpid() or self.pid == os.getppid() or self.ppid() == os.getpid():\n logger.info('Patroni (pid=%s, ppid=%s), \"fake postmaster\" (pid=%s, ppid=%s)',\n os.getpid(), os.getppid(), self.pid, self.ppid())\n return False\n\n return True\n\n @classmethod\n def _from_pidfile(cls, data_dir):\n postmaster_pid = PostmasterProcess._read_postmaster_pidfile(data_dir)\n try:\n pid = int(postmaster_pid.get('pid', 0))\n if pid:\n proc = cls(pid)\n proc._postmaster_pid = postmaster_pid\n return proc\n except ValueError:\n pass\n\n @staticmethod\n def from_pidfile(data_dir):\n try:\n proc = PostmasterProcess._from_pidfile(data_dir)\n return proc if proc and proc._is_postmaster_process() else None\n except psutil.NoSuchProcess:\n return None\n\n @classmethod\n def from_pid(cls, pid):\n try:\n return cls(pid)\n except psutil.NoSuchProcess:\n return None\n\n def signal_stop(self, mode):\n \"\"\"Signal postmaster process to stop\n\n :returns None if signaled, True if process is already gone, False if error\n \"\"\"\n if self.is_single_user:\n logger.warning(\"Cannot stop server; single-user server is running (PID: {0})\".format(self.pid))\n return False\n try:\n self.send_signal(STOP_SIGNALS[mode])\n except psutil.NoSuchProcess:\n return True\n except psutil.AccessDenied as e:\n logger.warning(\"Could not send stop signal to PostgreSQL (error: {0})\".format(e))\n return False\n\n return None\n\n def wait_for_user_backends_to_close(self):\n # These regexps are cross checked against versions PostgreSQL 9.1 .. 9.6\n aux_proc_re = re.compile(\"(?:postgres:)( .*:)? 
(?:\"\"(?:startup|logger|checkpointer|writer|wal writer|\"\n \"autovacuum launcher|autovacuum worker|stats collector|wal receiver|archiver|\"\n \"wal sender) process|bgworker: )\")\n\n try:\n user_backends = []\n user_backends_cmdlines = []\n for child in self.children():\n try:\n cmdline = child.cmdline()[0]\n if not aux_proc_re.match(cmdline):\n user_backends.append(child)\n user_backends_cmdlines.append(cmdline)\n except psutil.NoSuchProcess:\n pass\n if user_backends:\n logger.debug('Waiting for user backends %s to close', ', '.join(user_backends_cmdlines))\n psutil.wait_procs(user_backends)\n logger.debug(\"Backends closed\")\n except psutil.Error:\n logger.exception('wait_for_user_backends_to_close')\n\n @staticmethod\n def start(pgcommand, data_dir, conf, options):\n # Unfortunately `pg_ctl start` does not return postmaster pid to us. Without this information\n # it is hard to know the current state of postgres startup, so we had to reimplement pg_ctl start\n # in python. It will start postgres, wait for port to be open and wait until postgres will start\n # accepting connections.\n # Important!!! We can't just start postgres using subprocess.Popen, because in this case it\n # will be our child for the rest of our live and we will have to take care of it (`waitpid`).\n # So we will use the same approach as pg_ctl uses: start a new process, which will start postgres.\n # This process will write postmaster pid to stdout and exit immediately. Now it's responsibility\n # of init process to take care about postmaster.\n # In order to make everything portable we can't use fork&exec approach here, so we will call\n # ourselves and pass list of arguments which must be used to start postgres.\n env = {p: os.environ[p] for p in ('PATH', 'LC_ALL', 'LANG') if p in os.environ}\n try:\n proc = PostmasterProcess._from_pidfile(data_dir)\n if proc and not proc._is_postmaster_process():\n # Upon start postmaster process performs various safety checks if there is a postmaster.pid\n # file in the data directory. Although Patroni already detected that the running process\n # corresponding to the postmaster.pid is not a postmaster, the new postmaster might fail\n # to start, because it thinks that postmaster.pid is already locked.\n # Important!!! Unlink of postmaster.pid isn't an option, because it has a lot of nasty race conditions.\n # Luckily there is a workaround to this problem, we can pass the pid from postmaster.pid\n # in the `PG_GRANDPARENT_PID` environment variable and postmaster will ignore it.\n env['PG_GRANDPARENT_PID'] = str(proc.pid)\n except psutil.NoSuchProcess:\n pass\n\n proc = call_self(['pg_ctl_start', pgcommand, '-D', data_dir,\n '--config-file={}'.format(conf)] + options, close_fds=True,\n preexec_fn=os.setsid, stdout=subprocess.PIPE, env=env)\n pid = int(proc.stdout.readline().strip())\n proc.wait()\n logger.info('postmaster pid=%s', pid)\n\n # TODO: In an extremely unlikely case, the process could have exited and the pid reassigned. The start\n # initiation time is not accurate enough to compare to create time as start time would also likely\n # be relatively close. We need the subprocess extract pid+start_time in a race free manner.\n return PostmasterProcess.from_pid(pid)\n", "path": "patroni/postmaster.py"}]} | 3,319 | 195 |
gh_patches_debug_16340 | rasdani/github-patches | git_diff | qutip__qutip-2335 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typos in qutip.expect()'s docstring
### Bug Description
Two words in the docstring contain typos. See below.
### Code to Reproduce the Bug
```shell
from qutip import expect
print(expect.__doc__)
```
### Code Output
```shell
Parameters
----------
oper : qobj/array-like
A single or a `list` or operators for expectation value.
state : qobj/array-like
A single or a `list` of quantum states or density matrices.
Returns
-------
expt : float/complex/array-like
Expectation value. ``real`` if `oper` is Hermitian, ``complex``
otherwise. A (nested) array of expectaction values of state or operator
are arrays.
```
### Expected Behaviour
Parameters
----------
oper : qobj/array-like
A single or a `list` **~~or~~** **_of_** operators for expectation value.
state : qobj/array-like
A single or a `list` of quantum states or density matrices.
Returns
-------
expt : float/complex/array-like
Expectation value. ``real`` if `oper` is Hermitian, ``complex``
otherwise. A (nested) array of expectaction values **~~of~~** **_if_** state or operator
are arrays.
### Your Environment
```shell
QuTiP Version: 4.7.5
Numpy Version: 1.26.4
Scipy Version: 1.11.4
Cython Version: None
Matplotlib Version: 3.8.3
Python Version: 3.12.2
Number of CPUs: 8
BLAS Info: Generic
OPENMP Installed: False
INTEL MKL Ext: False
Platform Info: Windows (AMD64)
```
### Additional Context
_No response_
</issue>
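The sentence being corrected describes real behaviour of `expect`; a short usage sketch (assuming a working qutip installation) makes the scalar-versus-array distinction concrete — the corrected wording ("... if `state` or `oper` are arrays") refers to the second call below:

```python
from qutip import basis, destroy, expect, num

state = basis(4, 3)

# One operator, one state: a plain scalar expectation value.
print(expect(num(4), state))                 # 3.0

# A list of operators: an array of expectation values, which is the
# case the corrected sentence in the docstring is describing.
print(expect([num(4), destroy(4)], state))
```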
<code>
[start of qutip/core/expect.py]
1 __all__ = ['expect', 'variance']
2
3 import numpy as np
4
5 from .qobj import Qobj
6 from . import data as _data
7
8
9 def expect(oper, state):
10 """
11 Calculate the expectation value for operator(s) and state(s). The
12 expectation of state ``k`` on operator ``A`` is defined as
13 ``k.dag() @ A @ k``, and for density matrix ``R`` on operator ``A`` it is
14 ``trace(A @ R)``.
15
16 Parameters
17 ----------
18 oper : qobj/array-like
19 A single or a `list` or operators for expectation value.
20
21 state : qobj/array-like
22 A single or a `list` of quantum states or density matrices.
23
24 Returns
25 -------
26 expt : float/complex/array-like
27 Expectation value. ``real`` if ``oper`` is Hermitian, ``complex``
28 otherwise. A (nested) array of expectaction values of state or operator
29 are arrays.
30
31 Examples
32 --------
33 >>> expect(num(4), basis(4, 3)) == 3 # doctest: +NORMALIZE_WHITESPACE
34 True
35
36 """
37 if isinstance(state, Qobj) and isinstance(oper, Qobj):
38 return _single_qobj_expect(oper, state)
39
40 elif isinstance(oper, (list, np.ndarray)):
41 if isinstance(state, Qobj):
42 dtype = np.complex128
43 if all(op.isherm for op in oper) and (state.isket or state.isherm):
44 dtype = np.float64
45 return np.array([_single_qobj_expect(op, state) for op in oper],
46 dtype=dtype)
47 return [expect(op, state) for op in oper]
48
49 elif isinstance(state, (list, np.ndarray)):
50 dtype = np.complex128
51 if oper.isherm and all(op.isherm or op.isket for op in state):
52 dtype = np.float64
53 return np.array([_single_qobj_expect(oper, x) for x in state],
54 dtype=dtype)
55 raise TypeError('Arguments must be quantum objects')
56
57
58 def _single_qobj_expect(oper, state):
59 """
60 Private function used by expect to calculate expectation values of Qobjs.
61 """
62 if not oper.isoper or not (state.isket or state.isoper):
63 raise TypeError('invalid operand types')
64 if oper.dims[1] != state.dims[0]:
65 msg = (
66 "incompatible dimensions "
67 + str(oper.dims[1]) + " and " + str(state.dims[0])
68 )
69 raise ValueError(msg)
70 out = _data.expect(oper.data, state.data)
71
72 # This ensures that expect can return something that is not a number such
73 # as a `tensorflow.Tensor` in qutip-tensorflow.
74 return out.real if (oper.isherm
75 and (state.isket or state.isherm)
76 and hasattr(out, "real")
77 ) else out
78
79
80 def variance(oper, state):
81 """
82 Variance of an operator for the given state vector or density matrix.
83
84 Parameters
85 ----------
86 oper : qobj
87 Operator for expectation value.
88
89 state : qobj/list
90 A single or ``list`` of quantum states or density matrices..
91
92 Returns
93 -------
94 var : float
95 Variance of operator 'oper' for given state.
96
97 """
98 return expect(oper**2, state) - expect(oper, state)**2
99
[end of qutip/core/expect.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/qutip/core/expect.py b/qutip/core/expect.py
--- a/qutip/core/expect.py
+++ b/qutip/core/expect.py
@@ -16,7 +16,7 @@
Parameters
----------
oper : qobj/array-like
- A single or a `list` or operators for expectation value.
+ A single or a `list` of operators for expectation value.
state : qobj/array-like
A single or a `list` of quantum states or density matrices.
@@ -25,8 +25,8 @@
-------
expt : float/complex/array-like
Expectation value. ``real`` if ``oper`` is Hermitian, ``complex``
- otherwise. A (nested) array of expectaction values of state or operator
- are arrays.
+ otherwise. A (nested) array of expectaction values if ``state`` or
+ ``oper`` are arrays.
Examples
--------
| {"golden_diff": "diff --git a/qutip/core/expect.py b/qutip/core/expect.py\n--- a/qutip/core/expect.py\n+++ b/qutip/core/expect.py\n@@ -16,7 +16,7 @@\n Parameters\n ----------\n oper : qobj/array-like\n- A single or a `list` or operators for expectation value.\n+ A single or a `list` of operators for expectation value.\n \n state : qobj/array-like\n A single or a `list` of quantum states or density matrices.\n@@ -25,8 +25,8 @@\n -------\n expt : float/complex/array-like\n Expectation value. ``real`` if ``oper`` is Hermitian, ``complex``\n- otherwise. A (nested) array of expectaction values of state or operator\n- are arrays.\n+ otherwise. A (nested) array of expectaction values if ``state`` or\n+ ``oper`` are arrays.\n \n Examples\n --------\n", "issue": "Typos in qutip.expect()'s docstring\n### Bug Description\r\n\r\nTwo words in the docstring contain typos. See below.\r\n\r\n### Code to Reproduce the Bug\r\n\r\n```shell\r\nfrom qutip import expect\r\nprint(expect.__doc__)\r\n```\r\n\r\n\r\n### Code Output\r\n\r\n```shell\r\nParameters\r\n----------\r\noper : qobj/array-like\r\n A single or a `list` or operators for expectation value.\r\n\r\nstate : qobj/array-like\r\n A single or a `list` of quantum states or density matrices.\r\n\r\nReturns\r\n-------\r\nexpt : float/complex/array-like\r\n Expectation value. ``real`` if `oper` is Hermitian, ``complex``\r\n otherwise. A (nested) array of expectaction values of state or operator\r\n are arrays.\r\n```\r\n\r\n\r\n### Expected Behaviour\r\n\r\nParameters\r\n----------\r\noper : qobj/array-like\r\n A single or a `list` **~~or~~** **_of_** operators for expectation value.\r\n\r\nstate : qobj/array-like\r\n A single or a `list` of quantum states or density matrices.\r\n\r\nReturns\r\n-------\r\nexpt : float/complex/array-like\r\n Expectation value. ``real`` if `oper` is Hermitian, ``complex``\r\n otherwise. A (nested) array of expectaction values **~~of~~** **_if_** state or operator\r\n are arrays.\r\n\r\n### Your Environment\r\n\r\n```shell\r\nQuTiP Version: 4.7.5\r\nNumpy Version: 1.26.4\r\nScipy Version: 1.11.4\r\nCython Version: None\r\nMatplotlib Version: 3.8.3\r\nPython Version: 3.12.2\r\nNumber of CPUs: 8\r\nBLAS Info: Generic\r\nOPENMP Installed: False\r\nINTEL MKL Ext: False\r\nPlatform Info: Windows (AMD64)\r\n```\r\n\r\n\r\n### Additional Context\r\n\r\n_No response_\n", "before_files": [{"content": "__all__ = ['expect', 'variance']\n\nimport numpy as np\n\nfrom .qobj import Qobj\nfrom . import data as _data\n\n\ndef expect(oper, state):\n \"\"\"\n Calculate the expectation value for operator(s) and state(s). The\n expectation of state ``k`` on operator ``A`` is defined as\n ``k.dag() @ A @ k``, and for density matrix ``R`` on operator ``A`` it is\n ``trace(A @ R)``.\n\n Parameters\n ----------\n oper : qobj/array-like\n A single or a `list` or operators for expectation value.\n\n state : qobj/array-like\n A single or a `list` of quantum states or density matrices.\n\n Returns\n -------\n expt : float/complex/array-like\n Expectation value. ``real`` if ``oper`` is Hermitian, ``complex``\n otherwise. 
A (nested) array of expectaction values of state or operator\n are arrays.\n\n Examples\n --------\n >>> expect(num(4), basis(4, 3)) == 3 # doctest: +NORMALIZE_WHITESPACE\n True\n\n \"\"\"\n if isinstance(state, Qobj) and isinstance(oper, Qobj):\n return _single_qobj_expect(oper, state)\n\n elif isinstance(oper, (list, np.ndarray)):\n if isinstance(state, Qobj):\n dtype = np.complex128\n if all(op.isherm for op in oper) and (state.isket or state.isherm):\n dtype = np.float64\n return np.array([_single_qobj_expect(op, state) for op in oper],\n dtype=dtype)\n return [expect(op, state) for op in oper]\n\n elif isinstance(state, (list, np.ndarray)):\n dtype = np.complex128\n if oper.isherm and all(op.isherm or op.isket for op in state):\n dtype = np.float64\n return np.array([_single_qobj_expect(oper, x) for x in state],\n dtype=dtype)\n raise TypeError('Arguments must be quantum objects')\n\n\ndef _single_qobj_expect(oper, state):\n \"\"\"\n Private function used by expect to calculate expectation values of Qobjs.\n \"\"\"\n if not oper.isoper or not (state.isket or state.isoper):\n raise TypeError('invalid operand types')\n if oper.dims[1] != state.dims[0]:\n msg = (\n \"incompatible dimensions \"\n + str(oper.dims[1]) + \" and \" + str(state.dims[0])\n )\n raise ValueError(msg)\n out = _data.expect(oper.data, state.data)\n\n # This ensures that expect can return something that is not a number such\n # as a `tensorflow.Tensor` in qutip-tensorflow.\n return out.real if (oper.isherm\n and (state.isket or state.isherm)\n and hasattr(out, \"real\")\n ) else out\n\n\ndef variance(oper, state):\n \"\"\"\n Variance of an operator for the given state vector or density matrix.\n\n Parameters\n ----------\n oper : qobj\n Operator for expectation value.\n\n state : qobj/list\n A single or ``list`` of quantum states or density matrices..\n\n Returns\n -------\n var : float\n Variance of operator 'oper' for given state.\n\n \"\"\"\n return expect(oper**2, state) - expect(oper, state)**2\n", "path": "qutip/core/expect.py"}]} | 1,922 | 220 |
gh_patches_debug_22545 | rasdani/github-patches | git_diff | learningequality__kolibri-3179 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Rename Buildkite assets for clarity
## Summary
When downloading builds from the `buildkite/kolibri/assets` details link, filenames are too complex and difficult to distinguish.

These links do save some time, but filenames are longer than those downloaded individually from Buildkite (green). Could we make them shorter? For example, get rid of the useless prefix and various `%2F` in between in order to make this:
`kolibri%2Fbuildkite%2Fbuild-2313%2F2755%2Fkolibri-0.6.dev020170929214748-git.pex` (red)
into this:
`build-2313-kolibri-0.6.dev020170929214748-git.pex` (yellow)
Or just `kolibri-0.6.dev020170929214748-git.pex` if those build details are not necessary.

</issue>
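The rename requested here comes down to how the Google Cloud Storage object name is assembled in `upload_artifacts()` below; the patch at the end of this entry swaps the `/`-separated prefix for a flat, dash-separated one, which is what removes the `%2F` noise from downloaded filenames. A small sketch of the naming change, using the build and PR numbers from the example filename above:

```python
ISSUE_ID = '2313'   # BUILDKITE_PULL_REQUEST, taken from the example filename above
BUILD_ID = '2755'   # BUILDKITE_BUILD_NUMBER, taken from the example filename above
name = 'kolibri-0.6.dev020170929214748-git.pex'

# Before: the slashes end up URL-encoded as %2F in the downloaded filename.
old_blob_name = 'kolibri/buildkite/build-%s/%s/%s' % (ISSUE_ID, BUILD_ID, name)

# After: a flat, dash-separated object name keeps the download short.
new_blob_name = 'kolibri-buildkite-build-%s-%s-%s' % (ISSUE_ID, BUILD_ID, name)

print(old_blob_name)
print(new_blob_name)
```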
<code>
[start of .buildkite/upload_artifacts.py]
1 """
2 # Requirements:
3 * Generate access token in your Github account, then create environment variable GITHUB_ACCESS_TOKEN.
4 - e.g export GITHUB_ACCESS_TOKEN=1ns3rt-my-t0k3n-h3re.
5
6 * Generate a service account key for your Google API credentials, then create environment variable GOOGLE_APPLICATION_CREDENTIALS.
7 - e.g export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json.
8
9 # Environment Variable/s:
10 * IS_KOLIBRI_RELEASE = Upload artifacts to the Google Cloud as a release candidate.
11 * GITHUB_ACCESS_TOKEN = Personal access token used to authenticate in your Github account via API.
12 * BUILDKITE_BUILD_NUMBER = Build identifier for each directory created.
13 * BUILDKITE_PULL_REQUEST = Pull request issue or the value is false.
14 * BUILDKITE_TAG = Tag identifier if this build was built from a tag.
15 * BUILDKITE_COMMIT = Git commit hash that the build was made from.
16 * GOOGLE_APPLICATION_CREDENTIALS = Your service account key.
17 """
18 import logging
19 import os
20 import sys
21 from os import listdir
22
23 import requests
24 from gcloud import storage
25 from github3 import login
26
27 logging.getLogger().setLevel(logging.INFO)
28
29 ACCESS_TOKEN = os.getenv("GITHUB_ACCESS_TOKEN")
30 REPO_OWNER = "learningequality"
31 REPO_NAME = "kolibri"
32 ISSUE_ID = os.getenv("BUILDKITE_PULL_REQUEST")
33 BUILD_ID = os.getenv("BUILDKITE_BUILD_NUMBER")
34 TAG = os.getenv("BUILDKITE_TAG")
35 COMMIT = os.getenv("BUILDKITE_COMMIT")
36
37
38 RELEASE_DIR = 'release'
39 PROJECT_PATH = os.path.join(os.getcwd())
40
41 # Python packages artifact location
42 DIST_DIR = os.path.join(PROJECT_PATH, "dist")
43
44 # Installer artifact location
45 INSTALLER_DIR = os.path.join(PROJECT_PATH, "installer")
46
47 headers = {'Authorization': 'token %s' % ACCESS_TOKEN}
48
49 INSTALLER_CAT = 'Installers'
50
51 PYTHON_PKG_CAT = 'Python packages'
52
53 # Manifest of files, keyed by extension
54 file_manifest = {
55 'exe': {
56 'extension': 'exe',
57 'description': 'Windows Installer',
58 'category': INSTALLER_CAT,
59 'content_type': 'application/x-ms-dos-executable',
60 },
61 'pex': {
62 'extension': 'pex',
63 'description': 'Pex file',
64 'category': PYTHON_PKG_CAT,
65 'content_type': 'application/octet-stream',
66 },
67 'whl': {
68 'extension': 'whl',
69 'description': 'Whl file',
70 'category': PYTHON_PKG_CAT,
71 'content_type': 'application/zip',
72 },
73 'gz': {
74 'extension': 'gz',
75 'description': 'Tar file',
76 'category': PYTHON_PKG_CAT,
77 'content_type': 'application/gzip',
78 },
79 'apk': {
80 'extension': 'apk',
81 'description': 'Android Installer',
82 'category': INSTALLER_CAT,
83 'content_type': 'application/vnd.android.package-archive',
84 },
85 }
86
87 file_order = [
88 'exe',
89 'apk',
90 'pex',
91 'whl',
92 'gz',
93 ]
94
95 gh = login(token=ACCESS_TOKEN)
96 repository = gh.repository(REPO_OWNER, REPO_NAME)
97
98
99 def create_status_report_html(artifacts):
100 """
101 Create html page to list build artifacts for linking from github status.
102 """
103 html = "<html>\n<body>\n<h1>Build Artifacts</h1>\n"
104 current_heading = None
105 for ext in file_order:
106 artifact = artifacts[ext]
107 if artifact['category'] != current_heading:
108 current_heading = artifact['category']
109 html += "<h2>{heading}</h2>\n".format(heading=current_heading)
110 html += "<p>{description}: <a href='{media_url}'>{name}</a></p>\n".format(
111 **artifact
112 )
113 html += "</body>\n</html>"
114 return html
115
116 def create_github_status(report_url):
117 """
118 Create a github status with a link to the report URL,
119 only do this once buildkite has been successful, so only report success here.
120 """
121 status = repository.create_status(
122 COMMIT,
123 "success",
124 target_url=report_url,
125 description="Kolibri Buildkite assets",
126 context="buildkite/kolibri/assets"
127 )
128 if status:
129 logging.info('Successfully created Github status for commit %s.' % COMMIT)
130 else:
131 logging.info('Error encounter. Now exiting!')
132 sys.exit(1)
133
134
135 def collect_local_artifacts():
136 """
137 Create a dict of the artifact name and the location.
138 """
139
140 artifacts_dict = {}
141
142 def create_artifact_data(artifact_dir):
143 for artifact in listdir(artifact_dir):
144 filename, file_extension = os.path.splitext(artifact)
145 # Remove leading '.'
146 file_extension = file_extension[1:]
147 if file_extension in file_manifest:
148 data = {"name": artifact,
149 "file_location": "%s/%s" % (artifact_dir, artifact)}
150 data.update(file_manifest[file_extension])
151 logging.info("Collect file data: (%s)" % data)
152 artifacts_dict[file_extension] = data
153 create_artifact_data(DIST_DIR)
154 create_artifact_data(INSTALLER_DIR)
155 return artifacts_dict
156
157
158 def upload_artifacts():
159 """
160 Upload the artifacts on the Google Cloud Storage.
161 Create a comment on the pull requester with artifact media link.
162 """
163 client = storage.Client()
164 bucket = client.bucket("le-downloads")
165 artifacts = collect_local_artifacts()
166 is_release = os.getenv("IS_KOLIBRI_RELEASE")
167 for file_data in artifacts.values():
168 logging.info("Uploading file (%s)" % (file_data.get("name")))
169 if is_release:
170 blob = bucket.blob('kolibri/%s/%s/%s' % (RELEASE_DIR, BUILD_ID, file_data.get("name")))
171 else:
172 blob = bucket.blob('kolibri/buildkite/build-%s/%s/%s' % (ISSUE_ID, BUILD_ID, file_data.get("name")))
173 blob.upload_from_filename(filename=file_data.get("file_location"))
174 blob.make_public()
175 file_data.update({'media_url': blob.media_link})
176
177 html = create_status_report_html(artifacts)
178
179 blob = bucket.blob('kolibri/%s/%s/report.html' % (RELEASE_DIR, BUILD_ID))
180
181 blob.upload_from_string(html, content_type='text/html')
182
183 blob.make_public()
184
185 create_github_status(blob.public_url)
186
187 if TAG:
188 # Building from a tag, this is probably a release!
189 # Have to do this with requests because github3 does not support this interface yet
190 get_release_asset_url = requests.get("https://api.github.com/repos/{owner}/{repo}/releases/tags/{tag}".format(
191 owner=REPO_OWNER,
192 repo=REPO_NAME,
193 tag=TAG,
194 ))
195 if get_release_asset_url.status_code == 200:
196 # Definitely a release!
197 release_id = get_release_asset_url.json()['id']
198 release_name = get_release_asset_url.json()['name']
199 release = repository.release(id=release_id)
200 logging.info("Uploading built assets to Github Release: %s" % release_name)
201 for file_extension in file_order:
202 artifact = artifacts[file_extension]
203 logging.info("Uploading release asset: %s" % (artifact.get("name")))
204 # For some reason github3 does not let us set a label at initial upload
205 asset = release.upload_asset(
206 content_type=artifact['content_type'],
207 name=artifact['name'],
208 asset=open(artifact['file_location'], 'rb')
209 )
210 if asset:
211 # So do it after the initial upload instead
212 asset.edit(artifact['name'], label=artifact['description'])
213 logging.info("Successfully uploaded release asset: %s" % (artifact.get('name')))
214 else:
215 logging.error("Error uploading release asset: %s" % (artifact.get('name')))
216
217
218 def main():
219 upload_artifacts()
220
221
222 if __name__ == "__main__":
223 main()
224
[end of .buildkite/upload_artifacts.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.buildkite/upload_artifacts.py b/.buildkite/upload_artifacts.py
--- a/.buildkite/upload_artifacts.py
+++ b/.buildkite/upload_artifacts.py
@@ -167,16 +167,16 @@
for file_data in artifacts.values():
logging.info("Uploading file (%s)" % (file_data.get("name")))
if is_release:
- blob = bucket.blob('kolibri/%s/%s/%s' % (RELEASE_DIR, BUILD_ID, file_data.get("name")))
+ blob = bucket.blob('kolibri-%s-%s-%s' % (RELEASE_DIR, BUILD_ID, file_data.get("name")))
else:
- blob = bucket.blob('kolibri/buildkite/build-%s/%s/%s' % (ISSUE_ID, BUILD_ID, file_data.get("name")))
+ blob = bucket.blob('kolibri-buildkite-build-%s-%s-%s' % (ISSUE_ID, BUILD_ID, file_data.get("name")))
blob.upload_from_filename(filename=file_data.get("file_location"))
blob.make_public()
file_data.update({'media_url': blob.media_link})
html = create_status_report_html(artifacts)
- blob = bucket.blob('kolibri/%s/%s/report.html' % (RELEASE_DIR, BUILD_ID))
+ blob = bucket.blob('kolibri-%s-%s-report.html' % (RELEASE_DIR, BUILD_ID))
blob.upload_from_string(html, content_type='text/html')
| {"golden_diff": "diff --git a/.buildkite/upload_artifacts.py b/.buildkite/upload_artifacts.py\n--- a/.buildkite/upload_artifacts.py\n+++ b/.buildkite/upload_artifacts.py\n@@ -167,16 +167,16 @@\n for file_data in artifacts.values():\n logging.info(\"Uploading file (%s)\" % (file_data.get(\"name\")))\n if is_release:\n- blob = bucket.blob('kolibri/%s/%s/%s' % (RELEASE_DIR, BUILD_ID, file_data.get(\"name\")))\n+ blob = bucket.blob('kolibri-%s-%s-%s' % (RELEASE_DIR, BUILD_ID, file_data.get(\"name\")))\n else:\n- blob = bucket.blob('kolibri/buildkite/build-%s/%s/%s' % (ISSUE_ID, BUILD_ID, file_data.get(\"name\")))\n+ blob = bucket.blob('kolibri-buildkite-build-%s-%s-%s' % (ISSUE_ID, BUILD_ID, file_data.get(\"name\")))\n blob.upload_from_filename(filename=file_data.get(\"file_location\"))\n blob.make_public()\n file_data.update({'media_url': blob.media_link})\n \n html = create_status_report_html(artifacts)\n \n- blob = bucket.blob('kolibri/%s/%s/report.html' % (RELEASE_DIR, BUILD_ID))\n+ blob = bucket.blob('kolibri-%s-%s-report.html' % (RELEASE_DIR, BUILD_ID))\n \n blob.upload_from_string(html, content_type='text/html')\n", "issue": "Rename Buildkite assets for clarity\n## Summary\r\n\r\nWhen downloading builds from `buildkite/kolibri/assets` details link, filenames are too complex and difficult to distinguish. \r\n\r\n\r\n\r\nThese links do save some time, but filenames are longer then those downloaded individually from Buildkite (green). Could we make them shorter? For example, get rid of the useless prefix and various `%2F` in between in order to make this:\r\n\r\n`kolibri%2Fbuildkite%2Fbuild-2313%2F2755%2Fkolibri-0.6.dev020170929214748-git.pex` (red)\r\n\r\ninto this:\r\n\r\n`build-2313-kolibri-0.6.dev020170929214748-git.pex` (yellow)\r\n\r\nOr just `kolibri-0.6.dev020170929214748-git.pex` if those build details are not necessary.\r\n\r\n\n", "before_files": [{"content": "\"\"\"\n# Requirements:\n * Generate access token in your Github account, then create environment variable GITHUB_ACCESS_TOKEN.\n - e.g export GITHUB_ACCESS_TOKEN=1ns3rt-my-t0k3n-h3re.\n\n * Generate a service account key for your Google API credentials, then create environment variable GOOGLE_APPLICATION_CREDENTIALS.\n - e.g export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json.\n\n# Environment Variable/s:\n * IS_KOLIBRI_RELEASE = Upload artifacts to the Google Cloud as a release candidate.\n * GITHUB_ACCESS_TOKEN = Personal access token used to authenticate in your Github account via API.\n * BUILDKITE_BUILD_NUMBER = Build identifier for each directory created.\n * BUILDKITE_PULL_REQUEST = Pull request issue or the value is false.\n * BUILDKITE_TAG = Tag identifier if this build was built from a tag.\n * BUILDKITE_COMMIT = Git commit hash that the build was made from.\n * GOOGLE_APPLICATION_CREDENTIALS = Your service account key.\n\"\"\"\nimport logging\nimport os\nimport sys\nfrom os import listdir\n\nimport requests\nfrom gcloud import storage\nfrom github3 import login\n\nlogging.getLogger().setLevel(logging.INFO)\n\nACCESS_TOKEN = os.getenv(\"GITHUB_ACCESS_TOKEN\")\nREPO_OWNER = \"learningequality\"\nREPO_NAME = \"kolibri\"\nISSUE_ID = os.getenv(\"BUILDKITE_PULL_REQUEST\")\nBUILD_ID = os.getenv(\"BUILDKITE_BUILD_NUMBER\")\nTAG = os.getenv(\"BUILDKITE_TAG\")\nCOMMIT = os.getenv(\"BUILDKITE_COMMIT\")\n\n\nRELEASE_DIR = 'release'\nPROJECT_PATH = os.path.join(os.getcwd())\n\n# Python packages artifact location\nDIST_DIR = os.path.join(PROJECT_PATH, \"dist\")\n\n# Installer artifact location\nINSTALLER_DIR = 
os.path.join(PROJECT_PATH, \"installer\")\n\nheaders = {'Authorization': 'token %s' % ACCESS_TOKEN}\n\nINSTALLER_CAT = 'Installers'\n\nPYTHON_PKG_CAT = 'Python packages'\n\n# Manifest of files, keyed by extension\nfile_manifest = {\n 'exe': {\n 'extension': 'exe',\n 'description': 'Windows Installer',\n 'category': INSTALLER_CAT,\n 'content_type': 'application/x-ms-dos-executable',\n },\n 'pex': {\n 'extension': 'pex',\n 'description': 'Pex file',\n 'category': PYTHON_PKG_CAT,\n 'content_type': 'application/octet-stream',\n },\n 'whl': {\n 'extension': 'whl',\n 'description': 'Whl file',\n 'category': PYTHON_PKG_CAT,\n 'content_type': 'application/zip',\n },\n 'gz': {\n 'extension': 'gz',\n 'description': 'Tar file',\n 'category': PYTHON_PKG_CAT,\n 'content_type': 'application/gzip',\n },\n 'apk': {\n 'extension': 'apk',\n 'description': 'Android Installer',\n 'category': INSTALLER_CAT,\n 'content_type': 'application/vnd.android.package-archive',\n },\n}\n\nfile_order = [\n 'exe',\n 'apk',\n 'pex',\n 'whl',\n 'gz',\n]\n\ngh = login(token=ACCESS_TOKEN)\nrepository = gh.repository(REPO_OWNER, REPO_NAME)\n\n\ndef create_status_report_html(artifacts):\n \"\"\"\n Create html page to list build artifacts for linking from github status.\n \"\"\"\n html = \"<html>\\n<body>\\n<h1>Build Artifacts</h1>\\n\"\n current_heading = None\n for ext in file_order:\n artifact = artifacts[ext]\n if artifact['category'] != current_heading:\n current_heading = artifact['category']\n html += \"<h2>{heading}</h2>\\n\".format(heading=current_heading)\n html += \"<p>{description}: <a href='{media_url}'>{name}</a></p>\\n\".format(\n **artifact\n )\n html += \"</body>\\n</html>\"\n return html\n\ndef create_github_status(report_url):\n \"\"\"\n Create a github status with a link to the report URL,\n only do this once buildkite has been successful, so only report success here.\n \"\"\"\n status = repository.create_status(\n COMMIT,\n \"success\",\n target_url=report_url,\n description=\"Kolibri Buildkite assets\",\n context=\"buildkite/kolibri/assets\"\n )\n if status:\n logging.info('Successfully created Github status for commit %s.' % COMMIT)\n else:\n logging.info('Error encounter. 
Now exiting!')\n sys.exit(1)\n\n\ndef collect_local_artifacts():\n \"\"\"\n Create a dict of the artifact name and the location.\n \"\"\"\n\n artifacts_dict = {}\n\n def create_artifact_data(artifact_dir):\n for artifact in listdir(artifact_dir):\n filename, file_extension = os.path.splitext(artifact)\n # Remove leading '.'\n file_extension = file_extension[1:]\n if file_extension in file_manifest:\n data = {\"name\": artifact,\n \"file_location\": \"%s/%s\" % (artifact_dir, artifact)}\n data.update(file_manifest[file_extension])\n logging.info(\"Collect file data: (%s)\" % data)\n artifacts_dict[file_extension] = data\n create_artifact_data(DIST_DIR)\n create_artifact_data(INSTALLER_DIR)\n return artifacts_dict\n\n\ndef upload_artifacts():\n \"\"\"\n Upload the artifacts on the Google Cloud Storage.\n Create a comment on the pull requester with artifact media link.\n \"\"\"\n client = storage.Client()\n bucket = client.bucket(\"le-downloads\")\n artifacts = collect_local_artifacts()\n is_release = os.getenv(\"IS_KOLIBRI_RELEASE\")\n for file_data in artifacts.values():\n logging.info(\"Uploading file (%s)\" % (file_data.get(\"name\")))\n if is_release:\n blob = bucket.blob('kolibri/%s/%s/%s' % (RELEASE_DIR, BUILD_ID, file_data.get(\"name\")))\n else:\n blob = bucket.blob('kolibri/buildkite/build-%s/%s/%s' % (ISSUE_ID, BUILD_ID, file_data.get(\"name\")))\n blob.upload_from_filename(filename=file_data.get(\"file_location\"))\n blob.make_public()\n file_data.update({'media_url': blob.media_link})\n\n html = create_status_report_html(artifacts)\n\n blob = bucket.blob('kolibri/%s/%s/report.html' % (RELEASE_DIR, BUILD_ID))\n\n blob.upload_from_string(html, content_type='text/html')\n\n blob.make_public()\n\n create_github_status(blob.public_url)\n\n if TAG:\n # Building from a tag, this is probably a release!\n # Have to do this with requests because github3 does not support this interface yet\n get_release_asset_url = requests.get(\"https://api.github.com/repos/{owner}/{repo}/releases/tags/{tag}\".format(\n owner=REPO_OWNER,\n repo=REPO_NAME,\n tag=TAG,\n ))\n if get_release_asset_url.status_code == 200:\n # Definitely a release!\n release_id = get_release_asset_url.json()['id']\n release_name = get_release_asset_url.json()['name']\n release = repository.release(id=release_id)\n logging.info(\"Uploading built assets to Github Release: %s\" % release_name)\n for file_extension in file_order:\n artifact = artifacts[file_extension]\n logging.info(\"Uploading release asset: %s\" % (artifact.get(\"name\")))\n # For some reason github3 does not let us set a label at initial upload\n asset = release.upload_asset(\n content_type=artifact['content_type'],\n name=artifact['name'],\n asset=open(artifact['file_location'], 'rb')\n )\n if asset:\n # So do it after the initial upload instead\n asset.edit(artifact['name'], label=artifact['description'])\n logging.info(\"Successfully uploaded release asset: %s\" % (artifact.get('name')))\n else:\n logging.error(\"Error uploading release asset: %s\" % (artifact.get('name')))\n\n\ndef main():\n upload_artifacts()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": ".buildkite/upload_artifacts.py"}]} | 3,204 | 329 |
gh_patches_debug_505 | rasdani/github-patches | git_diff | googleapis__google-cloud-python-3693 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Increment threadsafety on BigQuery DB-API interface when httplib2 is removed
From https://github.com/GoogleCloudPlatform/google-cloud-python/pull/2921/files/5b29a9e42a40f7c74ced0b7a7e9db0add52a20a1#r123596828
Blocked by https://github.com/GoogleCloudPlatform/google-cloud-python/issues/1998
</issue>
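For context, PEP 249 defines `threadsafety` as a module-level integer: 1 means only the module itself may be shared across threads, while 2 additionally allows sharing connections (but not cursors). The patch below moves the BigQuery DB-API module from 1 to 2 once the shared-`httplib2` transport is gone. A tiny illustrative consumer-side check (the fake module here is purely hypothetical):

```python
def can_share_connections(dbapi_module) -> bool:
    # PEP 249 levels: 0 = not thread-safe, 1 = module shareable,
    # 2 = module and connections shareable, 3 = cursors shareable too.
    return getattr(dbapi_module, 'threadsafety', 0) >= 2


class FakeDbapi:
    threadsafety = 2  # the value the patch below introduces


print(can_share_connections(FakeDbapi))  # True
```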
<code>
[start of bigquery/google/cloud/bigquery/dbapi/__init__.py]
1 # Copyright 2017 Google Inc.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Google BigQuery implementation of the Database API Specification v2.0.
16
17 This module implements the `Python Database API Specification v2.0 (DB-API)`_
18 for Google BigQuery.
19
20 .. _Python Database API Specification v2.0 (DB-API):
21 https://www.python.org/dev/peps/pep-0249/
22
23 .. warning::
24 The ``dbapi`` module is **alpha**. The implementation is not complete. It
25 might be changed in backward-incompatible ways and is not subject to any SLA
26 or deprecation policy.
27 """
28
29 from google.cloud.bigquery.dbapi.connection import connect
30 from google.cloud.bigquery.dbapi.connection import Connection
31 from google.cloud.bigquery.dbapi.cursor import Cursor
32 from google.cloud.bigquery.dbapi.exceptions import Warning
33 from google.cloud.bigquery.dbapi.exceptions import Error
34 from google.cloud.bigquery.dbapi.exceptions import InterfaceError
35 from google.cloud.bigquery.dbapi.exceptions import DatabaseError
36 from google.cloud.bigquery.dbapi.exceptions import DataError
37 from google.cloud.bigquery.dbapi.exceptions import OperationalError
38 from google.cloud.bigquery.dbapi.exceptions import IntegrityError
39 from google.cloud.bigquery.dbapi.exceptions import InternalError
40 from google.cloud.bigquery.dbapi.exceptions import ProgrammingError
41 from google.cloud.bigquery.dbapi.exceptions import NotSupportedError
42 from google.cloud.bigquery.dbapi.types import Binary
43 from google.cloud.bigquery.dbapi.types import Date
44 from google.cloud.bigquery.dbapi.types import DateFromTicks
45 from google.cloud.bigquery.dbapi.types import Time
46 from google.cloud.bigquery.dbapi.types import TimeFromTicks
47 from google.cloud.bigquery.dbapi.types import Timestamp
48 from google.cloud.bigquery.dbapi.types import TimestampFromTicks
49 from google.cloud.bigquery.dbapi.types import BINARY
50 from google.cloud.bigquery.dbapi.types import DATETIME
51 from google.cloud.bigquery.dbapi.types import NUMBER
52 from google.cloud.bigquery.dbapi.types import ROWID
53 from google.cloud.bigquery.dbapi.types import STRING
54
55
56 apilevel = '2.0'
57
58 # Threads may share the module, but not connections.
59 threadsafety = 1
60
61 paramstyle = 'pyformat'
62
63 __all__ = [
64 'apilevel', 'threadsafety', 'paramstyle', 'connect', 'Connection',
65 'Cursor', 'Warning', 'Error', 'InterfaceError', 'DatabaseError',
66 'DataError', 'OperationalError', 'IntegrityError', 'InternalError',
67 'ProgrammingError', 'NotSupportedError', 'Binary', 'Date', 'DateFromTicks',
68 'Time', 'TimeFromTicks', 'Timestamp', 'TimestampFromTicks', 'BINARY',
69 'DATETIME', 'NUMBER', 'ROWID', 'STRING',
70 ]
71
[end of bigquery/google/cloud/bigquery/dbapi/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/bigquery/google/cloud/bigquery/dbapi/__init__.py b/bigquery/google/cloud/bigquery/dbapi/__init__.py
--- a/bigquery/google/cloud/bigquery/dbapi/__init__.py
+++ b/bigquery/google/cloud/bigquery/dbapi/__init__.py
@@ -55,8 +55,8 @@
apilevel = '2.0'
-# Threads may share the module, but not connections.
-threadsafety = 1
+# Threads may share the module and connections, but not cursors.
+threadsafety = 2
paramstyle = 'pyformat'
| {"golden_diff": "diff --git a/bigquery/google/cloud/bigquery/dbapi/__init__.py b/bigquery/google/cloud/bigquery/dbapi/__init__.py\n--- a/bigquery/google/cloud/bigquery/dbapi/__init__.py\n+++ b/bigquery/google/cloud/bigquery/dbapi/__init__.py\n@@ -55,8 +55,8 @@\n \n apilevel = '2.0'\n \n-# Threads may share the module, but not connections.\n-threadsafety = 1\n+# Threads may share the module and connections, but not cursors.\n+threadsafety = 2\n \n paramstyle = 'pyformat'\n", "issue": "Increment threadsafety on BigQuery DB-API interface when httplib2 is removed\nFrom https://github.com/GoogleCloudPlatform/google-cloud-python/pull/2921/files/5b29a9e42a40f7c74ced0b7a7e9db0add52a20a1#r123596828\r\n\r\nBlocked by https://github.com/GoogleCloudPlatform/google-cloud-python/issues/1998\n", "before_files": [{"content": "# Copyright 2017 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Google BigQuery implementation of the Database API Specification v2.0.\n\nThis module implements the `Python Database API Specification v2.0 (DB-API)`_\nfor Google BigQuery.\n\n.. _Python Database API Specification v2.0 (DB-API):\n https://www.python.org/dev/peps/pep-0249/\n\n.. warning::\n The ``dbapi`` module is **alpha**. The implementation is not complete. 
It\n might be changed in backward-incompatible ways and is not subject to any SLA\n or deprecation policy.\n\"\"\"\n\nfrom google.cloud.bigquery.dbapi.connection import connect\nfrom google.cloud.bigquery.dbapi.connection import Connection\nfrom google.cloud.bigquery.dbapi.cursor import Cursor\nfrom google.cloud.bigquery.dbapi.exceptions import Warning\nfrom google.cloud.bigquery.dbapi.exceptions import Error\nfrom google.cloud.bigquery.dbapi.exceptions import InterfaceError\nfrom google.cloud.bigquery.dbapi.exceptions import DatabaseError\nfrom google.cloud.bigquery.dbapi.exceptions import DataError\nfrom google.cloud.bigquery.dbapi.exceptions import OperationalError\nfrom google.cloud.bigquery.dbapi.exceptions import IntegrityError\nfrom google.cloud.bigquery.dbapi.exceptions import InternalError\nfrom google.cloud.bigquery.dbapi.exceptions import ProgrammingError\nfrom google.cloud.bigquery.dbapi.exceptions import NotSupportedError\nfrom google.cloud.bigquery.dbapi.types import Binary\nfrom google.cloud.bigquery.dbapi.types import Date\nfrom google.cloud.bigquery.dbapi.types import DateFromTicks\nfrom google.cloud.bigquery.dbapi.types import Time\nfrom google.cloud.bigquery.dbapi.types import TimeFromTicks\nfrom google.cloud.bigquery.dbapi.types import Timestamp\nfrom google.cloud.bigquery.dbapi.types import TimestampFromTicks\nfrom google.cloud.bigquery.dbapi.types import BINARY\nfrom google.cloud.bigquery.dbapi.types import DATETIME\nfrom google.cloud.bigquery.dbapi.types import NUMBER\nfrom google.cloud.bigquery.dbapi.types import ROWID\nfrom google.cloud.bigquery.dbapi.types import STRING\n\n\napilevel = '2.0'\n\n# Threads may share the module, but not connections.\nthreadsafety = 1\n\nparamstyle = 'pyformat'\n\n__all__ = [\n 'apilevel', 'threadsafety', 'paramstyle', 'connect', 'Connection',\n 'Cursor', 'Warning', 'Error', 'InterfaceError', 'DatabaseError',\n 'DataError', 'OperationalError', 'IntegrityError', 'InternalError',\n 'ProgrammingError', 'NotSupportedError', 'Binary', 'Date', 'DateFromTicks',\n 'Time', 'TimeFromTicks', 'Timestamp', 'TimestampFromTicks', 'BINARY',\n 'DATETIME', 'NUMBER', 'ROWID', 'STRING',\n]\n", "path": "bigquery/google/cloud/bigquery/dbapi/__init__.py"}]} | 1,496 | 130 |
gh_patches_debug_25995 | rasdani/github-patches | git_diff | pytorch__vision-2979 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC] Documentation for sigmoid focal loss
## 📚 Documentation
Sigmoid focal loss was added to torchvision, but I think docs are not generated for it.
It was added to `__init__` as well [here](https://github.com/pytorch/vision/blob/v0.8.0/torchvision/ops/__init__.py#L11)
I think it's a really minor fix; we might need `..autodoc::sigmoid_focal_loss` and it should be fine.
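For what it's worth, the function is importable and already carries a docstring, so only the Sphinx entry seems to be missing. A quick check (plain Python, assuming nothing beyond torchvision 0.8.0 being installed):

```python
# The op is exported from torchvision.ops and help() shows its docstring;
# it simply never appears in the rendered documentation.
from torchvision.ops import sigmoid_focal_loss

help(sigmoid_focal_loss)
```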
</issue>
<code>
[start of torchvision/ops/focal_loss.py]
1 import torch
2 import torch.nn.functional as F
3
4
5 def sigmoid_focal_loss(
6 inputs,
7 targets,
8 alpha: float = 0.25,
9 gamma: float = 2,
10 reduction: str = "none",
11 ):
12 """
13 Original implementation from https://github.com/facebookresearch/fvcore/blob/master/fvcore/nn/focal_loss.py .
14 Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
15 Args:
16 inputs: A float tensor of arbitrary shape.
17 The predictions for each example.
18 targets: A float tensor with the same shape as inputs. Stores the binary
19 classification label for each element in inputs
20 (0 for the negative class and 1 for the positive class).
21 alpha: (optional) Weighting factor in range (0,1) to balance
22 positive vs negative examples or -1 for ignore. Default = 0.25
23 gamma: Exponent of the modulating factor (1 - p_t) to
24 balance easy vs hard examples.
25 reduction: 'none' | 'mean' | 'sum'
26 'none': No reduction will be applied to the output.
27 'mean': The output will be averaged.
28 'sum': The output will be summed.
29 Returns:
30 Loss tensor with the reduction option applied.
31 """
32 p = torch.sigmoid(inputs)
33 ce_loss = F.binary_cross_entropy_with_logits(
34 inputs, targets, reduction="none"
35 )
36 p_t = p * targets + (1 - p) * (1 - targets)
37 loss = ce_loss * ((1 - p_t) ** gamma)
38
39 if alpha >= 0:
40 alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
41 loss = alpha_t * loss
42
43 if reduction == "mean":
44 loss = loss.mean()
45 elif reduction == "sum":
46 loss = loss.sum()
47
48 return loss
49
[end of torchvision/ops/focal_loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/ops/focal_loss.py b/torchvision/ops/focal_loss.py
--- a/torchvision/ops/focal_loss.py
+++ b/torchvision/ops/focal_loss.py
@@ -3,8 +3,8 @@
def sigmoid_focal_loss(
- inputs,
- targets,
+ inputs: torch.Tensor,
+ targets: torch.Tensor,
alpha: float = 0.25,
gamma: float = 2,
reduction: str = "none",
@@ -12,11 +12,12 @@
"""
Original implementation from https://github.com/facebookresearch/fvcore/blob/master/fvcore/nn/focal_loss.py .
Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
- Args:
+
+ Arguments:
inputs: A float tensor of arbitrary shape.
The predictions for each example.
targets: A float tensor with the same shape as inputs. Stores the binary
- classification label for each element in inputs
+ classification label for each element in inputs
(0 for the negative class and 1 for the positive class).
alpha: (optional) Weighting factor in range (0,1) to balance
positive vs negative examples or -1 for ignore. Default = 0.25
| {"golden_diff": "diff --git a/torchvision/ops/focal_loss.py b/torchvision/ops/focal_loss.py\n--- a/torchvision/ops/focal_loss.py\n+++ b/torchvision/ops/focal_loss.py\n@@ -3,8 +3,8 @@\n \n \n def sigmoid_focal_loss(\n- inputs,\n- targets,\n+ inputs: torch.Tensor,\n+ targets: torch.Tensor,\n alpha: float = 0.25,\n gamma: float = 2,\n reduction: str = \"none\",\n@@ -12,11 +12,12 @@\n \"\"\"\n Original implementation from https://github.com/facebookresearch/fvcore/blob/master/fvcore/nn/focal_loss.py .\n Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.\n- Args:\n+\n+ Arguments:\n inputs: A float tensor of arbitrary shape.\n The predictions for each example.\n targets: A float tensor with the same shape as inputs. Stores the binary\n- classification label for each element in inputs\n+ classification label for each element in inputs\n (0 for the negative class and 1 for the positive class).\n alpha: (optional) Weighting factor in range (0,1) to balance\n positive vs negative examples or -1 for ignore. Default = 0.25\n", "issue": "[DOC] Documentation for sigmoid focal loss\n## \ud83d\udcda Documentation\r\n\r\nSigmoid focal loss was added to torchvision, but I think docs are not generated for it\r\n\r\nIt was initalized in `init` as well [here](https://github.com/pytorch/vision/blob/v0.8.0/torchvision/ops/__init__.py#L11)\r\n\r\nI think it's really minor fix, we might need `..autodoc::sigmoid_focal_loss` and it should be fine.\n", "before_files": [{"content": "import torch\nimport torch.nn.functional as F\n\n\ndef sigmoid_focal_loss(\n inputs,\n targets,\n alpha: float = 0.25,\n gamma: float = 2,\n reduction: str = \"none\",\n):\n \"\"\"\n Original implementation from https://github.com/facebookresearch/fvcore/blob/master/fvcore/nn/focal_loss.py .\n Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.\n Args:\n inputs: A float tensor of arbitrary shape.\n The predictions for each example.\n targets: A float tensor with the same shape as inputs. Stores the binary\n classification label for each element in inputs\n (0 for the negative class and 1 for the positive class).\n alpha: (optional) Weighting factor in range (0,1) to balance\n positive vs negative examples or -1 for ignore. Default = 0.25\n gamma: Exponent of the modulating factor (1 - p_t) to\n balance easy vs hard examples.\n reduction: 'none' | 'mean' | 'sum'\n 'none': No reduction will be applied to the output.\n 'mean': The output will be averaged.\n 'sum': The output will be summed.\n Returns:\n Loss tensor with the reduction option applied.\n \"\"\"\n p = torch.sigmoid(inputs)\n ce_loss = F.binary_cross_entropy_with_logits(\n inputs, targets, reduction=\"none\"\n )\n p_t = p * targets + (1 - p) * (1 - targets)\n loss = ce_loss * ((1 - p_t) ** gamma)\n\n if alpha >= 0:\n alpha_t = alpha * targets + (1 - alpha) * (1 - targets)\n loss = alpha_t * loss\n\n if reduction == \"mean\":\n loss = loss.mean()\n elif reduction == \"sum\":\n loss = loss.sum()\n\n return loss\n", "path": "torchvision/ops/focal_loss.py"}]} | 1,157 | 301 |
gh_patches_debug_19341 | rasdani/github-patches | git_diff | praw-dev__praw-1737 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Assertion error on 429 HTTP status code
**Describe the bug**
From time to time, I have found that my code reports 429 errors.
**To Reproduce**
Steps to reproduce the behavior:
`submission.comments.replace_more(limit=limit)` with a higher limit for the query, running multiple processes for the same key.
**Expected behavior**
I expect this error to be handled as a rate-limit error.
**Code/Logs**
<!-- include your code, without the Reddit() initialization, so as to not leak private credentials. -->
submission.comments.replace_more(limit=limit)
File "/usr/local/lib/python3.7/site-packages/praw/models/comment_forest.py", line 178, in replace_more
new_comments = item.comments(update=False)
File "/usr/local/lib/python3.7/site-packages/praw/models/reddit/more.py", line 74, in comments
self._comments = self._reddit.post(API_PATH["morechildren"], data=data)
File "/usr/local/lib/python3.7/site-packages/praw/reddit.py", line 758, in post
path=path,
File "/usr/local/lib/python3.7/site-packages/praw/reddit.py", line 672, in _objectify_request
path=path,
File "/usr/local/lib/python3.7/site-packages/praw/reddit.py", line 855, in request
json=json,
File "/usr/local/lib/python3.7/site-packages/prawcore/sessions.py", line 331, in request
url=url,
File "/usr/local/lib/python3.7/site-packages/prawcore/sessions.py", line 265, in _request_with_retries
), f"Unexpected status code: {response.status_code}"
AssertionError: Unexpected status code: 429
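Until the library surfaces this as a proper rate-limit exception, a retry wrapper along the following lines can paper over the failure. The wrapper name, attempt count, and backoff policy are my own illustration, not PRAW API:

```python
import time

def replace_more_with_retry(submission, limit, max_attempts=5):
    """Retry replace_more() while the 429 still surfaces as an AssertionError."""
    for attempt in range(max_attempts):
        try:
            return submission.comments.replace_more(limit=limit)
        except AssertionError:  # prawcore asserts on the 429 instead of raising
            time.sleep(2 ** attempt)  # crude exponential backoff
    raise RuntimeError("Still rate limited after %d attempts" % max_attempts)
```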
**System Info**
root@1be190a235da:/aitrading# uname -a
Linux 1be190a235da 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux
root@1be190a235da:/aitrading# python --version
Python 3.7.10
root@1be190a235da:/aitrading# pip show praw
Name: praw
Version: 7.2.0
Summary: PRAW, an acronym for `Python Reddit API Wrapper`, is a python package that allows for simple access to reddit's API.
Home-page: UNKNOWN
Author: Bryce Boe
Author-email: [email protected]
License: Simplified BSD License
Location: /usr/local/lib/python3.7/site-packages
Requires: prawcore, update-checker, websocket-client
Required-by:
root@1be190a235da:/aitrading#
- OS:
- Python: python3.7
- PRAW Version:
</issue>
<code>
[start of praw/models/comment_forest.py]
1 """Provide CommentForest for Submission comments."""
2 from heapq import heappop, heappush
3 from typing import TYPE_CHECKING, List, Optional, Union
4
5 from ..exceptions import DuplicateReplaceException
6 from .reddit.more import MoreComments
7
8 if TYPE_CHECKING: # pragma: no cover
9 from ... import praw
10
11
12 class CommentForest:
13 """A forest of comments starts with multiple top-level comments.
14
15 Each of these comments can be a tree of replies.
16
17 """
18
19 @staticmethod
20 def _gather_more_comments(tree, parent_tree=None):
21 """Return a list of MoreComments objects obtained from tree."""
22 more_comments = []
23 queue = [(None, x) for x in tree]
24 while queue:
25 parent, comment = queue.pop(0)
26 if isinstance(comment, MoreComments):
27 heappush(more_comments, comment)
28 if parent:
29 comment._remove_from = parent.replies._comments
30 else:
31 comment._remove_from = parent_tree or tree
32 else:
33 for item in comment.replies:
34 queue.append((comment, item))
35 return more_comments
36
37 def __getitem__(self, index: int):
38 """Return the comment at position ``index`` in the list.
39
40 This method is to be used like an array access, such as:
41
42 .. code-block:: python
43
44 first_comment = submission.comments[0]
45
46 Alternatively, the presence of this method enables one to iterate over all top
47 level comments, like so:
48
49 .. code-block:: python
50
51 for comment in submission.comments:
52 print(comment.body)
53
54 """
55 return self._comments[index]
56
57 def __init__(
58 self,
59 submission: "praw.models.Submission",
60 comments: Optional[List["praw.models.Comment"]] = None,
61 ):
62 """Initialize a CommentForest instance.
63
64 :param submission: An instance of :class:`~.Subreddit` that is the parent of the
65 comments.
66 :param comments: Initialize the Forest with a list of comments (default: None).
67
68 """
69 self._comments = comments
70 self._submission = submission
71
72 def __len__(self) -> int:
73 """Return the number of top-level comments in the forest."""
74 return len(self._comments)
75
76 def _insert_comment(self, comment):
77 if comment.name in self._submission._comments_by_id:
78 raise DuplicateReplaceException
79 comment.submission = self._submission
80 if isinstance(comment, MoreComments) or comment.is_root:
81 self._comments.append(comment)
82 else:
83 assert comment.parent_id in self._submission._comments_by_id, (
84 "PRAW Error occurred. Please file a bug report and include the code"
85 " that caused the error."
86 )
87 parent = self._submission._comments_by_id[comment.parent_id]
88 parent.replies._comments.append(comment)
89
90 def _update(self, comments):
91 self._comments = comments
92 for comment in comments:
93 comment.submission = self._submission
94
95 def list(self) -> List[Union["praw.models.Comment", "praw.models.MoreComments"]]:
96 """Return a flattened list of all Comments.
97
98 This list may contain :class:`.MoreComments` instances if :meth:`.replace_more`
99 was not called first.
100
101 """
102 comments = []
103 queue = list(self)
104 while queue:
105 comment = queue.pop(0)
106 comments.append(comment)
107 if not isinstance(comment, MoreComments):
108 queue.extend(comment.replies)
109 return comments
110
111 def replace_more(
112 self, limit: int = 32, threshold: int = 0
113 ) -> List["praw.models.MoreComments"]:
114 """Update the comment forest by resolving instances of MoreComments.
115
116 :param limit: The maximum number of :class:`.MoreComments` instances to replace.
117 Each replacement requires 1 API request. Set to ``None`` to have no limit,
118 or to ``0`` to remove all :class:`.MoreComments` instances without
119 additional requests (default: 32).
120 :param threshold: The minimum number of children comments a
121 :class:`.MoreComments` instance must have in order to be replaced.
122 :class:`.MoreComments` instances that represent "continue this thread" links
123 unfortunately appear to have 0 children. (default: 0).
124
125 :returns: A list of :class:`.MoreComments` instances that were not replaced.
126
127 For example, to replace up to 32 :class:`.MoreComments` instances of a
128 submission try:
129
130 .. code-block:: python
131
132 submission = reddit.submission("3hahrw")
133 submission.comments.replace_more()
134
135 Alternatively, to replace :class:`.MoreComments` instances within the replies of
136 a single comment try:
137
138 .. code-block:: python
139
140 comment = reddit.comment("d8r4im1")
141 comment.refresh()
142 comment.replies.replace_more()
143
144 .. note::
145
146 This method can take a long time as each replacement will discover at most
147 20 new :class:`.Comment` or :class:`.MoreComments` instances. As a result,
148 consider looping and handling exceptions until the method returns
149 successfully. For example:
150
151 .. code-block:: python
152
153 while True:
154 try:
155 submission.comments.replace_more()
156 break
157 except PossibleExceptions:
158 print("Handling replace_more exception")
159 sleep(1)
160
161 .. warning::
162
163 If this method is called, and the comments are refreshed, calling this
164 method again will result in a :class:`.DuplicateReplaceException`.
165
166 """
167 remaining = limit
168 more_comments = self._gather_more_comments(self._comments)
169 skipped = []
170
171 # Fetch largest more_comments until reaching the limit or the threshold
172 while more_comments:
173 item = heappop(more_comments)
174 if remaining is not None and remaining <= 0 or item.count < threshold:
175 skipped.append(item)
176 item._remove_from.remove(item)
177 continue
178
179 new_comments = item.comments(update=False)
180 if remaining is not None:
181 remaining -= 1
182
183 # Add new MoreComment objects to the heap of more_comments
184 for more in self._gather_more_comments(new_comments, self._comments):
185 more.submission = self._submission
186 heappush(more_comments, more)
187 # Insert all items into the tree
188 for comment in new_comments:
189 self._insert_comment(comment)
190
191 # Remove from forest
192 item._remove_from.remove(item)
193
194 return more_comments + skipped
195
[end of praw/models/comment_forest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/praw/models/comment_forest.py b/praw/models/comment_forest.py
--- a/praw/models/comment_forest.py
+++ b/praw/models/comment_forest.py
@@ -124,6 +124,8 @@
:returns: A list of :class:`.MoreComments` instances that were not replaced.
+ :raises: ``prawcore.TooManyRequests`` when used concurrently.
+
For example, to replace up to 32 :class:`.MoreComments` instances of a
submission try:
@@ -144,9 +146,8 @@
.. note::
This method can take a long time as each replacement will discover at most
- 20 new :class:`.Comment` or :class:`.MoreComments` instances. As a result,
- consider looping and handling exceptions until the method returns
- successfully. For example:
+ 100 new :class:`.Comment` instances. As a result, consider looping and
+ handling exceptions until the method returns successfully. For example:
.. code-block:: python
| {"golden_diff": "diff --git a/praw/models/comment_forest.py b/praw/models/comment_forest.py\n--- a/praw/models/comment_forest.py\n+++ b/praw/models/comment_forest.py\n@@ -124,6 +124,8 @@\n \n :returns: A list of :class:`.MoreComments` instances that were not replaced.\n \n+ :raises: ``prawcore.TooManyRequests`` when used concurrently.\n+\n For example, to replace up to 32 :class:`.MoreComments` instances of a\n submission try:\n \n@@ -144,9 +146,8 @@\n .. note::\n \n This method can take a long time as each replacement will discover at most\n- 20 new :class:`.Comment` or :class:`.MoreComments` instances. As a result,\n- consider looping and handling exceptions until the method returns\n- successfully. For example:\n+ 100 new :class:`.Comment` instances. As a result, consider looping and\n+ handling exceptions until the method returns successfully. For example:\n \n .. code-block:: python\n", "issue": "Assertion error on 429 HTTP status code\n**Describe the bug**\r\nFrom time to time, I found my codes reported 429 errors.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n`submission.comments.replace_more(limit=limit)` with a higher limit for the query, running multiple processes for the same key.\r\n\r\n**Expected behavior**\r\nExpect to handle this error as ratelimit error\r\n\r\n**Code/Logs**\r\n<!-- include your code, without the Reddit() initialization, so as to not leak private credentials. -->\r\n submission.comments.replace_more(limit=limit)\r\n File \"/usr/local/lib/python3.7/site-packages/praw/models/comment_forest.py\", line 178, in replace_more\r\n new_comments = item.comments(update=False)\r\n File \"/usr/local/lib/python3.7/site-packages/praw/models/reddit/more.py\", line 74, in comments\r\n self._comments = self._reddit.post(API_PATH[\"morechildren\"], data=data)\r\n File \"/usr/local/lib/python3.7/site-packages/praw/reddit.py\", line 758, in post\r\n path=path,\r\n File \"/usr/local/lib/python3.7/site-packages/praw/reddit.py\", line 672, in _objectify_request\r\n path=path,\r\n File \"/usr/local/lib/python3.7/site-packages/praw/reddit.py\", line 855, in request\r\n json=json,\r\n File \"/usr/local/lib/python3.7/site-packages/prawcore/sessions.py\", line 331, in request\r\n url=url,\r\n File \"/usr/local/lib/python3.7/site-packages/prawcore/sessions.py\", line 265, in _request_with_retries\r\n ), f\"Unexpected status code: {response.status_code}\"\r\nAssertionError: Unexpected status code: 429\r\n**System Info**\r\n\r\nroot@1be190a235da:/aitrading# uname -a\r\nLinux 1be190a235da 4.19.0-14-cloud-amd64 #1 SMP Debian 4.19.171-2 (2021-01-30) x86_64 GNU/Linux\r\nroot@1be190a235da:/aitrading# python --version\r\nPython 3.7.10\r\nroot@1be190a235da:/aitrading# pip show praw\r\nName: praw\r\nVersion: 7.2.0\r\nSummary: PRAW, an acronym for `Python Reddit API Wrapper`, is a python package that allows for simple access to reddit's API.\r\nHome-page: UNKNOWN\r\nAuthor: Bryce Boe\r\nAuthor-email: [email protected]\r\nLicense: Simplified BSD License\r\nLocation: /usr/local/lib/python3.7/site-packages\r\nRequires: prawcore, update-checker, websocket-client\r\nRequired-by:\r\nroot@1be190a235da:/aitrading#\r\n\r\n - OS: \r\n - Python: python3.7\r\n - PRAW Version:\r\n\n", "before_files": [{"content": "\"\"\"Provide CommentForest for Submission comments.\"\"\"\nfrom heapq import heappop, heappush\nfrom typing import TYPE_CHECKING, List, Optional, Union\n\nfrom ..exceptions import DuplicateReplaceException\nfrom .reddit.more import MoreComments\n\nif TYPE_CHECKING: # pragma: no 
cover\n from ... import praw\n\n\nclass CommentForest:\n \"\"\"A forest of comments starts with multiple top-level comments.\n\n Each of these comments can be a tree of replies.\n\n \"\"\"\n\n @staticmethod\n def _gather_more_comments(tree, parent_tree=None):\n \"\"\"Return a list of MoreComments objects obtained from tree.\"\"\"\n more_comments = []\n queue = [(None, x) for x in tree]\n while queue:\n parent, comment = queue.pop(0)\n if isinstance(comment, MoreComments):\n heappush(more_comments, comment)\n if parent:\n comment._remove_from = parent.replies._comments\n else:\n comment._remove_from = parent_tree or tree\n else:\n for item in comment.replies:\n queue.append((comment, item))\n return more_comments\n\n def __getitem__(self, index: int):\n \"\"\"Return the comment at position ``index`` in the list.\n\n This method is to be used like an array access, such as:\n\n .. code-block:: python\n\n first_comment = submission.comments[0]\n\n Alternatively, the presence of this method enables one to iterate over all top\n level comments, like so:\n\n .. code-block:: python\n\n for comment in submission.comments:\n print(comment.body)\n\n \"\"\"\n return self._comments[index]\n\n def __init__(\n self,\n submission: \"praw.models.Submission\",\n comments: Optional[List[\"praw.models.Comment\"]] = None,\n ):\n \"\"\"Initialize a CommentForest instance.\n\n :param submission: An instance of :class:`~.Subreddit` that is the parent of the\n comments.\n :param comments: Initialize the Forest with a list of comments (default: None).\n\n \"\"\"\n self._comments = comments\n self._submission = submission\n\n def __len__(self) -> int:\n \"\"\"Return the number of top-level comments in the forest.\"\"\"\n return len(self._comments)\n\n def _insert_comment(self, comment):\n if comment.name in self._submission._comments_by_id:\n raise DuplicateReplaceException\n comment.submission = self._submission\n if isinstance(comment, MoreComments) or comment.is_root:\n self._comments.append(comment)\n else:\n assert comment.parent_id in self._submission._comments_by_id, (\n \"PRAW Error occurred. Please file a bug report and include the code\"\n \" that caused the error.\"\n )\n parent = self._submission._comments_by_id[comment.parent_id]\n parent.replies._comments.append(comment)\n\n def _update(self, comments):\n self._comments = comments\n for comment in comments:\n comment.submission = self._submission\n\n def list(self) -> List[Union[\"praw.models.Comment\", \"praw.models.MoreComments\"]]:\n \"\"\"Return a flattened list of all Comments.\n\n This list may contain :class:`.MoreComments` instances if :meth:`.replace_more`\n was not called first.\n\n \"\"\"\n comments = []\n queue = list(self)\n while queue:\n comment = queue.pop(0)\n comments.append(comment)\n if not isinstance(comment, MoreComments):\n queue.extend(comment.replies)\n return comments\n\n def replace_more(\n self, limit: int = 32, threshold: int = 0\n ) -> List[\"praw.models.MoreComments\"]:\n \"\"\"Update the comment forest by resolving instances of MoreComments.\n\n :param limit: The maximum number of :class:`.MoreComments` instances to replace.\n Each replacement requires 1 API request. 
Set to ``None`` to have no limit,\n or to ``0`` to remove all :class:`.MoreComments` instances without\n additional requests (default: 32).\n :param threshold: The minimum number of children comments a\n :class:`.MoreComments` instance must have in order to be replaced.\n :class:`.MoreComments` instances that represent \"continue this thread\" links\n unfortunately appear to have 0 children. (default: 0).\n\n :returns: A list of :class:`.MoreComments` instances that were not replaced.\n\n For example, to replace up to 32 :class:`.MoreComments` instances of a\n submission try:\n\n .. code-block:: python\n\n submission = reddit.submission(\"3hahrw\")\n submission.comments.replace_more()\n\n Alternatively, to replace :class:`.MoreComments` instances within the replies of\n a single comment try:\n\n .. code-block:: python\n\n comment = reddit.comment(\"d8r4im1\")\n comment.refresh()\n comment.replies.replace_more()\n\n .. note::\n\n This method can take a long time as each replacement will discover at most\n 20 new :class:`.Comment` or :class:`.MoreComments` instances. As a result,\n consider looping and handling exceptions until the method returns\n successfully. For example:\n\n .. code-block:: python\n\n while True:\n try:\n submission.comments.replace_more()\n break\n except PossibleExceptions:\n print(\"Handling replace_more exception\")\n sleep(1)\n\n .. warning::\n\n If this method is called, and the comments are refreshed, calling this\n method again will result in a :class:`.DuplicateReplaceException`.\n\n \"\"\"\n remaining = limit\n more_comments = self._gather_more_comments(self._comments)\n skipped = []\n\n # Fetch largest more_comments until reaching the limit or the threshold\n while more_comments:\n item = heappop(more_comments)\n if remaining is not None and remaining <= 0 or item.count < threshold:\n skipped.append(item)\n item._remove_from.remove(item)\n continue\n\n new_comments = item.comments(update=False)\n if remaining is not None:\n remaining -= 1\n\n # Add new MoreComment objects to the heap of more_comments\n for more in self._gather_more_comments(new_comments, self._comments):\n more.submission = self._submission\n heappush(more_comments, more)\n # Insert all items into the tree\n for comment in new_comments:\n self._insert_comment(comment)\n\n # Remove from forest\n item._remove_from.remove(item)\n\n return more_comments + skipped\n", "path": "praw/models/comment_forest.py"}]} | 3,097 | 246 |
gh_patches_debug_41570 | rasdani/github-patches | git_diff | genialis__resolwe-328 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
500 error returned when permission does not exist
We should return a 400 Bad Request response instead.
Failing unit test: https://github.com/genialis/resolwe/pull/309
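For illustration only, this is the kind of guard that would turn the failure into a client error. The helper below is hypothetical, but `rest_framework.exceptions.ParseError` is what Django REST Framework renders as 400 Bad Request, and the codename construction mirrors `resolwe/permissions/utils.py`:

```python
from rest_framework import exceptions

def validate_permission(perm, full_permissions, content_type):
    """Hypothetical sketch: reject unknown permission codenames with a 400."""
    perm_codename = '{}_{}'.format(perm.lower(), content_type)
    if perm_codename not in full_permissions:
        raise exceptions.ParseError("Unknown permission: {}".format(perm))
    return perm_codename
```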
</issue>
<code>
[start of resolwe/permissions/utils.py]
1 """.. Ignore pydocstyle D400.
2
3 =================
4 Permissions utils
5 =================
6
7 .. autofunction:: copy_permissions
8
9 """
10 from __future__ import absolute_import, division, print_function, unicode_literals
11
12 import copy
13
14 from django.contrib.auth import get_user_model
15 from django.contrib.auth.models import AnonymousUser, Group
16 from django.contrib.contenttypes.models import ContentType
17
18 from guardian.models import GroupObjectPermission, UserObjectPermission
19 from guardian.shortcuts import assign_perm, remove_perm
20 from rest_framework import exceptions
21
22
23 def copy_permissions(src_obj, dest_obj):
24 """Copy permissions form ``src_obj`` to ``dest_obj``."""
25 src_obj_ctype = ContentType.objects.get_for_model(src_obj)
26 dest_obj_ctype = ContentType.objects.get_for_model(dest_obj)
27
28 if src_obj_ctype != dest_obj_ctype:
29 raise AssertionError('Content types of source and destination objects are not equal.')
30
31 for perm in UserObjectPermission.objects.filter(object_pk=src_obj.pk, content_type=src_obj_ctype):
32 assign_perm(perm.permission.codename, perm.user, dest_obj)
33 for perm in GroupObjectPermission.objects.filter(object_pk=src_obj.pk, content_type=src_obj_ctype):
34 assign_perm(perm.permission.codename, perm.group, dest_obj)
35
36
37 def fetch_user(query):
38 """Get user by ``pk`` or ``username``. Return ``None`` if doesn't exist."""
39 user_filter = {'pk': query} if query.isdigit() else {'username': query}
40 return get_user_model().objects.get(**user_filter)
41
42
43 def fetch_group(query):
44 """Get group by ``pk`` or ``name``. Return ``None`` if doesn't exist."""
45 group_filter = {'pk': query} if query.isdigit() else {'name': query}
46 return Group.objects.get(**group_filter)
47
48
49 def check_owner_permission(payload, allow_user_owner):
50 """Raise ``PermissionDenied``if ``owner`` found in ``data``."""
51 for entity_type in ['users', 'groups']:
52 for perm_type in ['add', 'remove']:
53 for perms in payload.get(entity_type, {}).get(perm_type, {}).values():
54 if 'owner' in perms:
55 if entity_type == 'users' and allow_user_owner:
56 continue
57
58 if entity_type == 'groups':
59 raise exceptions.ParseError("Owner permission cannot be assigned to a group")
60
61 raise exceptions.PermissionDenied("Only owners can grant/revoke owner permission")
62
63
64 def check_public_permissions(payload):
65 """Raise ``PermissionDenied`` if public permissions are too open."""
66 allowed_public_permissions = ['view', 'add', 'download']
67 for perm_type in ['add', 'remove']:
68 for perm in payload.get('public', {}).get(perm_type, []):
69 if perm not in allowed_public_permissions:
70 raise exceptions.PermissionDenied("Permissions for public users are too open")
71
72
73 def check_user_permissions(payload, user_pk):
74 """Raise ``PermissionDenied`` if ``payload`` includes ``user_pk``."""
75 for perm_type in ['add', 'remove']:
76 user_pks = payload.get('users', {}).get(perm_type, {}).keys()
77 if user_pk in user_pks:
78 raise exceptions.PermissionDenied("You cannot change your own permissions")
79
80
81 def remove_permission(payload, permission):
82 """Remove all occurrences of ``permission`` from ``payload``."""
83 payload = copy.deepcopy(payload)
84
85 for entity_type in ['users', 'groups']:
86 for perm_type in ['add', 'remove']:
87 for perms in payload.get(entity_type, {}).get(perm_type, {}).values():
88 if permission in perms:
89 perms.remove(permission)
90
91 for perm_type in ['add', 'remove']:
92 perms = payload.get('public', {}).get(perm_type, [])
93 if permission in perms:
94 perms.remove(permission)
95
96 return payload
97
98
99 def update_permission(obj, data):
100 """Update object permissions."""
101 content_type = ContentType.objects.get_for_model(obj)
102 full_permissions = list(zip(*obj._meta.permissions))[0] # pylint: disable=protected-access
103
104 def set_permissions(entity_type, perm_type):
105 """Set object permissions."""
106 perm_func = assign_perm if perm_type == 'add' else remove_perm
107 fetch_fn = fetch_user if entity_type == 'users' else fetch_group
108
109 for entity_id in data.get(entity_type, {}).get(perm_type, []):
110 entity = fetch_fn(entity_id)
111 if entity:
112 perms = data[entity_type][perm_type][entity_id]
113 if perms == u'ALL':
114 perms = full_permissions
115 for perm in perms:
116 perm_func('{}_{}'.format(perm.lower(), content_type), entity, obj)
117
118 set_permissions('users', 'add')
119 set_permissions('users', 'remove')
120 set_permissions('groups', 'add')
121 set_permissions('groups', 'remove')
122
123 def set_public_permissions(perm_type):
124 """Set public permissions."""
125 perm_func = assign_perm if perm_type == 'add' else remove_perm
126 user = AnonymousUser()
127 perms = data.get('public', {}).get(perm_type, [])
128 if perms == u'ALL':
129 perms = full_permissions
130 for perm in perms:
131 perm_func('{}_{}'.format(perm.lower(), content_type), user, obj)
132
133 set_public_permissions('add')
134 set_public_permissions('remove')
135
136
137 def assign_contributor_permissions(obj):
138 """Assign all permissions to object's contributor."""
139 for permission in list(zip(*obj._meta.permissions))[0]: # pylint: disable=protected-access
140 assign_perm(permission, obj.contributor, obj)
141
[end of resolwe/permissions/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/resolwe/permissions/utils.py b/resolwe/permissions/utils.py
--- a/resolwe/permissions/utils.py
+++ b/resolwe/permissions/utils.py
@@ -14,6 +14,7 @@
from django.contrib.auth import get_user_model
from django.contrib.auth.models import AnonymousUser, Group
from django.contrib.contenttypes.models import ContentType
+from django.db import transaction
from guardian.models import GroupObjectPermission, UserObjectPermission
from guardian.shortcuts import assign_perm, remove_perm
@@ -101,6 +102,35 @@
content_type = ContentType.objects.get_for_model(obj)
full_permissions = list(zip(*obj._meta.permissions))[0] # pylint: disable=protected-access
+ def apply_perm(perm_func, perms, entity):
+ """Apply permissions using given ``perm_func``.
+
+ ``perm_func`` is intended to be ``assign_perms`` or
+ ``remove_perms`` shortcut function from ``django-guardian``, but
+ can be any function that accepts permission codename,
+ user/group and object parameters (in this order).
+
+ If given permission does not exist, ``exceptions.ParseError`` is
+ raised.
+
+ "ALL" passed as ``perms`` parameter, will call ``perm_function``
+ with ``full_permissions`` list.
+
+ :param func perm_func: Permissions function to be applied
+ :param list params: list of params to be allpied
+ :param entity: user or group to be passed to ``perm_func``
+ :type entity: `~django.contrib.auth.models.User` or
+ `~django.contrib.auth.models.Group`
+
+ """
+ if perms == u'ALL':
+ perms = full_permissions
+ for perm in perms:
+ perm_codename = '{}_{}'.format(perm.lower(), content_type)
+ if perm_codename not in full_permissions:
+ raise exceptions.ParseError("Unknown permission: {}".format(perm))
+ perm_func(perm_codename, entity, obj)
+
def set_permissions(entity_type, perm_type):
"""Set object permissions."""
perm_func = assign_perm if perm_type == 'add' else remove_perm
@@ -110,28 +140,22 @@
entity = fetch_fn(entity_id)
if entity:
perms = data[entity_type][perm_type][entity_id]
- if perms == u'ALL':
- perms = full_permissions
- for perm in perms:
- perm_func('{}_{}'.format(perm.lower(), content_type), entity, obj)
-
- set_permissions('users', 'add')
- set_permissions('users', 'remove')
- set_permissions('groups', 'add')
- set_permissions('groups', 'remove')
+ apply_perm(perm_func, perms, entity)
def set_public_permissions(perm_type):
"""Set public permissions."""
perm_func = assign_perm if perm_type == 'add' else remove_perm
user = AnonymousUser()
perms = data.get('public', {}).get(perm_type, [])
- if perms == u'ALL':
- perms = full_permissions
- for perm in perms:
- perm_func('{}_{}'.format(perm.lower(), content_type), user, obj)
-
- set_public_permissions('add')
- set_public_permissions('remove')
+ apply_perm(perm_func, perms, user)
+
+ with transaction.atomic():
+ set_permissions('users', 'add')
+ set_permissions('users', 'remove')
+ set_permissions('groups', 'add')
+ set_permissions('groups', 'remove')
+ set_public_permissions('add')
+ set_public_permissions('remove')
def assign_contributor_permissions(obj):
| {"golden_diff": "diff --git a/resolwe/permissions/utils.py b/resolwe/permissions/utils.py\n--- a/resolwe/permissions/utils.py\n+++ b/resolwe/permissions/utils.py\n@@ -14,6 +14,7 @@\n from django.contrib.auth import get_user_model\n from django.contrib.auth.models import AnonymousUser, Group\n from django.contrib.contenttypes.models import ContentType\n+from django.db import transaction\n \n from guardian.models import GroupObjectPermission, UserObjectPermission\n from guardian.shortcuts import assign_perm, remove_perm\n@@ -101,6 +102,35 @@\n content_type = ContentType.objects.get_for_model(obj)\n full_permissions = list(zip(*obj._meta.permissions))[0] # pylint: disable=protected-access\n \n+ def apply_perm(perm_func, perms, entity):\n+ \"\"\"Apply permissions using given ``perm_func``.\n+\n+ ``perm_func`` is intended to be ``assign_perms`` or\n+ ``remove_perms`` shortcut function from ``django-guardian``, but\n+ can be any function that accepts permission codename,\n+ user/group and object parameters (in this order).\n+\n+ If given permission does not exist, ``exceptions.ParseError`` is\n+ raised.\n+\n+ \"ALL\" passed as ``perms`` parameter, will call ``perm_function``\n+ with ``full_permissions`` list.\n+\n+ :param func perm_func: Permissions function to be applied\n+ :param list params: list of params to be allpied\n+ :param entity: user or group to be passed to ``perm_func``\n+ :type entity: `~django.contrib.auth.models.User` or\n+ `~django.contrib.auth.models.Group`\n+\n+ \"\"\"\n+ if perms == u'ALL':\n+ perms = full_permissions\n+ for perm in perms:\n+ perm_codename = '{}_{}'.format(perm.lower(), content_type)\n+ if perm_codename not in full_permissions:\n+ raise exceptions.ParseError(\"Unknown permission: {}\".format(perm))\n+ perm_func(perm_codename, entity, obj)\n+\n def set_permissions(entity_type, perm_type):\n \"\"\"Set object permissions.\"\"\"\n perm_func = assign_perm if perm_type == 'add' else remove_perm\n@@ -110,28 +140,22 @@\n entity = fetch_fn(entity_id)\n if entity:\n perms = data[entity_type][perm_type][entity_id]\n- if perms == u'ALL':\n- perms = full_permissions\n- for perm in perms:\n- perm_func('{}_{}'.format(perm.lower(), content_type), entity, obj)\n-\n- set_permissions('users', 'add')\n- set_permissions('users', 'remove')\n- set_permissions('groups', 'add')\n- set_permissions('groups', 'remove')\n+ apply_perm(perm_func, perms, entity)\n \n def set_public_permissions(perm_type):\n \"\"\"Set public permissions.\"\"\"\n perm_func = assign_perm if perm_type == 'add' else remove_perm\n user = AnonymousUser()\n perms = data.get('public', {}).get(perm_type, [])\n- if perms == u'ALL':\n- perms = full_permissions\n- for perm in perms:\n- perm_func('{}_{}'.format(perm.lower(), content_type), user, obj)\n-\n- set_public_permissions('add')\n- set_public_permissions('remove')\n+ apply_perm(perm_func, perms, user)\n+\n+ with transaction.atomic():\n+ set_permissions('users', 'add')\n+ set_permissions('users', 'remove')\n+ set_permissions('groups', 'add')\n+ set_permissions('groups', 'remove')\n+ set_public_permissions('add')\n+ set_public_permissions('remove')\n \n \n def assign_contributor_permissions(obj):\n", "issue": "500 error returned when permission does not exist\nWe should return 400 Bad request response.\r\n\r\nFailing unit test: https://github.com/genialis/resolwe/pull/309\n", "before_files": [{"content": "\"\"\".. Ignore pydocstyle D400.\n\n=================\nPermissions utils\n=================\n\n.. 
autofunction:: copy_permissions\n\n\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport copy\n\nfrom django.contrib.auth import get_user_model\nfrom django.contrib.auth.models import AnonymousUser, Group\nfrom django.contrib.contenttypes.models import ContentType\n\nfrom guardian.models import GroupObjectPermission, UserObjectPermission\nfrom guardian.shortcuts import assign_perm, remove_perm\nfrom rest_framework import exceptions\n\n\ndef copy_permissions(src_obj, dest_obj):\n \"\"\"Copy permissions form ``src_obj`` to ``dest_obj``.\"\"\"\n src_obj_ctype = ContentType.objects.get_for_model(src_obj)\n dest_obj_ctype = ContentType.objects.get_for_model(dest_obj)\n\n if src_obj_ctype != dest_obj_ctype:\n raise AssertionError('Content types of source and destination objects are not equal.')\n\n for perm in UserObjectPermission.objects.filter(object_pk=src_obj.pk, content_type=src_obj_ctype):\n assign_perm(perm.permission.codename, perm.user, dest_obj)\n for perm in GroupObjectPermission.objects.filter(object_pk=src_obj.pk, content_type=src_obj_ctype):\n assign_perm(perm.permission.codename, perm.group, dest_obj)\n\n\ndef fetch_user(query):\n \"\"\"Get user by ``pk`` or ``username``. Return ``None`` if doesn't exist.\"\"\"\n user_filter = {'pk': query} if query.isdigit() else {'username': query}\n return get_user_model().objects.get(**user_filter)\n\n\ndef fetch_group(query):\n \"\"\"Get group by ``pk`` or ``name``. Return ``None`` if doesn't exist.\"\"\"\n group_filter = {'pk': query} if query.isdigit() else {'name': query}\n return Group.objects.get(**group_filter)\n\n\ndef check_owner_permission(payload, allow_user_owner):\n \"\"\"Raise ``PermissionDenied``if ``owner`` found in ``data``.\"\"\"\n for entity_type in ['users', 'groups']:\n for perm_type in ['add', 'remove']:\n for perms in payload.get(entity_type, {}).get(perm_type, {}).values():\n if 'owner' in perms:\n if entity_type == 'users' and allow_user_owner:\n continue\n\n if entity_type == 'groups':\n raise exceptions.ParseError(\"Owner permission cannot be assigned to a group\")\n\n raise exceptions.PermissionDenied(\"Only owners can grant/revoke owner permission\")\n\n\ndef check_public_permissions(payload):\n \"\"\"Raise ``PermissionDenied`` if public permissions are too open.\"\"\"\n allowed_public_permissions = ['view', 'add', 'download']\n for perm_type in ['add', 'remove']:\n for perm in payload.get('public', {}).get(perm_type, []):\n if perm not in allowed_public_permissions:\n raise exceptions.PermissionDenied(\"Permissions for public users are too open\")\n\n\ndef check_user_permissions(payload, user_pk):\n \"\"\"Raise ``PermissionDenied`` if ``payload`` includes ``user_pk``.\"\"\"\n for perm_type in ['add', 'remove']:\n user_pks = payload.get('users', {}).get(perm_type, {}).keys()\n if user_pk in user_pks:\n raise exceptions.PermissionDenied(\"You cannot change your own permissions\")\n\n\ndef remove_permission(payload, permission):\n \"\"\"Remove all occurrences of ``permission`` from ``payload``.\"\"\"\n payload = copy.deepcopy(payload)\n\n for entity_type in ['users', 'groups']:\n for perm_type in ['add', 'remove']:\n for perms in payload.get(entity_type, {}).get(perm_type, {}).values():\n if permission in perms:\n perms.remove(permission)\n\n for perm_type in ['add', 'remove']:\n perms = payload.get('public', {}).get(perm_type, [])\n if permission in perms:\n perms.remove(permission)\n\n return payload\n\n\ndef update_permission(obj, data):\n \"\"\"Update object 
permissions.\"\"\"\n content_type = ContentType.objects.get_for_model(obj)\n full_permissions = list(zip(*obj._meta.permissions))[0] # pylint: disable=protected-access\n\n def set_permissions(entity_type, perm_type):\n \"\"\"Set object permissions.\"\"\"\n perm_func = assign_perm if perm_type == 'add' else remove_perm\n fetch_fn = fetch_user if entity_type == 'users' else fetch_group\n\n for entity_id in data.get(entity_type, {}).get(perm_type, []):\n entity = fetch_fn(entity_id)\n if entity:\n perms = data[entity_type][perm_type][entity_id]\n if perms == u'ALL':\n perms = full_permissions\n for perm in perms:\n perm_func('{}_{}'.format(perm.lower(), content_type), entity, obj)\n\n set_permissions('users', 'add')\n set_permissions('users', 'remove')\n set_permissions('groups', 'add')\n set_permissions('groups', 'remove')\n\n def set_public_permissions(perm_type):\n \"\"\"Set public permissions.\"\"\"\n perm_func = assign_perm if perm_type == 'add' else remove_perm\n user = AnonymousUser()\n perms = data.get('public', {}).get(perm_type, [])\n if perms == u'ALL':\n perms = full_permissions\n for perm in perms:\n perm_func('{}_{}'.format(perm.lower(), content_type), user, obj)\n\n set_public_permissions('add')\n set_public_permissions('remove')\n\n\ndef assign_contributor_permissions(obj):\n \"\"\"Assign all permissions to object's contributor.\"\"\"\n for permission in list(zip(*obj._meta.permissions))[0]: # pylint: disable=protected-access\n assign_perm(permission, obj.contributor, obj)\n", "path": "resolwe/permissions/utils.py"}]} | 2,088 | 814 |
gh_patches_debug_3415 | rasdani/github-patches | git_diff | pyro-ppl__numpyro-1041 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
New versions of sphinx and jinja2 break docs linting
This is observed by @tcbegley in #1034
</issue>
<code>
[start of setup.py]
1 # Copyright Contributors to the Pyro project.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from __future__ import absolute_import, division, print_function
5
6 import os
7 import sys
8
9 from setuptools import find_packages, setup
10
11 PROJECT_PATH = os.path.dirname(os.path.abspath(__file__))
12
13 # Find version
14 for line in open(os.path.join(PROJECT_PATH, "numpyro", "version.py")):
15 if line.startswith("__version__ = "):
16 version = line.strip().split()[2][1:-1]
17
18 # READ README.md for long description on PyPi.
19 try:
20 long_description = open("README.md", encoding="utf-8").read()
21 except Exception as e:
22 sys.stderr.write("Failed to read README.md:\n {}\n".format(e))
23 sys.stderr.flush()
24 long_description = ""
25
26
27 setup(
28 name="numpyro",
29 version=version,
30 description="Pyro PPL on NumPy",
31 packages=find_packages(include=["numpyro", "numpyro.*"]),
32 url="https://github.com/pyro-ppl/numpyro",
33 author="Uber AI Labs",
34 install_requires=[
35 "jax>=0.2.11",
36 "jaxlib>=0.1.62",
37 "tqdm",
38 ],
39 extras_require={
40 "doc": [
41 "ipython", # sphinx needs this to render codes
42 "jinja2<3.0.0",
43 "nbsphinx",
44 "sphinx<4.0.0",
45 "sphinx_rtd_theme",
46 "sphinx-gallery",
47 ],
48 "test": [
49 "black",
50 "flake8",
51 "isort>=5.0",
52 "pytest>=4.1",
53 "pyro-api>=0.1.1",
54 "scipy>=1.1",
55 ],
56 "dev": [
57 "dm-haiku",
58 "flax",
59 # TODO: bump funsor version before the release
60 "funsor @ git+https://github.com/pyro-ppl/funsor.git@d5574988665dd822ec64e41f2b54b9dc929959dc",
61 "graphviz",
62 "optax==0.0.6",
63 # TODO: change this to tensorflow_probability>0.12.1 when the next version
64 # of tfp is released. The current release is not compatible with jax>=0.2.12.
65 "tfp-nightly",
66 ],
67 "examples": ["arviz", "jupyter", "matplotlib", "pandas", "seaborn"],
68 },
69 long_description=long_description,
70 long_description_content_type="text/markdown",
71 keywords="probabilistic machine learning bayesian statistics",
72 license="Apache License 2.0",
73 classifiers=[
74 "Intended Audience :: Developers",
75 "Intended Audience :: Education",
76 "Intended Audience :: Science/Research",
77 "License :: OSI Approved :: Apache Software License",
78 "Operating System :: POSIX :: Linux",
79 "Operating System :: MacOS :: MacOS X",
80 "Programming Language :: Python :: 3.6",
81 "Programming Language :: Python :: 3.7",
82 "Programming Language :: Python :: 3.8",
83 "Programming Language :: Python :: 3.9",
84 ],
85 )
86
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -39,9 +39,8 @@
extras_require={
"doc": [
"ipython", # sphinx needs this to render codes
- "jinja2<3.0.0",
- "nbsphinx",
- "sphinx<4.0.0",
+ "nbsphinx>=0.8.5",
+ "sphinx",
"sphinx_rtd_theme",
"sphinx-gallery",
],
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -39,9 +39,8 @@\n extras_require={\n \"doc\": [\n \"ipython\", # sphinx needs this to render codes\n- \"jinja2<3.0.0\",\n- \"nbsphinx\",\n- \"sphinx<4.0.0\",\n+ \"nbsphinx>=0.8.5\",\n+ \"sphinx\",\n \"sphinx_rtd_theme\",\n \"sphinx-gallery\",\n ],\n", "issue": "New versions of sphinx and jinja2 break docs linting\nThis is observed by @tcbegley in #1034\n", "before_files": [{"content": "# Copyright Contributors to the Pyro project.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom __future__ import absolute_import, division, print_function\n\nimport os\nimport sys\n\nfrom setuptools import find_packages, setup\n\nPROJECT_PATH = os.path.dirname(os.path.abspath(__file__))\n\n# Find version\nfor line in open(os.path.join(PROJECT_PATH, \"numpyro\", \"version.py\")):\n if line.startswith(\"__version__ = \"):\n version = line.strip().split()[2][1:-1]\n\n# READ README.md for long description on PyPi.\ntry:\n long_description = open(\"README.md\", encoding=\"utf-8\").read()\nexcept Exception as e:\n sys.stderr.write(\"Failed to read README.md:\\n {}\\n\".format(e))\n sys.stderr.flush()\n long_description = \"\"\n\n\nsetup(\n name=\"numpyro\",\n version=version,\n description=\"Pyro PPL on NumPy\",\n packages=find_packages(include=[\"numpyro\", \"numpyro.*\"]),\n url=\"https://github.com/pyro-ppl/numpyro\",\n author=\"Uber AI Labs\",\n install_requires=[\n \"jax>=0.2.11\",\n \"jaxlib>=0.1.62\",\n \"tqdm\",\n ],\n extras_require={\n \"doc\": [\n \"ipython\", # sphinx needs this to render codes\n \"jinja2<3.0.0\",\n \"nbsphinx\",\n \"sphinx<4.0.0\",\n \"sphinx_rtd_theme\",\n \"sphinx-gallery\",\n ],\n \"test\": [\n \"black\",\n \"flake8\",\n \"isort>=5.0\",\n \"pytest>=4.1\",\n \"pyro-api>=0.1.1\",\n \"scipy>=1.1\",\n ],\n \"dev\": [\n \"dm-haiku\",\n \"flax\",\n # TODO: bump funsor version before the release\n \"funsor @ git+https://github.com/pyro-ppl/funsor.git@d5574988665dd822ec64e41f2b54b9dc929959dc\",\n \"graphviz\",\n \"optax==0.0.6\",\n # TODO: change this to tensorflow_probability>0.12.1 when the next version\n # of tfp is released. The current release is not compatible with jax>=0.2.12.\n \"tfp-nightly\",\n ],\n \"examples\": [\"arviz\", \"jupyter\", \"matplotlib\", \"pandas\", \"seaborn\"],\n },\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n keywords=\"probabilistic machine learning bayesian statistics\",\n license=\"Apache License 2.0\",\n classifiers=[\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Education\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Programming Language :: Python :: 3.9\",\n ],\n)\n", "path": "setup.py"}]} | 1,452 | 123 |
gh_patches_debug_39181 | rasdani/github-patches | git_diff | scikit-image__scikit-image-1895 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
denoise_bilateral works separately on channels, but no warning raised for 3-D images
The implementation of `restoration.denoise_bilateral` is 2-D only, with the last dimension considered to be for channels. However, this is not documented in the docstring, and no warning is raised if a 3-D image (with last dimension > 3) is passed.
I used `denoise_bilateral` in the past on 3-D images thinking that the implementation was 3-D. We could consider making the code 3-D or nD in the future, but for now fixing the docstring + warning looks like a reasonable goal for 0.12.
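To make the warning half of that concrete, here is a sketch of the kind of guard that could accompany the docstring fix; the helper name and message are assumptions on my part (using the "last dimension > 3" heuristic from above), not the actual scikit-image implementation:

```python
import warnings

def _warn_if_volume(image):
    """Sketch: flag arrays that look like 3-D volumes rather than 2-D RGB images."""
    if image.ndim == 3 and image.shape[2] > 3:
        warnings.warn("denoise_bilateral is 2-D only; the last axis ({} planes) "
                      "is treated as color channels, not as a third spatial "
                      "dimension.".format(image.shape[2]))
```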
</issue>
<code>
[start of skimage/restoration/_denoise.py]
1 # coding: utf-8
2 import numpy as np
3 from .. import img_as_float
4 from ..restoration._denoise_cy import _denoise_bilateral, _denoise_tv_bregman
5 from .._shared.utils import _mode_deprecations
6
7
8 def denoise_bilateral(image, win_size=5, sigma_range=None, sigma_spatial=1,
9 bins=10000, mode='constant', cval=0):
10 """Denoise image using bilateral filter.
11
12 This is an edge-preserving and noise reducing denoising filter. It averages
13 pixels based on their spatial closeness and radiometric similarity.
14
15 Spatial closeness is measured by the gaussian function of the euclidian
16 distance between two pixels and a certain standard deviation
17 (`sigma_spatial`).
18
19 Radiometric similarity is measured by the gaussian function of the euclidian
20 distance between two color values and a certain standard deviation
21 (`sigma_range`).
22
23 Parameters
24 ----------
25 image : ndarray, shape (M, N[, 3])
26 Input image, 2D grayscale or RGB.
27 win_size : int
28 Window size for filtering.
29 sigma_range : float
30 Standard deviation for grayvalue/color distance (radiometric
31 similarity). A larger value results in averaging of pixels with larger
32 radiometric differences. Note, that the image will be converted using
33 the `img_as_float` function and thus the standard deviation is in
34 respect to the range ``[0, 1]``. If the value is ``None`` the standard
35 deviation of the ``image`` will be used.
36 sigma_spatial : float
37 Standard deviation for range distance. A larger value results in
38 averaging of pixels with larger spatial differences.
39 bins : int
40 Number of discrete values for gaussian weights of color filtering.
41 A larger value results in improved accuracy.
42 mode : {'constant', 'edge', 'symmetric', 'reflect', 'wrap'}
43 How to handle values outside the image borders. See
44 `numpy.pad` for detail.
45 cval : string
46 Used in conjunction with mode 'constant', the value outside
47 the image boundaries.
48
49 Returns
50 -------
51 denoised : ndarray
52 Denoised image.
53
54 References
55 ----------
56 .. [1] http://users.soe.ucsc.edu/~manduchi/Papers/ICCV98.pdf
57
58 Examples
59 --------
60 >>> from skimage import data, img_as_float
61 >>> astro = img_as_float(data.astronaut())
62 >>> astro = astro[220:300, 220:320]
63 >>> noisy = astro + 0.6 * astro.std() * np.random.random(astro.shape)
64 >>> noisy = np.clip(noisy, 0, 1)
65 >>> denoised = denoise_bilateral(noisy, sigma_range=0.05, sigma_spatial=15)
66 """
67 mode = _mode_deprecations(mode)
68 return _denoise_bilateral(image, win_size, sigma_range, sigma_spatial,
69 bins, mode, cval)
70
71
72 def denoise_tv_bregman(image, weight, max_iter=100, eps=1e-3, isotropic=True):
73 """Perform total-variation denoising using split-Bregman optimization.
74
75 Total-variation denoising (also know as total-variation regularization)
76 tries to find an image with less total-variation under the constraint
77 of being similar to the input image, which is controlled by the
78 regularization parameter.
79
80 Parameters
81 ----------
82 image : ndarray
83 Input data to be denoised (converted using img_as_float`).
84 weight : float
85 Denoising weight. The smaller the `weight`, the more denoising (at
86 the expense of less similarity to the `input`). The regularization
87 parameter `lambda` is chosen as `2 * weight`.
88 eps : float, optional
89 Relative difference of the value of the cost function that determines
90 the stop criterion. The algorithm stops when::
91
92 SUM((u(n) - u(n-1))**2) < eps
93
94 max_iter : int, optional
95 Maximal number of iterations used for the optimization.
96 isotropic : boolean, optional
97 Switch between isotropic and anisotropic TV denoising.
98
99 Returns
100 -------
101 u : ndarray
102 Denoised image.
103
104 References
105 ----------
106 .. [1] http://en.wikipedia.org/wiki/Total_variation_denoising
107 .. [2] Tom Goldstein and Stanley Osher, "The Split Bregman Method For L1
108 Regularized Problems",
109 ftp://ftp.math.ucla.edu/pub/camreport/cam08-29.pdf
110 .. [3] Pascal Getreuer, "RudinβOsherβFatemi Total Variation Denoising
111 using Split Bregman" in Image Processing On Line on 2012β05β19,
112 http://www.ipol.im/pub/art/2012/g-tvd/article_lr.pdf
113 .. [4] http://www.math.ucsb.edu/~cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf
114
115 """
116 return _denoise_tv_bregman(image, weight, max_iter, eps, isotropic)
117
118
119 def _denoise_tv_chambolle_nd(im, weight=0.1, eps=2.e-4, n_iter_max=200):
120 """Perform total-variation denoising on n-dimensional images.
121
122 Parameters
123 ----------
124 im : ndarray
125 n-D input data to be denoised.
126 weight : float, optional
127 Denoising weight. The greater `weight`, the more denoising (at
128 the expense of fidelity to `input`).
129 eps : float, optional
130 Relative difference of the value of the cost function that determines
131 the stop criterion. The algorithm stops when:
132
133 (E_(n-1) - E_n) < eps * E_0
134
135 n_iter_max : int, optional
136 Maximal number of iterations used for the optimization.
137
138 Returns
139 -------
140 out : ndarray
141 Denoised array of floats.
142
143 Notes
144 -----
145 Rudin, Osher and Fatemi algorithm.
146
147 """
148
149 ndim = im.ndim
150 p = np.zeros((im.ndim, ) + im.shape, dtype=im.dtype)
151 g = np.zeros_like(p)
152 d = np.zeros_like(im)
153 i = 0
154 while i < n_iter_max:
155 if i > 0:
156 # d will be the (negative) divergence of p
157 d = -p.sum(0)
158 slices_d = [slice(None), ] * ndim
159 slices_p = [slice(None), ] * (ndim + 1)
160 for ax in range(ndim):
161 slices_d[ax] = slice(1, None)
162 slices_p[ax+1] = slice(0, -1)
163 slices_p[0] = ax
164 d[slices_d] += p[slices_p]
165 slices_d[ax] = slice(None)
166 slices_p[ax+1] = slice(None)
167 out = im + d
168 else:
169 out = im
170 E = (d ** 2).sum()
171
172 # g stores the gradients of out along each axis
173 # e.g. g[0] is the first order finite difference along axis 0
174 slices_g = [slice(None), ] * (ndim + 1)
175 for ax in range(ndim):
176 slices_g[ax+1] = slice(0, -1)
177 slices_g[0] = ax
178 g[slices_g] = np.diff(out, axis=ax)
179 slices_g[ax+1] = slice(None)
180
181 norm = np.sqrt((g ** 2).sum(axis=0))[np.newaxis, ...]
182 E += weight * norm.sum()
183 tau = 1. / (2.*ndim)
184 norm *= tau / weight
185 norm += 1.
186 p -= tau * g
187 p /= norm
188 E /= float(im.size)
189 if i == 0:
190 E_init = E
191 E_previous = E
192 else:
193 if np.abs(E_previous - E) < eps * E_init:
194 break
195 else:
196 E_previous = E
197 i += 1
198 return out
199
200
201 def denoise_tv_chambolle(im, weight=0.1, eps=2.e-4, n_iter_max=200,
202 multichannel=False):
203 """Perform total-variation denoising on n-dimensional images.
204
205 Parameters
206 ----------
207 im : ndarray of ints, uints or floats
208 Input data to be denoised. `im` can be of any numeric type,
209 but it is cast into an ndarray of floats for the computation
210 of the denoised image.
211 weight : float, optional
212 Denoising weight. The greater `weight`, the more denoising (at
213 the expense of fidelity to `input`).
214 eps : float, optional
215 Relative difference of the value of the cost function that
216 determines the stop criterion. The algorithm stops when:
217
218 (E_(n-1) - E_n) < eps * E_0
219
220 n_iter_max : int, optional
221 Maximal number of iterations used for the optimization.
222 multichannel : bool, optional
223 Apply total-variation denoising separately for each channel. This
224 option should be true for color images, otherwise the denoising is
225 also applied in the channels dimension.
226
227 Returns
228 -------
229 out : ndarray
230 Denoised image.
231
232 Notes
233 -----
234 Make sure to set the multichannel parameter appropriately for color images.
235
236 The principle of total variation denoising is explained in
237 http://en.wikipedia.org/wiki/Total_variation_denoising
238
239 The principle of total variation denoising is to minimize the
240 total variation of the image, which can be roughly described as
241 the integral of the norm of the image gradient. Total variation
242 denoising tends to produce "cartoon-like" images, that is,
243 piecewise-constant images.
244
245 This code is an implementation of the algorithm of Rudin, Fatemi and Osher
246 that was proposed by Chambolle in [1]_.
247
248 References
249 ----------
250 .. [1] A. Chambolle, An algorithm for total variation minimization and
251 applications, Journal of Mathematical Imaging and Vision,
252 Springer, 2004, 20, 89-97.
253
254 Examples
255 --------
256 2D example on astronaut image:
257
258 >>> from skimage import color, data
259 >>> img = color.rgb2gray(data.astronaut())[:50, :50]
260 >>> img += 0.5 * img.std() * np.random.randn(*img.shape)
261 >>> denoised_img = denoise_tv_chambolle(img, weight=60)
262
263 3D example on synthetic data:
264
265 >>> x, y, z = np.ogrid[0:20, 0:20, 0:20]
266 >>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2
267 >>> mask = mask.astype(np.float)
268 >>> mask += 0.2*np.random.randn(*mask.shape)
269 >>> res = denoise_tv_chambolle(mask, weight=100)
270
271 """
272
273 im_type = im.dtype
274 if not im_type.kind == 'f':
275 im = img_as_float(im)
276
277 if multichannel:
278 out = np.zeros_like(im)
279 for c in range(im.shape[-1]):
280 out[..., c] = _denoise_tv_chambolle_nd(im[..., c], weight, eps,
281 n_iter_max)
282 else:
283 out = _denoise_tv_chambolle_nd(im, weight, eps, n_iter_max)
284 return out
285
[end of skimage/restoration/_denoise.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/restoration/_denoise.py b/skimage/restoration/_denoise.py
--- a/skimage/restoration/_denoise.py
+++ b/skimage/restoration/_denoise.py
@@ -3,10 +3,11 @@
from .. import img_as_float
from ..restoration._denoise_cy import _denoise_bilateral, _denoise_tv_bregman
from .._shared.utils import _mode_deprecations
+import warnings
def denoise_bilateral(image, win_size=5, sigma_range=None, sigma_spatial=1,
- bins=10000, mode='constant', cval=0):
+ bins=10000, mode='constant', cval=0, multichannel=True):
"""Denoise image using bilateral filter.
This is an edge-preserving and noise reducing denoising filter. It averages
@@ -45,6 +46,9 @@
cval : string
Used in conjunction with mode 'constant', the value outside
the image boundaries.
+ multichannel : bool
+ Whether the last axis of the image is to be interpreted as multiple
+ channels or another spatial dimension.
Returns
-------
@@ -64,6 +68,38 @@
>>> noisy = np.clip(noisy, 0, 1)
>>> denoised = denoise_bilateral(noisy, sigma_range=0.05, sigma_spatial=15)
"""
+ if multichannel:
+ if image.ndim != 3:
+ if image.ndim == 2:
+ raise ValueError("Use ``multichannel=False`` for 2D grayscale "
+ "images. The last axis of the input image "
+ "must be multiple color channels not another "
+ "spatial dimension.")
+ else:
+ raise ValueError("Bilateral filter is only implemented for "
+ "2D grayscale images (image.ndim == 2) and "
+ "2D multichannel (image.ndim == 3) images, "
+ "but the input image has {0} dimensions. "
+ "".format(image.ndim))
+ elif image.shape[2] not in (3, 4):
+ if image.shape[2] > 4:
+ warnings.warn("The last axis of the input image is interpreted "
+ "as channels. Input image with shape {0} has {1} "
+ "channels in last axis. ``denoise_bilateral`` is "
+ "implemented for 2D grayscale and color images "
+ "only.".format(image.shape, image.shape[2]))
+ else:
+ msg = "Input image must be grayscale, RGB, or RGBA; but has shape {0}."
+ warnings.warn(msg.format(image.shape))
+ else:
+ if image.ndim > 2:
+ raise ValueError("Bilateral filter is not implemented for "
+ "grayscale images of 3 or more dimensions, "
+ "but input image has {0} dimension. Use "
+ "``multichannel=True`` for 2-D RGB "
+ "images.".format(image.shape))
+
+
mode = _mode_deprecations(mode)
return _denoise_bilateral(image, win_size, sigma_range, sigma_spatial,
bins, mode, cval)
| {"golden_diff": "diff --git a/skimage/restoration/_denoise.py b/skimage/restoration/_denoise.py\n--- a/skimage/restoration/_denoise.py\n+++ b/skimage/restoration/_denoise.py\n@@ -3,10 +3,11 @@\n from .. import img_as_float\n from ..restoration._denoise_cy import _denoise_bilateral, _denoise_tv_bregman\n from .._shared.utils import _mode_deprecations\n+import warnings\n \n \n def denoise_bilateral(image, win_size=5, sigma_range=None, sigma_spatial=1,\n- bins=10000, mode='constant', cval=0):\n+ bins=10000, mode='constant', cval=0, multichannel=True):\n \"\"\"Denoise image using bilateral filter.\n \n This is an edge-preserving and noise reducing denoising filter. It averages\n@@ -45,6 +46,9 @@\n cval : string\n Used in conjunction with mode 'constant', the value outside\n the image boundaries.\n+ multichannel : bool\n+ Whether the last axis of the image is to be interpreted as multiple\n+ channels or another spatial dimension.\n \n Returns\n -------\n@@ -64,6 +68,38 @@\n >>> noisy = np.clip(noisy, 0, 1)\n >>> denoised = denoise_bilateral(noisy, sigma_range=0.05, sigma_spatial=15)\n \"\"\"\n+ if multichannel:\n+ if image.ndim != 3:\n+ if image.ndim == 2:\n+ raise ValueError(\"Use ``multichannel=False`` for 2D grayscale \"\n+ \"images. The last axis of the input image \"\n+ \"must be multiple color channels not another \"\n+ \"spatial dimension.\")\n+ else:\n+ raise ValueError(\"Bilateral filter is only implemented for \"\n+ \"2D grayscale images (image.ndim == 2) and \"\n+ \"2D multichannel (image.ndim == 3) images, \"\n+ \"but the input image has {0} dimensions. \"\n+ \"\".format(image.ndim))\n+ elif image.shape[2] not in (3, 4):\n+ if image.shape[2] > 4:\n+ warnings.warn(\"The last axis of the input image is interpreted \"\n+ \"as channels. Input image with shape {0} has {1} \"\n+ \"channels in last axis. ``denoise_bilateral`` is \"\n+ \"implemented for 2D grayscale and color images \"\n+ \"only.\".format(image.shape, image.shape[2]))\n+ else:\n+ msg = \"Input image must be grayscale, RGB, or RGBA; but has shape {0}.\"\n+ warnings.warn(msg.format(image.shape))\n+ else:\n+ if image.ndim > 2:\n+ raise ValueError(\"Bilateral filter is not implemented for \"\n+ \"grayscale images of 3 or more dimensions, \"\n+ \"but input image has {0} dimension. Use \"\n+ \"``multichannel=True`` for 2-D RGB \"\n+ \"images.\".format(image.shape))\n+\n+\n mode = _mode_deprecations(mode)\n return _denoise_bilateral(image, win_size, sigma_range, sigma_spatial,\n bins, mode, cval)\n", "issue": "denoise_bilateral works separately on channels, but no warning raised for 3-D images\nThe implementation of `restoration.denoise_bilateral` is 2-D only, with the last dimension considered to be for channels. However, this is not documented in the docstring, and no warning is raised if a 3-D image (with last dimension > 3) is passed. \n\nI used `denoise_bilateral` in the past on 3-D images thinking that the implementation was 3-D. We could consider making the code 3-D or nD in the future, but for now fixing the docstring + warning looks like a reasonable goal for 0.12.\n\n", "before_files": [{"content": "# coding: utf-8\nimport numpy as np\nfrom .. import img_as_float\nfrom ..restoration._denoise_cy import _denoise_bilateral, _denoise_tv_bregman\nfrom .._shared.utils import _mode_deprecations\n\n\ndef denoise_bilateral(image, win_size=5, sigma_range=None, sigma_spatial=1,\n bins=10000, mode='constant', cval=0):\n \"\"\"Denoise image using bilateral filter.\n\n This is an edge-preserving and noise reducing denoising filter. 
It averages\n pixels based on their spatial closeness and radiometric similarity.\n\n Spatial closeness is measured by the gaussian function of the euclidian\n distance between two pixels and a certain standard deviation\n (`sigma_spatial`).\n\n Radiometric similarity is measured by the gaussian function of the euclidian\n distance between two color values and a certain standard deviation\n (`sigma_range`).\n\n Parameters\n ----------\n image : ndarray, shape (M, N[, 3])\n Input image, 2D grayscale or RGB.\n win_size : int\n Window size for filtering.\n sigma_range : float\n Standard deviation for grayvalue/color distance (radiometric\n similarity). A larger value results in averaging of pixels with larger\n radiometric differences. Note, that the image will be converted using\n the `img_as_float` function and thus the standard deviation is in\n respect to the range ``[0, 1]``. If the value is ``None`` the standard\n deviation of the ``image`` will be used.\n sigma_spatial : float\n Standard deviation for range distance. A larger value results in\n averaging of pixels with larger spatial differences.\n bins : int\n Number of discrete values for gaussian weights of color filtering.\n A larger value results in improved accuracy.\n mode : {'constant', 'edge', 'symmetric', 'reflect', 'wrap'}\n How to handle values outside the image borders. See\n `numpy.pad` for detail.\n cval : string\n Used in conjunction with mode 'constant', the value outside\n the image boundaries.\n\n Returns\n -------\n denoised : ndarray\n Denoised image.\n\n References\n ----------\n .. [1] http://users.soe.ucsc.edu/~manduchi/Papers/ICCV98.pdf\n\n Examples\n --------\n >>> from skimage import data, img_as_float\n >>> astro = img_as_float(data.astronaut())\n >>> astro = astro[220:300, 220:320]\n >>> noisy = astro + 0.6 * astro.std() * np.random.random(astro.shape)\n >>> noisy = np.clip(noisy, 0, 1)\n >>> denoised = denoise_bilateral(noisy, sigma_range=0.05, sigma_spatial=15)\n \"\"\"\n mode = _mode_deprecations(mode)\n return _denoise_bilateral(image, win_size, sigma_range, sigma_spatial,\n bins, mode, cval)\n\n\ndef denoise_tv_bregman(image, weight, max_iter=100, eps=1e-3, isotropic=True):\n \"\"\"Perform total-variation denoising using split-Bregman optimization.\n\n Total-variation denoising (also know as total-variation regularization)\n tries to find an image with less total-variation under the constraint\n of being similar to the input image, which is controlled by the\n regularization parameter.\n\n Parameters\n ----------\n image : ndarray\n Input data to be denoised (converted using img_as_float`).\n weight : float\n Denoising weight. The smaller the `weight`, the more denoising (at\n the expense of less similarity to the `input`). The regularization\n parameter `lambda` is chosen as `2 * weight`.\n eps : float, optional\n Relative difference of the value of the cost function that determines\n the stop criterion. The algorithm stops when::\n\n SUM((u(n) - u(n-1))**2) < eps\n\n max_iter : int, optional\n Maximal number of iterations used for the optimization.\n isotropic : boolean, optional\n Switch between isotropic and anisotropic TV denoising.\n\n Returns\n -------\n u : ndarray\n Denoised image.\n\n References\n ----------\n .. [1] http://en.wikipedia.org/wiki/Total_variation_denoising\n .. [2] Tom Goldstein and Stanley Osher, \"The Split Bregman Method For L1\n Regularized Problems\",\n ftp://ftp.math.ucla.edu/pub/camreport/cam08-29.pdf\n .. 
[3] Pascal Getreuer, \"Rudin\u2013Osher\u2013Fatemi Total Variation Denoising\n using Split Bregman\" in Image Processing On Line on 2012\u201305\u201319,\n http://www.ipol.im/pub/art/2012/g-tvd/article_lr.pdf\n .. [4] http://www.math.ucsb.edu/~cgarcia/UGProjects/BregmanAlgorithms_JacquelineBush.pdf\n\n \"\"\"\n return _denoise_tv_bregman(image, weight, max_iter, eps, isotropic)\n\n\ndef _denoise_tv_chambolle_nd(im, weight=0.1, eps=2.e-4, n_iter_max=200):\n \"\"\"Perform total-variation denoising on n-dimensional images.\n\n Parameters\n ----------\n im : ndarray\n n-D input data to be denoised.\n weight : float, optional\n Denoising weight. The greater `weight`, the more denoising (at\n the expense of fidelity to `input`).\n eps : float, optional\n Relative difference of the value of the cost function that determines\n the stop criterion. The algorithm stops when:\n\n (E_(n-1) - E_n) < eps * E_0\n\n n_iter_max : int, optional\n Maximal number of iterations used for the optimization.\n\n Returns\n -------\n out : ndarray\n Denoised array of floats.\n\n Notes\n -----\n Rudin, Osher and Fatemi algorithm.\n\n \"\"\"\n\n ndim = im.ndim\n p = np.zeros((im.ndim, ) + im.shape, dtype=im.dtype)\n g = np.zeros_like(p)\n d = np.zeros_like(im)\n i = 0\n while i < n_iter_max:\n if i > 0:\n # d will be the (negative) divergence of p\n d = -p.sum(0)\n slices_d = [slice(None), ] * ndim\n slices_p = [slice(None), ] * (ndim + 1)\n for ax in range(ndim):\n slices_d[ax] = slice(1, None)\n slices_p[ax+1] = slice(0, -1)\n slices_p[0] = ax\n d[slices_d] += p[slices_p]\n slices_d[ax] = slice(None)\n slices_p[ax+1] = slice(None)\n out = im + d\n else:\n out = im\n E = (d ** 2).sum()\n\n # g stores the gradients of out along each axis\n # e.g. g[0] is the first order finite difference along axis 0\n slices_g = [slice(None), ] * (ndim + 1)\n for ax in range(ndim):\n slices_g[ax+1] = slice(0, -1)\n slices_g[0] = ax\n g[slices_g] = np.diff(out, axis=ax)\n slices_g[ax+1] = slice(None)\n\n norm = np.sqrt((g ** 2).sum(axis=0))[np.newaxis, ...]\n E += weight * norm.sum()\n tau = 1. / (2.*ndim)\n norm *= tau / weight\n norm += 1.\n p -= tau * g\n p /= norm\n E /= float(im.size)\n if i == 0:\n E_init = E\n E_previous = E\n else:\n if np.abs(E_previous - E) < eps * E_init:\n break\n else:\n E_previous = E\n i += 1\n return out\n\n\ndef denoise_tv_chambolle(im, weight=0.1, eps=2.e-4, n_iter_max=200,\n multichannel=False):\n \"\"\"Perform total-variation denoising on n-dimensional images.\n\n Parameters\n ----------\n im : ndarray of ints, uints or floats\n Input data to be denoised. `im` can be of any numeric type,\n but it is cast into an ndarray of floats for the computation\n of the denoised image.\n weight : float, optional\n Denoising weight. The greater `weight`, the more denoising (at\n the expense of fidelity to `input`).\n eps : float, optional\n Relative difference of the value of the cost function that\n determines the stop criterion. The algorithm stops when:\n\n (E_(n-1) - E_n) < eps * E_0\n\n n_iter_max : int, optional\n Maximal number of iterations used for the optimization.\n multichannel : bool, optional\n Apply total-variation denoising separately for each channel. 
This\n option should be true for color images, otherwise the denoising is\n also applied in the channels dimension.\n\n Returns\n -------\n out : ndarray\n Denoised image.\n\n Notes\n -----\n Make sure to set the multichannel parameter appropriately for color images.\n\n The principle of total variation denoising is explained in\n http://en.wikipedia.org/wiki/Total_variation_denoising\n\n The principle of total variation denoising is to minimize the\n total variation of the image, which can be roughly described as\n the integral of the norm of the image gradient. Total variation\n denoising tends to produce \"cartoon-like\" images, that is,\n piecewise-constant images.\n\n This code is an implementation of the algorithm of Rudin, Fatemi and Osher\n that was proposed by Chambolle in [1]_.\n\n References\n ----------\n .. [1] A. Chambolle, An algorithm for total variation minimization and\n applications, Journal of Mathematical Imaging and Vision,\n Springer, 2004, 20, 89-97.\n\n Examples\n --------\n 2D example on astronaut image:\n\n >>> from skimage import color, data\n >>> img = color.rgb2gray(data.astronaut())[:50, :50]\n >>> img += 0.5 * img.std() * np.random.randn(*img.shape)\n >>> denoised_img = denoise_tv_chambolle(img, weight=60)\n\n 3D example on synthetic data:\n\n >>> x, y, z = np.ogrid[0:20, 0:20, 0:20]\n >>> mask = (x - 22)**2 + (y - 20)**2 + (z - 17)**2 < 8**2\n >>> mask = mask.astype(np.float)\n >>> mask += 0.2*np.random.randn(*mask.shape)\n >>> res = denoise_tv_chambolle(mask, weight=100)\n\n \"\"\"\n\n im_type = im.dtype\n if not im_type.kind == 'f':\n im = img_as_float(im)\n\n if multichannel:\n out = np.zeros_like(im)\n for c in range(im.shape[-1]):\n out[..., c] = _denoise_tv_chambolle_nd(im[..., c], weight, eps,\n n_iter_max)\n else:\n out = _denoise_tv_chambolle_nd(im, weight, eps, n_iter_max)\n return out\n", "path": "skimage/restoration/_denoise.py"}]} | 4,073 | 737 |
gh_patches_debug_15692 | rasdani/github-patches | git_diff | ethereum__web3.py-1198 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Drop repeated rpc endpoints
* Version: 4.6
### What was wrong?
web3.py has a few repeated endpoints, for example: `web3.version.network` and `web3.net.version`
### How can it be fixed?
Deprecate the endpoint APIs that don't mirror the JSON-RPC spec namespacing, in favor of those that do.
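
To make the duplication concrete (illustrative snippet only; the provider URL is a placeholder):

```python
# Illustrative only -- each pair below issues the same JSON-RPC call.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://localhost:8545"))  # placeholder endpoint

w3.net.version       # "net_version"  -> mirrors the json-rpc namespace, keep
w3.version.network   # "net_version"  -> duplicate on Version, deprecate/drop

w3.eth.hashrate      # "eth_hashrate" -> mirrors the json-rpc namespace, keep
w3.miner.hashrate    # "eth_hashrate" -> duplicate on Miner, deprecate/drop
```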
</issue>
<code>
[start of web3/version.py]
1 from web3.module import (
2 Module,
3 )
4
5
6 class Version(Module):
7 @property
8 def api(self):
9 from web3 import __version__
10 return __version__
11
12 @property
13 def node(self):
14 return self.web3.manager.request_blocking("web3_clientVersion", [])
15
16 @property
17 def network(self):
18 return self.web3.manager.request_blocking("net_version", [])
19
20 @property
21 def ethereum(self):
22 return self.web3.manager.request_blocking("eth_protocolVersion", [])
23
[end of web3/version.py]
[start of web3/miner.py]
1 from web3.module import (
2 Module,
3 )
4
5
6 class Miner(Module):
7 @property
8 def hashrate(self):
9 return self.web3.manager.request_blocking("eth_hashrate", [])
10
11 def makeDAG(self, number):
12 return self.web3.manager.request_blocking("miner_makeDag", [number])
13
14 def setExtra(self, extra):
15 return self.web3.manager.request_blocking("miner_setExtra", [extra])
16
17 def setEtherBase(self, etherbase):
18 return self.web3.manager.request_blocking("miner_setEtherbase", [etherbase])
19
20 def setGasPrice(self, gas_price):
21 return self.web3.manager.request_blocking(
22 "miner_setGasPrice", [gas_price],
23 )
24
25 def start(self, num_threads):
26 return self.web3.manager.request_blocking(
27 "miner_start", [num_threads],
28 )
29
30 def stop(self):
31 return self.web3.manager.request_blocking("miner_stop", [])
32
33 def startAutoDAG(self):
34 return self.web3.manager.request_blocking("miner_startAutoDag", [])
35
36 def stopAutoDAG(self):
37 return self.web3.manager.request_blocking("miner_stopAutoDag", [])
38
[end of web3/miner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/web3/miner.py b/web3/miner.py
--- a/web3/miner.py
+++ b/web3/miner.py
@@ -4,10 +4,6 @@
class Miner(Module):
- @property
- def hashrate(self):
- return self.web3.manager.request_blocking("eth_hashrate", [])
-
def makeDAG(self, number):
return self.web3.manager.request_blocking("miner_makeDag", [number])
diff --git a/web3/version.py b/web3/version.py
--- a/web3/version.py
+++ b/web3/version.py
@@ -13,10 +13,6 @@
def node(self):
return self.web3.manager.request_blocking("web3_clientVersion", [])
- @property
- def network(self):
- return self.web3.manager.request_blocking("net_version", [])
-
@property
def ethereum(self):
return self.web3.manager.request_blocking("eth_protocolVersion", [])
| {"golden_diff": "diff --git a/web3/miner.py b/web3/miner.py\n--- a/web3/miner.py\n+++ b/web3/miner.py\n@@ -4,10 +4,6 @@\n \n \n class Miner(Module):\n- @property\n- def hashrate(self):\n- return self.web3.manager.request_blocking(\"eth_hashrate\", [])\n-\n def makeDAG(self, number):\n return self.web3.manager.request_blocking(\"miner_makeDag\", [number])\n \ndiff --git a/web3/version.py b/web3/version.py\n--- a/web3/version.py\n+++ b/web3/version.py\n@@ -13,10 +13,6 @@\n def node(self):\n return self.web3.manager.request_blocking(\"web3_clientVersion\", [])\n \n- @property\n- def network(self):\n- return self.web3.manager.request_blocking(\"net_version\", [])\n-\n @property\n def ethereum(self):\n return self.web3.manager.request_blocking(\"eth_protocolVersion\", [])\n", "issue": "Drop repeated rpc endpoints\n* Version: 4.6\r\n\r\n### What was wrong?\r\n\r\nweb3.py has a few repeated endpoints, for example: `web3.version.net` and `web3.net.version`\r\n\r\n\r\n### How can it be fixed?\r\nDeprecate the endpoint api's that don't mirror the json-rpc spec namespacing, in favor of those that do.\n", "before_files": [{"content": "from web3.module import (\n Module,\n)\n\n\nclass Version(Module):\n @property\n def api(self):\n from web3 import __version__\n return __version__\n\n @property\n def node(self):\n return self.web3.manager.request_blocking(\"web3_clientVersion\", [])\n\n @property\n def network(self):\n return self.web3.manager.request_blocking(\"net_version\", [])\n\n @property\n def ethereum(self):\n return self.web3.manager.request_blocking(\"eth_protocolVersion\", [])\n", "path": "web3/version.py"}, {"content": "from web3.module import (\n Module,\n)\n\n\nclass Miner(Module):\n @property\n def hashrate(self):\n return self.web3.manager.request_blocking(\"eth_hashrate\", [])\n\n def makeDAG(self, number):\n return self.web3.manager.request_blocking(\"miner_makeDag\", [number])\n\n def setExtra(self, extra):\n return self.web3.manager.request_blocking(\"miner_setExtra\", [extra])\n\n def setEtherBase(self, etherbase):\n return self.web3.manager.request_blocking(\"miner_setEtherbase\", [etherbase])\n\n def setGasPrice(self, gas_price):\n return self.web3.manager.request_blocking(\n \"miner_setGasPrice\", [gas_price],\n )\n\n def start(self, num_threads):\n return self.web3.manager.request_blocking(\n \"miner_start\", [num_threads],\n )\n\n def stop(self):\n return self.web3.manager.request_blocking(\"miner_stop\", [])\n\n def startAutoDAG(self):\n return self.web3.manager.request_blocking(\"miner_startAutoDag\", [])\n\n def stopAutoDAG(self):\n return self.web3.manager.request_blocking(\"miner_stopAutoDag\", [])\n", "path": "web3/miner.py"}]} | 1,096 | 215 |
gh_patches_debug_41673 | rasdani/github-patches | git_diff | rucio__rucio-3087 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
check global quota in running RSESelector
Motivation
----------
Check the global quota not only when initializing the RSESelector but also when calling the select_rse method.
Modification
------------
Update the select_rse and __update_quota methods in the RSESelector so that global quotas are also checked and decremented there.
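
A toy sketch of the intended behaviour (names and numbers below are placeholders, not the actual rucio code):

```python
# Toy sketch only -- dictionary layout mirrors the idea, not the real classes.
size = 500  # bytes needed for the new replica

rses = [
    {'rse_id': 'RSE_A', 'quota_left': 10**6, 'global_quota_left': {'tier=2': 10**6}},
    {'rse_id': 'RSE_B', 'quota_left': 10**6, 'global_quota_left': {'tier=2': 100}},
]

# select_rse: besides the existing local-quota filter, also drop candidates
# whose remaining global quota (per covering RSE expression) cannot hold `size`
eligible = [rse for rse in rses
            if rse['quota_left'] > size
            and all(left >= size for left in rse['global_quota_left'].values())]
print([rse['rse_id'] for rse in eligible])   # -> ['RSE_A']

# __update_quota: after a pick, decrement the global counters as well
chosen = eligible[0]
chosen['quota_left'] -= size
for rse_expression in chosen['global_quota_left']:
    chosen['global_quota_left'][rse_expression] -= size
```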
</issue>
<code>
[start of lib/rucio/core/rse_selector.py]
1 # Copyright European Organization for Nuclear Research (CERN)
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # You may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Authors:
9 # - Martin Barisits, <[email protected]>, 2013-2017
10 # - Cedric Serfon, <[email protected]>, 2015-2019
11 # - Robert Illingworth, <[email protected]>, 2019
12 # - Andrew Lister, <[email protected]>, 2019
13 # - Hannes Hansen, <[email protected]>, 2019
14 #
15 # PY3K COMPATIBLE
16
17 from random import uniform, shuffle
18
19 from rucio.common.exception import InsufficientAccountLimit, InsufficientTargetRSEs, InvalidRuleWeight, RSEOverQuota
20 from rucio.core.account import has_account_attribute, get_usage, get_all_rse_usages_per_account
21 from rucio.core.account_limit import get_local_account_limit, get_global_account_limits
22 from rucio.core.rse import list_rse_attributes, has_rse_attribute, get_rse_limits
23 from rucio.core.rse_counter import get_counter as get_rse_counter
24 from rucio.db.sqla.session import read_session
25
26
27 class RSESelector():
28 """
29 Representation of the RSE selector
30 """
31
32 @read_session
33 def __init__(self, account, rses, weight, copies, ignore_account_limit=False, session=None):
34 """
35 Initialize the RSE Selector.
36
37 :param account: Account owning the rule.
38 :param rses: List of rse dictionaries.
39 :param weight: Weighting to use.
40 :param copies: Number of copies to create.
41 :param ignore_account_limit: Flag if the quota should be ignored.
42 :param session: DB Session in use.
43 :raises: InvalidRuleWeight, InsufficientAccountLimit, InsufficientTargetRSEs
44 """
45 self.account = account
46 self.rses = [] # [{'rse_id':, 'weight':, 'staging_area'}]
47 self.copies = copies
48 if weight is not None:
49 for rse in rses:
50 attributes = list_rse_attributes(rse_id=rse['id'], session=session)
51 availability_write = True if rse.get('availability', 7) & 2 else False
52 if weight not in attributes:
53 continue # The RSE does not have the required weight set, therefore it is ignored
54 try:
55 self.rses.append({'rse_id': rse['id'],
56 'weight': float(attributes[weight]),
57 'mock_rse': attributes.get('mock', False),
58 'availability_write': availability_write,
59 'staging_area': rse['staging_area']})
60 except ValueError:
61 raise InvalidRuleWeight('The RSE \'%s\' has a non-number specified for the weight \'%s\'' % (rse['rse'], weight))
62 else:
63 for rse in rses:
64 mock_rse = has_rse_attribute(rse['id'], 'mock', session=session)
65 availability_write = True if rse.get('availability', 7) & 2 else False
66 self.rses.append({'rse_id': rse['id'],
67 'weight': 1,
68 'mock_rse': mock_rse,
69 'availability_write': availability_write,
70 'staging_area': rse['staging_area']})
71
72 if len(self.rses) < self.copies:
73 raise InsufficientTargetRSEs('Target RSE set not sufficient for number of copies. (%s copies requested, RSE set size %s)' % (self.copies, len(self.rses)))
74
75 rses_with_enough_quota = []
76 if has_account_attribute(account=account, key='admin', session=session) or ignore_account_limit:
77 for rse in self.rses:
78 rse['quota_left'] = float('inf')
79 rse['space_left'] = float('inf')
80 rses_with_enough_quota.append(rse)
81 else:
82 global_quota_limit = get_global_account_limits(account=account, session=session)
83 all_rse_usages = {usage['rse_id']: usage['bytes'] for usage in get_all_rse_usages_per_account(account=account, session=session)}
84 for rse in self.rses:
85 if rse['mock_rse']:
86 rse['quota_left'] = float('inf')
87 rse['space_left'] = float('inf')
88 rses_with_enough_quota.append(rse)
89 else:
90 # check local quota
91 local_quota_left = None
92 quota_limit = get_local_account_limit(account=account, rse_id=rse['rse_id'], session=session)
93 if quota_limit is None:
94 local_quota_left = 0
95 else:
96 local_quota_left = quota_limit - get_usage(rse_id=rse['rse_id'], account=account, session=session)['bytes']
97 # check global quota
98 all_global_quota_enough = True
99 for rse_expression, limit in global_quota_limit.items():
100 if rse['rse_id'] in limit['resolved_rse_ids']:
101 quota_limit = limit['limit']
102 global_quota_left = None
103 if quota_limit is None:
104 global_quota_left = 0
105 else:
106 rse_expression_usage = 0
107 for rse_id in limit['resolved_rse_ids']:
108 rse_expression_usage += all_rse_usages.get(rse_id, 0)
109 global_quota_left = quota_limit - rse_expression_usage
110 if global_quota_left <= 0:
111 all_global_quota_enough = False
112 break
113 if local_quota_left > 0 and all_global_quota_enough:
114 rse['quota_left'] = local_quota_left
115 space_limit = get_rse_limits(name='MaxSpaceAvailable', rse_id=rse['rse_id'], session=session).get('MaxSpaceAvailable')
116 if space_limit is None or space_limit < 0:
117 rse['space_left'] = float('inf')
118 else:
119 rse['space_left'] = space_limit - get_rse_counter(rse_id=rse['rse_id'], session=session)['bytes']
120 rses_with_enough_quota.append(rse)
121
122 self.rses = rses_with_enough_quota
123 if len(self.rses) < self.copies:
124 raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\'s to fullfill the operation.')
125
126 # don't consider removing rses based on the total space here - because files already on the RSE are taken into account
127 # it is possible to have no space but still be able to fulfil the rule
128
129 def select_rse(self, size, preferred_rse_ids, copies=0, blacklist=[], prioritize_order_over_weight=False, existing_rse_size=None):
130 """
131 Select n RSEs to replicate data to.
132
133 :param size: Size of the block being replicated.
134 :param preferred_rse_ids: Ordered list of preferred rses. (If possible replicate to them)
135 :param copies: Select this amount of copies, if 0 use the pre-defined rule value.
136 :param blacklist: List of blacklisted rses. (Do not put replicas on these sites)
137 :param prioritze_order_over_weight: Prioritize the order of the preferred_rse_ids list over the picking done by weight.
138 :existing_rse_size: Dictionary of size of files already present at each rse
139 :returns: List of (RSE_id, staging_area, availability_write) tuples.
140 :raises: InsufficientAccountLimit, InsufficientTargetRSEs
141 """
142
143 result = []
144 rses = self.rses
145 count = self.copies if copies == 0 else copies
146
147 # Remove blacklisted rses
148 if blacklist:
149 rses = [rse for rse in self.rses if rse['rse_id'] not in blacklist]
150 if len(rses) < count:
151 raise InsufficientTargetRSEs('There are not enough target RSEs to fulfil the request at this time.')
152
153 # Remove rses which do not have enough space, accounting for the files already at each rse
154 if existing_rse_size is None:
155 existing_rse_size = {}
156 rses = [rse for rse in rses if rse['space_left'] >= size - existing_rse_size.get(rse['rse_id'], 0)]
157 if len(rses) < count:
158 raise RSEOverQuota('There is insufficient space on any of the target RSE\'s to fullfill the operation.')
159
160 # Remove rses which do not have enough quota
161 rses = [rse for rse in rses if rse['quota_left'] > size]
162 if len(rses) < count:
163 raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\'s to fullfill the operation.')
164
165 for copy in range(count):
166 # Remove rses already in the result set
167 rses = [rse for rse in rses if rse['rse_id'] not in [item[0] for item in result]]
168 rses_dict = {}
169 for rse in rses:
170 rses_dict[rse['rse_id']] = rse
171 # Prioritize the preffered rses
172 preferred_rses = [rses_dict[rse_id] for rse_id in preferred_rse_ids if rse_id in rses_dict]
173 if prioritize_order_over_weight and preferred_rses:
174 rse = (preferred_rses[0]['rse_id'], preferred_rses[0]['staging_area'], preferred_rses[0]['availability_write'])
175 elif preferred_rses:
176 rse = self.__choose_rse(preferred_rses)
177 else:
178 rse = self.__choose_rse(rses)
179 result.append(rse)
180 self.__update_quota(rse, size)
181 return result
182
183 def get_rse_dictionary(self):
184 """
185 Return the current dictionary of potential RSEs stored in the RSE selector
186
187 :returns: List of RSE dictionaries
188 """
189 rse_dict = {}
190 for rse in self.rses:
191 rse_dict[rse['rse_id']] = rse
192 return rse_dict
193
194 def __update_quota(self, rse, size):
195 """
196 Update the internal quota value.
197
198 :param rse: RSE tuple to update.
199 :param size: Size to substract.
200 """
201
202 for element in self.rses:
203 if element['rse_id'] == rse[0]:
204 element['quota_left'] -= size
205 return
206
207 def __choose_rse(self, rses):
208 """
209 Choose an RSE based on weighting.
210
211 :param rses: The rses to be considered for the choose.
212 :return: The (rse_id, staging_area) tuple of the chosen RSE.
213 """
214
215 shuffle(rses)
216 pick = uniform(0, sum([rse['weight'] for rse in rses]))
217 weight = 0
218 for rse in rses:
219 weight += rse['weight']
220 if pick <= weight:
221 return (rse['rse_id'], rse['staging_area'], rse['availability_write'])
222
[end of lib/rucio/core/rse_selector.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/lib/rucio/core/rse_selector.py b/lib/rucio/core/rse_selector.py
--- a/lib/rucio/core/rse_selector.py
+++ b/lib/rucio/core/rse_selector.py
@@ -94,7 +94,9 @@
local_quota_left = 0
else:
local_quota_left = quota_limit - get_usage(rse_id=rse['rse_id'], account=account, session=session)['bytes']
+
# check global quota
+ rse['global_quota_left'] = {}
all_global_quota_enough = True
for rse_expression, limit in global_quota_limit.items():
if rse['rse_id'] in limit['resolved_rse_ids']:
@@ -110,6 +112,8 @@
if global_quota_left <= 0:
all_global_quota_enough = False
break
+ else:
+ rse['global_quota_left'][rse_expression] = global_quota_left
if local_quota_left > 0 and all_global_quota_enough:
rse['quota_left'] = local_quota_left
space_limit = get_rse_limits(name='MaxSpaceAvailable', rse_id=rse['rse_id'], session=session).get('MaxSpaceAvailable')
@@ -157,11 +161,25 @@
if len(rses) < count:
raise RSEOverQuota('There is insufficient space on any of the target RSE\'s to fullfill the operation.')
- # Remove rses which do not have enough quota
+ # Remove rses which do not have enough local quota
rses = [rse for rse in rses if rse['quota_left'] > size]
if len(rses) < count:
raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\'s to fullfill the operation.')
+ # Remove rses which do not have enough global quota
+ rses_with_enough_quota = []
+ for rse in rses:
+ enough_global_quota = True
+ for rse_expression in rse.get('global_quota_left', []):
+ if rse['global_quota_left'][rse_expression] < size:
+ enough_global_quota = False
+ break
+ if enough_global_quota:
+ rses_with_enough_quota.append(rse)
+ rses = rses_with_enough_quota
+ if len(rses) < count:
+ raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\'s to fullfill the operation.')
+
for copy in range(count):
# Remove rses already in the result set
rses = [rse for rse in rses if rse['rse_id'] not in [item[0] for item in result]]
@@ -202,6 +220,8 @@
for element in self.rses:
if element['rse_id'] == rse[0]:
element['quota_left'] -= size
+ for rse_expression in element.get('global_quota_left', []):
+ element['global_quota_left'][rse_expression] -= size
return
def __choose_rse(self, rses):
| {"golden_diff": "diff --git a/lib/rucio/core/rse_selector.py b/lib/rucio/core/rse_selector.py\n--- a/lib/rucio/core/rse_selector.py\n+++ b/lib/rucio/core/rse_selector.py\n@@ -94,7 +94,9 @@\n local_quota_left = 0\n else:\n local_quota_left = quota_limit - get_usage(rse_id=rse['rse_id'], account=account, session=session)['bytes']\n+\n # check global quota\n+ rse['global_quota_left'] = {}\n all_global_quota_enough = True\n for rse_expression, limit in global_quota_limit.items():\n if rse['rse_id'] in limit['resolved_rse_ids']:\n@@ -110,6 +112,8 @@\n if global_quota_left <= 0:\n all_global_quota_enough = False\n break\n+ else:\n+ rse['global_quota_left'][rse_expression] = global_quota_left\n if local_quota_left > 0 and all_global_quota_enough:\n rse['quota_left'] = local_quota_left\n space_limit = get_rse_limits(name='MaxSpaceAvailable', rse_id=rse['rse_id'], session=session).get('MaxSpaceAvailable')\n@@ -157,11 +161,25 @@\n if len(rses) < count:\n raise RSEOverQuota('There is insufficient space on any of the target RSE\\'s to fullfill the operation.')\n \n- # Remove rses which do not have enough quota\n+ # Remove rses which do not have enough local quota\n rses = [rse for rse in rses if rse['quota_left'] > size]\n if len(rses) < count:\n raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\\'s to fullfill the operation.')\n \n+ # Remove rses which do not have enough global quota\n+ rses_with_enough_quota = []\n+ for rse in rses:\n+ enough_global_quota = True\n+ for rse_expression in rse.get('global_quota_left', []):\n+ if rse['global_quota_left'][rse_expression] < size:\n+ enough_global_quota = False\n+ break\n+ if enough_global_quota:\n+ rses_with_enough_quota.append(rse)\n+ rses = rses_with_enough_quota\n+ if len(rses) < count:\n+ raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\\'s to fullfill the operation.')\n+\n for copy in range(count):\n # Remove rses already in the result set\n rses = [rse for rse in rses if rse['rse_id'] not in [item[0] for item in result]]\n@@ -202,6 +220,8 @@\n for element in self.rses:\n if element['rse_id'] == rse[0]:\n element['quota_left'] -= size\n+ for rse_expression in element.get('global_quota_left', []):\n+ element['global_quota_left'][rse_expression] -= size\n return\n \n def __choose_rse(self, rses):\n", "issue": "check global quota in running RSESelector\nMotivation\r\n----------\r\ncheck global quota not only when initializing the RSESelector but also when calling the select_rse method.\r\n\r\n\r\nModification\r\n------------\r\nupdate select_rse/__update_quota method in the RSESelector\r\n\r\n\n", "before_files": [{"content": "# Copyright European Organization for Nuclear Research (CERN)\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# You may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Authors:\n# - Martin Barisits, <[email protected]>, 2013-2017\n# - Cedric Serfon, <[email protected]>, 2015-2019\n# - Robert Illingworth, <[email protected]>, 2019\n# - Andrew Lister, <[email protected]>, 2019\n# - Hannes Hansen, <[email protected]>, 2019\n#\n# PY3K COMPATIBLE\n\nfrom random import uniform, shuffle\n\nfrom rucio.common.exception import InsufficientAccountLimit, InsufficientTargetRSEs, InvalidRuleWeight, RSEOverQuota\nfrom rucio.core.account import has_account_attribute, get_usage, get_all_rse_usages_per_account\nfrom rucio.core.account_limit import 
get_local_account_limit, get_global_account_limits\nfrom rucio.core.rse import list_rse_attributes, has_rse_attribute, get_rse_limits\nfrom rucio.core.rse_counter import get_counter as get_rse_counter\nfrom rucio.db.sqla.session import read_session\n\n\nclass RSESelector():\n \"\"\"\n Representation of the RSE selector\n \"\"\"\n\n @read_session\n def __init__(self, account, rses, weight, copies, ignore_account_limit=False, session=None):\n \"\"\"\n Initialize the RSE Selector.\n\n :param account: Account owning the rule.\n :param rses: List of rse dictionaries.\n :param weight: Weighting to use.\n :param copies: Number of copies to create.\n :param ignore_account_limit: Flag if the quota should be ignored.\n :param session: DB Session in use.\n :raises: InvalidRuleWeight, InsufficientAccountLimit, InsufficientTargetRSEs\n \"\"\"\n self.account = account\n self.rses = [] # [{'rse_id':, 'weight':, 'staging_area'}]\n self.copies = copies\n if weight is not None:\n for rse in rses:\n attributes = list_rse_attributes(rse_id=rse['id'], session=session)\n availability_write = True if rse.get('availability', 7) & 2 else False\n if weight not in attributes:\n continue # The RSE does not have the required weight set, therefore it is ignored\n try:\n self.rses.append({'rse_id': rse['id'],\n 'weight': float(attributes[weight]),\n 'mock_rse': attributes.get('mock', False),\n 'availability_write': availability_write,\n 'staging_area': rse['staging_area']})\n except ValueError:\n raise InvalidRuleWeight('The RSE \\'%s\\' has a non-number specified for the weight \\'%s\\'' % (rse['rse'], weight))\n else:\n for rse in rses:\n mock_rse = has_rse_attribute(rse['id'], 'mock', session=session)\n availability_write = True if rse.get('availability', 7) & 2 else False\n self.rses.append({'rse_id': rse['id'],\n 'weight': 1,\n 'mock_rse': mock_rse,\n 'availability_write': availability_write,\n 'staging_area': rse['staging_area']})\n\n if len(self.rses) < self.copies:\n raise InsufficientTargetRSEs('Target RSE set not sufficient for number of copies. 
(%s copies requested, RSE set size %s)' % (self.copies, len(self.rses)))\n\n rses_with_enough_quota = []\n if has_account_attribute(account=account, key='admin', session=session) or ignore_account_limit:\n for rse in self.rses:\n rse['quota_left'] = float('inf')\n rse['space_left'] = float('inf')\n rses_with_enough_quota.append(rse)\n else:\n global_quota_limit = get_global_account_limits(account=account, session=session)\n all_rse_usages = {usage['rse_id']: usage['bytes'] for usage in get_all_rse_usages_per_account(account=account, session=session)}\n for rse in self.rses:\n if rse['mock_rse']:\n rse['quota_left'] = float('inf')\n rse['space_left'] = float('inf')\n rses_with_enough_quota.append(rse)\n else:\n # check local quota\n local_quota_left = None\n quota_limit = get_local_account_limit(account=account, rse_id=rse['rse_id'], session=session)\n if quota_limit is None:\n local_quota_left = 0\n else:\n local_quota_left = quota_limit - get_usage(rse_id=rse['rse_id'], account=account, session=session)['bytes']\n # check global quota\n all_global_quota_enough = True\n for rse_expression, limit in global_quota_limit.items():\n if rse['rse_id'] in limit['resolved_rse_ids']:\n quota_limit = limit['limit']\n global_quota_left = None\n if quota_limit is None:\n global_quota_left = 0\n else:\n rse_expression_usage = 0\n for rse_id in limit['resolved_rse_ids']:\n rse_expression_usage += all_rse_usages.get(rse_id, 0)\n global_quota_left = quota_limit - rse_expression_usage\n if global_quota_left <= 0:\n all_global_quota_enough = False\n break\n if local_quota_left > 0 and all_global_quota_enough:\n rse['quota_left'] = local_quota_left\n space_limit = get_rse_limits(name='MaxSpaceAvailable', rse_id=rse['rse_id'], session=session).get('MaxSpaceAvailable')\n if space_limit is None or space_limit < 0:\n rse['space_left'] = float('inf')\n else:\n rse['space_left'] = space_limit - get_rse_counter(rse_id=rse['rse_id'], session=session)['bytes']\n rses_with_enough_quota.append(rse)\n\n self.rses = rses_with_enough_quota\n if len(self.rses) < self.copies:\n raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\\'s to fullfill the operation.')\n\n # don't consider removing rses based on the total space here - because files already on the RSE are taken into account\n # it is possible to have no space but still be able to fulfil the rule\n\n def select_rse(self, size, preferred_rse_ids, copies=0, blacklist=[], prioritize_order_over_weight=False, existing_rse_size=None):\n \"\"\"\n Select n RSEs to replicate data to.\n\n :param size: Size of the block being replicated.\n :param preferred_rse_ids: Ordered list of preferred rses. (If possible replicate to them)\n :param copies: Select this amount of copies, if 0 use the pre-defined rule value.\n :param blacklist: List of blacklisted rses. 
(Do not put replicas on these sites)\n :param prioritze_order_over_weight: Prioritize the order of the preferred_rse_ids list over the picking done by weight.\n :existing_rse_size: Dictionary of size of files already present at each rse\n :returns: List of (RSE_id, staging_area, availability_write) tuples.\n :raises: InsufficientAccountLimit, InsufficientTargetRSEs\n \"\"\"\n\n result = []\n rses = self.rses\n count = self.copies if copies == 0 else copies\n\n # Remove blacklisted rses\n if blacklist:\n rses = [rse for rse in self.rses if rse['rse_id'] not in blacklist]\n if len(rses) < count:\n raise InsufficientTargetRSEs('There are not enough target RSEs to fulfil the request at this time.')\n\n # Remove rses which do not have enough space, accounting for the files already at each rse\n if existing_rse_size is None:\n existing_rse_size = {}\n rses = [rse for rse in rses if rse['space_left'] >= size - existing_rse_size.get(rse['rse_id'], 0)]\n if len(rses) < count:\n raise RSEOverQuota('There is insufficient space on any of the target RSE\\'s to fullfill the operation.')\n\n # Remove rses which do not have enough quota\n rses = [rse for rse in rses if rse['quota_left'] > size]\n if len(rses) < count:\n raise InsufficientAccountLimit('There is insufficient quota on any of the target RSE\\'s to fullfill the operation.')\n\n for copy in range(count):\n # Remove rses already in the result set\n rses = [rse for rse in rses if rse['rse_id'] not in [item[0] for item in result]]\n rses_dict = {}\n for rse in rses:\n rses_dict[rse['rse_id']] = rse\n # Prioritize the preffered rses\n preferred_rses = [rses_dict[rse_id] for rse_id in preferred_rse_ids if rse_id in rses_dict]\n if prioritize_order_over_weight and preferred_rses:\n rse = (preferred_rses[0]['rse_id'], preferred_rses[0]['staging_area'], preferred_rses[0]['availability_write'])\n elif preferred_rses:\n rse = self.__choose_rse(preferred_rses)\n else:\n rse = self.__choose_rse(rses)\n result.append(rse)\n self.__update_quota(rse, size)\n return result\n\n def get_rse_dictionary(self):\n \"\"\"\n Return the current dictionary of potential RSEs stored in the RSE selector\n\n :returns: List of RSE dictionaries\n \"\"\"\n rse_dict = {}\n for rse in self.rses:\n rse_dict[rse['rse_id']] = rse\n return rse_dict\n\n def __update_quota(self, rse, size):\n \"\"\"\n Update the internal quota value.\n\n :param rse: RSE tuple to update.\n :param size: Size to substract.\n \"\"\"\n\n for element in self.rses:\n if element['rse_id'] == rse[0]:\n element['quota_left'] -= size\n return\n\n def __choose_rse(self, rses):\n \"\"\"\n Choose an RSE based on weighting.\n\n :param rses: The rses to be considered for the choose.\n :return: The (rse_id, staging_area) tuple of the chosen RSE.\n \"\"\"\n\n shuffle(rses)\n pick = uniform(0, sum([rse['weight'] for rse in rses]))\n weight = 0\n for rse in rses:\n weight += rse['weight']\n if pick <= weight:\n return (rse['rse_id'], rse['staging_area'], rse['availability_write'])\n", "path": "lib/rucio/core/rse_selector.py"}]} | 3,703 | 710 |
gh_patches_debug_20807 | rasdani/github-patches | git_diff | google__flax-2540 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make RNG name configurable in Dropout
Currently `'dropout'` is hard-coded as an argument of `make_rng` inside `Dropout`. However, when implementing support for "recurrent dropout" in an LSTMCell or similar you need two kinds of dropout:
1. A regular dropout which is applied to the inputs with a different mask at each step.
2. A "recurrent dropout" that is applied to the state with the same mask at each step.
To implement `2`, a possibility is to set the RNG name to `'recurrent_dropout'` on the Dropout layer applied to the state and guarantee that each step uses the same random state. From `nn.scan`'s perspective this would look like:
```python
nn.scan(..., split_rngs={'dropout': True, 'recurrent_dropout': False})
```
The proposal is to add an `rng_name` (or similar) attribute to `Dropout` so we are able to support this kind of use case. The alternative would be to create a separate `RecurrentDropout` layer with the same code but a different hard-coded value.
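
One possible shape for this (sketch only; `rng_collection` is just a working name and the body simply mirrors the current `Dropout`):

```python
# Sketch only -- `rng_collection` is a proposed attribute name, not an existing API.
from typing import Optional, Sequence
from flax import linen as nn
from jax import lax, random
import jax.numpy as jnp

class ConfigurableDropout(nn.Module):
  rate: float
  broadcast_dims: Sequence[int] = ()
  deterministic: Optional[bool] = None
  rng_collection: str = 'dropout'   # <-- the only new knob

  @nn.compact
  def __call__(self, inputs, deterministic: Optional[bool] = None):
    deterministic = nn.merge_param('deterministic', self.deterministic, deterministic)
    if self.rate == 0.:
      return inputs
    if self.rate == 1.0:
      return jnp.zeros_like(inputs)
    keep_prob = 1. - self.rate
    if deterministic:
      return inputs
    rng = self.make_rng(self.rng_collection)   # was hard-coded to 'dropout'
    broadcast_shape = list(inputs.shape)
    for dim in self.broadcast_dims:
      broadcast_shape[dim] = 1
    mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)
    mask = jnp.broadcast_to(mask, inputs.shape)
    return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))

# A recurrent cell could then use rng_collection='recurrent_dropout' for the
# state dropout and keep that mask fixed across steps via
# nn.scan(..., split_rngs={'dropout': True, 'recurrent_dropout': False}).
```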
</issue>
<code>
[start of flax/linen/stochastic.py]
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Stochastic modules."""
16
17 from typing import Optional, Sequence
18
19 from flax.linen.module import compact
20 from flax.linen.module import merge_param
21 from flax.linen.module import Module
22 from jax import lax
23 from jax import random
24 import jax.numpy as jnp
25
26
27 class Dropout(Module):
28 """Create a dropout layer.
29
30 Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure
31 to include an RNG seed named `'dropout'`. For example::
32
33 model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`
34
35 Attributes:
36 rate: the dropout probability. (_not_ the keep rate!)
37 broadcast_dims: dimensions that will share the same dropout mask
38 deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
39 masked, whereas if true, no mask is applied and the inputs are returned
40 as is.
41 """
42 rate: float
43 broadcast_dims: Sequence[int] = ()
44 deterministic: Optional[bool] = None
45
46 @compact
47 def __call__(self, inputs, deterministic: Optional[bool] = None):
48 """Applies a random dropout mask to the input.
49
50 Args:
51 inputs: the inputs that should be randomly masked.
52 deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
53 masked, whereas if true, no mask is applied and the inputs are returned
54 as is.
55
56 Returns:
57 The masked inputs reweighted to preserve mean.
58 """
59 deterministic = merge_param(
60 'deterministic', self.deterministic, deterministic)
61 if self.rate == 0.:
62 return inputs
63 # Prevent gradient NaNs in 1.0 edge-case.
64 if self.rate == 1.0:
65 return jnp.zeros_like(inputs)
66 keep_prob = 1. - self.rate
67 if deterministic:
68 return inputs
69 else:
70 rng = self.make_rng('dropout')
71 broadcast_shape = list(inputs.shape)
72 for dim in self.broadcast_dims:
73 broadcast_shape[dim] = 1
74 mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)
75 mask = jnp.broadcast_to(mask, inputs.shape)
76 return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))
77
[end of flax/linen/stochastic.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py
--- a/flax/linen/stochastic.py
+++ b/flax/linen/stochastic.py
@@ -38,10 +38,12 @@
deterministic: if false the inputs are scaled by `1 / (1 - rate)` and
masked, whereas if true, no mask is applied and the inputs are returned
as is.
+ rng_collection: the rng collection name to use when requesting an rng key.
"""
rate: float
broadcast_dims: Sequence[int] = ()
deterministic: Optional[bool] = None
+ rng_collection: str = 'dropout'
@compact
def __call__(self, inputs, deterministic: Optional[bool] = None):
@@ -67,7 +69,7 @@
if deterministic:
return inputs
else:
- rng = self.make_rng('dropout')
+ rng = self.make_rng(self.rng_collection)
broadcast_shape = list(inputs.shape)
for dim in self.broadcast_dims:
broadcast_shape[dim] = 1
| {"golden_diff": "diff --git a/flax/linen/stochastic.py b/flax/linen/stochastic.py\n--- a/flax/linen/stochastic.py\n+++ b/flax/linen/stochastic.py\n@@ -38,10 +38,12 @@\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n+ rng_collection: the rng collection name to use when requesting an rng key.\n \"\"\"\n rate: float\n broadcast_dims: Sequence[int] = ()\n deterministic: Optional[bool] = None\n+ rng_collection: str = 'dropout'\n \n @compact\n def __call__(self, inputs, deterministic: Optional[bool] = None):\n@@ -67,7 +69,7 @@\n if deterministic:\n return inputs\n else:\n- rng = self.make_rng('dropout')\n+ rng = self.make_rng(self.rng_collection)\n broadcast_shape = list(inputs.shape)\n for dim in self.broadcast_dims:\n broadcast_shape[dim] = 1\n", "issue": "Make RNG name configurable in Dropout\nCurrently `'dropout'` is hard-coded as an argument of `make_rng` inside `Dropout`. However, when implementing support for \"recurrent dropout\" in an LSTMCell or similar you need two kinds of dropout:\r\n1. A regular dropout which is applied to the inputs with a different mask at each step.\r\n2. A \"recurrent dropout\" that is applied to the state with the same mask at each step.\r\n\r\nTo implement `2` a possibility is to set the RNG name to `'recurrent_dropout'` on the Dropout layer applied to the state and guarantee that each step uses the same random state. From `nn.scan`'s perspective the would look like this:\r\n\r\n```python\r\nnn.scan(..., split_rngs={'dropout': True, 'recurrent_dropout': False})\r\n```\r\nThe proposal is to add an `rng_name` (or similar) attribute to `Dropout` so we are able support these kind of use-cases. The alternative would be to create a separate `RecurrentDropout` layer with the same code but different hard-coded value.\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Stochastic modules.\"\"\"\n\nfrom typing import Optional, Sequence\n\nfrom flax.linen.module import compact\nfrom flax.linen.module import merge_param\nfrom flax.linen.module import Module\nfrom jax import lax\nfrom jax import random\nimport jax.numpy as jnp\n\n\nclass Dropout(Module):\n \"\"\"Create a dropout layer.\n\n Note: When using :meth:`Module.apply() <flax.linen.Module.apply>`, make sure\n to include an RNG seed named `'dropout'`. For example::\n\n model.apply({'params': params}, inputs=inputs, train=True, rngs={'dropout': dropout_rng})`\n\n Attributes:\n rate: the dropout probability. 
(_not_ the keep rate!)\n broadcast_dims: dimensions that will share the same dropout mask\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n \"\"\"\n rate: float\n broadcast_dims: Sequence[int] = ()\n deterministic: Optional[bool] = None\n\n @compact\n def __call__(self, inputs, deterministic: Optional[bool] = None):\n \"\"\"Applies a random dropout mask to the input.\n\n Args:\n inputs: the inputs that should be randomly masked.\n deterministic: if false the inputs are scaled by `1 / (1 - rate)` and\n masked, whereas if true, no mask is applied and the inputs are returned\n as is.\n\n Returns:\n The masked inputs reweighted to preserve mean.\n \"\"\"\n deterministic = merge_param(\n 'deterministic', self.deterministic, deterministic)\n if self.rate == 0.:\n return inputs\n # Prevent gradient NaNs in 1.0 edge-case.\n if self.rate == 1.0:\n return jnp.zeros_like(inputs)\n keep_prob = 1. - self.rate\n if deterministic:\n return inputs\n else:\n rng = self.make_rng('dropout')\n broadcast_shape = list(inputs.shape)\n for dim in self.broadcast_dims:\n broadcast_shape[dim] = 1\n mask = random.bernoulli(rng, p=keep_prob, shape=broadcast_shape)\n mask = jnp.broadcast_to(mask, inputs.shape)\n return lax.select(mask, inputs / keep_prob, jnp.zeros_like(inputs))\n", "path": "flax/linen/stochastic.py"}]} | 1,560 | 245 |
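As a usage sketch for the patch above: a step module can draw its input mask from the usual `'dropout'` stream and its state mask from a separate `'recurrent_dropout'` stream. This assumes the patched `Dropout` with the new `rng_collection` attribute; the `Cell` module, shapes, and rates below are made up for illustration. Inside `nn.scan`, only the `'dropout'` stream would then be split across steps, as the issue suggests (`split_rngs={'dropout': True, 'recurrent_dropout': False}`).

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class Cell(nn.Module):
    """Illustrative step function with two independent dropout streams."""
    @nn.compact
    def __call__(self, state, x):
        x = nn.Dropout(rate=0.1, deterministic=False)(x)  # 'dropout' stream
        state = nn.Dropout(rate=0.1, deterministic=False,
                           rng_collection='recurrent_dropout')(state)  # 'recurrent_dropout' stream
        return state + x

cell = Cell()
k_drop, k_rec = jax.random.split(jax.random.PRNGKey(0))
state = x = jnp.ones((4,))
variables = cell.init({'dropout': k_drop, 'recurrent_dropout': k_rec}, state, x)
out = cell.apply(variables, state, x,
                 rngs={'dropout': k_drop, 'recurrent_dropout': k_rec})
```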
gh_patches_debug_19569 | rasdani/github-patches | git_diff | mozilla__telemetry-analysis-service-989 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sort EMR versions naturally
The EMR versions are currently sorted alphabetically rather than naturally, which breaks with releases such as EMR 5.11.0.
</issue>
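A quick illustration of the failure mode described in the issue; the version strings are made up, and the pure-Python sort key only demonstrates the idea (the project's actual fix, shown later in this record, performs the sort in PostgreSQL).

```python
versions = ['5.9.0', '5.11.0', '5.2.1']

print(sorted(versions))
# ['5.11.0', '5.2.1', '5.9.0']  -> alphabetical: '1' < '2' < '9', so 5.11 sorts first

print(sorted(versions, key=lambda v: [int(part) for part in v.split('.')]))
# ['5.2.1', '5.9.0', '5.11.0']  -> natural: compares the numeric components
```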
<code>
[start of atmo/clusters/queries.py]
1 # This Source Code Form is subject to the terms of the Mozilla Public
2 # License, v. 2.0. If a copy of the MPL was not distributed with this
3 # file, you can obtain one at http://mozilla.org/MPL/2.0/.
4 from django.db import models
5
6
7 class EMRReleaseQuerySet(models.QuerySet):
8 """
9 A Django queryset for the :class:`~atmo.clusters.models.EMRRelease` model.
10 """
11
12 def active(self):
13 return self.filter(
14 is_active=True,
15 )
16
17 def stable(self):
18 """
19 The EMR releases that are considered stable.
20 """
21 return self.filter(
22 is_experimental=False,
23 is_deprecated=False,
24 is_active=True,
25 )
26
27 def experimental(self):
28 """
29 The EMR releases that are considered experimental.
30 """
31 return self.filter(
32 is_experimental=True,
33 is_active=True,
34 )
35
36 def deprecated(self):
37 """
38 The EMR releases that are deprecated.
39 """
40 return self.filter(
41 is_deprecated=True,
42 is_active=True,
43 )
44
45
46 class ClusterQuerySet(models.QuerySet):
47 """A Django queryset that filters by cluster status.
48
49 Used by the :class:`~atmo.clusters.models.Cluster` model.
50 """
51
52 def active(self):
53 """
54 The clusters that have an active status.
55 """
56 return self.filter(
57 most_recent_status__in=self.model.ACTIVE_STATUS_LIST,
58 )
59
60 def terminated(self):
61 """
62 The clusters that have an terminated status.
63 """
64 return self.filter(
65 most_recent_status__in=self.model.TERMINATED_STATUS_LIST,
66 )
67
68 def failed(self):
69 """
70 The clusters that have an failed status.
71 """
72 return self.filter(
73 most_recent_status__in=self.model.FAILED_STATUS_LIST,
74 )
75
[end of atmo/clusters/queries.py]
[start of atmo/clusters/forms.py]
1 # This Source Code Form is subject to the terms of the Mozilla Public
2 # License, v. 2.0. If a copy of the MPL was not distributed with this
3 # file, you can obtain one at http://mozilla.org/MPL/2.0/.
4 from django import forms
5 from django.conf import settings
6 from django.core.urlresolvers import reverse
7 from django.utils.safestring import mark_safe
8
9 from . import models
10 from ..forms.mixins import AutoClassFormMixin, CreatedByModelFormMixin
11 from ..keys.models import SSHKey
12
13
14 class EMRReleaseChoiceField(forms.ModelChoiceField):
15 """
16 A :class:`~django.forms.ModelChoiceField` subclass that uses
17 :class:`~atmo.clusters.models.EMRRelease` objects for the choices
18 and automatically uses a "radioset" rendering -- a horizontal button
19 group for easier selection.
20 """
21 def __init__(self, *args, **kwargs):
22 super().__init__(
23 label='EMR release',
24 queryset=models.EMRRelease.objects.active(),
25 required=True,
26 empty_label=None,
27 widget=forms.RadioSelect(attrs={
28 'required': 'required',
29 'class': 'radioset',
30 }),
31 help_text=models.Cluster.EMR_RELEASE_HELP,
32 )
33
34 def label_from_instance(self, obj):
35 """
36 Append the status of the EMR release if it's
37 experimental or deprecated.
38 """
39 label = obj.version
40 extra = []
41 if obj.is_experimental:
42 extra.append('<span class="label label-info">experimental</span>')
43 elif obj.is_deprecated:
44 extra.append('<span class="label label-warning">deprecated</span>')
45 if extra:
46 label = mark_safe('%s %s' % (label, ''.join(extra)))
47 return label
48
49
50 class NewClusterForm(AutoClassFormMixin, CreatedByModelFormMixin,
51 forms.ModelForm):
52 """
53 A form used for creating new clusters.
54 """
55 prefix = 'new'
56
57 identifier = forms.RegexField(
58 required=True,
59 label='Identifier',
60 regex=r'^[a-z0-9-]{1,100}$',
61 widget=forms.TextInput(attrs={
62 'pattern': r'[a-z0-9-]{1,100}',
63 'data-parsley-pattern-message': 'Identifier contains invalid characters.',
64 }),
65 help_text='A unique identifier for your cluster, visible in '
66 'the AWS management console. (Lowercase, use hyphens '
67 'instead of spaces.)'
68 )
69 size = forms.IntegerField(
70 label='Size',
71 required=True,
72 min_value=1,
73 max_value=settings.AWS_CONFIG['MAX_CLUSTER_SIZE'],
74 widget=forms.NumberInput(attrs={
75 'min': '1',
76 'max': str(settings.AWS_CONFIG['MAX_CLUSTER_SIZE']),
77 }),
78 help_text=('Number of workers to use in the cluster, between 1 and %s. '
79 'For testing or development 1 is recommended.' %
80 settings.AWS_CONFIG['MAX_CLUSTER_SIZE'])
81 )
82 lifetime = forms.IntegerField(
83 label='Lifetime',
84 required=True,
85 min_value=2,
86 max_value=settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'],
87 widget=forms.NumberInput(attrs={
88 'min': '2',
89 'max': str(settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME']),
90 }),
91 help_text=('Lifetime in hours after which the cluster is automatically '
92 'terminated, between 2 and %s.' %
93 settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'])
94 )
95 ssh_key = forms.ModelChoiceField(
96 label='SSH key',
97 queryset=SSHKey.objects.all(),
98 required=True,
99 empty_label=None,
100 )
101 emr_release = EMRReleaseChoiceField()
102
103 class Meta:
104 model = models.Cluster
105 fields = ['identifier', 'size', 'lifetime', 'ssh_key', 'emr_release']
106
107 def __init__(self, *args, **kwargs):
108 super().__init__(*args, **kwargs)
109 user_sshkeys = self.created_by.created_sshkeys.all()
110 self.fields['ssh_key'].queryset = user_sshkeys.all()
111 self.fields['ssh_key'].help_text = (
112 'The SSH key to deploy to the cluster. '
113 'See <a href="%s">your keys</a> or '
114 '<a href="%s">add a new one</a>.' %
115 (reverse('keys-list'), reverse('keys-new'))
116 )
117 # if there are fewer options we just show radio select buttons
118 if user_sshkeys.count() <= 6:
119 self.fields['ssh_key'].widget = forms.RadioSelect(
120 choices=self.fields['ssh_key'].choices,
121 attrs={
122 'class': 'radioset',
123 },
124 )
125
126
127 class ExtendClusterForm(AutoClassFormMixin, forms.Form):
128 prefix = 'extend'
129 extension = forms.IntegerField(
130 label='Lifetime extension in hours',
131 required=True,
132 min_value=2,
133 max_value=settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'],
134 widget=forms.NumberInput(attrs={
135 'min': '2',
136 'max': str(settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME']),
137 }),
138 help_text=("Number of hours to extend the cluster's lifetime with, between 2 and %s." %
139 settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'])
140 )
141
[end of atmo/clusters/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/atmo/clusters/forms.py b/atmo/clusters/forms.py
--- a/atmo/clusters/forms.py
+++ b/atmo/clusters/forms.py
@@ -21,7 +21,7 @@
def __init__(self, *args, **kwargs):
super().__init__(
label='EMR release',
- queryset=models.EMRRelease.objects.active(),
+ queryset=models.EMRRelease.objects.active().natural_sort_by_version(),
required=True,
empty_label=None,
widget=forms.RadioSelect(attrs={
diff --git a/atmo/clusters/queries.py b/atmo/clusters/queries.py
--- a/atmo/clusters/queries.py
+++ b/atmo/clusters/queries.py
@@ -8,6 +8,15 @@
"""
A Django queryset for the :class:`~atmo.clusters.models.EMRRelease` model.
"""
+ def natural_sort_by_version(self):
+ """
+ Sorts this queryset by the EMR version naturally (human-readable).
+ """
+ return self.extra(
+ select={
+ 'natural_version': "string_to_array(version, '.')::int[]",
+ },
+ ).order_by('-natural_version')
def active(self):
return self.filter(
| {"golden_diff": "diff --git a/atmo/clusters/forms.py b/atmo/clusters/forms.py\n--- a/atmo/clusters/forms.py\n+++ b/atmo/clusters/forms.py\n@@ -21,7 +21,7 @@\n def __init__(self, *args, **kwargs):\n super().__init__(\n label='EMR release',\n- queryset=models.EMRRelease.objects.active(),\n+ queryset=models.EMRRelease.objects.active().natural_sort_by_version(),\n required=True,\n empty_label=None,\n widget=forms.RadioSelect(attrs={\ndiff --git a/atmo/clusters/queries.py b/atmo/clusters/queries.py\n--- a/atmo/clusters/queries.py\n+++ b/atmo/clusters/queries.py\n@@ -8,6 +8,15 @@\n \"\"\"\n A Django queryset for the :class:`~atmo.clusters.models.EMRRelease` model.\n \"\"\"\n+ def natural_sort_by_version(self):\n+ \"\"\"\n+ Sorts this queryset by the EMR version naturally (human-readable).\n+ \"\"\"\n+ return self.extra(\n+ select={\n+ 'natural_version': \"string_to_array(version, '.')::int[]\",\n+ },\n+ ).order_by('-natural_version')\n \n def active(self):\n return self.filter(\n", "issue": "Sort EMR versions naturally\nThe EMR versions are currently not sorted naturally but by alphabet, which breaks for example with EMR 5.11.0.\n", "before_files": [{"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django.db import models\n\n\nclass EMRReleaseQuerySet(models.QuerySet):\n \"\"\"\n A Django queryset for the :class:`~atmo.clusters.models.EMRRelease` model.\n \"\"\"\n\n def active(self):\n return self.filter(\n is_active=True,\n )\n\n def stable(self):\n \"\"\"\n The EMR releases that are considered stable.\n \"\"\"\n return self.filter(\n is_experimental=False,\n is_deprecated=False,\n is_active=True,\n )\n\n def experimental(self):\n \"\"\"\n The EMR releases that are considered experimental.\n \"\"\"\n return self.filter(\n is_experimental=True,\n is_active=True,\n )\n\n def deprecated(self):\n \"\"\"\n The EMR releases that are deprecated.\n \"\"\"\n return self.filter(\n is_deprecated=True,\n is_active=True,\n )\n\n\nclass ClusterQuerySet(models.QuerySet):\n \"\"\"A Django queryset that filters by cluster status.\n\n Used by the :class:`~atmo.clusters.models.Cluster` model.\n \"\"\"\n\n def active(self):\n \"\"\"\n The clusters that have an active status.\n \"\"\"\n return self.filter(\n most_recent_status__in=self.model.ACTIVE_STATUS_LIST,\n )\n\n def terminated(self):\n \"\"\"\n The clusters that have an terminated status.\n \"\"\"\n return self.filter(\n most_recent_status__in=self.model.TERMINATED_STATUS_LIST,\n )\n\n def failed(self):\n \"\"\"\n The clusters that have an failed status.\n \"\"\"\n return self.filter(\n most_recent_status__in=self.model.FAILED_STATUS_LIST,\n )\n", "path": "atmo/clusters/queries.py"}, {"content": "# This Source Code Form is subject to the terms of the Mozilla Public\n# License, v. 2.0. If a copy of the MPL was not distributed with this\n# file, you can obtain one at http://mozilla.org/MPL/2.0/.\nfrom django import forms\nfrom django.conf import settings\nfrom django.core.urlresolvers import reverse\nfrom django.utils.safestring import mark_safe\n\nfrom . 
import models\nfrom ..forms.mixins import AutoClassFormMixin, CreatedByModelFormMixin\nfrom ..keys.models import SSHKey\n\n\nclass EMRReleaseChoiceField(forms.ModelChoiceField):\n \"\"\"\n A :class:`~django.forms.ModelChoiceField` subclass that uses\n :class:`~atmo.clusters.models.EMRRelease` objects for the choices\n and automatically uses a \"radioset\" rendering -- a horizontal button\n group for easier selection.\n \"\"\"\n def __init__(self, *args, **kwargs):\n super().__init__(\n label='EMR release',\n queryset=models.EMRRelease.objects.active(),\n required=True,\n empty_label=None,\n widget=forms.RadioSelect(attrs={\n 'required': 'required',\n 'class': 'radioset',\n }),\n help_text=models.Cluster.EMR_RELEASE_HELP,\n )\n\n def label_from_instance(self, obj):\n \"\"\"\n Append the status of the EMR release if it's\n experimental or deprecated.\n \"\"\"\n label = obj.version\n extra = []\n if obj.is_experimental:\n extra.append('<span class=\"label label-info\">experimental</span>')\n elif obj.is_deprecated:\n extra.append('<span class=\"label label-warning\">deprecated</span>')\n if extra:\n label = mark_safe('%s %s' % (label, ''.join(extra)))\n return label\n\n\nclass NewClusterForm(AutoClassFormMixin, CreatedByModelFormMixin,\n forms.ModelForm):\n \"\"\"\n A form used for creating new clusters.\n \"\"\"\n prefix = 'new'\n\n identifier = forms.RegexField(\n required=True,\n label='Identifier',\n regex=r'^[a-z0-9-]{1,100}$',\n widget=forms.TextInput(attrs={\n 'pattern': r'[a-z0-9-]{1,100}',\n 'data-parsley-pattern-message': 'Identifier contains invalid characters.',\n }),\n help_text='A unique identifier for your cluster, visible in '\n 'the AWS management console. (Lowercase, use hyphens '\n 'instead of spaces.)'\n )\n size = forms.IntegerField(\n label='Size',\n required=True,\n min_value=1,\n max_value=settings.AWS_CONFIG['MAX_CLUSTER_SIZE'],\n widget=forms.NumberInput(attrs={\n 'min': '1',\n 'max': str(settings.AWS_CONFIG['MAX_CLUSTER_SIZE']),\n }),\n help_text=('Number of workers to use in the cluster, between 1 and %s. '\n 'For testing or development 1 is recommended.' %\n settings.AWS_CONFIG['MAX_CLUSTER_SIZE'])\n )\n lifetime = forms.IntegerField(\n label='Lifetime',\n required=True,\n min_value=2,\n max_value=settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'],\n widget=forms.NumberInput(attrs={\n 'min': '2',\n 'max': str(settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME']),\n }),\n help_text=('Lifetime in hours after which the cluster is automatically '\n 'terminated, between 2 and %s.' %\n settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'])\n )\n ssh_key = forms.ModelChoiceField(\n label='SSH key',\n queryset=SSHKey.objects.all(),\n required=True,\n empty_label=None,\n )\n emr_release = EMRReleaseChoiceField()\n\n class Meta:\n model = models.Cluster\n fields = ['identifier', 'size', 'lifetime', 'ssh_key', 'emr_release']\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n user_sshkeys = self.created_by.created_sshkeys.all()\n self.fields['ssh_key'].queryset = user_sshkeys.all()\n self.fields['ssh_key'].help_text = (\n 'The SSH key to deploy to the cluster. '\n 'See <a href=\"%s\">your keys</a> or '\n '<a href=\"%s\">add a new one</a>.' 
%\n (reverse('keys-list'), reverse('keys-new'))\n )\n # if there are fewer options we just show radio select buttons\n if user_sshkeys.count() <= 6:\n self.fields['ssh_key'].widget = forms.RadioSelect(\n choices=self.fields['ssh_key'].choices,\n attrs={\n 'class': 'radioset',\n },\n )\n\n\nclass ExtendClusterForm(AutoClassFormMixin, forms.Form):\n prefix = 'extend'\n extension = forms.IntegerField(\n label='Lifetime extension in hours',\n required=True,\n min_value=2,\n max_value=settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'],\n widget=forms.NumberInput(attrs={\n 'min': '2',\n 'max': str(settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME']),\n }),\n help_text=(\"Number of hours to extend the cluster's lifetime with, between 2 and %s.\" %\n settings.AWS_CONFIG['MAX_CLUSTER_LIFETIME'])\n )\n", "path": "atmo/clusters/forms.py"}]} | 2,583 | 274 |
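To round off this record, a hedged sketch of calling the queryset method the patch adds; the import path follows the modules shown above, but the surrounding Django/PostgreSQL setup is assumed, and `natural_sort_by_version` only exists once the patch is applied.

```python
from atmo.clusters.models import EMRRelease

# Newest active EMR release first; relies on PostgreSQL's string_to_array
# (per the patch), so it will not work on e.g. SQLite.
latest = EMRRelease.objects.active().natural_sort_by_version().first()
```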
gh_patches_debug_2855 | rasdani/github-patches | git_diff | xonsh__xonsh-89 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
setup.py install fails if working directory not location of setup.py
C:\Python34>.\python.exe \temp\xonsh-0.1.2\setup.py install
...
Traceback (most recent call last):
File "\temp\xonsh-0.1.2\setup.py", line 114, in <module>
main()
File "\temp\xonsh-0.1.2\setup.py", line 50, in main
with open('README.rst', 'r') as f:
FileNotFoundError: [Errno 2] No such file or directory: 'README.rst'
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # coding=utf-8
3 """The xonsh installer."""
4 from __future__ import print_function, unicode_literals
5 import os
6 import sys
7 try:
8 from setuptools import setup
9 from setuptools.command.sdist import sdist
10 from setuptools.command.install import install
11 HAVE_SETUPTOOLS = True
12 except ImportError:
13 from distutils.core import setup
14 from distutils.command.sdist import sdist as sdist
15 from distutils.command.install import install as install
16 HAVE_SETUPTOOLS = False
17
18 VERSION = '0.1.2'
19
20 TABLES = ['xonsh/lexer_table.py', 'xonsh/parser_table.py']
21
22 def clean_tables():
23 for f in TABLES:
24 if os.path.isfile(f):
25 os.remove(f)
26 print('Remove ' + f)
27
28 def build_tables():
29 print('Building lexer and parser tables.')
30 sys.path.insert(0, os.path.dirname(__file__))
31 from xonsh.parser import Parser
32 Parser(lexer_table='lexer_table', yacc_table='parser_table',
33 outputdir='xonsh')
34 sys.path.pop(0)
35
36 class xinstall(install):
37 def run(self):
38 clean_tables()
39 build_tables()
40 install.run(self)
41
42 class xsdist(sdist):
43 def make_release_tree(self, basedir, files):
44 clean_tables()
45 build_tables()
46 sdist.make_release_tree(self, basedir, files)
47
48 def main():
49 if sys.version_info[0] < 3:
50 sys.exit('xonsh currently requires Python 3.4+')
51 print(logo)
52 with open('README.rst', 'r') as f:
53 readme = f.read()
54 skw = dict(
55 name='xonsh',
56 description='an exotic, usable shell',
57 long_description=readme,
58 license='BSD',
59 version=VERSION,
60 author='Anthony Scopatz',
61 maintainer='Anthony Scopatz',
62 author_email='[email protected]',
63 url='https://github.com/scopatz/xonsh',
64 platforms='Cross Platform',
65 classifiers = ['Programming Language :: Python :: 3'],
66 packages=['xonsh'],
67 scripts=['scripts/xonsh'],
68 cmdclass={'install': xinstall, 'sdist': xsdist},
69 )
70 if HAVE_SETUPTOOLS:
71 skw['setup_requires'] = ['ply']
72 skw['install_requires'] = ['ply']
73 setup(**skw)
74
75 logo = """
76 ββββ
77 βββββββ
78 βββ4Ξ¦ββββ
79 Β²βββββββββ^ββ
80 -ββββ©β " βββββββββ
81 βββββWβ€ββ ββββ ββ
82 βmβ%Ξ¦ββ β*" ,Ξ±βββββββ,ββΞ"ββ
83 "βββΒΌ" βββ¦βββββ β `^` ,β β’β
84 ,ββ«Mββjβββ βββββ 'β β β£ββ ββ
85 βmββββββ¬βββ ββ^ "ΓM jβββ«β¨β βββ‘ββ€βmββ
86 βΓ¦sΓ¦β β, ,βΓ β'βH LU βββ β«Γ
^2=β β
87 βΓ¦%Γ
ββββ βΒͺββββ $1β ββββ«β©*Tβ β
88 ββ%β,β¦βββββββ’M β MββββββββΏ"^ β«
89 ββ£βββββ@βββ βββ ` ββββββDβββΒ¬βΏ s ββ
90 ββ¬ββββββ` ββ β ββββββ£ββ*ββDΓΉ` ββ
91 ββ¬βββ¨Uβ β $ β βββββMββββ€β% ββ β
92 ββββββ β β βMβββMβββββ22, ββ
93 ββββ^β β β βMβββMββ€ββΒ¬ββ βM
94 βββ ββ β Hβ ββ MβββMβ"^^^^^"βΏ β
95 ββββ@β β ββ H' βββββββ=.,,, β
96 βββββββ β βββ ββ βββββ****βΏ - ββ
97 βββΒΌVββ β β β ββ β βββ βββ€β€β€β€β€β β
98 βββββββ β M ββ β« UUMββJ^^" ββ
99 βββββββ β$j βββ βββββΓ
%%%ββ β
100 ββ£ββ£β^β β ββ β ββ.DUββ^[""βΏ -ββ
101 ββββ`β βMβ ββ Γ
Β£ββββ%βββ""w βΏβ β
102 `βββ`β ββββββUWβββ%β€ββ*β =z βΞ
103 βββββββ βΓ
ββ£βββ¨%β€ΖβΏ= -` ββ
104 βββ’βββ W Β£6ββ€,"βΏ ` ββ‘βββ
105 β"ββββΓ
βββββ%` ββ
106 ββββ`βUβUβ%β* β
107 ββ«β βΓ
Γ
βββΒ¬` `"β
108 ββ« β«β ββ ~β¦%ββ₯4^
109 ββ "Mβ `ββ Xβ"βββ^
110 ββ βM ββeββ^
111 ββββββ
112 ^^
113 """
114
115 if __name__ == '__main__':
116 main()
117
118
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -49,7 +49,7 @@
if sys.version_info[0] < 3:
sys.exit('xonsh currently requires Python 3.4+')
print(logo)
- with open('README.rst', 'r') as f:
+ with open(os.path.join(os.path.dirname(__file__), 'README.rst'), 'r') as f:
readme = f.read()
skw = dict(
name='xonsh',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -49,7 +49,7 @@\n if sys.version_info[0] < 3:\n sys.exit('xonsh currently requires Python 3.4+')\n print(logo)\n- with open('README.rst', 'r') as f:\n+ with open(os.path.join(os.path.dirname(__file__), 'README.rst'), 'r') as f:\n readme = f.read()\n skw = dict(\n name='xonsh',\n", "issue": "setup.py install fails if working directory not location of setup.py\nC:\\Python34>.\\python.exe \\temp\\xonsh-0.1.2\\setup.py install\n...\nTraceback (most recent call last):\n File \"\\temp\\xonsh-0.1.2\\setup.py\", line 114, in <module>\n main()\n File \"\\temp\\xonsh-0.1.2\\setup.py\", line 50, in main\n with open('README.rst', 'r') as f:\nFileNotFoundError: [Errno 2] No such file or directory: 'README.rst'\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# coding=utf-8\n\"\"\"The xonsh installer.\"\"\"\nfrom __future__ import print_function, unicode_literals\nimport os\nimport sys\ntry:\n from setuptools import setup\n from setuptools.command.sdist import sdist\n from setuptools.command.install import install\n HAVE_SETUPTOOLS = True\nexcept ImportError:\n from distutils.core import setup\n from distutils.command.sdist import sdist as sdist\n from distutils.command.install import install as install\n HAVE_SETUPTOOLS = False\n\nVERSION = '0.1.2'\n\nTABLES = ['xonsh/lexer_table.py', 'xonsh/parser_table.py']\n\ndef clean_tables():\n for f in TABLES:\n if os.path.isfile(f):\n os.remove(f)\n print('Remove ' + f)\n\ndef build_tables():\n print('Building lexer and parser tables.')\n sys.path.insert(0, os.path.dirname(__file__))\n from xonsh.parser import Parser\n Parser(lexer_table='lexer_table', yacc_table='parser_table',\n outputdir='xonsh')\n sys.path.pop(0)\n\nclass xinstall(install):\n def run(self):\n clean_tables()\n build_tables()\n install.run(self)\n\nclass xsdist(sdist):\n def make_release_tree(self, basedir, files):\n clean_tables()\n build_tables()\n sdist.make_release_tree(self, basedir, files)\n\ndef main():\n if sys.version_info[0] < 3:\n sys.exit('xonsh currently requires Python 3.4+')\n print(logo)\n with open('README.rst', 'r') as f:\n readme = f.read()\n skw = dict(\n name='xonsh',\n description='an exotic, usable shell',\n long_description=readme,\n license='BSD',\n version=VERSION,\n author='Anthony Scopatz',\n maintainer='Anthony Scopatz',\n author_email='[email protected]',\n url='https://github.com/scopatz/xonsh',\n platforms='Cross Platform',\n classifiers = ['Programming Language :: Python :: 3'],\n packages=['xonsh'],\n scripts=['scripts/xonsh'],\n cmdclass={'install': xinstall, 'sdist': xsdist},\n )\n if HAVE_SETUPTOOLS:\n skw['setup_requires'] = ['ply']\n skw['install_requires'] = ['ply']\n setup(**skw)\n\nlogo = \"\"\"\n \u2553\u2588\u2588\u2584 \n \u2559\u2588\u2588\u2580\u2588\u2588\u2555 \n \u2590\u2588\u25884\u03a6\u2588\u2580\u2588\u258c \n \u00b2\u2588\u2588\u2584\u2588\u2588\u2588\u2580\u2588\u2588^\u2588\u2588 \n -\u2588\u2588\u2588\u2569\u2580 \" \u2552\u2584\u2588\u2588\u2588\u2588\u2588\u2580\u2588 \n \u2551\u2588\u2588\u2580\u2580W\u2564\u2584\u2580 \u2590\u2551\u2588\u2558 \u255d\u2588 \n \u2584m\u2580%\u03a6\u2580\u2580 \u255d*\" ,\u03b1\u2588\u2588\u2588\u2588\u2588\u2593\u2584,\u2584\u2580\u0393\"\u2580\u2555 \n \"\u2580\u2588\u2588\u00bc\" \u2584\u2550\u2566\u2588\u255f\u2551\u2588\u2580 \u2553 `^` ,\u2584 \u2562\u2555 \n ,\u2580\u256bM\u2588\u2590j\u2553\u255f\u2580 \u2554\u2593\u2584\u2588\u2580 '\u2551 \u2554 \u2563\u2551\u258c \u2580\u2584 \n 
\u2584m\u2580\u2580\u2588\u2588\u2588\u256c\u2588\u255d\u2580 \u2588\u2580^ \"\u00dcM j\u2590\u255f\u256b\u2568\u2592 \u2559\u2580\u2261\u2550\u2564\u2550m\u2580\u2557 \n \u2588\u00e6s\u00e6\u2553 \u2555, ,\u2584\u00c4 \u2590'\u2555H LU \u2551\u2551\u2560\u256b\u00c5^2=\u2310 \u2588 \n \u2584\u00e6%\u00c5\u2588\u2588\u2588\u2560\u2588\u00aa\u2559\u2584\u2588\u2580 $1\u2559 \u2551\u2551\u255f\u256b\u2569*T\u2584 \u258c \n \u2559\u2557%\u2584,\u2566\u2588\u2588\u258c\u2588\u258c\u2588\u2562M \u2555 M\u2551\u2551\u2551\u2551\u2588\u2550\u2310\u207f\"^ \u256b \n \u2559\u2563\u2580\u2588\u2588\u2588\u2588@\u2588\u2591\u2588 \u258c\u2555\u2555 ` \u258c\u2551\u2590\u2590\u2551\u2588D\u2550\u2248\u2310\u00ac\u207f s \u2551\u2310 \n \u2559\u256c\u2588\u2588\u2588\u2593\u2551\u2588` \u258c\u255a \u2555 \u2555\u258c\u2551\u2590\u2590\u2563\u258c\u2310*\u2592\u2592D\u00f9` \u2590\u258c \n \u2559\u256c\u2588\u2588\u2568U\u2588 \u255f $ \u258c \u258c\u258c\u2590\u2590\u2590M\u2588\u2584\u2550\u2264\u2310% \u2553\u2310 \u258c \n \u2551\u2551\u2588\u2584\u258c\u2551 \u255f \u258c \u258cM\u2590\u2590\u2590M\u2588\u2580\u2592\u2592\u259222, \u2590\u258c \n \u2588\u2588\u2588\u2559^\u258c \u2551 \u258c \u2310M\u2590\u2590\u2590M\u2588\u2264\u2310\u2310\u00ac\u2500\u2500 \u2590M \n \u2551\u2588\u2588 \u258c\u2559 \u2553 H\u2551 \u258c\u2552 M\u2551\u2590\u2590M\u2588\"^^^^^\"\u207f \u2551 \n \u2588\u2588\u2555\u2559@\u2593 \u2555 \u258c\u2551 H' \u2551\u2590\u2590\u2590\u2588\u2550\u2550=.,,, \u2588 \n \u2559\u2588\u2593\u2554\u255a\u255a\u2588 \u2560 \u258c\u2514\u2552 \u258c\u2590 \u255a\u2551\u2551\u2551\u2580****\u207f - \u2553\u258c \n \u2559\u2588\u258c\u00bcV\u255a\u258c \u258c \u2555 \u258c \u2551\u2552 \u2551 \u258c\u2592\u2560\u2588\u2580\u2264\u2264\u2264\u2264\u2264\u2310 \u2588 \n \u2559\u2588\u258c\u2554\u2588\u255a\u258c \u2518 M \u258c\u2551 \u256b UUM\u2588\u2588J^^\" \u2590\u258c \n \u2559\u2588\u2588\u2559\u2588\u2559\u258c \u2555$j \u2590\u2310\u258c \u258c\u2551\u255d\u255f\u2588\u00c5%%%\u2248\u2550 \u2588 \n \u2559\u2563\u2588\u2563\u2588^\u258c \u2560\u2551\u2590 \u2551 \u258c\u2590.DU\u2588\u2588^[\"\"\u207f -\u2552\u258c \n \u2580\u2588\u2584\u2588`\u258c \u2591M\u2580 \u258c\u2590 \u00c5\u00a3\u255d\u255d\u2588\u255c%\u2248\u2550\u2553\"\"w \u207f\u2310 \u2588 \n `\u2580\u2584\u2580`\u258c \u258c\u2588\u2590\u2310\u2551\u2590UW\u2556\u2588\u2588%\u2264\u2550\u2591*\u2500 =z \u2584\u0393 \n \u2559\u2588\u2588\u2559\u2584\u258c\u2588 \u258c\u00c5\u255b\u2563\u2588\u2588\u2568%\u2564\u0192\u207f= -` \u2584\u2518 \n \u2588\u258c\u2562\u2593\u258c\u258c W \u00a36\u2588\u2564,\"\u207f ` \u2584\u2261\u2580\u2580\u2580 \n \u2588\"\u2588\u258c\u258c\u255f\u00c5\u2553\u2588\u2553\u2588\u2580%` \u2584\u2580 \n \u2559\u258c\u2588\u2588`\u2592U\u2593U\u2588%\u2557* \u2588 \n \u258c\u256b\u2551 \u258c\u00c5\u00c5\u2551\u2580\u255b\u00ac` `\"\u2588 \n \u258c\u256b \u256b\u255f \u2588\u2584 ~\u2566%\u2592\u25654^ \n \u258c\u258c \"M\u2588 `\u2580\u2555 X\u2555\"\u2557\u2584\u2580^ \n \u2588\u258c \u2553M \u2559\u2580e\u2580\u2580^ \n \u2559\u2588\u2588\u2584\u2584\u2580 \n ^^ \n\"\"\"\n\nif __name__ == '__main__':\n main()\n\n", "path": "setup.py"}]} | 2,395 | 121 |
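The one-line fix above generalizes to a small helper that resolves data files relative to setup.py itself rather than the caller's working directory; here is a sketch (the `read` helper and `HERE` constant are illustrative names, not part of xonsh):

```python
import os

HERE = os.path.dirname(os.path.abspath(__file__))

def read(*parts):
    """Read a file that lives next to this setup.py, regardless of the
    directory the interpreter was launched from."""
    with open(os.path.join(HERE, *parts), 'r') as f:
        return f.read()

# e.g. long_description=read('README.rst') inside the setup() call
```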
gh_patches_debug_21187 | rasdani/github-patches | git_diff | pypa__setuptools-2159 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Plugins using setuptools entry points resolve to system and not setup_requires
Assume that you have e.g. setuptools_scm 3 installed system-wide, and try to run the following setup.py:
```python
from setuptools import setup
setup(name="foo", setup_requires=["setuptools_scm>=4"])
```
you get the warning
```
/usr/lib/python3.8/site-packages/setuptools/dist.py:701: UserWarning: Module setuptools_scm was already imported from /usr/lib/python3.8/site-packages/setuptools_scm/__init__.py, but /tmp/foo/.eggs/setuptools_scm-4.1.1-py3.8.egg is being added to sys.path
```
(i.e. the system-wide setuptools_scm was loaded, even though it is too old). One can confirm that the "correct" newer setuptools_scm indeed doesn't get used by changing setup.py to
```python
from setuptools import setup
setup(name="foo",
setup_requires=["setuptools_scm>=4"],
use_scm_version={"parentdir_prefix_version": "foo-"})
```
in which case one gets the same warning and then a traceback
```
TypeError: __init__() got an unexpected keyword argument 'parentdir_prefix_version'
```
I guess a solution may be e.g. to move the entry point (which is what gets eagerly loaded by setuptools) to a separate package (let's say `_setuptools_scm_helper`) which itself imports setuptools_scm only when actually needed, with a pass-through shim, so that setuptools only imports `_setuptools_scm_helper` and the correct version of setuptools_scm can be loaded when needed.
</issue>
<code>
[start of setuptools/__init__.py]
1 """Extensions to the 'distutils' for large or complex distributions"""
2
3 import os
4 import sys
5 import functools
6 import distutils.core
7 import distutils.filelist
8 import re
9 from distutils.errors import DistutilsOptionError
10 from distutils.util import convert_path
11 from fnmatch import fnmatchcase
12
13 from ._deprecation_warning import SetuptoolsDeprecationWarning
14
15 from setuptools.extern.six import PY3, string_types
16 from setuptools.extern.six.moves import filter, map
17
18 import setuptools.version
19 from setuptools.extension import Extension
20 from setuptools.dist import Distribution, Feature
21 from setuptools.depends import Require
22 from . import monkey
23
24 __metaclass__ = type
25
26
27 __all__ = [
28 'setup', 'Distribution', 'Feature', 'Command', 'Extension', 'Require',
29 'SetuptoolsDeprecationWarning',
30 'find_packages'
31 ]
32
33 if PY3:
34 __all__.append('find_namespace_packages')
35
36 __version__ = setuptools.version.__version__
37
38 bootstrap_install_from = None
39
40 # If we run 2to3 on .py files, should we also convert docstrings?
41 # Default: yes; assume that we can detect doctests reliably
42 run_2to3_on_doctests = True
43 # Standard package names for fixer packages
44 lib2to3_fixer_packages = ['lib2to3.fixes']
45
46
47 class PackageFinder:
48 """
49 Generate a list of all Python packages found within a directory
50 """
51
52 @classmethod
53 def find(cls, where='.', exclude=(), include=('*',)):
54 """Return a list all Python packages found within directory 'where'
55
56 'where' is the root directory which will be searched for packages. It
57 should be supplied as a "cross-platform" (i.e. URL-style) path; it will
58 be converted to the appropriate local path syntax.
59
60 'exclude' is a sequence of package names to exclude; '*' can be used
61 as a wildcard in the names, such that 'foo.*' will exclude all
62 subpackages of 'foo' (but not 'foo' itself).
63
64 'include' is a sequence of package names to include. If it's
65 specified, only the named packages will be included. If it's not
66 specified, all found packages will be included. 'include' can contain
67 shell style wildcard patterns just like 'exclude'.
68 """
69
70 return list(cls._find_packages_iter(
71 convert_path(where),
72 cls._build_filter('ez_setup', '*__pycache__', *exclude),
73 cls._build_filter(*include)))
74
75 @classmethod
76 def _find_packages_iter(cls, where, exclude, include):
77 """
78 All the packages found in 'where' that pass the 'include' filter, but
79 not the 'exclude' filter.
80 """
81 for root, dirs, files in os.walk(where, followlinks=True):
82 # Copy dirs to iterate over it, then empty dirs.
83 all_dirs = dirs[:]
84 dirs[:] = []
85
86 for dir in all_dirs:
87 full_path = os.path.join(root, dir)
88 rel_path = os.path.relpath(full_path, where)
89 package = rel_path.replace(os.path.sep, '.')
90
91 # Skip directory trees that are not valid packages
92 if ('.' in dir or not cls._looks_like_package(full_path)):
93 continue
94
95 # Should this package be included?
96 if include(package) and not exclude(package):
97 yield package
98
99 # Keep searching subdirectories, as there may be more packages
100 # down there, even if the parent was excluded.
101 dirs.append(dir)
102
103 @staticmethod
104 def _looks_like_package(path):
105 """Does a directory look like a package?"""
106 return os.path.isfile(os.path.join(path, '__init__.py'))
107
108 @staticmethod
109 def _build_filter(*patterns):
110 """
111 Given a list of patterns, return a callable that will be true only if
112 the input matches at least one of the patterns.
113 """
114 return lambda name: any(fnmatchcase(name, pat=pat) for pat in patterns)
115
116
117 class PEP420PackageFinder(PackageFinder):
118 @staticmethod
119 def _looks_like_package(path):
120 return True
121
122
123 find_packages = PackageFinder.find
124
125 if PY3:
126 find_namespace_packages = PEP420PackageFinder.find
127
128
129 def _install_setup_requires(attrs):
130 # Note: do not use `setuptools.Distribution` directly, as
131 # our PEP 517 backend patch `distutils.core.Distribution`.
132 dist = distutils.core.Distribution(dict(
133 (k, v) for k, v in attrs.items()
134 if k in ('dependency_links', 'setup_requires')
135 ))
136 # Honor setup.cfg's options.
137 dist.parse_config_files(ignore_option_errors=True)
138 if dist.setup_requires:
139 dist.fetch_build_eggs(dist.setup_requires)
140
141
142 def setup(**attrs):
143 # Make sure we have any requirements needed to interpret 'attrs'.
144 _install_setup_requires(attrs)
145 return distutils.core.setup(**attrs)
146
147 setup.__doc__ = distutils.core.setup.__doc__
148
149
150 _Command = monkey.get_unpatched(distutils.core.Command)
151
152
153 class Command(_Command):
154 __doc__ = _Command.__doc__
155
156 command_consumes_arguments = False
157
158 def __init__(self, dist, **kw):
159 """
160 Construct the command for dist, updating
161 vars(self) with any keyword parameters.
162 """
163 _Command.__init__(self, dist)
164 vars(self).update(kw)
165
166 def _ensure_stringlike(self, option, what, default=None):
167 val = getattr(self, option)
168 if val is None:
169 setattr(self, option, default)
170 return default
171 elif not isinstance(val, string_types):
172 raise DistutilsOptionError("'%s' must be a %s (got `%s`)"
173 % (option, what, val))
174 return val
175
176 def ensure_string_list(self, option):
177 r"""Ensure that 'option' is a list of strings. If 'option' is
178 currently a string, we split it either on /,\s*/ or /\s+/, so
179 "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become
180 ["foo", "bar", "baz"].
181 """
182 val = getattr(self, option)
183 if val is None:
184 return
185 elif isinstance(val, string_types):
186 setattr(self, option, re.split(r',\s*|\s+', val))
187 else:
188 if isinstance(val, list):
189 ok = all(isinstance(v, string_types) for v in val)
190 else:
191 ok = False
192 if not ok:
193 raise DistutilsOptionError(
194 "'%s' must be a list of strings (got %r)"
195 % (option, val))
196
197 def reinitialize_command(self, command, reinit_subcommands=0, **kw):
198 cmd = _Command.reinitialize_command(self, command, reinit_subcommands)
199 vars(cmd).update(kw)
200 return cmd
201
202
203 def _find_all_simple(path):
204 """
205 Find all files under 'path'
206 """
207 results = (
208 os.path.join(base, file)
209 for base, dirs, files in os.walk(path, followlinks=True)
210 for file in files
211 )
212 return filter(os.path.isfile, results)
213
214
215 def findall(dir=os.curdir):
216 """
217 Find all files under 'dir' and return the list of full filenames.
218 Unless dir is '.', return full filenames with dir prepended.
219 """
220 files = _find_all_simple(dir)
221 if dir == os.curdir:
222 make_rel = functools.partial(os.path.relpath, start=dir)
223 files = map(make_rel, files)
224 return list(files)
225
226
227 # Apply monkey patches
228 monkey.patch_all()
229
[end of setuptools/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/__init__.py b/setuptools/__init__.py
--- a/setuptools/__init__.py
+++ b/setuptools/__init__.py
@@ -129,10 +129,27 @@
def _install_setup_requires(attrs):
# Note: do not use `setuptools.Distribution` directly, as
# our PEP 517 backend patch `distutils.core.Distribution`.
- dist = distutils.core.Distribution(dict(
- (k, v) for k, v in attrs.items()
- if k in ('dependency_links', 'setup_requires')
- ))
+ class MinimalDistribution(distutils.core.Distribution):
+ """
+ A minimal version of a distribution for supporting the
+ fetch_build_eggs interface.
+ """
+ def __init__(self, attrs):
+ _incl = 'dependency_links', 'setup_requires'
+ filtered = {
+ k: attrs[k]
+ for k in set(_incl) & set(attrs)
+ }
+ distutils.core.Distribution.__init__(self, filtered)
+
+ def finalize_options(self):
+ """
+ Disable finalize_options to avoid building the working set.
+ Ref #2158.
+ """
+
+ dist = MinimalDistribution(attrs)
+
# Honor setup.cfg's options.
dist.parse_config_files(ignore_option_errors=True)
if dist.setup_requires:
| {"golden_diff": "diff --git a/setuptools/__init__.py b/setuptools/__init__.py\n--- a/setuptools/__init__.py\n+++ b/setuptools/__init__.py\n@@ -129,10 +129,27 @@\n def _install_setup_requires(attrs):\n # Note: do not use `setuptools.Distribution` directly, as\n # our PEP 517 backend patch `distutils.core.Distribution`.\n- dist = distutils.core.Distribution(dict(\n- (k, v) for k, v in attrs.items()\n- if k in ('dependency_links', 'setup_requires')\n- ))\n+ class MinimalDistribution(distutils.core.Distribution):\n+ \"\"\"\n+ A minimal version of a distribution for supporting the\n+ fetch_build_eggs interface.\n+ \"\"\"\n+ def __init__(self, attrs):\n+ _incl = 'dependency_links', 'setup_requires'\n+ filtered = {\n+ k: attrs[k]\n+ for k in set(_incl) & set(attrs)\n+ }\n+ distutils.core.Distribution.__init__(self, filtered)\n+\n+ def finalize_options(self):\n+ \"\"\"\n+ Disable finalize_options to avoid building the working set.\n+ Ref #2158.\n+ \"\"\"\n+\n+ dist = MinimalDistribution(attrs)\n+\n # Honor setup.cfg's options.\n dist.parse_config_files(ignore_option_errors=True)\n if dist.setup_requires:\n", "issue": "Plugins using setuptools entry points resolve to system and not setup_requires\nAssume that you have e.g. setuptools_scm 3 installed system-wide, and try to to run the following setup.py:\r\n```python\r\nfrom setuptools import setup\r\nsetup(name=\"foo\", setup_requires=[\"setuptools_scm>=4\"])\r\n```\r\nyou get the warning\r\n```\r\n/usr/lib/python3.8/site-packages/setuptools/dist.py:701: UserWarning: Module setuptools_scm was already imported from /usr/lib/python3.8/site-packages/setuptools_scm/__init__.py, but /tmp/foo/.eggs/setuptools_scm-4.1.1-py3.8.egg is being added to sys.path\r\n```\r\n(i.e. the system-wide setuptools_scm was loaded, even though it is too old). One can confirm that the \"correct\" newer setuptools_scm indeed doesn't get used by changing setup.py to\r\n```python\r\nfrom setuptools import setup\r\nsetup(name=\"foo\",\r\n setup_requires=[\"setuptools_scm>=4\"],\r\n use_scm_version={\"parentdir_prefix_version\": \"foo-\"})\r\n```\r\nin which case one gets the same warning and then a traceback\r\n```\r\nTypeError: __init__() got an unexpected keyword argument 'parentdir_prefix_version'\r\n```\r\n\r\nI guess a solution may be e.g. to move the entry point (which is what gets eagerly loaded by setuptools) to a separate package (let's say `_setuptools_scm_helper`) which itself imports setuptools_scm only when actually needed with a pass-through shim so that setuptools only imports `_setuptools_scm_helper` and then the correct version of setuptools_scm can be loaded when needed.\nPlugins using setuptools entry points resolve to system and not setup_requires\nAssume that you have e.g. setuptools_scm 3 installed system-wide, and try to to run the following setup.py:\r\n```python\r\nfrom setuptools import setup\r\nsetup(name=\"foo\", setup_requires=[\"setuptools_scm>=4\"])\r\n```\r\nyou get the warning\r\n```\r\n/usr/lib/python3.8/site-packages/setuptools/dist.py:701: UserWarning: Module setuptools_scm was already imported from /usr/lib/python3.8/site-packages/setuptools_scm/__init__.py, but /tmp/foo/.eggs/setuptools_scm-4.1.1-py3.8.egg is being added to sys.path\r\n```\r\n(i.e. the system-wide setuptools_scm was loaded, even though it is too old). 
One can confirm that the \"correct\" newer setuptools_scm indeed doesn't get used by changing setup.py to\r\n```python\r\nfrom setuptools import setup\r\nsetup(name=\"foo\",\r\n setup_requires=[\"setuptools_scm>=4\"],\r\n use_scm_version={\"parentdir_prefix_version\": \"foo-\"})\r\n```\r\nin which case one gets the same warning and then a traceback\r\n```\r\nTypeError: __init__() got an unexpected keyword argument 'parentdir_prefix_version'\r\n```\r\n\r\nI guess a solution may be e.g. to move the entry point (which is what gets eagerly loaded by setuptools) to a separate package (let's say `_setuptools_scm_helper`) which itself imports setuptools_scm only when actually needed with a pass-through shim so that setuptools only imports `_setuptools_scm_helper` and then the correct version of setuptools_scm can be loaded when needed.\n", "before_files": [{"content": "\"\"\"Extensions to the 'distutils' for large or complex distributions\"\"\"\n\nimport os\nimport sys\nimport functools\nimport distutils.core\nimport distutils.filelist\nimport re\nfrom distutils.errors import DistutilsOptionError\nfrom distutils.util import convert_path\nfrom fnmatch import fnmatchcase\n\nfrom ._deprecation_warning import SetuptoolsDeprecationWarning\n\nfrom setuptools.extern.six import PY3, string_types\nfrom setuptools.extern.six.moves import filter, map\n\nimport setuptools.version\nfrom setuptools.extension import Extension\nfrom setuptools.dist import Distribution, Feature\nfrom setuptools.depends import Require\nfrom . import monkey\n\n__metaclass__ = type\n\n\n__all__ = [\n 'setup', 'Distribution', 'Feature', 'Command', 'Extension', 'Require',\n 'SetuptoolsDeprecationWarning',\n 'find_packages'\n]\n\nif PY3:\n __all__.append('find_namespace_packages')\n\n__version__ = setuptools.version.__version__\n\nbootstrap_install_from = None\n\n# If we run 2to3 on .py files, should we also convert docstrings?\n# Default: yes; assume that we can detect doctests reliably\nrun_2to3_on_doctests = True\n# Standard package names for fixer packages\nlib2to3_fixer_packages = ['lib2to3.fixes']\n\n\nclass PackageFinder:\n \"\"\"\n Generate a list of all Python packages found within a directory\n \"\"\"\n\n @classmethod\n def find(cls, where='.', exclude=(), include=('*',)):\n \"\"\"Return a list all Python packages found within directory 'where'\n\n 'where' is the root directory which will be searched for packages. It\n should be supplied as a \"cross-platform\" (i.e. URL-style) path; it will\n be converted to the appropriate local path syntax.\n\n 'exclude' is a sequence of package names to exclude; '*' can be used\n as a wildcard in the names, such that 'foo.*' will exclude all\n subpackages of 'foo' (but not 'foo' itself).\n\n 'include' is a sequence of package names to include. If it's\n specified, only the named packages will be included. If it's not\n specified, all found packages will be included. 
'include' can contain\n shell style wildcard patterns just like 'exclude'.\n \"\"\"\n\n return list(cls._find_packages_iter(\n convert_path(where),\n cls._build_filter('ez_setup', '*__pycache__', *exclude),\n cls._build_filter(*include)))\n\n @classmethod\n def _find_packages_iter(cls, where, exclude, include):\n \"\"\"\n All the packages found in 'where' that pass the 'include' filter, but\n not the 'exclude' filter.\n \"\"\"\n for root, dirs, files in os.walk(where, followlinks=True):\n # Copy dirs to iterate over it, then empty dirs.\n all_dirs = dirs[:]\n dirs[:] = []\n\n for dir in all_dirs:\n full_path = os.path.join(root, dir)\n rel_path = os.path.relpath(full_path, where)\n package = rel_path.replace(os.path.sep, '.')\n\n # Skip directory trees that are not valid packages\n if ('.' in dir or not cls._looks_like_package(full_path)):\n continue\n\n # Should this package be included?\n if include(package) and not exclude(package):\n yield package\n\n # Keep searching subdirectories, as there may be more packages\n # down there, even if the parent was excluded.\n dirs.append(dir)\n\n @staticmethod\n def _looks_like_package(path):\n \"\"\"Does a directory look like a package?\"\"\"\n return os.path.isfile(os.path.join(path, '__init__.py'))\n\n @staticmethod\n def _build_filter(*patterns):\n \"\"\"\n Given a list of patterns, return a callable that will be true only if\n the input matches at least one of the patterns.\n \"\"\"\n return lambda name: any(fnmatchcase(name, pat=pat) for pat in patterns)\n\n\nclass PEP420PackageFinder(PackageFinder):\n @staticmethod\n def _looks_like_package(path):\n return True\n\n\nfind_packages = PackageFinder.find\n\nif PY3:\n find_namespace_packages = PEP420PackageFinder.find\n\n\ndef _install_setup_requires(attrs):\n # Note: do not use `setuptools.Distribution` directly, as\n # our PEP 517 backend patch `distutils.core.Distribution`.\n dist = distutils.core.Distribution(dict(\n (k, v) for k, v in attrs.items()\n if k in ('dependency_links', 'setup_requires')\n ))\n # Honor setup.cfg's options.\n dist.parse_config_files(ignore_option_errors=True)\n if dist.setup_requires:\n dist.fetch_build_eggs(dist.setup_requires)\n\n\ndef setup(**attrs):\n # Make sure we have any requirements needed to interpret 'attrs'.\n _install_setup_requires(attrs)\n return distutils.core.setup(**attrs)\n\nsetup.__doc__ = distutils.core.setup.__doc__\n\n\n_Command = monkey.get_unpatched(distutils.core.Command)\n\n\nclass Command(_Command):\n __doc__ = _Command.__doc__\n\n command_consumes_arguments = False\n\n def __init__(self, dist, **kw):\n \"\"\"\n Construct the command for dist, updating\n vars(self) with any keyword parameters.\n \"\"\"\n _Command.__init__(self, dist)\n vars(self).update(kw)\n\n def _ensure_stringlike(self, option, what, default=None):\n val = getattr(self, option)\n if val is None:\n setattr(self, option, default)\n return default\n elif not isinstance(val, string_types):\n raise DistutilsOptionError(\"'%s' must be a %s (got `%s`)\"\n % (option, what, val))\n return val\n\n def ensure_string_list(self, option):\n r\"\"\"Ensure that 'option' is a list of strings. 
If 'option' is\n currently a string, we split it either on /,\\s*/ or /\\s+/, so\n \"foo bar baz\", \"foo,bar,baz\", and \"foo, bar baz\" all become\n [\"foo\", \"bar\", \"baz\"].\n \"\"\"\n val = getattr(self, option)\n if val is None:\n return\n elif isinstance(val, string_types):\n setattr(self, option, re.split(r',\\s*|\\s+', val))\n else:\n if isinstance(val, list):\n ok = all(isinstance(v, string_types) for v in val)\n else:\n ok = False\n if not ok:\n raise DistutilsOptionError(\n \"'%s' must be a list of strings (got %r)\"\n % (option, val))\n\n def reinitialize_command(self, command, reinit_subcommands=0, **kw):\n cmd = _Command.reinitialize_command(self, command, reinit_subcommands)\n vars(cmd).update(kw)\n return cmd\n\n\ndef _find_all_simple(path):\n \"\"\"\n Find all files under 'path'\n \"\"\"\n results = (\n os.path.join(base, file)\n for base, dirs, files in os.walk(path, followlinks=True)\n for file in files\n )\n return filter(os.path.isfile, results)\n\n\ndef findall(dir=os.curdir):\n \"\"\"\n Find all files under 'dir' and return the list of full filenames.\n Unless dir is '.', return full filenames with dir prepended.\n \"\"\"\n files = _find_all_simple(dir)\n if dir == os.curdir:\n make_rel = functools.partial(os.path.relpath, start=dir)\n files = map(make_rel, files)\n return list(files)\n\n\n# Apply monkey patches\nmonkey.patch_all()\n", "path": "setuptools/__init__.py"}]} | 3,508 | 307 |
gh_patches_debug_7091 | rasdani/github-patches | git_diff | mkdocs__mkdocs-397 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NoneType object is not callable
I'm not sure if this is my own stupidity, related to #283, or something I've ended up doing myself by accident.
I'm working on developing a mkdocs theme, and `mkdocs build` and `mkdocs build --clean` both run without any issues.
However, if I try to run `mkdocs serve`, then I get hit by an error.
```
Traceback (most recent call last):
File "/home/pi/.virtualenvs/face/bin/mkdocs", line 9, in <module>
load_entry_point('mkdocs==0.11.1', 'console_scripts', 'mkdocs')()
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/main.py", line 60, in run_main
main(cmd, args=sys.argv[2:], options=dict(opts))
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/main.py", line 33, in main
serve(config, options=options)
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/serve.py", line 96, in serve
observer.start()
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/api.py", line 255, in start
emitter.start()
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/utils/__init__.py", line 111, in start
self.on_thread_start()
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py", line 77, in on_thread_start
self._snapshot = self._take_snapshot()
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py", line 74, in <lambda>
self.watch.path, self.watch.is_recursive, stat=stat, listdir=listdir)
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/utils/dirsnapshot.py", line 207, in __init__
st = stat(path)
OSError: [Errno 2] No such file or directory: '/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/themes/face'
Exception in thread Thread-3 (most likely raised during interpreter shutdown):
Traceback (most recent call last):
File "/usr/lib/python2.7/threading.py", line 552, in __bootstrap_inner
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/api.py", line 146, in run
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py", line 91, in queue_events
File "/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py", line 74, in <lambda>
<type 'exceptions.TypeError'>: 'NoneType' object is not callable
```
My `mkdocs.yml` file is simple, and I don't think it is the actual cause, but I may well have misconfigured something myself:
```
site_name: Face Theme
theme_dir: face
theme: face
use_directory_urls: true
repo_url: https://bitbucket.org/shakna-israel/face
dev_addr: 0.0.0.0:8000
```
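
A minimal sketch of a guard that would avoid the crash, reusing the names from the `mkdocs/serve.py` listing below, is to skip any configured theme directory that does not exist on disk before handing it to the watchdog observer (an illustration of the idea, not necessarily the project's final fix):

```python
import os

# Only watch theme directories that actually exist on disk; a missing
# directory otherwise makes the polling emitter raise OSError when it
# takes its first snapshot, as in the traceback above.
for theme_dir in config['theme_dir']:
    if not os.path.exists(theme_dir):
        continue
    observer.schedule(event_handler, theme_dir, recursive=True)
```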
</issue>
<code>
[start of mkdocs/serve.py]
1 # coding: utf-8
2 from __future__ import print_function
3
4 from watchdog import events
5 from watchdog.observers.polling import PollingObserver
6 from mkdocs.build import build
7 from mkdocs.compat import httpserver, socketserver, urlunquote
8 from mkdocs.config import load_config
9 import os
10 import posixpath
11 import shutil
12 import sys
13 import tempfile
14
15
16 class BuildEventHandler(events.FileSystemEventHandler):
17 """
18 Perform a rebuild when anything in the theme or docs directory changes.
19 """
20 def __init__(self, options):
21 super(BuildEventHandler, self).__init__()
22 self.options = options
23
24 def on_any_event(self, event):
25 if not isinstance(event, events.DirModifiedEvent):
26 print('Rebuilding documentation...', end='')
27 config = load_config(options=self.options)
28 build(config, live_server=True)
29 print(' done')
30
31
32 class ConfigEventHandler(BuildEventHandler):
33 """
34 Perform a rebuild when the config file changes.
35 """
36 def on_any_event(self, event):
37 try:
38 if os.path.basename(event.src_path) == 'mkdocs.yml':
39 super(ConfigEventHandler, self).on_any_event(event)
40 except Exception as e:
41 print(e)
42
43
44 class FixedDirectoryHandler(httpserver.SimpleHTTPRequestHandler):
45 """
46 Override the default implementation to allow us to specify the served
47 directory, instead of being hardwired to the current working directory.
48 """
49 base_dir = os.getcwd()
50
51 def translate_path(self, path):
52 # abandon query parameters
53 path = path.split('?', 1)[0]
54 path = path.split('#', 1)[0]
55 path = posixpath.normpath(urlunquote(path))
56 words = path.split('/')
57 words = filter(None, words)
58 path = self.base_dir
59 for word in words:
60 drive, word = os.path.splitdrive(word)
61 head, word = os.path.split(word)
62 if word in (os.curdir, os.pardir):
63 continue
64 path = os.path.join(path, word)
65 return path
66
67 def log_message(self, format, *args):
68 date_str = self.log_date_time_string()
69 sys.stderr.write('[%s] %s\n' % (date_str, format % args))
70
71
72 def serve(config, options=None):
73 """
74 Start the devserver, and rebuild the docs whenever any changes take effect.
75 """
76 # Create a temporary build directory, and set some options to serve it
77 tempdir = tempfile.mkdtemp()
78 options['site_dir'] = tempdir
79
80 # Only use user-friendly URLs when running the live server
81 options['use_directory_urls'] = True
82
83 # Perform the initial build
84 config = load_config(options=options)
85 build(config, live_server=True)
86
87 # Note: We pass any command-line options through so that we
88 # can re-apply them if the config file is reloaded.
89 event_handler = BuildEventHandler(options)
90 config_event_handler = ConfigEventHandler(options)
91
92 # We could have used `Observer()`, which can be faster, but
93 # `PollingObserver()` works more universally.
94 observer = PollingObserver()
95 observer.schedule(event_handler, config['docs_dir'], recursive=True)
96 for theme_dir in config['theme_dir']:
97 observer.schedule(event_handler, theme_dir, recursive=True)
98 observer.schedule(config_event_handler, '.')
99 observer.start()
100
101 class TCPServer(socketserver.TCPServer):
102 allow_reuse_address = True
103
104 class DocsDirectoryHandler(FixedDirectoryHandler):
105 base_dir = config['site_dir']
106
107 host, port = config['dev_addr'].split(':', 1)
108 server = TCPServer((host, int(port)), DocsDirectoryHandler)
109
110 print('Running at: http://%s:%s/' % (host, port))
111 print('Live reload enabled.')
112 print('Hold ctrl+c to quit.')
113 try:
114 server.serve_forever()
115 except KeyboardInterrupt:
116 print('Stopping server...')
117
118 # Clean up
119 observer.stop()
120 observer.join()
121 shutil.rmtree(tempdir)
122 print('Quit complete')
123
[end of mkdocs/serve.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/serve.py b/mkdocs/serve.py
--- a/mkdocs/serve.py
+++ b/mkdocs/serve.py
@@ -94,6 +94,8 @@
observer = PollingObserver()
observer.schedule(event_handler, config['docs_dir'], recursive=True)
for theme_dir in config['theme_dir']:
+ if not os.path.exists(theme_dir):
+ continue
observer.schedule(event_handler, theme_dir, recursive=True)
observer.schedule(config_event_handler, '.')
observer.start()
| {"golden_diff": "diff --git a/mkdocs/serve.py b/mkdocs/serve.py\n--- a/mkdocs/serve.py\n+++ b/mkdocs/serve.py\n@@ -94,6 +94,8 @@\n observer = PollingObserver()\n observer.schedule(event_handler, config['docs_dir'], recursive=True)\n for theme_dir in config['theme_dir']:\n+ if not os.path.exists(theme_dir):\n+ continue\n observer.schedule(event_handler, theme_dir, recursive=True)\n observer.schedule(config_event_handler, '.')\n observer.start()\n", "issue": "NoneType object is not callable\nI'm not sure if this is my own stupidity, related to #283, or something I've ended up doing myself by accident.\n\nI'm working on developing a mkdocs theme, and `mkdocs build` and `mkdocs build --clean` both run without any issues.\n\nHowever, if I try and `mkdocs serve` then I get hit by an error.\n\n```\nTraceback (most recent call last):\n File \"/home/pi/.virtualenvs/face/bin/mkdocs\", line 9, in <module>\n load_entry_point('mkdocs==0.11.1', 'console_scripts', 'mkdocs')()\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/main.py\", line 60, in run_main\n main(cmd, args=sys.argv[2:], options=dict(opts))\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/main.py\", line 33, in main\n serve(config, options=options)\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/serve.py\", line 96, in serve\n observer.start()\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/api.py\", line 255, in start\n emitter.start()\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/utils/__init__.py\", line 111, in start\n self.on_thread_start()\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py\", line 77, in on_thread_start\n self._snapshot = self._take_snapshot()\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py\", line 74, in <lambda>\n self.watch.path, self.watch.is_recursive, stat=stat, listdir=listdir)\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/utils/dirsnapshot.py\", line 207, in __init__\n st = stat(path)\nOSError: [Errno 2] No such file or directory: '/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/mkdocs/themes/face'\nException in thread Thread-3 (most likely raised during interpreter shutdown):\nTraceback (most recent call last):\n File \"/usr/lib/python2.7/threading.py\", line 552, in __bootstrap_inner\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/api.py\", line 146, in run\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py\", line 91, in queue_events\n File \"/home/pi/.virtualenvs/face/local/lib/python2.7/site-packages/watchdog/observers/polling.py\", line 74, in <lambda>\n<type 'exceptions.TypeError'>: 'NoneType' object is not callable\n```\n\nMy `mkdocs.yml` file is simple, and I don't think it is the actual cause, but it might be in my own inept way:\n\n```\nsite_name: Face Theme\ntheme_dir: face\ntheme: face\nuse_directory_urls: true\nrepo_url: https://bitbucket.org/shakna-israel/face\ndev_addr: 0.0.0.0:8000\n```\n\n", "before_files": [{"content": "# coding: utf-8\nfrom __future__ import print_function\n\nfrom watchdog import events\nfrom watchdog.observers.polling import PollingObserver\nfrom mkdocs.build import build\nfrom mkdocs.compat import httpserver, socketserver, urlunquote\nfrom mkdocs.config import load_config\nimport 
os\nimport posixpath\nimport shutil\nimport sys\nimport tempfile\n\n\nclass BuildEventHandler(events.FileSystemEventHandler):\n \"\"\"\n Perform a rebuild when anything in the theme or docs directory changes.\n \"\"\"\n def __init__(self, options):\n super(BuildEventHandler, self).__init__()\n self.options = options\n\n def on_any_event(self, event):\n if not isinstance(event, events.DirModifiedEvent):\n print('Rebuilding documentation...', end='')\n config = load_config(options=self.options)\n build(config, live_server=True)\n print(' done')\n\n\nclass ConfigEventHandler(BuildEventHandler):\n \"\"\"\n Perform a rebuild when the config file changes.\n \"\"\"\n def on_any_event(self, event):\n try:\n if os.path.basename(event.src_path) == 'mkdocs.yml':\n super(ConfigEventHandler, self).on_any_event(event)\n except Exception as e:\n print(e)\n\n\nclass FixedDirectoryHandler(httpserver.SimpleHTTPRequestHandler):\n \"\"\"\n Override the default implementation to allow us to specify the served\n directory, instead of being hardwired to the current working directory.\n \"\"\"\n base_dir = os.getcwd()\n\n def translate_path(self, path):\n # abandon query parameters\n path = path.split('?', 1)[0]\n path = path.split('#', 1)[0]\n path = posixpath.normpath(urlunquote(path))\n words = path.split('/')\n words = filter(None, words)\n path = self.base_dir\n for word in words:\n drive, word = os.path.splitdrive(word)\n head, word = os.path.split(word)\n if word in (os.curdir, os.pardir):\n continue\n path = os.path.join(path, word)\n return path\n\n def log_message(self, format, *args):\n date_str = self.log_date_time_string()\n sys.stderr.write('[%s] %s\\n' % (date_str, format % args))\n\n\ndef serve(config, options=None):\n \"\"\"\n Start the devserver, and rebuild the docs whenever any changes take effect.\n \"\"\"\n # Create a temporary build directory, and set some options to serve it\n tempdir = tempfile.mkdtemp()\n options['site_dir'] = tempdir\n\n # Only use user-friendly URLs when running the live server\n options['use_directory_urls'] = True\n\n # Perform the initial build\n config = load_config(options=options)\n build(config, live_server=True)\n\n # Note: We pass any command-line options through so that we\n # can re-apply them if the config file is reloaded.\n event_handler = BuildEventHandler(options)\n config_event_handler = ConfigEventHandler(options)\n\n # We could have used `Observer()`, which can be faster, but\n # `PollingObserver()` works more universally.\n observer = PollingObserver()\n observer.schedule(event_handler, config['docs_dir'], recursive=True)\n for theme_dir in config['theme_dir']:\n observer.schedule(event_handler, theme_dir, recursive=True)\n observer.schedule(config_event_handler, '.')\n observer.start()\n\n class TCPServer(socketserver.TCPServer):\n allow_reuse_address = True\n\n class DocsDirectoryHandler(FixedDirectoryHandler):\n base_dir = config['site_dir']\n\n host, port = config['dev_addr'].split(':', 1)\n server = TCPServer((host, int(port)), DocsDirectoryHandler)\n\n print('Running at: http://%s:%s/' % (host, port))\n print('Live reload enabled.')\n print('Hold ctrl+c to quit.')\n try:\n server.serve_forever()\n except KeyboardInterrupt:\n print('Stopping server...')\n\n # Clean up\n observer.stop()\n observer.join()\n shutil.rmtree(tempdir)\n print('Quit complete')\n", "path": "mkdocs/serve.py"}]} | 2,478 | 118 |
gh_patches_debug_34360 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-2137 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DeprecationWarning: The distutils package is deprecated
## 🐛 Bug
TorchMetrics still uses distutils, but distutils is deprecated.
### To Reproduce
```console
> python3 -Wall
>>> import torchmetrics
lib/python3.11/site-packages/torchmetrics/utilities/imports.py:18: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives
from distutils.version import LooseVersion
```
### Expected behavior
I would not expect TorchMetrics to still be using distutils.
### Environment
- TorchMetrics version: 1.1, 1.2, master (spack)
- Python & PyTorch Version: 3.11.4 & 2.0.1
- Any other relevant information such as OS: macOS 13.5.2
### Additional context
I also sometimes see a different error message from pytest:
```
DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
```
This gives one possible solution.
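
A minimal sketch of that replacement, mirroring the Python-version check in `src/torchmetrics/utilities/imports.py` shown below, swaps `distutils.version.LooseVersion` for `packaging.version` (an illustration only; maintainers may prefer a different spelling):

```python
import sys

from packaging.version import Version, parse

# Same flag as today, but built on packaging.version, the documented
# replacement for the deprecated distutils Version classes.
_PYTHON_VERSION = ".".join(map(str, sys.version_info[:3]))
_PYTHON_LOWER_3_8 = parse(_PYTHON_VERSION) < Version("3.8")
```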
</issue>
<code>
[start of src/torchmetrics/utilities/imports.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 """Import utilities."""
15 import operator
16 import shutil
17 import sys
18 from distutils.version import LooseVersion
19 from typing import Optional
20
21 from lightning_utilities.core.imports import compare_version, package_available
22
23 _PYTHON_VERSION = ".".join(map(str, [sys.version_info.major, sys.version_info.minor, sys.version_info.micro]))
24 _PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion("3.8")
25 _TORCH_LOWER_1_12_DEV: Optional[bool] = compare_version("torch", operator.lt, "1.12.0.dev")
26 _TORCH_GREATER_EQUAL_1_9: Optional[bool] = compare_version("torch", operator.ge, "1.9.0")
27 _TORCH_GREATER_EQUAL_1_10: Optional[bool] = compare_version("torch", operator.ge, "1.10.0")
28 _TORCH_GREATER_EQUAL_1_11: Optional[bool] = compare_version("torch", operator.ge, "1.11.0")
29 _TORCH_GREATER_EQUAL_1_12: Optional[bool] = compare_version("torch", operator.ge, "1.12.0")
30 _TORCH_GREATER_EQUAL_1_13: Optional[bool] = compare_version("torch", operator.ge, "1.13.0")
31
32 _JIWER_AVAILABLE: bool = package_available("jiwer")
33 _NLTK_AVAILABLE: bool = package_available("nltk")
34 _ROUGE_SCORE_AVAILABLE: bool = package_available("rouge_score")
35 _BERTSCORE_AVAILABLE: bool = package_available("bert_score")
36 _SCIPY_AVAILABLE: bool = package_available("scipy")
37 _TORCH_FIDELITY_AVAILABLE: bool = package_available("torch_fidelity")
38 _LPIPS_AVAILABLE: bool = package_available("lpips")
39 _PYCOCOTOOLS_AVAILABLE: bool = package_available("pycocotools")
40 _TORCHVISION_AVAILABLE: bool = package_available("torchvision")
41 _TORCHVISION_GREATER_EQUAL_0_8: Optional[bool] = compare_version("torchvision", operator.ge, "0.8.0")
42 _TORCHVISION_GREATER_EQUAL_0_13: Optional[bool] = compare_version("torchvision", operator.ge, "0.13.0")
43 _TQDM_AVAILABLE: bool = package_available("tqdm")
44 _TRANSFORMERS_AVAILABLE: bool = package_available("transformers")
45 _TRANSFORMERS_GREATER_EQUAL_4_4: Optional[bool] = compare_version("transformers", operator.ge, "4.4.0")
46 _TRANSFORMERS_GREATER_EQUAL_4_10: Optional[bool] = compare_version("transformers", operator.ge, "4.10.0")
47 _PESQ_AVAILABLE: bool = package_available("pesq")
48 _GAMMATONE_AVAILABLE: bool = package_available("gammatone")
49 _TORCHAUDIO_AVAILABLE: bool = package_available("torchaudio")
50 _TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0")
51 _SACREBLEU_AVAILABLE: bool = package_available("sacrebleu")
52 _REGEX_AVAILABLE: bool = package_available("regex")
53 _PYSTOI_AVAILABLE: bool = package_available("pystoi")
54 _FAST_BSS_EVAL_AVAILABLE: bool = package_available("fast_bss_eval")
55 _MATPLOTLIB_AVAILABLE: bool = package_available("matplotlib")
56 _SCIENCEPLOT_AVAILABLE: bool = package_available("scienceplots")
57 _MULTIPROCESSING_AVAILABLE: bool = package_available("multiprocessing")
58 _XLA_AVAILABLE: bool = package_available("torch_xla")
59 _PIQ_GREATER_EQUAL_0_8: Optional[bool] = compare_version("piq", operator.ge, "0.8.0")
60 _FASTER_COCO_EVAL_AVAILABLE: bool = package_available("faster_coco_eval")
61
62 _LATEX_AVAILABLE: bool = shutil.which("latex") is not None
63
[end of src/torchmetrics/utilities/imports.py]
[start of .github/assistant.py]
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 import glob
15 import json
16 import logging
17 import os
18 import re
19 import sys
20 import traceback
21 from distutils.version import LooseVersion
22 from typing import List, Optional, Tuple, Union
23
24 import fire
25 import requests
26 from pkg_resources import parse_requirements
27
28 _REQUEST_TIMEOUT = 10
29 _PATH_ROOT = os.path.dirname(os.path.dirname(__file__))
30 _PKG_WIDE_SUBPACKAGES = ("utilities", "helpers")
31 LUT_PYTHON_TORCH = {
32 "3.8": "1.4",
33 "3.9": "1.7.1",
34 "3.10": "1.11",
35 "3.11": "1.13",
36 }
37 _path = lambda *ds: os.path.join(_PATH_ROOT, *ds)
38 REQUIREMENTS_FILES = (*glob.glob(_path("requirements", "*.txt")), _path("requirements.txt"))
39
40
41 def request_url(url: str, auth_token: Optional[str] = None) -> Optional[dict]:
42 """General request with checking if request limit was reached."""
43 auth_header = {"Authorization": f"token {auth_token}"} if auth_token else {}
44 try:
45 req = requests.get(url, headers=auth_header, timeout=_REQUEST_TIMEOUT)
46 except requests.exceptions.Timeout:
47 traceback.print_exc()
48 return None
49 if req.status_code == 403:
50 return None
51 return json.loads(req.content.decode(req.encoding))
52
53
54 class AssistantCLI:
55 """CLI assistant for local CI."""
56
57 @staticmethod
58 def prune_packages(req_file: str, *pkgs: str) -> None:
59 """Prune packages from requirement file."""
60 with open(req_file) as fp:
61 lines = fp.readlines()
62
63 for pkg in pkgs:
64 lines = [ln for ln in lines if not ln.startswith(pkg)]
65 logging.info(lines)
66
67 with open(req_file, "w", encoding="utf-8") as fp:
68 fp.writelines(lines)
69
70 @staticmethod
71 def set_min_torch_by_python(fpath: str = "requirements/base.txt") -> None:
72 """Set minimal torch version according to Python actual version.
73
74 >>> AssistantCLI.set_min_torch_by_python("../requirements/base.txt")
75
76 """
77 py_ver = f"{sys.version_info.major}.{sys.version_info.minor}"
78 if py_ver not in LUT_PYTHON_TORCH:
79 return
80 with open(fpath) as fp:
81 reqs = parse_requirements(fp.readlines())
82 pkg_ver = next(p for p in reqs if p.name == "torch")
83 pt_ver = min([LooseVersion(v[1]) for v in pkg_ver.specs])
84 pt_ver = max(LooseVersion(LUT_PYTHON_TORCH[py_ver]), pt_ver)
85 with open(fpath) as fp:
86 requires = fp.read()
87 requires = re.sub(r"torch>=[\d\.]+", f"torch>={pt_ver}", requires)
88 with open(fpath, "w", encoding="utf-8") as fp:
89 fp.write(requires)
90
91 @staticmethod
92 def replace_min_requirements(fpath: str) -> None:
93 """Replace all `>=` by `==` in given file."""
94 logging.info(f"processing: {fpath}")
95 with open(fpath, encoding="utf-8") as fp:
96 req = fp.read()
97 req = req.replace(">=", "==")
98 with open(fpath, "w", encoding="utf-8") as fp:
99 fp.write(req)
100
101 @staticmethod
102 def set_oldest_versions(req_files: List[str] = REQUIREMENTS_FILES) -> None:
103 """Set the oldest version for requirements."""
104 AssistantCLI.set_min_torch_by_python()
105 for fpath in req_files:
106 AssistantCLI.replace_min_requirements(fpath)
107
108 @staticmethod
109 def changed_domains(
110 pr: Optional[int] = None,
111 auth_token: Optional[str] = None,
112 as_list: bool = False,
113 general_sub_pkgs: Tuple[str] = _PKG_WIDE_SUBPACKAGES,
114 ) -> Union[str, List[str]]:
115 """Determine what domains were changed in particular PR."""
116 if not pr:
117 return "unittests"
118 url = f"https://api.github.com/repos/Lightning-AI/torchmetrics/pulls/{pr}/files"
119 logging.debug(url)
120 data = request_url(url, auth_token)
121 if not data:
122 logging.debug("WARNING: No data was received -> test everything.")
123 return "unittests"
124 files = [d["filename"] for d in data]
125
126 # filter out all integrations as they run in separate suit
127 files = [fn for fn in files if not fn.startswith("tests/integrations")]
128 if not files:
129 logging.debug("Only integrations was changed so not reason for deep testing...")
130 return ""
131 # filter only docs files
132 files_ = [fn for fn in files if fn.startswith("docs")]
133 if len(files) == len(files_):
134 logging.debug("Only docs was changed so not reason for deep testing...")
135 return ""
136
137 # filter only package files and skip inits
138 _is_in_test = lambda fn: fn.startswith("tests")
139 _filter_pkg = lambda fn: _is_in_test(fn) or (fn.startswith("src/torchmetrics") and "__init__.py" not in fn)
140 files_pkg = [fn for fn in files if _filter_pkg(fn)]
141 if not files_pkg:
142 return "unittests"
143
144 # parse domains
145 def _crop_path(fname: str, paths: List[str]) -> str:
146 for p in paths:
147 fname = fname.replace(p, "")
148 return fname
149
150 files_pkg = [_crop_path(fn, ["src/torchmetrics/", "tests/unittests/", "functional/"]) for fn in files_pkg]
151 # filter domain names
152 tm_modules = [fn.split("/")[0] for fn in files_pkg if "/" in fn]
153 # filter general (used everywhere) sub-packages
154 tm_modules = [md for md in tm_modules if md not in general_sub_pkgs]
155 if len(files_pkg) > len(tm_modules):
156 logging.debug("Some more files was changed -> rather test everything...")
157 return "unittests"
158 # keep only unique
159 if as_list:
160 return list(tm_modules)
161 tm_modules = [f"unittests/{md}" for md in set(tm_modules)]
162 not_exists = [p for p in tm_modules if os.path.exists(p)]
163 if not_exists:
164 raise ValueError(f"Missing following paths: {not_exists}")
165 return " ".join(tm_modules)
166
167
168 if __name__ == "__main__":
169 logging.basicConfig(level=logging.INFO)
170 fire.Fire(AssistantCLI)
171
[end of .github/assistant.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.github/assistant.py b/.github/assistant.py
--- a/.github/assistant.py
+++ b/.github/assistant.py
@@ -18,11 +18,11 @@
import re
import sys
import traceback
-from distutils.version import LooseVersion
from typing import List, Optional, Tuple, Union
import fire
import requests
+from packaging.version import parse
from pkg_resources import parse_requirements
_REQUEST_TIMEOUT = 10
@@ -80,8 +80,8 @@
with open(fpath) as fp:
reqs = parse_requirements(fp.readlines())
pkg_ver = next(p for p in reqs if p.name == "torch")
- pt_ver = min([LooseVersion(v[1]) for v in pkg_ver.specs])
- pt_ver = max(LooseVersion(LUT_PYTHON_TORCH[py_ver]), pt_ver)
+ pt_ver = min([parse(v[1]) for v in pkg_ver.specs])
+ pt_ver = max(parse(LUT_PYTHON_TORCH[py_ver]), pt_ver)
with open(fpath) as fp:
requires = fp.read()
requires = re.sub(r"torch>=[\d\.]+", f"torch>={pt_ver}", requires)
diff --git a/src/torchmetrics/utilities/imports.py b/src/torchmetrics/utilities/imports.py
--- a/src/torchmetrics/utilities/imports.py
+++ b/src/torchmetrics/utilities/imports.py
@@ -15,13 +15,13 @@
import operator
import shutil
import sys
-from distutils.version import LooseVersion
from typing import Optional
from lightning_utilities.core.imports import compare_version, package_available
+from packaging.version import Version, parse
_PYTHON_VERSION = ".".join(map(str, [sys.version_info.major, sys.version_info.minor, sys.version_info.micro]))
-_PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion("3.8")
+_PYTHON_LOWER_3_8 = parse(_PYTHON_VERSION) < Version("3.8")
_TORCH_LOWER_1_12_DEV: Optional[bool] = compare_version("torch", operator.lt, "1.12.0.dev")
_TORCH_GREATER_EQUAL_1_9: Optional[bool] = compare_version("torch", operator.ge, "1.9.0")
_TORCH_GREATER_EQUAL_1_10: Optional[bool] = compare_version("torch", operator.ge, "1.10.0")
| {"golden_diff": "diff --git a/.github/assistant.py b/.github/assistant.py\n--- a/.github/assistant.py\n+++ b/.github/assistant.py\n@@ -18,11 +18,11 @@\n import re\n import sys\n import traceback\n-from distutils.version import LooseVersion\n from typing import List, Optional, Tuple, Union\n \n import fire\n import requests\n+from packaging.version import parse\n from pkg_resources import parse_requirements\n \n _REQUEST_TIMEOUT = 10\n@@ -80,8 +80,8 @@\n with open(fpath) as fp:\n reqs = parse_requirements(fp.readlines())\n pkg_ver = next(p for p in reqs if p.name == \"torch\")\n- pt_ver = min([LooseVersion(v[1]) for v in pkg_ver.specs])\n- pt_ver = max(LooseVersion(LUT_PYTHON_TORCH[py_ver]), pt_ver)\n+ pt_ver = min([parse(v[1]) for v in pkg_ver.specs])\n+ pt_ver = max(parse(LUT_PYTHON_TORCH[py_ver]), pt_ver)\n with open(fpath) as fp:\n requires = fp.read()\n requires = re.sub(r\"torch>=[\\d\\.]+\", f\"torch>={pt_ver}\", requires)\ndiff --git a/src/torchmetrics/utilities/imports.py b/src/torchmetrics/utilities/imports.py\n--- a/src/torchmetrics/utilities/imports.py\n+++ b/src/torchmetrics/utilities/imports.py\n@@ -15,13 +15,13 @@\n import operator\n import shutil\n import sys\n-from distutils.version import LooseVersion\n from typing import Optional\n \n from lightning_utilities.core.imports import compare_version, package_available\n+from packaging.version import Version, parse\n \n _PYTHON_VERSION = \".\".join(map(str, [sys.version_info.major, sys.version_info.minor, sys.version_info.micro]))\n-_PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion(\"3.8\")\n+_PYTHON_LOWER_3_8 = parse(_PYTHON_VERSION) < Version(\"3.8\")\n _TORCH_LOWER_1_12_DEV: Optional[bool] = compare_version(\"torch\", operator.lt, \"1.12.0.dev\")\n _TORCH_GREATER_EQUAL_1_9: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.9.0\")\n _TORCH_GREATER_EQUAL_1_10: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.10.0\")\n", "issue": "DeprecationWarning: The distutils package is deprecated\n## \ud83d\udc1b Bug\r\n\r\nTorchMetrics still uses distutils, but distutils is deprecated.\r\n\r\n### To Reproduce\r\n\r\n```console\r\n> python3 -Wall\r\n>>> import torchmetrics\r\nlib/python3.11/site-packages/torchmetrics/utilities/imports.py:18: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives\r\n from distutils.version import LooseVersion\r\n```\r\n\r\n### Expected behavior\r\n\r\nI would not expect TorchMetrics to still be using distutils.\r\n\r\n### Environment\r\n\r\n- TorchMetrics version: 1.1, 1.2, master (spack)\r\n- Python & PyTorch Version: 3.11.4 & 2.0.1\r\n- Any other relevant information such as OS: macOS 13.5.2\r\n\r\n### Additional context\r\n\r\nI also sometimes see a different error message from pytest:\r\n```\r\nDeprecationWarning: distutils Version classes are deprecated. 
Use packaging.version instead.\r\n```\r\nThis gives one possible solution.\n", "before_files": [{"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Import utilities.\"\"\"\nimport operator\nimport shutil\nimport sys\nfrom distutils.version import LooseVersion\nfrom typing import Optional\n\nfrom lightning_utilities.core.imports import compare_version, package_available\n\n_PYTHON_VERSION = \".\".join(map(str, [sys.version_info.major, sys.version_info.minor, sys.version_info.micro]))\n_PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion(\"3.8\")\n_TORCH_LOWER_1_12_DEV: Optional[bool] = compare_version(\"torch\", operator.lt, \"1.12.0.dev\")\n_TORCH_GREATER_EQUAL_1_9: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.9.0\")\n_TORCH_GREATER_EQUAL_1_10: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.10.0\")\n_TORCH_GREATER_EQUAL_1_11: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.11.0\")\n_TORCH_GREATER_EQUAL_1_12: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.12.0\")\n_TORCH_GREATER_EQUAL_1_13: Optional[bool] = compare_version(\"torch\", operator.ge, \"1.13.0\")\n\n_JIWER_AVAILABLE: bool = package_available(\"jiwer\")\n_NLTK_AVAILABLE: bool = package_available(\"nltk\")\n_ROUGE_SCORE_AVAILABLE: bool = package_available(\"rouge_score\")\n_BERTSCORE_AVAILABLE: bool = package_available(\"bert_score\")\n_SCIPY_AVAILABLE: bool = package_available(\"scipy\")\n_TORCH_FIDELITY_AVAILABLE: bool = package_available(\"torch_fidelity\")\n_LPIPS_AVAILABLE: bool = package_available(\"lpips\")\n_PYCOCOTOOLS_AVAILABLE: bool = package_available(\"pycocotools\")\n_TORCHVISION_AVAILABLE: bool = package_available(\"torchvision\")\n_TORCHVISION_GREATER_EQUAL_0_8: Optional[bool] = compare_version(\"torchvision\", operator.ge, \"0.8.0\")\n_TORCHVISION_GREATER_EQUAL_0_13: Optional[bool] = compare_version(\"torchvision\", operator.ge, \"0.13.0\")\n_TQDM_AVAILABLE: bool = package_available(\"tqdm\")\n_TRANSFORMERS_AVAILABLE: bool = package_available(\"transformers\")\n_TRANSFORMERS_GREATER_EQUAL_4_4: Optional[bool] = compare_version(\"transformers\", operator.ge, \"4.4.0\")\n_TRANSFORMERS_GREATER_EQUAL_4_10: Optional[bool] = compare_version(\"transformers\", operator.ge, \"4.10.0\")\n_PESQ_AVAILABLE: bool = package_available(\"pesq\")\n_GAMMATONE_AVAILABLE: bool = package_available(\"gammatone\")\n_TORCHAUDIO_AVAILABLE: bool = package_available(\"torchaudio\")\n_TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version(\"torchaudio\", operator.ge, \"0.10.0\")\n_SACREBLEU_AVAILABLE: bool = package_available(\"sacrebleu\")\n_REGEX_AVAILABLE: bool = package_available(\"regex\")\n_PYSTOI_AVAILABLE: bool = package_available(\"pystoi\")\n_FAST_BSS_EVAL_AVAILABLE: bool = package_available(\"fast_bss_eval\")\n_MATPLOTLIB_AVAILABLE: bool = package_available(\"matplotlib\")\n_SCIENCEPLOT_AVAILABLE: bool = package_available(\"scienceplots\")\n_MULTIPROCESSING_AVAILABLE: bool = 
package_available(\"multiprocessing\")\n_XLA_AVAILABLE: bool = package_available(\"torch_xla\")\n_PIQ_GREATER_EQUAL_0_8: Optional[bool] = compare_version(\"piq\", operator.ge, \"0.8.0\")\n_FASTER_COCO_EVAL_AVAILABLE: bool = package_available(\"faster_coco_eval\")\n\n_LATEX_AVAILABLE: bool = shutil.which(\"latex\") is not None\n", "path": "src/torchmetrics/utilities/imports.py"}, {"content": "# Copyright The Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport glob\nimport json\nimport logging\nimport os\nimport re\nimport sys\nimport traceback\nfrom distutils.version import LooseVersion\nfrom typing import List, Optional, Tuple, Union\n\nimport fire\nimport requests\nfrom pkg_resources import parse_requirements\n\n_REQUEST_TIMEOUT = 10\n_PATH_ROOT = os.path.dirname(os.path.dirname(__file__))\n_PKG_WIDE_SUBPACKAGES = (\"utilities\", \"helpers\")\nLUT_PYTHON_TORCH = {\n \"3.8\": \"1.4\",\n \"3.9\": \"1.7.1\",\n \"3.10\": \"1.11\",\n \"3.11\": \"1.13\",\n}\n_path = lambda *ds: os.path.join(_PATH_ROOT, *ds)\nREQUIREMENTS_FILES = (*glob.glob(_path(\"requirements\", \"*.txt\")), _path(\"requirements.txt\"))\n\n\ndef request_url(url: str, auth_token: Optional[str] = None) -> Optional[dict]:\n \"\"\"General request with checking if request limit was reached.\"\"\"\n auth_header = {\"Authorization\": f\"token {auth_token}\"} if auth_token else {}\n try:\n req = requests.get(url, headers=auth_header, timeout=_REQUEST_TIMEOUT)\n except requests.exceptions.Timeout:\n traceback.print_exc()\n return None\n if req.status_code == 403:\n return None\n return json.loads(req.content.decode(req.encoding))\n\n\nclass AssistantCLI:\n \"\"\"CLI assistant for local CI.\"\"\"\n\n @staticmethod\n def prune_packages(req_file: str, *pkgs: str) -> None:\n \"\"\"Prune packages from requirement file.\"\"\"\n with open(req_file) as fp:\n lines = fp.readlines()\n\n for pkg in pkgs:\n lines = [ln for ln in lines if not ln.startswith(pkg)]\n logging.info(lines)\n\n with open(req_file, \"w\", encoding=\"utf-8\") as fp:\n fp.writelines(lines)\n\n @staticmethod\n def set_min_torch_by_python(fpath: str = \"requirements/base.txt\") -> None:\n \"\"\"Set minimal torch version according to Python actual version.\n\n >>> AssistantCLI.set_min_torch_by_python(\"../requirements/base.txt\")\n\n \"\"\"\n py_ver = f\"{sys.version_info.major}.{sys.version_info.minor}\"\n if py_ver not in LUT_PYTHON_TORCH:\n return\n with open(fpath) as fp:\n reqs = parse_requirements(fp.readlines())\n pkg_ver = next(p for p in reqs if p.name == \"torch\")\n pt_ver = min([LooseVersion(v[1]) for v in pkg_ver.specs])\n pt_ver = max(LooseVersion(LUT_PYTHON_TORCH[py_ver]), pt_ver)\n with open(fpath) as fp:\n requires = fp.read()\n requires = re.sub(r\"torch>=[\\d\\.]+\", f\"torch>={pt_ver}\", requires)\n with open(fpath, \"w\", encoding=\"utf-8\") as fp:\n fp.write(requires)\n\n @staticmethod\n def replace_min_requirements(fpath: str) -> None:\n \"\"\"Replace all `>=` by `==` in given file.\"\"\"\n logging.info(f\"processing: {fpath}\")\n 
with open(fpath, encoding=\"utf-8\") as fp:\n req = fp.read()\n req = req.replace(\">=\", \"==\")\n with open(fpath, \"w\", encoding=\"utf-8\") as fp:\n fp.write(req)\n\n @staticmethod\n def set_oldest_versions(req_files: List[str] = REQUIREMENTS_FILES) -> None:\n \"\"\"Set the oldest version for requirements.\"\"\"\n AssistantCLI.set_min_torch_by_python()\n for fpath in req_files:\n AssistantCLI.replace_min_requirements(fpath)\n\n @staticmethod\n def changed_domains(\n pr: Optional[int] = None,\n auth_token: Optional[str] = None,\n as_list: bool = False,\n general_sub_pkgs: Tuple[str] = _PKG_WIDE_SUBPACKAGES,\n ) -> Union[str, List[str]]:\n \"\"\"Determine what domains were changed in particular PR.\"\"\"\n if not pr:\n return \"unittests\"\n url = f\"https://api.github.com/repos/Lightning-AI/torchmetrics/pulls/{pr}/files\"\n logging.debug(url)\n data = request_url(url, auth_token)\n if not data:\n logging.debug(\"WARNING: No data was received -> test everything.\")\n return \"unittests\"\n files = [d[\"filename\"] for d in data]\n\n # filter out all integrations as they run in separate suit\n files = [fn for fn in files if not fn.startswith(\"tests/integrations\")]\n if not files:\n logging.debug(\"Only integrations was changed so not reason for deep testing...\")\n return \"\"\n # filter only docs files\n files_ = [fn for fn in files if fn.startswith(\"docs\")]\n if len(files) == len(files_):\n logging.debug(\"Only docs was changed so not reason for deep testing...\")\n return \"\"\n\n # filter only package files and skip inits\n _is_in_test = lambda fn: fn.startswith(\"tests\")\n _filter_pkg = lambda fn: _is_in_test(fn) or (fn.startswith(\"src/torchmetrics\") and \"__init__.py\" not in fn)\n files_pkg = [fn for fn in files if _filter_pkg(fn)]\n if not files_pkg:\n return \"unittests\"\n\n # parse domains\n def _crop_path(fname: str, paths: List[str]) -> str:\n for p in paths:\n fname = fname.replace(p, \"\")\n return fname\n\n files_pkg = [_crop_path(fn, [\"src/torchmetrics/\", \"tests/unittests/\", \"functional/\"]) for fn in files_pkg]\n # filter domain names\n tm_modules = [fn.split(\"/\")[0] for fn in files_pkg if \"/\" in fn]\n # filter general (used everywhere) sub-packages\n tm_modules = [md for md in tm_modules if md not in general_sub_pkgs]\n if len(files_pkg) > len(tm_modules):\n logging.debug(\"Some more files was changed -> rather test everything...\")\n return \"unittests\"\n # keep only unique\n if as_list:\n return list(tm_modules)\n tm_modules = [f\"unittests/{md}\" for md in set(tm_modules)]\n not_exists = [p for p in tm_modules if os.path.exists(p)]\n if not_exists:\n raise ValueError(f\"Missing following paths: {not_exists}\")\n return \" \".join(tm_modules)\n\n\nif __name__ == \"__main__\":\n logging.basicConfig(level=logging.INFO)\n fire.Fire(AssistantCLI)\n", "path": ".github/assistant.py"}]} | 3,872 | 549 |
gh_patches_debug_3497 | rasdani/github-patches | git_diff | ManimCommunity__manim-646 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The config value `html_favicon' has type `PosixPath'; expected `str'.
BTW when building the docs now I get
```
WARNING: The config value `html_favicon' has type `PosixPath'; expected `str'.
```
_Originally posted by @leotrs in https://github.com/ManimCommunity/manim/pull/631#issuecomment-719075737_
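
A one-line sketch of the kind of change that silences the warning in `docs/source/conf.py` is to cast the `Path` to `str` (whether the project keeps `pathlib` here or switches to a plain string literal is a maintainer call):

```python
from pathlib import Path

# Sphinx validates html_favicon as a str, so convert the Path explicitly.
html_favicon = str(Path("_static/favicon.ico"))
```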
</issue>
<code>
[start of docs/source/conf.py]
1 # Configuration file for the Sphinx documentation builder.
2 #
3 # This file only contains a selection of the most common options. For a full
4 # list see the documentation:
5 # https://www.sphinx-doc.org/en/master/usage/configuration.html
6
7 # -- Path setup --------------------------------------------------------------
8
9 # If extensions (or modules to document with autodoc) are in another directory,
10 # add these directories to sys.path here. If the directory is relative to the
11 # documentation root, use os.path.abspath to make it absolute, like shown here.
12
13 import os
14 import subprocess
15 import sys
16 from distutils.sysconfig import get_python_lib
17 from pathlib import Path
18
19 sys.path.insert(0, os.path.abspath("."))
20
21
22 if os.environ.get("READTHEDOCS") == "True":
23 site_path = get_python_lib()
24 # bindings for pangocffi, cairocffi, pangocairocffi need to be generated
25 subprocess.run(["python", "pangocffi/ffi_build.py"], cwd=site_path)
26 subprocess.run(["python", "cairocffi/ffi_build.py"], cwd=site_path)
27 subprocess.run(["python", "pangocairocffi/ffi_build.py"], cwd=site_path)
28 # we need to add ffmpeg to the path
29 ffmpeg_path = os.path.join(site_path, "imageio_ffmpeg", "binaries")
30 # the included binary is named ffmpeg-linux..., create a symlink
31 [ffmpeg_bin] = [
32 file for file in os.listdir(ffmpeg_path) if file.startswith("ffmpeg-")
33 ]
34 os.symlink(
35 os.path.join(ffmpeg_path, ffmpeg_bin), os.path.join(ffmpeg_path, "ffmpeg")
36 )
37 os.environ["PATH"] += os.pathsep + ffmpeg_path
38
39
40 # -- Project information -----------------------------------------------------
41
42 project = "Manim"
43 copyright = "2020, The Manim Community Dev Team"
44 author = "The Manim Community Dev Team"
45
46
47 # -- General configuration ---------------------------------------------------
48
49 # Add any Sphinx extension module names here, as strings. They can be
50 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
51 # ones.
52 extensions = [
53 "sphinx.ext.autodoc",
54 "recommonmark",
55 "sphinx_copybutton",
56 "sphinx.ext.napoleon",
57 "sphinx.ext.autosummary",
58 "sphinx.ext.doctest",
59 "manim_directive",
60 ]
61
62 # Automatically generate stub pages when using the .. autosummary directive
63 autosummary_generate = True
64
65 # Add any paths that contain templates here, relative to this directory.
66 templates_path = ["_templates"]
67
68 # List of patterns, relative to source directory, that match files and
69 # directories to ignore when looking for source files.
70 # This pattern also affects html_static_path and html_extra_path.
71 exclude_patterns = []
72
73
74 # -- Options for HTML output -------------------------------------------------
75
76 # The theme to use for HTML and HTML Help pages. See the documentation for
77 # a list of builtin themes.
78 #
79 import guzzle_sphinx_theme
80
81 html_theme_path = guzzle_sphinx_theme.html_theme_path()
82 html_theme = "guzzle_sphinx_theme"
83 html_favicon = Path("_static/favicon.ico")
84
85 # There's a standing issue with Sphinx's new-style sidebars. This is a
86 # workaround. Taken from
87 # https://github.com/guzzle/guzzle_sphinx_theme/issues/33#issuecomment-637081826
88 html_sidebars = {"**": ["logo-text.html", "globaltoc.html", "searchbox.html"]}
89
90 # Register the theme as an extension to generate a sitemap.xml
91 extensions.append("guzzle_sphinx_theme")
92
93 # Add any paths that contain custom static files (such as style sheets) here,
94 # relative to this directory. They are copied after the builtin static files,
95 # so a file named "default.css" will overwrite the builtin "default.css".
96 html_static_path = ["_static"]
97
98 # This specifies any additional css files that will override the theme's
99 html_css_files = ["custom.css"]
100
[end of docs/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/source/conf.py b/docs/source/conf.py
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -80,7 +80,7 @@
html_theme_path = guzzle_sphinx_theme.html_theme_path()
html_theme = "guzzle_sphinx_theme"
-html_favicon = Path("_static/favicon.ico")
+html_favicon = str(Path("_static/favicon.ico"))
# There's a standing issue with Sphinx's new-style sidebars. This is a
# workaround. Taken from
| {"golden_diff": "diff --git a/docs/source/conf.py b/docs/source/conf.py\n--- a/docs/source/conf.py\n+++ b/docs/source/conf.py\n@@ -80,7 +80,7 @@\n \n html_theme_path = guzzle_sphinx_theme.html_theme_path()\n html_theme = \"guzzle_sphinx_theme\"\n-html_favicon = Path(\"_static/favicon.ico\")\n+html_favicon = str(Path(\"_static/favicon.ico\"))\n \n # There's a standing issue with Sphinx's new-style sidebars. This is a\n # workaround. Taken from\n", "issue": "The config value `html_favicon' has type `PosixPath'; expected `str'.\nBTW when building the docs now I get\r\n```\r\nWARNING: The config value `html_favicon' has type `PosixPath'; expected `str'.\r\n```\r\n\r\n_Originally posted by @leotrs in https://github.com/ManimCommunity/manim/pull/631#issuecomment-719075737_\n", "before_files": [{"content": "# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n\nimport os\nimport subprocess\nimport sys\nfrom distutils.sysconfig import get_python_lib\nfrom pathlib import Path\n\nsys.path.insert(0, os.path.abspath(\".\"))\n\n\nif os.environ.get(\"READTHEDOCS\") == \"True\":\n site_path = get_python_lib()\n # bindings for pangocffi, cairocffi, pangocairocffi need to be generated\n subprocess.run([\"python\", \"pangocffi/ffi_build.py\"], cwd=site_path)\n subprocess.run([\"python\", \"cairocffi/ffi_build.py\"], cwd=site_path)\n subprocess.run([\"python\", \"pangocairocffi/ffi_build.py\"], cwd=site_path)\n # we need to add ffmpeg to the path\n ffmpeg_path = os.path.join(site_path, \"imageio_ffmpeg\", \"binaries\")\n # the included binary is named ffmpeg-linux..., create a symlink\n [ffmpeg_bin] = [\n file for file in os.listdir(ffmpeg_path) if file.startswith(\"ffmpeg-\")\n ]\n os.symlink(\n os.path.join(ffmpeg_path, ffmpeg_bin), os.path.join(ffmpeg_path, \"ffmpeg\")\n )\n os.environ[\"PATH\"] += os.pathsep + ffmpeg_path\n\n\n# -- Project information -----------------------------------------------------\n\nproject = \"Manim\"\ncopyright = \"2020, The Manim Community Dev Team\"\nauthor = \"The Manim Community Dev Team\"\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n \"sphinx.ext.autodoc\",\n \"recommonmark\",\n \"sphinx_copybutton\",\n \"sphinx.ext.napoleon\",\n \"sphinx.ext.autosummary\",\n \"sphinx.ext.doctest\",\n \"manim_directive\",\n]\n\n# Automatically generate stub pages when using the .. 
autosummary directive\nautosummary_generate = True\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = [\"_templates\"]\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nimport guzzle_sphinx_theme\n\nhtml_theme_path = guzzle_sphinx_theme.html_theme_path()\nhtml_theme = \"guzzle_sphinx_theme\"\nhtml_favicon = Path(\"_static/favicon.ico\")\n\n# There's a standing issue with Sphinx's new-style sidebars. This is a\n# workaround. Taken from\n# https://github.com/guzzle/guzzle_sphinx_theme/issues/33#issuecomment-637081826\nhtml_sidebars = {\"**\": [\"logo-text.html\", \"globaltoc.html\", \"searchbox.html\"]}\n\n# Register the theme as an extension to generate a sitemap.xml\nextensions.append(\"guzzle_sphinx_theme\")\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = [\"_static\"]\n\n# This specifies any additional css files that will override the theme's\nhtml_css_files = [\"custom.css\"]\n", "path": "docs/source/conf.py"}]} | 1,683 | 113 |
gh_patches_debug_10794 | rasdani/github-patches | git_diff | jupyterhub__zero-to-jupyterhub-k8s-1564 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Document the fact that access to cloud metadata is disabled by default
It's not immediately obvious that `iptables` rules are overridden by an init container, blocking pod access to the EC2 instance metadata service.
This should be documented somewhere. Perhaps in https://zero-to-jupyterhub.readthedocs.io/en/latest/amazon/step-zero-aws.html.
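
For readers looking for the knob, the behaviour is governed by the chart's `singleuser.cloudMetadata` settings; the `values.yaml` sketch below is hedged, because the exact key names have changed between chart versions and should be checked against the shipped `values.yaml` rather than taken from here:

```yaml
# Hypothetical excerpt: key names under cloudMetadata are assumptions.
singleuser:
  cloudMetadata:
    # Older charts use `enabled`; setting it to true skips the iptables
    # init container and restores pod access to the instance metadata
    # endpoint at 169.254.169.254. Newer charts expose a
    # `blockWithIptables` flag with the opposite polarity instead.
    enabled: true
```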
</issue>
<code>
[start of doc/source/conf.py]
1
2 # Configuration file for the Sphinx documentation builder.
3 #
4 # This file only contains a selection of the most common options. For a full
5 # list see the documentation:
6 # https://www.sphinx-doc.org/en/master/usage/configuration.html
7
8 # -- Path setup --------------------------------------------------------------
9
10 # If extensions (or modules to document with autodoc) are in another directory,
11 # add these directories to sys.path here. If the directory is relative to the
12 # documentation root, use os.path.abspath to make it absolute, like shown here.
13 #
14 # import os
15 # import sys
16 # sys.path.insert(0, os.path.abspath('.'))
17
18
19 # -- Project specific imports ------------------------------------------------
20
21 from datetime import date
22
23 import yaml
24
25
26 # -- Sphinx setup function ---------------------------------------------------
27 # ref: http://www.sphinx-doc.org/en/latest/extdev/tutorial.html#the-setup-function
28
29 def setup(app):
30 app.add_css_file('custom.css')
31
32
33 # -- Project information -----------------------------------------------------
34 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#project-information
35
36 project = 'Zero to JupyterHub with Kubernetes'
37 copyright = '{year}, Project Jupyter Contributors'.format(year=date.today().year)
38 author = 'Project Jupyter Contributors'
39
40 # The version info for the project you're documenting, acts as replacement for
41 # |version| and |release|, also used in various other places throughout the
42 # built documents.
43 #
44 with open('../../jupyterhub/Chart.yaml') as f:
45 chart = yaml.safe_load(f)
46 version = chart['version'].split('-', 1)[0]
47 release = chart['version']
48
49 # Project specific variables
50 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#confval-rst_epilog
51 rst_epilog = """
52 .. |hub_version| replace:: {v}
53 """.format(v=chart['appVersion'])
54
55
56 # -- General configuration ---------------------------------------------------
57 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#general-configuration
58
59 # Set the default role so we can use `foo` instead of ``foo``
60 default_role = 'literal'
61
62 # Add any Sphinx extension module names here, as strings. They can be
63 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
64 # ones.
65 extensions = ['sphinx.ext.mathjax',
66 'sphinx_copybutton',
67 'myst_parser']
68
69 # List of patterns, relative to source directory, that match files and
70 # directories to ignore when looking for source files.
71 # This pattern also affects html_static_path and html_extra_path.
72 exclude_patterns = []
73
74 # The master toctree document.
75 master_doc = 'index'
76
77 # The suffix(es) of source filenames.
78 source_suffix = ['.rst', '.md']
79
80
81 # -- Generate the Helm chart configuration reference from a schema file ------
82
83 # header
84 with open('reference/reference.txt', 'r') as f:
85 header_md = f.readlines()
86 header_md = header_md[1:]
87 header_md = [ln.strip('\n') for ln in header_md]
88
89 # schema
90 with open('../../jupyterhub/schema.yaml', 'r') as f:
91 data = yaml.safe_load(f)
92 def parse_schema(d, md=[], depth=0, pre=''):
93 """
94 Generate markdown headers from a passed python dictionary created by
95 parsing a schema.yaml file.
96 """
97 if 'properties' in d:
98 depth += 1
99 # Create markdown headers for each schema level
100 for key, val in d['properties'].items():
101 md.append('#'*(depth + 1) + ' ' + pre + key)
102 md.append('')
103 if 'description' in val:
104 for ln in val['description'].split('\n'):
105 md.append(ln)
106 md.append('')
107
108 parse_schema(val, md, depth, pre+'{}.'.format(key))
109 depth -= 1
110 return md
111 schema_md = parse_schema(data)
112
113 # reference = header + schema
114 reference_md = header_md + schema_md
115 with open('reference/reference.md', 'w') as f:
116 f.write('\n'.join(reference_md))
117
118
119 # -- Options for linkcheck builder -------------------------------------------
120 # ref: http://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder
121 linkcheck_ignore = [
122 r'(.*)github\.com(.*)#', # javascript based anchors
123 r'(.*)/#%21(.*)/(.*)', # /#!forum/jupyter - encoded anchor edge case
124 "https://your-domain.com", # example
125 "https://your-domain-name.com", # example
126 "https://kubernetes.io/docs/tutorials/kubernetes-basics/", # works
127 "https://cloud.ibm.com/kubernetes/catalog/create", # works
128 ]
129 linkcheck_anchors_ignore = [
130 "/#!",
131 "/#%21",
132 ]
133
134
135 # -- Options for HTML output -------------------------------------------------
136 # ref: http://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-html-output
137
138 # The theme to use for HTML and HTML Help pages. See the documentation for
139 # a list of builtin themes.
140 #
141 html_theme = 'pydata_sphinx_theme'
142 html_favicon = '_static/images/logo/favicon.ico'
143 html_logo = '_static/images/logo/logo.png'
144
145 # Add any paths that contain custom static files (such as style sheets) here,
146 # relative to this directory. They are copied after the builtin static files,
147 # so a file named "default.css" will overwrite the builtin "default.css".
148 html_static_path = ['_static']
149
150
151
152
153 # -- Below are options for more esoteric output -------------------------------
154 # -----------------------------------------------------------------------------
155
156
157 # -- Options for HTML help output ---------------------------------------------
158 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-html-help-output
159
160 # Output file base name for HTML help builder.
161 htmlhelp_basename = 'ZeroToJupyterhubDoc'
162
163
164 # -- Options for LaTeX output ------------------------------------------------
165 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-latex-output
166
167 latex_elements = {
168 # The paper size ('letterpaper' or 'a4paper').
169 #
170 # 'papersize': 'letterpaper',
171
172 # The font size ('10pt', '11pt' or '12pt').
173 #
174 # 'pointsize': '10pt',
175
176 # Additional stuff for the LaTeX preamble.
177 #
178 # 'preamble': '',
179
180 # Latex figure (float) alignment
181 #
182 # 'figure_align': 'htbp',
183 }
184
185 # Grouping the document tree into LaTeX files.
186 latex_documents = [
187 (
188 master_doc, # source start file
189 'ZeroToJupyterhubDoc.tex', # target name
190 'Zero to JupyterHub', # title
191 author, # author
192 'manual' # documentclass [howto, manual, or own class]
193 ),
194 ]
195
196
197 # -- Options for manual page output ------------------------------------------
198 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-manual-page-output
199
200 # One entry per manual page.
201 man_pages = [
202 (
203 master_doc, # source start file
204 'zerotojupyterhub', # name
205 'Zero to JupyterHub', # description
206 [author], # authors
207 1, # manual section
208 ),
209 ]
210
211
212 # -- Options for Texinfo output ----------------------------------------------
213 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-texinfo-output
214
215 # Grouping the document tree into Texinfo files.
216 texinfo_documents = [
217 (
218 master_doc, # source start file
219 'ZeroToJupyterhubDoc', # target name
220 'Zero to JupyterHub', # title
221 author, # author
222 'ZeroToJupyterhubDoc', # dir menu entry
223 'One line description of project.', # description
224 'Miscellaneous' # category
225 ),
226 ]
227
228
229 # -- Options for epub output -------------------------------------------------
230 # ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-epub-output
231
232 # Bibliographic Dublin Core info.
233 epub_title = project
234 epub_author = author
235 epub_publisher = author
236 epub_copyright = copyright
237
238 # The unique identifier of the text. This can be a ISBN number
239 # or the project homepage.
240 #
241 # epub_identifier = ''
242
243 # A unique identification for the text.
244 #
245 # epub_uid = ''
246
247 # A list of files that should not be packed into the epub file.
248 epub_exclude_files = ['search.html']
249
[end of doc/source/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/source/conf.py b/doc/source/conf.py
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -138,7 +138,19 @@
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
+
html_theme = 'pydata_sphinx_theme'
+html_theme_options = {
+ "github_url": "https://github.com/jupyterhub/zero-to-jupyterhub-k8s/",
+ "use_edit_page_button": True,
+}
+html_context = {
+ "github_user": "jupyterhub",
+ "github_repo": "zero-to-jupyterhub-k8s",
+ "github_version": "master",
+ "doc_path":"doc",
+}
+
html_favicon = '_static/images/logo/favicon.ico'
html_logo = '_static/images/logo/logo.png'
| {"golden_diff": "diff --git a/doc/source/conf.py b/doc/source/conf.py\n--- a/doc/source/conf.py\n+++ b/doc/source/conf.py\n@@ -138,7 +138,19 @@\n # The theme to use for HTML and HTML Help pages. See the documentation for\n # a list of builtin themes.\n #\n+\n html_theme = 'pydata_sphinx_theme'\n+html_theme_options = {\n+ \"github_url\": \"https://github.com/jupyterhub/zero-to-jupyterhub-k8s/\",\n+ \"use_edit_page_button\": True,\n+}\n+html_context = {\n+ \"github_user\": \"jupyterhub\",\n+ \"github_repo\": \"zero-to-jupyterhub-k8s\",\n+ \"github_version\": \"master\",\n+ \"doc_path\":\"doc\",\n+}\n+\n html_favicon = '_static/images/logo/favicon.ico'\n html_logo = '_static/images/logo/logo.png'\n", "issue": "Document the fact that access to cloud metadata is disabled by default\nIt's not immediately obvious that `iptables` are overridden with an init container, blocking access to EC2 metadata.\r\n\r\nThis should be documented somewhere. Perhaps in https://zero-to-jupyterhub.readthedocs.io/en/latest/amazon/step-zero-aws.html.\n", "before_files": [{"content": "\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n\n# -- Project specific imports ------------------------------------------------\n\nfrom datetime import date\n\nimport yaml\n\n\n# -- Sphinx setup function ---------------------------------------------------\n# ref: http://www.sphinx-doc.org/en/latest/extdev/tutorial.html#the-setup-function\n\ndef setup(app):\n app.add_css_file('custom.css')\n\n\n# -- Project information -----------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#project-information\n\nproject = 'Zero to JupyterHub with Kubernetes'\ncopyright = '{year}, Project Jupyter Contributors'.format(year=date.today().year)\nauthor = 'Project Jupyter Contributors'\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\n#\nwith open('../../jupyterhub/Chart.yaml') as f:\n chart = yaml.safe_load(f)\nversion = chart['version'].split('-', 1)[0]\nrelease = chart['version']\n\n# Project specific variables\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#confval-rst_epilog\nrst_epilog = \"\"\"\n.. |hub_version| replace:: {v}\n\"\"\".format(v=chart['appVersion'])\n\n\n# -- General configuration ---------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#general-configuration\n\n# Set the default role so we can use `foo` instead of ``foo``\ndefault_role = 'literal'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = ['sphinx.ext.mathjax',\n 'sphinx_copybutton',\n 'myst_parser']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# The suffix(es) of source filenames.\nsource_suffix = ['.rst', '.md']\n\n\n# -- Generate the Helm chart configuration reference from a schema file ------\n\n# header\nwith open('reference/reference.txt', 'r') as f:\n header_md = f.readlines()\nheader_md = header_md[1:]\nheader_md = [ln.strip('\\n') for ln in header_md]\n\n# schema\nwith open('../../jupyterhub/schema.yaml', 'r') as f:\n data = yaml.safe_load(f)\ndef parse_schema(d, md=[], depth=0, pre=''):\n \"\"\"\n Generate markdown headers from a passed python dictionary created by\n parsing a schema.yaml file.\n \"\"\"\n if 'properties' in d:\n depth += 1\n # Create markdown headers for each schema level\n for key, val in d['properties'].items():\n md.append('#'*(depth + 1) + ' ' + pre + key)\n md.append('')\n if 'description' in val:\n for ln in val['description'].split('\\n'):\n md.append(ln)\n md.append('')\n\n parse_schema(val, md, depth, pre+'{}.'.format(key))\n depth -= 1\n return md\nschema_md = parse_schema(data)\n\n# reference = header + schema\nreference_md = header_md + schema_md\nwith open('reference/reference.md', 'w') as f:\n f.write('\\n'.join(reference_md))\n\n\n# -- Options for linkcheck builder -------------------------------------------\n# ref: http://www.sphinx-doc.org/en/master/usage/configuration.html#options-for-the-linkcheck-builder\nlinkcheck_ignore = [\n r'(.*)github\\.com(.*)#', # javascript based anchors\n r'(.*)/#%21(.*)/(.*)', # /#!forum/jupyter - encoded anchor edge case\n \"https://your-domain.com\", # example\n \"https://your-domain-name.com\", # example\n \"https://kubernetes.io/docs/tutorials/kubernetes-basics/\", # works\n \"https://cloud.ibm.com/kubernetes/catalog/create\", # works\n]\nlinkcheck_anchors_ignore = [\n \"/#!\",\n \"/#%21\",\n]\n\n\n# -- Options for HTML output -------------------------------------------------\n# ref: http://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-html-output\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'pydata_sphinx_theme'\nhtml_favicon = '_static/images/logo/favicon.ico'\nhtml_logo = '_static/images/logo/logo.png'\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n\n\n\n# -- Below are options for more esoteric output -------------------------------\n# -----------------------------------------------------------------------------\n\n\n# -- Options for HTML help output ---------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-html-help-output\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'ZeroToJupyterhubDoc'\n\n\n# -- Options for LaTeX output ------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-latex-output\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n # 'papersize': 'letterpaper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n # 'pointsize': '10pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files.\nlatex_documents = [\n (\n master_doc, # source start file\n 'ZeroToJupyterhubDoc.tex', # target name\n 'Zero to JupyterHub', # title\n author, # author\n 'manual' # documentclass [howto, manual, or own class]\n ),\n]\n\n\n# -- Options for manual page output ------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-manual-page-output\n\n# One entry per manual page.\nman_pages = [\n (\n master_doc, # source start file\n 'zerotojupyterhub', # name\n 'Zero to JupyterHub', # description\n [author], # authors\n 1, # manual section\n ),\n]\n\n\n# -- Options for Texinfo output ----------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-texinfo-output\n\n# Grouping the document tree into Texinfo files.\ntexinfo_documents = [\n (\n master_doc, # source start file\n 'ZeroToJupyterhubDoc', # target name\n 'Zero to JupyterHub', # title\n author, # author\n 'ZeroToJupyterhubDoc', # dir menu entry\n 'One line description of project.', # description\n 'Miscellaneous' # category\n ),\n]\n\n\n# -- Options for epub output -------------------------------------------------\n# ref: https://www.sphinx-doc.org/en/latest/usage/configuration.html#options-for-epub-output\n\n# Bibliographic Dublin Core info.\nepub_title = project\nepub_author = author\nepub_publisher = author\nepub_copyright = copyright\n\n# The unique identifier of the text. This can be a ISBN number\n# or the project homepage.\n#\n# epub_identifier = ''\n\n# A unique identification for the text.\n#\n# epub_uid = ''\n\n# A list of files that should not be packed into the epub file.\nepub_exclude_files = ['search.html']\n", "path": "doc/source/conf.py"}]} | 3,086 | 195 |
gh_patches_debug_11838 | rasdani/github-patches | git_diff | sopel-irc__sopel-419 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tell module errors when you don't give it enough arguments
```
<Flyte> .tell
<Lurk> AttributeError: 'NoneType' object has no attribute 'split' (file "/home/flyte/workspace/willie/ve/local/lib/python2.7/site-packages/willie/modules/tell.py", line 93, in f_remind)
<Flyte> .tell Flyte
<Lurk> ValueError: need more than 1 value to unpack (file "/home/flyte/workspace/willie/ve/local/lib/python2.7/site-packages/willie/modules/tell.py", line 93, in f_remind)
```
</issue>
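
Both tracebacks above come from `tellee, msg = trigger.group(2).split(None, 1)` being reached with missing input: `.tell` gives `trigger.group(2) is None` (the `AttributeError`), and `.tell Flyte` gives a single-element split (the `ValueError`). Purely as an illustration — the project's actual fix appears in the diff further down — the arguments can be validated before unpacking. Here `raw` stands in for `trigger.group(2)`:

```python
def parse_tell_args(raw):
    """Defensively split '.tell <nick> <message>' input.

    Returns (tellee, message) on success, or (None, hint) when the user did
    not supply enough arguments. Illustrative sketch only, not Willie/Sopel API.
    """
    if not raw:
        return None, "tell whom?"
    parts = raw.split(None, 1)
    if len(parts) < 2:
        return None, "tell %s what?" % parts[0]
    return parts[0], parts[1]


# parse_tell_args(None)            -> (None, 'tell whom?')
# parse_tell_args('Flyte')         -> (None, 'tell Flyte what?')
# parse_tell_args('Flyte hi there')-> ('Flyte', 'hi there')
```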
<code>
[start of willie/modules/tell.py]
1 """
2 tell.py - Willie Tell and Ask Module
3 Copyright 2008, Sean B. Palmer, inamidst.com
4 Licensed under the Eiffel Forum License 2.
5
6 http://willie.dftba.net
7 """
8
9 import os
10 import time
11 import datetime
12 import pytz
13 import threading
14 from willie.tools import Nick
15 from willie.module import commands, nickname_commands, rule, priority, example
16
17 maximum = 4
18
19
20 def loadReminders(fn, lock):
21 lock.acquire()
22 try:
23 result = {}
24 f = open(fn)
25 for line in f:
26 line = line.strip()
27 if line:
28 try:
29 tellee, teller, verb, timenow, msg = line.split('\t', 4)
30 except ValueError:
31 continue # @@ hmm
32 result.setdefault(tellee, []).append((teller, verb, timenow, msg))
33 f.close()
34 finally:
35 lock.release()
36 return result
37
38
39 def dumpReminders(fn, data, lock):
40 lock.acquire()
41 try:
42 f = open(fn, 'w')
43 for tellee in data.iterkeys():
44 for remindon in data[tellee]:
45 line = '\t'.join((tellee,) + remindon)
46 try:
47 f.write((line + '\n').encode('utf-8'))
48 except IOError:
49 break
50 try:
51 f.close()
52 except IOError:
53 pass
54 finally:
55 lock.release()
56 return True
57
58
59 def setup(self):
60 fn = self.nick + '-' + self.config.host + '.tell.db'
61 self.tell_filename = os.path.join(self.config.dotdir, fn)
62 if not os.path.exists(self.tell_filename):
63 try:
64 f = open(self.tell_filename, 'w')
65 except OSError:
66 pass
67 else:
68 f.write('')
69 f.close()
70 self.memory['tell_lock'] = threading.Lock()
71 self.memory['reminders'] = loadReminders(self.tell_filename, self.memory['tell_lock'])
72
73
74 def get_user_time(bot, nick):
75 tz = 'UTC'
76 tformat = None
77 if bot.db and nick in bot.db.preferences:
78 tz = bot.db.preferences.get(nick, 'tz') or 'UTC'
79 tformat = bot.db.preferences.get(nick, 'time_format')
80 if tz not in pytz.all_timezones_set:
81 tz = 'UTC'
82 return (pytz.timezone(tz.strip()), tformat or '%Y-%m-%d %H:%M:%S %Z')
83
84
85 @commands('tell', 'ask')
86 @nickname_commands('tell', 'ask')
87 @example('Willie, tell Embolalia he broke something again.')
88 def f_remind(bot, trigger):
89 """Give someone a message the next time they're seen"""
90 teller = trigger.nick
91
92 verb = trigger.group(1)
93 tellee, msg = trigger.group(2).split(None, 1)
94
95 tellee = Nick(tellee.rstrip('.,:;'))
96
97 if not os.path.exists(bot.tell_filename):
98 return
99
100 if len(tellee) > 20:
101 return bot.reply('That nickname is too long.')
102 if tellee == bot.nick:
103 return bot.reply("I'm here now, you can tell me whatever you want!")
104
105 tz, tformat = get_user_time(bot, tellee)
106 timenow = datetime.datetime.now(tz).strftime(tformat)
107 if not tellee in (Nick(teller), bot.nick, 'me'):
108 bot.memory['tell_lock'].acquire()
109 try:
110 if not tellee in bot.memory['reminders']:
111 bot.memory['reminders'][tellee] = [(teller, verb, timenow, msg)]
112 else:
113 bot.memory['reminders'][tellee].append((teller, verb, timenow, msg))
114 finally:
115 bot.memory['tell_lock'].release()
116
117 response = "I'll pass that on when %s is around." % tellee
118
119 bot.reply(response)
120 elif Nick(teller) == tellee:
121 bot.say('You can %s yourself that.' % verb)
122 else:
123 bot.say("Hey, I'm not as stupid as Monty you know!")
124
125 dumpReminders(bot.tell_filename, bot.memory['reminders'], bot.memory['tell_lock']) # @@ tell
126
127
128 def getReminders(bot, channel, key, tellee):
129 lines = []
130 template = "%s: %s <%s> %s %s %s"
131 today = time.strftime('%d %b', time.gmtime())
132
133 bot.memory['tell_lock'].acquire()
134 try:
135 for (teller, verb, datetime, msg) in bot.memory['reminders'][key]:
136 if datetime.startswith(today):
137 datetime = datetime[len(today) + 1:]
138 lines.append(template % (tellee, datetime, teller, verb, tellee, msg))
139
140 try:
141 del bot.memory['reminders'][key]
142 except KeyError:
143 bot.msg(channel, 'Er...')
144 finally:
145 bot.memory['tell_lock'].release()
146 return lines
147
148
149 @rule('(.*)')
150 @priority('low')
151 def message(bot, trigger):
152
153 tellee = trigger.nick
154 channel = trigger.sender
155
156 if not os.path.exists(bot.tell_filename):
157 return
158
159 reminders = []
160 remkeys = list(reversed(sorted(bot.memory['reminders'].keys())))
161
162 for remkey in remkeys:
163 if not remkey.endswith('*') or remkey.endswith(':'):
164 if tellee == remkey:
165 reminders.extend(getReminders(bot, channel, remkey, tellee))
166 elif tellee.startswith(remkey.rstrip('*:')):
167 reminders.extend(getReminders(bot, channel, remkey, tellee))
168
169 for line in reminders[:maximum]:
170 bot.say(line)
171
172 if reminders[maximum:]:
173 bot.say('Further messages sent privately')
174 for line in reminders[maximum:]:
175 bot.msg(tellee, line)
176
177 if len(bot.memory['reminders'].keys()) != remkeys:
178 dumpReminders(bot.tell_filename, bot.memory['reminders'], bot.memory['tell_lock']) # @@ tell
179
[end of willie/modules/tell.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/willie/modules/tell.py b/willie/modules/tell.py
--- a/willie/modules/tell.py
+++ b/willie/modules/tell.py
@@ -88,11 +88,20 @@
def f_remind(bot, trigger):
"""Give someone a message the next time they're seen"""
teller = trigger.nick
-
verb = trigger.group(1)
- tellee, msg = trigger.group(2).split(None, 1)
- tellee = Nick(tellee.rstrip('.,:;'))
+ if not trigger.group(3):
+ bot.reply("%s whom?" % verb)
+ return
+
+ tellee = trigger.group(3).rstrip('.,:;')
+ msg = trigger.group(2).lstrip(tellee).lstrip()
+
+ if not msg:
+ bot.reply("%s %s what?" % (verb, tellee))
+ return
+
+ tellee = Nick(tellee)
if not os.path.exists(bot.tell_filename):
return
| {"golden_diff": "diff --git a/willie/modules/tell.py b/willie/modules/tell.py\n--- a/willie/modules/tell.py\n+++ b/willie/modules/tell.py\n@@ -88,11 +88,20 @@\n def f_remind(bot, trigger):\n \"\"\"Give someone a message the next time they're seen\"\"\"\n teller = trigger.nick\n-\n verb = trigger.group(1)\n- tellee, msg = trigger.group(2).split(None, 1)\n \n- tellee = Nick(tellee.rstrip('.,:;'))\n+ if not trigger.group(3):\n+ bot.reply(\"%s whom?\" % verb)\n+ return\n+\n+ tellee = trigger.group(3).rstrip('.,:;')\n+ msg = trigger.group(2).lstrip(tellee).lstrip()\n+\n+ if not msg:\n+ bot.reply(\"%s %s what?\" % (verb, tellee))\n+ return\n+\n+ tellee = Nick(tellee)\n \n if not os.path.exists(bot.tell_filename):\n return\n", "issue": "Tell module errors when you don't give it enough arguments\n```\n<Flyte> .tell\n<Lurk> AttributeError: 'NoneType' object has no attribute 'split' (file \"/home/flyte/workspace/willie/ve/local/lib/python2.7/site-packages/willie/modules/tell.py\", line 93, in f_remind)\n<Flyte> .tell Flyte\n<Lurk> ValueError: need more than 1 value to unpack (file \"/home/flyte/workspace/willie/ve/local/lib/python2.7/site-packages/willie/modules/tell.py\", line 93, in f_remind)\n```\n\n", "before_files": [{"content": "\"\"\"\ntell.py - Willie Tell and Ask Module\nCopyright 2008, Sean B. Palmer, inamidst.com\nLicensed under the Eiffel Forum License 2.\n\nhttp://willie.dftba.net\n\"\"\"\n\nimport os\nimport time\nimport datetime\nimport pytz\nimport threading\nfrom willie.tools import Nick\nfrom willie.module import commands, nickname_commands, rule, priority, example\n\nmaximum = 4\n\n\ndef loadReminders(fn, lock):\n lock.acquire()\n try:\n result = {}\n f = open(fn)\n for line in f:\n line = line.strip()\n if line:\n try:\n tellee, teller, verb, timenow, msg = line.split('\\t', 4)\n except ValueError:\n continue # @@ hmm\n result.setdefault(tellee, []).append((teller, verb, timenow, msg))\n f.close()\n finally:\n lock.release()\n return result\n\n\ndef dumpReminders(fn, data, lock):\n lock.acquire()\n try:\n f = open(fn, 'w')\n for tellee in data.iterkeys():\n for remindon in data[tellee]:\n line = '\\t'.join((tellee,) + remindon)\n try:\n f.write((line + '\\n').encode('utf-8'))\n except IOError:\n break\n try:\n f.close()\n except IOError:\n pass\n finally:\n lock.release()\n return True\n\n\ndef setup(self):\n fn = self.nick + '-' + self.config.host + '.tell.db'\n self.tell_filename = os.path.join(self.config.dotdir, fn)\n if not os.path.exists(self.tell_filename):\n try:\n f = open(self.tell_filename, 'w')\n except OSError:\n pass\n else:\n f.write('')\n f.close()\n self.memory['tell_lock'] = threading.Lock()\n self.memory['reminders'] = loadReminders(self.tell_filename, self.memory['tell_lock'])\n\n\ndef get_user_time(bot, nick):\n tz = 'UTC'\n tformat = None\n if bot.db and nick in bot.db.preferences:\n tz = bot.db.preferences.get(nick, 'tz') or 'UTC'\n tformat = bot.db.preferences.get(nick, 'time_format')\n if tz not in pytz.all_timezones_set:\n tz = 'UTC'\n return (pytz.timezone(tz.strip()), tformat or '%Y-%m-%d %H:%M:%S %Z')\n\n\n@commands('tell', 'ask')\n@nickname_commands('tell', 'ask')\n@example('Willie, tell Embolalia he broke something again.')\ndef f_remind(bot, trigger):\n \"\"\"Give someone a message the next time they're seen\"\"\"\n teller = trigger.nick\n\n verb = trigger.group(1)\n tellee, msg = trigger.group(2).split(None, 1)\n\n tellee = Nick(tellee.rstrip('.,:;'))\n\n if not os.path.exists(bot.tell_filename):\n return\n\n if len(tellee) > 20:\n return 
bot.reply('That nickname is too long.')\n if tellee == bot.nick:\n return bot.reply(\"I'm here now, you can tell me whatever you want!\")\n\n tz, tformat = get_user_time(bot, tellee)\n timenow = datetime.datetime.now(tz).strftime(tformat)\n if not tellee in (Nick(teller), bot.nick, 'me'):\n bot.memory['tell_lock'].acquire()\n try:\n if not tellee in bot.memory['reminders']:\n bot.memory['reminders'][tellee] = [(teller, verb, timenow, msg)]\n else:\n bot.memory['reminders'][tellee].append((teller, verb, timenow, msg))\n finally:\n bot.memory['tell_lock'].release()\n\n response = \"I'll pass that on when %s is around.\" % tellee\n\n bot.reply(response)\n elif Nick(teller) == tellee:\n bot.say('You can %s yourself that.' % verb)\n else:\n bot.say(\"Hey, I'm not as stupid as Monty you know!\")\n\n dumpReminders(bot.tell_filename, bot.memory['reminders'], bot.memory['tell_lock']) # @@ tell\n\n\ndef getReminders(bot, channel, key, tellee):\n lines = []\n template = \"%s: %s <%s> %s %s %s\"\n today = time.strftime('%d %b', time.gmtime())\n\n bot.memory['tell_lock'].acquire()\n try:\n for (teller, verb, datetime, msg) in bot.memory['reminders'][key]:\n if datetime.startswith(today):\n datetime = datetime[len(today) + 1:]\n lines.append(template % (tellee, datetime, teller, verb, tellee, msg))\n\n try:\n del bot.memory['reminders'][key]\n except KeyError:\n bot.msg(channel, 'Er...')\n finally:\n bot.memory['tell_lock'].release()\n return lines\n\n\n@rule('(.*)')\n@priority('low')\ndef message(bot, trigger):\n\n tellee = trigger.nick\n channel = trigger.sender\n\n if not os.path.exists(bot.tell_filename):\n return\n\n reminders = []\n remkeys = list(reversed(sorted(bot.memory['reminders'].keys())))\n\n for remkey in remkeys:\n if not remkey.endswith('*') or remkey.endswith(':'):\n if tellee == remkey:\n reminders.extend(getReminders(bot, channel, remkey, tellee))\n elif tellee.startswith(remkey.rstrip('*:')):\n reminders.extend(getReminders(bot, channel, remkey, tellee))\n\n for line in reminders[:maximum]:\n bot.say(line)\n\n if reminders[maximum:]:\n bot.say('Further messages sent privately')\n for line in reminders[maximum:]:\n bot.msg(tellee, line)\n\n if len(bot.memory['reminders'].keys()) != remkeys:\n dumpReminders(bot.tell_filename, bot.memory['reminders'], bot.memory['tell_lock']) # @@ tell\n", "path": "willie/modules/tell.py"}]} | 2,467 | 242 |
gh_patches_debug_41816 | rasdani/github-patches | git_diff | fonttools__fonttools-804 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[T2CharStringPen] Rounding coordinates?
Currently, [T2CharStringPen](https://github.com/fonttools/fonttools/blob/fcd697a328b439165b3e9d04904eb73f065b6838/Lib/fontTools/pens/t2CharStringPen.py#L78) always rounds glyph coordinates and casts them to int, but this does not seem like a good idea. I think we should remove this behavior, or make it optional.
cf. TTGlyphPen (#526)
</issue>
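
One way to make the rounding optional is to expose a tolerance knob, where `0` disables rounding entirely. The sketch below uses illustrative names (it is not fontTools API), although the accepted patch further down takes a similar tolerance-based approach:

```python
def make_point_rounder(tolerance=0.5):
    """Return a function that rounds (x, y) points; tolerance=0 is a no-op."""
    if tolerance < 0:
        raise ValueError("tolerance must be >= 0")

    def _round(value):
        if tolerance == 0:
            return value  # rounding disabled
        rounded = round(value)
        # Round only when the value is within `tolerance` of an integer
        # (or always, once tolerance reaches 0.5).
        if tolerance >= 0.5 or abs(rounded - value) <= tolerance:
            return rounded
        return value

    def round_point(pt):
        x, y = pt
        return _round(x), _round(y)

    return round_point
```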
<code>
[start of Lib/fontTools/pens/t2CharStringPen.py]
1 # Copyright (c) 2009 Type Supply LLC
2 # Author: Tal Leming
3
4
5 from __future__ import print_function, division, absolute_import
6
7 from fontTools.misc.py23 import *
8 from fontTools.misc.psCharStrings import T2CharString
9 from fontTools.pens.basePen import BasePen
10
11
12 def roundInt(v):
13 return int(round(v))
14
15
16 def roundIntPoint(point):
17 x, y = point
18 return roundInt(x), roundInt(y)
19
20
21 class RelativeCoordinatePen(BasePen):
22
23 def __init__(self, glyphSet):
24 BasePen.__init__(self, glyphSet)
25 self._lastX = None
26 self._lastY = None
27 self._heldAbsoluteMove = None
28
29 def _makePointRelative(self, pt):
30 absX, absY = pt
31 absX = absX
32 absY = absY
33 # no points have been added
34 # so no conversion is needed
35 if self._lastX is None:
36 relX, relY = absX, absY
37 # otherwise calculate the relative coordinates
38 else:
39 relX = absX - self._lastX
40 relY = absY - self._lastY
41 # store the absolute coordinates
42 self._lastX = absX
43 self._lastY = absY
44 # now return the relative coordinates
45 return relX, relY
46
47 def _moveTo(self, pt):
48 self._heldAbsoluteMove = pt
49
50 def _releaseHeldMove(self):
51 if self._heldAbsoluteMove is not None:
52 pt = self._makePointRelative(self._heldAbsoluteMove)
53 self._relativeMoveTo(pt)
54 self._heldAbsoluteMove = None
55
56 def _relativeMoveTo(self, pt):
57 raise NotImplementedError
58
59 def _lineTo(self, pt):
60 self._releaseHeldMove()
61 pt = self._makePointRelative(pt)
62 self._relativeLineTo(pt)
63
64 def _relativeLineTo(self, pt):
65 raise NotImplementedError
66
67 def _curveToOne(self, pt1, pt2, pt3):
68 self._releaseHeldMove()
69 pt1 = self._makePointRelative(pt1)
70 pt2 = self._makePointRelative(pt2)
71 pt3 = self._makePointRelative(pt3)
72 self._relativeCurveToOne(pt1, pt2, pt3)
73
74 def _relativeCurveToOne(self, pt1, pt2, pt3):
75 raise NotImplementedError
76
77
78 class T2CharStringPen(RelativeCoordinatePen):
79
80 def __init__(self, width, glyphSet):
81 RelativeCoordinatePen.__init__(self, glyphSet)
82 self._heldMove = None
83 self._program = []
84 if width is not None:
85 self._program.append(roundInt(width))
86
87 def _moveTo(self, pt):
88 RelativeCoordinatePen._moveTo(self, roundIntPoint(pt))
89
90 def _relativeMoveTo(self, pt):
91 pt = roundIntPoint(pt)
92 x, y = pt
93 self._heldMove = [x, y, "rmoveto"]
94
95 def _storeHeldMove(self):
96 if self._heldMove is not None:
97 self._program.extend(self._heldMove)
98 self._heldMove = None
99
100 def _lineTo(self, pt):
101 RelativeCoordinatePen._lineTo(self, roundIntPoint(pt))
102
103 def _relativeLineTo(self, pt):
104 self._storeHeldMove()
105 pt = roundIntPoint(pt)
106 x, y = pt
107 self._program.extend([x, y, "rlineto"])
108
109 def _curveToOne(self, pt1, pt2, pt3):
110 RelativeCoordinatePen._curveToOne(self, roundIntPoint(pt1), roundIntPoint(pt2), roundIntPoint(pt3))
111
112 def _relativeCurveToOne(self, pt1, pt2, pt3):
113 self._storeHeldMove()
114 pt1 = roundIntPoint(pt1)
115 pt2 = roundIntPoint(pt2)
116 pt3 = roundIntPoint(pt3)
117 x1, y1 = pt1
118 x2, y2 = pt2
119 x3, y3 = pt3
120 self._program.extend([x1, y1, x2, y2, x3, y3, "rrcurveto"])
121
122 def _closePath(self):
123 pass
124
125 def _endPath(self):
126 pass
127
128 def getCharString(self, private=None, globalSubrs=None):
129 program = self._program + ["endchar"]
130 charString = T2CharString(program=program, private=private, globalSubrs=globalSubrs)
131 return charString
132
[end of Lib/fontTools/pens/t2CharStringPen.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Lib/fontTools/pens/t2CharStringPen.py b/Lib/fontTools/pens/t2CharStringPen.py
--- a/Lib/fontTools/pens/t2CharStringPen.py
+++ b/Lib/fontTools/pens/t2CharStringPen.py
@@ -9,15 +9,6 @@
from fontTools.pens.basePen import BasePen
-def roundInt(v):
- return int(round(v))
-
-
-def roundIntPoint(point):
- x, y = point
- return roundInt(x), roundInt(y)
-
-
class RelativeCoordinatePen(BasePen):
def __init__(self, glyphSet):
@@ -75,20 +66,45 @@
raise NotImplementedError
+def makeRoundFunc(tolerance):
+ if tolerance < 0:
+ raise ValueError("Rounding tolerance must be positive")
+
+ def _round(number):
+ if tolerance == 0:
+ return number # no-op
+ rounded = round(number)
+ # return rounded integer if the tolerance >= 0.5, or if the absolute
+ # difference between the original float and the rounded integer is
+ # within the tolerance
+ if tolerance >= .5 or abs(rounded - number) <= tolerance:
+ return rounded
+ else:
+ # else return the value un-rounded
+ return number
+
+ def roundPoint(point):
+ x, y = point
+ return _round(x), _round(y)
+
+ return roundPoint
+
+
class T2CharStringPen(RelativeCoordinatePen):
- def __init__(self, width, glyphSet):
+ def __init__(self, width, glyphSet, roundTolerance=0.5):
RelativeCoordinatePen.__init__(self, glyphSet)
+ self.roundPoint = makeRoundFunc(roundTolerance)
self._heldMove = None
self._program = []
if width is not None:
- self._program.append(roundInt(width))
+ self._program.append(round(width))
def _moveTo(self, pt):
- RelativeCoordinatePen._moveTo(self, roundIntPoint(pt))
+ RelativeCoordinatePen._moveTo(self, self.roundPoint(pt))
def _relativeMoveTo(self, pt):
- pt = roundIntPoint(pt)
+ pt = self.roundPoint(pt)
x, y = pt
self._heldMove = [x, y, "rmoveto"]
@@ -98,22 +114,25 @@
self._heldMove = None
def _lineTo(self, pt):
- RelativeCoordinatePen._lineTo(self, roundIntPoint(pt))
+ RelativeCoordinatePen._lineTo(self, self.roundPoint(pt))
def _relativeLineTo(self, pt):
self._storeHeldMove()
- pt = roundIntPoint(pt)
+ pt = self.roundPoint(pt)
x, y = pt
self._program.extend([x, y, "rlineto"])
def _curveToOne(self, pt1, pt2, pt3):
- RelativeCoordinatePen._curveToOne(self, roundIntPoint(pt1), roundIntPoint(pt2), roundIntPoint(pt3))
+ RelativeCoordinatePen._curveToOne(self,
+ self.roundPoint(pt1),
+ self.roundPoint(pt2),
+ self.roundPoint(pt3))
def _relativeCurveToOne(self, pt1, pt2, pt3):
self._storeHeldMove()
- pt1 = roundIntPoint(pt1)
- pt2 = roundIntPoint(pt2)
- pt3 = roundIntPoint(pt3)
+ pt1 = self.roundPoint(pt1)
+ pt2 = self.roundPoint(pt2)
+ pt3 = self.roundPoint(pt3)
x1, y1 = pt1
x2, y2 = pt2
x3, y3 = pt3
@@ -127,5 +146,6 @@
def getCharString(self, private=None, globalSubrs=None):
program = self._program + ["endchar"]
- charString = T2CharString(program=program, private=private, globalSubrs=globalSubrs)
+ charString = T2CharString(
+ program=program, private=private, globalSubrs=globalSubrs)
return charString
| {"golden_diff": "diff --git a/Lib/fontTools/pens/t2CharStringPen.py b/Lib/fontTools/pens/t2CharStringPen.py\n--- a/Lib/fontTools/pens/t2CharStringPen.py\n+++ b/Lib/fontTools/pens/t2CharStringPen.py\n@@ -9,15 +9,6 @@\n from fontTools.pens.basePen import BasePen\n \n \n-def roundInt(v):\n- return int(round(v))\n-\n-\n-def roundIntPoint(point):\n- x, y = point\n- return roundInt(x), roundInt(y)\n-\n-\n class RelativeCoordinatePen(BasePen):\n \n def __init__(self, glyphSet):\n@@ -75,20 +66,45 @@\n raise NotImplementedError\n \n \n+def makeRoundFunc(tolerance):\n+ if tolerance < 0:\n+ raise ValueError(\"Rounding tolerance must be positive\")\n+\n+ def _round(number):\n+ if tolerance == 0:\n+ return number # no-op\n+ rounded = round(number)\n+ # return rounded integer if the tolerance >= 0.5, or if the absolute\n+ # difference between the original float and the rounded integer is\n+ # within the tolerance\n+ if tolerance >= .5 or abs(rounded - number) <= tolerance:\n+ return rounded\n+ else:\n+ # else return the value un-rounded\n+ return number\n+\n+ def roundPoint(point):\n+ x, y = point\n+ return _round(x), _round(y)\n+\n+ return roundPoint\n+\n+\n class T2CharStringPen(RelativeCoordinatePen):\n \n- def __init__(self, width, glyphSet):\n+ def __init__(self, width, glyphSet, roundTolerance=0.5):\n RelativeCoordinatePen.__init__(self, glyphSet)\n+ self.roundPoint = makeRoundFunc(roundTolerance)\n self._heldMove = None\n self._program = []\n if width is not None:\n- self._program.append(roundInt(width))\n+ self._program.append(round(width))\n \n def _moveTo(self, pt):\n- RelativeCoordinatePen._moveTo(self, roundIntPoint(pt))\n+ RelativeCoordinatePen._moveTo(self, self.roundPoint(pt))\n \n def _relativeMoveTo(self, pt):\n- pt = roundIntPoint(pt)\n+ pt = self.roundPoint(pt)\n x, y = pt\n self._heldMove = [x, y, \"rmoveto\"]\n \n@@ -98,22 +114,25 @@\n self._heldMove = None\n \n def _lineTo(self, pt):\n- RelativeCoordinatePen._lineTo(self, roundIntPoint(pt))\n+ RelativeCoordinatePen._lineTo(self, self.roundPoint(pt))\n \n def _relativeLineTo(self, pt):\n self._storeHeldMove()\n- pt = roundIntPoint(pt)\n+ pt = self.roundPoint(pt)\n x, y = pt\n self._program.extend([x, y, \"rlineto\"])\n \n def _curveToOne(self, pt1, pt2, pt3):\n- RelativeCoordinatePen._curveToOne(self, roundIntPoint(pt1), roundIntPoint(pt2), roundIntPoint(pt3))\n+ RelativeCoordinatePen._curveToOne(self,\n+ self.roundPoint(pt1),\n+ self.roundPoint(pt2),\n+ self.roundPoint(pt3))\n \n def _relativeCurveToOne(self, pt1, pt2, pt3):\n self._storeHeldMove()\n- pt1 = roundIntPoint(pt1)\n- pt2 = roundIntPoint(pt2)\n- pt3 = roundIntPoint(pt3)\n+ pt1 = self.roundPoint(pt1)\n+ pt2 = self.roundPoint(pt2)\n+ pt3 = self.roundPoint(pt3)\n x1, y1 = pt1\n x2, y2 = pt2\n x3, y3 = pt3\n@@ -127,5 +146,6 @@\n \n def getCharString(self, private=None, globalSubrs=None):\n program = self._program + [\"endchar\"]\n- charString = T2CharString(program=program, private=private, globalSubrs=globalSubrs)\n+ charString = T2CharString(\n+ program=program, private=private, globalSubrs=globalSubrs)\n return charString\n", "issue": "[T2CharStringPen] Rounding coordinates?\nCurrently, [T2CharStringPen](https://github.com/fonttools/fonttools/blob/fcd697a328b439165b3e9d04904eb73f065b6838/Lib/fontTools/pens/t2CharStringPen.py#L78) always rounds glyph coordinates and casts to int, but it does not seem a good idea. I think we should remove this behavior, or make it optional.\r\n\r\ncf. 
TTGlyphPen (#526)\n", "before_files": [{"content": "# Copyright (c) 2009 Type Supply LLC\n# Author: Tal Leming\n\n\nfrom __future__ import print_function, division, absolute_import\n\nfrom fontTools.misc.py23 import *\nfrom fontTools.misc.psCharStrings import T2CharString\nfrom fontTools.pens.basePen import BasePen\n\n\ndef roundInt(v):\n return int(round(v))\n\n\ndef roundIntPoint(point):\n x, y = point\n return roundInt(x), roundInt(y)\n\n\nclass RelativeCoordinatePen(BasePen):\n\n def __init__(self, glyphSet):\n BasePen.__init__(self, glyphSet)\n self._lastX = None\n self._lastY = None\n self._heldAbsoluteMove = None\n\n def _makePointRelative(self, pt):\n absX, absY = pt\n absX = absX\n absY = absY\n # no points have been added\n # so no conversion is needed\n if self._lastX is None:\n relX, relY = absX, absY\n # otherwise calculate the relative coordinates\n else:\n relX = absX - self._lastX\n relY = absY - self._lastY\n # store the absolute coordinates\n self._lastX = absX\n self._lastY = absY\n # now return the relative coordinates\n return relX, relY\n\n def _moveTo(self, pt):\n self._heldAbsoluteMove = pt\n\n def _releaseHeldMove(self):\n if self._heldAbsoluteMove is not None:\n pt = self._makePointRelative(self._heldAbsoluteMove)\n self._relativeMoveTo(pt)\n self._heldAbsoluteMove = None\n\n def _relativeMoveTo(self, pt):\n raise NotImplementedError\n\n def _lineTo(self, pt):\n self._releaseHeldMove()\n pt = self._makePointRelative(pt)\n self._relativeLineTo(pt)\n\n def _relativeLineTo(self, pt):\n raise NotImplementedError\n\n def _curveToOne(self, pt1, pt2, pt3):\n self._releaseHeldMove()\n pt1 = self._makePointRelative(pt1)\n pt2 = self._makePointRelative(pt2)\n pt3 = self._makePointRelative(pt3)\n self._relativeCurveToOne(pt1, pt2, pt3)\n\n def _relativeCurveToOne(self, pt1, pt2, pt3):\n raise NotImplementedError\n\n\nclass T2CharStringPen(RelativeCoordinatePen):\n\n def __init__(self, width, glyphSet):\n RelativeCoordinatePen.__init__(self, glyphSet)\n self._heldMove = None\n self._program = []\n if width is not None:\n self._program.append(roundInt(width))\n\n def _moveTo(self, pt):\n RelativeCoordinatePen._moveTo(self, roundIntPoint(pt))\n\n def _relativeMoveTo(self, pt):\n pt = roundIntPoint(pt)\n x, y = pt\n self._heldMove = [x, y, \"rmoveto\"]\n\n def _storeHeldMove(self):\n if self._heldMove is not None:\n self._program.extend(self._heldMove)\n self._heldMove = None\n\n def _lineTo(self, pt):\n RelativeCoordinatePen._lineTo(self, roundIntPoint(pt))\n\n def _relativeLineTo(self, pt):\n self._storeHeldMove()\n pt = roundIntPoint(pt)\n x, y = pt\n self._program.extend([x, y, \"rlineto\"])\n\n def _curveToOne(self, pt1, pt2, pt3):\n RelativeCoordinatePen._curveToOne(self, roundIntPoint(pt1), roundIntPoint(pt2), roundIntPoint(pt3))\n\n def _relativeCurveToOne(self, pt1, pt2, pt3):\n self._storeHeldMove()\n pt1 = roundIntPoint(pt1)\n pt2 = roundIntPoint(pt2)\n pt3 = roundIntPoint(pt3)\n x1, y1 = pt1\n x2, y2 = pt2\n x3, y3 = pt3\n self._program.extend([x1, y1, x2, y2, x3, y3, \"rrcurveto\"])\n\n def _closePath(self):\n pass\n\n def _endPath(self):\n pass\n\n def getCharString(self, private=None, globalSubrs=None):\n program = self._program + [\"endchar\"]\n charString = T2CharString(program=program, private=private, globalSubrs=globalSubrs)\n return charString\n", "path": "Lib/fontTools/pens/t2CharStringPen.py"}]} | 1,990 | 965 |
gh_patches_debug_3882 | rasdani/github-patches | git_diff | explosion__spaCy-3032 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
parameter averaging
You recommend (https://spacy.io/usage/training#section-tips):
> Then when we want to actually use the model, we use the averages, not the most recent value.
> with nlp.use_params(optimizer.averages):
> nlp.to_disk('/model')
But in the classification example https://spacy.io/usage/training#section-textcat the model is saved only outside the nlp.use_params(optimizer.averages) scope. Is that correct?
</issue>
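
For reference, the pattern the quoted training tips describe looks like the sketch below (spaCy v2.x API, matching the example script that follows; the training loop and output path are elided/illustrative). The question raised is whether the `nlp.to_disk()` call in the text-classification example should likewise sit inside the `use_params` block.

```python
import spacy

nlp = spacy.blank("en")            # spaCy v2.x, as in the example below
optimizer = nlp.begin_training()
# ... nlp.update(...) calls happen here during training ...

# Save the averaged weights rather than the most recent values.
with nlp.use_params(optimizer.averages):
    nlp.to_disk("/model")          # output path is illustrative
```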
<code>
[start of examples/training/train_textcat.py]
1 #!/usr/bin/env python
2 # coding: utf8
3 """Train a convolutional neural network text classifier on the
4 IMDB dataset, using the TextCategorizer component. The dataset will be loaded
5 automatically via Thinc's built-in dataset loader. The model is added to
6 spacy.pipeline, and predictions are available via `doc.cats`. For more details,
7 see the documentation:
8 * Training: https://spacy.io/usage/training
9
10 Compatible with: spaCy v2.0.0+
11 """
12 from __future__ import unicode_literals, print_function
13 import plac
14 import random
15 from pathlib import Path
16 import thinc.extra.datasets
17
18 import spacy
19 from spacy.util import minibatch, compounding
20
21
22 @plac.annotations(
23 model=("Model name. Defaults to blank 'en' model.", "option", "m", str),
24 output_dir=("Optional output directory", "option", "o", Path),
25 n_texts=("Number of texts to train from", "option", "t", int),
26 n_iter=("Number of training iterations", "option", "n", int))
27 def main(model=None, output_dir=None, n_iter=20, n_texts=2000):
28 if model is not None:
29 nlp = spacy.load(model) # load existing spaCy model
30 print("Loaded model '%s'" % model)
31 else:
32 nlp = spacy.blank('en') # create blank Language class
33 print("Created blank 'en' model")
34
35 # add the text classifier to the pipeline if it doesn't exist
36 # nlp.create_pipe works for built-ins that are registered with spaCy
37 if 'textcat' not in nlp.pipe_names:
38 textcat = nlp.create_pipe('textcat')
39 nlp.add_pipe(textcat, last=True)
40 # otherwise, get it, so we can add labels to it
41 else:
42 textcat = nlp.get_pipe('textcat')
43
44 # add label to text classifier
45 textcat.add_label('POSITIVE')
46
47 # load the IMDB dataset
48 print("Loading IMDB data...")
49 (train_texts, train_cats), (dev_texts, dev_cats) = load_data(limit=n_texts)
50 print("Using {} examples ({} training, {} evaluation)"
51 .format(n_texts, len(train_texts), len(dev_texts)))
52 train_data = list(zip(train_texts,
53 [{'cats': cats} for cats in train_cats]))
54
55 # get names of other pipes to disable them during training
56 other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'textcat']
57 with nlp.disable_pipes(*other_pipes): # only train textcat
58 optimizer = nlp.begin_training()
59 print("Training the model...")
60 print('{:^5}\t{:^5}\t{:^5}\t{:^5}'.format('LOSS', 'P', 'R', 'F'))
61 for i in range(n_iter):
62 losses = {}
63 # batch up the examples using spaCy's minibatch
64 batches = minibatch(train_data, size=compounding(4., 32., 1.001))
65 for batch in batches:
66 texts, annotations = zip(*batch)
67 nlp.update(texts, annotations, sgd=optimizer, drop=0.2,
68 losses=losses)
69 with textcat.model.use_params(optimizer.averages):
70 # evaluate on the dev data split off in load_data()
71 scores = evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats)
72 print('{0:.3f}\t{1:.3f}\t{2:.3f}\t{3:.3f}' # print a simple table
73 .format(losses['textcat'], scores['textcat_p'],
74 scores['textcat_r'], scores['textcat_f']))
75
76 # test the trained model
77 test_text = "This movie sucked"
78 doc = nlp(test_text)
79 print(test_text, doc.cats)
80
81 if output_dir is not None:
82 output_dir = Path(output_dir)
83 if not output_dir.exists():
84 output_dir.mkdir()
85 nlp.to_disk(output_dir)
86 print("Saved model to", output_dir)
87
88 # test the saved model
89 print("Loading from", output_dir)
90 nlp2 = spacy.load(output_dir)
91 doc2 = nlp2(test_text)
92 print(test_text, doc2.cats)
93
94
95 def load_data(limit=0, split=0.8):
96 """Load data from the IMDB dataset."""
97 # Partition off part of the train data for evaluation
98 train_data, _ = thinc.extra.datasets.imdb()
99 random.shuffle(train_data)
100 train_data = train_data[-limit:]
101 texts, labels = zip(*train_data)
102 cats = [{'POSITIVE': bool(y)} for y in labels]
103 split = int(len(train_data) * split)
104 return (texts[:split], cats[:split]), (texts[split:], cats[split:])
105
106
107 def evaluate(tokenizer, textcat, texts, cats):
108 docs = (tokenizer(text) for text in texts)
109 tp = 0.0 # True positives
110 fp = 1e-8 # False positives
111 fn = 1e-8 # False negatives
112 tn = 0.0 # True negatives
113 for i, doc in enumerate(textcat.pipe(docs)):
114 gold = cats[i]
115 for label, score in doc.cats.items():
116 if label not in gold:
117 continue
118 if score >= 0.5 and gold[label] >= 0.5:
119 tp += 1.
120 elif score >= 0.5 and gold[label] < 0.5:
121 fp += 1.
122 elif score < 0.5 and gold[label] < 0.5:
123 tn += 1
124 elif score < 0.5 and gold[label] >= 0.5:
125 fn += 1
126 precision = tp / (tp + fp)
127 recall = tp / (tp + fn)
128 f_score = 2 * (precision * recall) / (precision + recall)
129 return {'textcat_p': precision, 'textcat_r': recall, 'textcat_f': f_score}
130
131
132 if __name__ == '__main__':
133 plac.call(main)
134
[end of examples/training/train_textcat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/training/train_textcat.py b/examples/training/train_textcat.py
--- a/examples/training/train_textcat.py
+++ b/examples/training/train_textcat.py
@@ -82,7 +82,8 @@
output_dir = Path(output_dir)
if not output_dir.exists():
output_dir.mkdir()
- nlp.to_disk(output_dir)
+ with nlp.use_params(optimizer.averages):
+ nlp.to_disk(output_dir)
print("Saved model to", output_dir)
# test the saved model
| {"golden_diff": "diff --git a/examples/training/train_textcat.py b/examples/training/train_textcat.py\n--- a/examples/training/train_textcat.py\n+++ b/examples/training/train_textcat.py\n@@ -82,7 +82,8 @@\n output_dir = Path(output_dir)\n if not output_dir.exists():\n output_dir.mkdir()\n- nlp.to_disk(output_dir)\n+ with nlp.use_params(optimizer.averages):\n+ nlp.to_disk(output_dir)\n print(\"Saved model to\", output_dir)\n \n # test the saved model\n", "issue": "parameter averaging\nYou recommend (https://spacy.io/usage/training#section-tips):\r\n\r\n> Then when we want to actually use the model, we use the averages, not the most recent value. \r\n> with nlp.use_params(optimizer.averages):\r\n> nlp.to_disk('/model')\r\n\r\nBut in the classification example https://spacy.io/usage/training#section-textcat model is saved only not in nlp.use_params(optimizer.averages) scope. Is it correct?\nparameter averaging\nYou recommend (https://spacy.io/usage/training#section-tips):\r\n\r\n> Then when we want to actually use the model, we use the averages, not the most recent value. \r\n> with nlp.use_params(optimizer.averages):\r\n> nlp.to_disk('/model')\r\n\r\nBut in the classification example https://spacy.io/usage/training#section-textcat model is saved only not in nlp.use_params(optimizer.averages) scope. Is it correct?\n", "before_files": [{"content": "#!/usr/bin/env python\n# coding: utf8\n\"\"\"Train a convolutional neural network text classifier on the\nIMDB dataset, using the TextCategorizer component. The dataset will be loaded\nautomatically via Thinc's built-in dataset loader. The model is added to\nspacy.pipeline, and predictions are available via `doc.cats`. For more details,\nsee the documentation:\n* Training: https://spacy.io/usage/training\n\nCompatible with: spaCy v2.0.0+\n\"\"\"\nfrom __future__ import unicode_literals, print_function\nimport plac\nimport random\nfrom pathlib import Path\nimport thinc.extra.datasets\n\nimport spacy\nfrom spacy.util import minibatch, compounding\n\n\[email protected](\n model=(\"Model name. 
Defaults to blank 'en' model.\", \"option\", \"m\", str),\n output_dir=(\"Optional output directory\", \"option\", \"o\", Path),\n n_texts=(\"Number of texts to train from\", \"option\", \"t\", int),\n n_iter=(\"Number of training iterations\", \"option\", \"n\", int))\ndef main(model=None, output_dir=None, n_iter=20, n_texts=2000):\n if model is not None:\n nlp = spacy.load(model) # load existing spaCy model\n print(\"Loaded model '%s'\" % model)\n else:\n nlp = spacy.blank('en') # create blank Language class\n print(\"Created blank 'en' model\")\n\n # add the text classifier to the pipeline if it doesn't exist\n # nlp.create_pipe works for built-ins that are registered with spaCy\n if 'textcat' not in nlp.pipe_names:\n textcat = nlp.create_pipe('textcat')\n nlp.add_pipe(textcat, last=True)\n # otherwise, get it, so we can add labels to it\n else:\n textcat = nlp.get_pipe('textcat')\n\n # add label to text classifier\n textcat.add_label('POSITIVE')\n\n # load the IMDB dataset\n print(\"Loading IMDB data...\")\n (train_texts, train_cats), (dev_texts, dev_cats) = load_data(limit=n_texts)\n print(\"Using {} examples ({} training, {} evaluation)\"\n .format(n_texts, len(train_texts), len(dev_texts)))\n train_data = list(zip(train_texts,\n [{'cats': cats} for cats in train_cats]))\n\n # get names of other pipes to disable them during training\n other_pipes = [pipe for pipe in nlp.pipe_names if pipe != 'textcat']\n with nlp.disable_pipes(*other_pipes): # only train textcat\n optimizer = nlp.begin_training()\n print(\"Training the model...\")\n print('{:^5}\\t{:^5}\\t{:^5}\\t{:^5}'.format('LOSS', 'P', 'R', 'F'))\n for i in range(n_iter):\n losses = {}\n # batch up the examples using spaCy's minibatch\n batches = minibatch(train_data, size=compounding(4., 32., 1.001))\n for batch in batches:\n texts, annotations = zip(*batch)\n nlp.update(texts, annotations, sgd=optimizer, drop=0.2,\n losses=losses)\n with textcat.model.use_params(optimizer.averages):\n # evaluate on the dev data split off in load_data()\n scores = evaluate(nlp.tokenizer, textcat, dev_texts, dev_cats)\n print('{0:.3f}\\t{1:.3f}\\t{2:.3f}\\t{3:.3f}' # print a simple table\n .format(losses['textcat'], scores['textcat_p'],\n scores['textcat_r'], scores['textcat_f']))\n\n # test the trained model\n test_text = \"This movie sucked\"\n doc = nlp(test_text)\n print(test_text, doc.cats)\n\n if output_dir is not None:\n output_dir = Path(output_dir)\n if not output_dir.exists():\n output_dir.mkdir()\n nlp.to_disk(output_dir)\n print(\"Saved model to\", output_dir)\n\n # test the saved model\n print(\"Loading from\", output_dir)\n nlp2 = spacy.load(output_dir)\n doc2 = nlp2(test_text)\n print(test_text, doc2.cats)\n\n\ndef load_data(limit=0, split=0.8):\n \"\"\"Load data from the IMDB dataset.\"\"\"\n # Partition off part of the train data for evaluation\n train_data, _ = thinc.extra.datasets.imdb()\n random.shuffle(train_data)\n train_data = train_data[-limit:]\n texts, labels = zip(*train_data)\n cats = [{'POSITIVE': bool(y)} for y in labels]\n split = int(len(train_data) * split)\n return (texts[:split], cats[:split]), (texts[split:], cats[split:])\n\n\ndef evaluate(tokenizer, textcat, texts, cats):\n docs = (tokenizer(text) for text in texts)\n tp = 0.0 # True positives\n fp = 1e-8 # False positives\n fn = 1e-8 # False negatives\n tn = 0.0 # True negatives\n for i, doc in enumerate(textcat.pipe(docs)):\n gold = cats[i]\n for label, score in doc.cats.items():\n if label not in gold:\n continue\n if score >= 0.5 and gold[label] >= 0.5:\n 
tp += 1.\n elif score >= 0.5 and gold[label] < 0.5:\n fp += 1.\n elif score < 0.5 and gold[label] < 0.5:\n tn += 1\n elif score < 0.5 and gold[label] >= 0.5:\n fn += 1\n precision = tp / (tp + fp)\n recall = tp / (tp + fn)\n f_score = 2 * (precision * recall) / (precision + recall)\n return {'textcat_p': precision, 'textcat_r': recall, 'textcat_f': f_score}\n\n\nif __name__ == '__main__':\n plac.call(main)\n", "path": "examples/training/train_textcat.py"}]} | 2,402 | 120 |
gh_patches_debug_3948 | rasdani/github-patches | git_diff | conan-io__conan-center-index-1822 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[package] thrift/0.13.0: build fails using --profile but works using --profile:host and --profile:build
### Package and Environment Details (include every applicable attribute)
* Package Name/Version: **thrift/0.13.0**
* Operating System+version: **Linux Ubuntu 18.04**
* Compiler+version: **GCC 7**
* Conan version: **conan 1.25.2**
* Python version: **Python 3.6.9**
### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)
```
Configuration for profile linux-x64:
[settings]
os=Linux
os_build=Linux
arch=x86_64
arch_build=x86_64
compiler=gcc
compiler.version=7
compiler.libcxx=libstdc++11
build_type=Debug
[options]
*:fPIC=True
*:shared=True
[build_requires]
[env]
```
### Steps to reproduce (Include if Applicable)
I have a package that uses _thrift/0.13.0_ both as a build requirement (thrift compiler) and as a normal requirement (thrift libs). When I build and create my package with
```conan create . --profile:host linux-x64 --profile:build linux-x64```
the build works perfectly, the thrift sources are compiled with the thrift compiler. When I use
```conan create . --profile linux-x64```
the build fails when using the thrift compiler. The difference is that in the first case the binary directories of all build-requirement packages are added to the PATH environment variable during the build, while in the second case nothing is added to PATH. The CMake build of my package expects the thrift compiler to be on PATH.
I was not sure if this is really a package bug and not a conan bug. Sorry if this was the wrong place.
</issue>
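To make the reported failure concrete, here is a minimal sketch of the kind of consumer recipe the reporter describes. The recipe name, file names, and the exact `thrift` invocation are hypothetical (Conan v1 API assumed); the point is only that `build()` shells out to the `thrift` compiler, which works only if the compiler's directory is on PATH, either injected by Conan (the two-profile case, as the reporter observes) or exported by the thrift recipe itself.

```python
# Hypothetical consumer recipe illustrating the failure mode (Conan v1 API).
from conans import CMake, ConanFile


class MyServiceConan(ConanFile):
    name = "myservice"
    version = "0.1"
    settings = "os", "arch", "compiler", "build_type"
    requires = "thrift/0.13.0"        # thrift libraries, linked into the app
    build_requires = "thrift/0.13.0"  # thrift compiler, executed at build time

    def build(self):
        # Fails here unless the compiler's bin/ directory is on PATH
        # when this command runs.
        self.run("thrift --gen cpp myservice.thrift")
        cmake = CMake(self)
        cmake.configure()
        cmake.build()
```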
<code>
[start of recipes/thrift/all/conanfile.py]
1 import os
2 from conans import tools, CMake, ConanFile
3
4
5 class ConanFileDefault(ConanFile):
6 name = "thrift"
7 description = "Thrift is an associated code generation mechanism for RPC"
8 topics = ("conan", "thrift", "serialization", "rpc")
9 url = "https://github.com/conan-io/conan-center-index"
10 homepage = "https://github.com/apache/thrift"
11 license = "Apache-2.0"
12 exports_sources = ["CMakeLists.txt", "patches/*.diff"]
13 generators = "cmake", "cmake_find_package"
14
15 settings = "os", "arch", "compiler", "build_type"
16 options = {
17 "shared": [True, False],
18 "fPIC": [True, False],
19 "with_zlib": [True, False],
20 "with_libevent": [True, False],
21 "with_openssl": [True, False],
22 "with_boost_functional": [True, False],
23 "with_boost_smart_ptr": [True, False],
24 "with_boost_static": [True, False],
25 "with_boostthreads": [True, False],
26 "with_stdthreads": [True, False],
27 "with_c_glib": [True, False],
28 "with_cpp": [True, False],
29 "with_java": [True, False],
30 "with_python": [True, False],
31 "with_haskell": [True, False],
32 "with_plugin": [True, False]
33 }
34 default_options = {
35 "shared": False,
36 "fPIC": True,
37 "with_zlib": True,
38 "with_libevent": True,
39 "with_openssl": True,
40 "with_boost_functional": False,
41 "with_boost_smart_ptr": False,
42 "with_boost_static": False,
43 "with_boostthreads": False,
44 "with_stdthreads": True,
45 "with_c_glib": False,
46 "with_cpp": True,
47 "with_java": False,
48 "with_python": False,
49 "with_haskell": False,
50 "with_plugin": False
51 }
52 _source_subfolder = "source_subfolder"
53 _build_subfolder = "build_subfolder"
54
55 _cmake = None
56
57 def config_options(self):
58 if self.settings.os == 'Windows':
59 del self.options.fPIC
60
61 def source(self):
62 tools.get(**self.conan_data["sources"][self.version])
63 extracted_dir = "thrift-" + self.version
64 os.rename(extracted_dir, self._source_subfolder)
65
66 def build(self):
67 for p in self.conan_data["patches"][self.version]:
68 tools.patch(**p)
69 for f in ["Findflex.cmake", "Findbison.cmake"]:
70 if os.path.isfile(f):
71 os.unlink(f)
72 cmake = self._configure_cmake()
73 cmake.build()
74
75 def requirements(self):
76 self.requires("boost/1.73.0")
77 if self.settings.os == 'Windows':
78 self.requires("winflexbison/2.5.22")
79 else:
80 self.requires("flex/2.6.4")
81 self.requires("bison/3.5.3")
82
83 if self.options.with_openssl:
84 self.requires("openssl/1.1.1g")
85 if self.options.with_zlib:
86 self.requires("zlib/1.2.11")
87 if self.options.with_libevent:
88 self.requires("libevent/2.1.11")
89
90 def _configure_cmake(self):
91 if self._cmake:
92 return self._cmake
93 self._cmake = CMake(self)
94 for option, value in self.options.items():
95 self._cmake.definitions[option.upper()] = value
96
97 # Make thrift use correct thread lib (see repo/build/cmake/config.h.in)
98 self._cmake.definitions["USE_STD_THREAD"] = self.options.with_stdthreads
99 self._cmake.definitions["USE_BOOST_THREAD"] = self.options.with_boostthreads
100 self._cmake.definitions["WITH_SHARED_LIB"] = self.options.shared
101 self._cmake.definitions["WITH_STATIC_LIB"] = not self.options.shared
102 self._cmake.definitions["BOOST_ROOT"] = self.deps_cpp_info['boost'].rootpath
103 self._cmake.definitions["BUILD_TESTING"] = False
104 self._cmake.definitions["BUILD_COMPILER"] = True
105 self._cmake.definitions["BUILD_LIBRARIES"] = True
106 self._cmake.definitions["BUILD_EXAMPLES"] = False
107 self._cmake.definitions["BUILD_TUTORIALS"] = False
108
109 # Make optional libs "findable"
110 if self.options.with_openssl:
111 self._cmake.definitions["OPENSSL_ROOT_DIR"] = self.deps_cpp_info['openssl'].rootpath
112 if self.options.with_zlib:
113 self._cmake.definitions["ZLIB_ROOT"] = self.deps_cpp_info['zlib'].rootpath
114 if self.options.with_libevent:
115 self._cmake.definitions["LIBEVENT_ROOT"] = self.deps_cpp_info['libevent'].rootpath
116
117 self._cmake.configure(build_folder=self._build_subfolder)
118 return self._cmake
119
120 def package(self):
121 self.copy(pattern="LICENSE", dst="licenses", src=self._source_subfolder)
122 cmake = self._configure_cmake()
123 cmake.install()
124 # Copy generated headers from build tree
125 build_source_dir = os.path.join(self._build_subfolder, self._source_subfolder)
126 self.copy(pattern="*.h", dst="include", src=build_source_dir, keep_path=True)
127 tools.rmdir(os.path.join(self.package_folder, "lib", "cmake"))
128 tools.rmdir(os.path.join(self.package_folder, "lib", "pkgconfig"))
129
130 def package_info(self):
131 self.cpp_info.libs = tools.collect_libs(self)
132 # Make sure libs are link in correct order. Important thing is that libthrift/thrift is last
133 # (a little naive to sort, but libthrift/thrift should end up last since rest of the libs extend it with an abbrevation: 'thriftnb', 'thriftz')
134 # The library that needs symbols must be first, then the library that resolves the symbols should come after.
135 self.cpp_info.libs.sort(reverse = True)
136
137 if self.settings.os == "Windows":
138 # To avoid error C2589: '(' : illegal token on right side of '::'
139 self.cpp_info.defines.append("NOMINMAX")
140 elif self.settings.os == "Linux":
141 self.cpp_info.system_libs.extend(["m", "pthread"])
142
[end of recipes/thrift/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/thrift/all/conanfile.py b/recipes/thrift/all/conanfile.py
--- a/recipes/thrift/all/conanfile.py
+++ b/recipes/thrift/all/conanfile.py
@@ -139,3 +139,7 @@
self.cpp_info.defines.append("NOMINMAX")
elif self.settings.os == "Linux":
self.cpp_info.system_libs.extend(["m", "pthread"])
+
+ bin_path = os.path.join(self.package_folder, "bin")
+ self.output.info("Appending PATH env var with : {}".format(bin_path))
+ self.env_info.PATH.append(bin_path)
| {"golden_diff": "diff --git a/recipes/thrift/all/conanfile.py b/recipes/thrift/all/conanfile.py\n--- a/recipes/thrift/all/conanfile.py\n+++ b/recipes/thrift/all/conanfile.py\n@@ -139,3 +139,7 @@\n self.cpp_info.defines.append(\"NOMINMAX\")\n elif self.settings.os == \"Linux\":\n self.cpp_info.system_libs.extend([\"m\", \"pthread\"])\n+\n+ bin_path = os.path.join(self.package_folder, \"bin\")\n+ self.output.info(\"Appending PATH env var with : {}\".format(bin_path))\n+ self.env_info.PATH.append(bin_path)\n", "issue": "[package] thrift/0.13.0: build fails using --profile but works using --profile:host and --profile:build\n### Package and Environment Details (include every applicable attribute)\r\n * Package Name/Version: **thrift/0.13.0**\r\n * Operating System+version: **Linux Ubuntu 18.04**\r\n * Compiler+version: **GCC 7**\r\n * Conan version: **conan 1.25.2**\r\n * Python version: **Python 3.6.9**\r\n\r\n\r\n### Conan profile (output of `conan profile show default` or `conan profile show <profile>` if custom profile is in use)\r\n```\r\nConfiguration for profile linux-x64:\r\n[settings]\r\nos=Linux\r\nos_build=Linux\r\narch=x86_64\r\narch_build=x86_64\r\ncompiler=gcc\r\ncompiler.version=7\r\ncompiler.libcxx=libstdc++11\r\nbuild_type=Debug\r\n[options]\r\n*:fPIC=True\r\n*:shared=True\r\n[build_requires]\r\n[env]\r\n```\r\n\r\n\r\n### Steps to reproduce (Include if Applicable)\r\nI have a package that uses _thrift/0.13.0_ as build requirement (thrift compiler) and as normal requirement (thrift libs). When I build and create my package with\r\n\r\n```conan create . --profile:host linux-x64 --profile:build linux-x64```\r\n\r\nthe build works perfectly, the thrift sources are compiled with the thrift compiler. When I use\r\n\r\n```conan create . --profile linux-x64```\r\n\r\nthe build fails when using the thrift compiler. The difference is that in the first case, the binary directories of all build requirement packages are added to PATH environment during build and in the second case, nothing is added to PATH. The cmake build of my package expects the thrift compiler being in PATH.\r\n\r\nI was not sure if this is really a package bug and not a conan bug. 
Sorry if this was the wrong place.\r\n\r\n\n", "before_files": [{"content": "import os\nfrom conans import tools, CMake, ConanFile\n\n\nclass ConanFileDefault(ConanFile):\n name = \"thrift\"\n description = \"Thrift is an associated code generation mechanism for RPC\"\n topics = (\"conan\", \"thrift\", \"serialization\", \"rpc\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/apache/thrift\"\n license = \"Apache-2.0\"\n exports_sources = [\"CMakeLists.txt\", \"patches/*.diff\"]\n generators = \"cmake\", \"cmake_find_package\"\n\n settings = \"os\", \"arch\", \"compiler\", \"build_type\"\n options = {\n \"shared\": [True, False],\n \"fPIC\": [True, False],\n \"with_zlib\": [True, False],\n \"with_libevent\": [True, False],\n \"with_openssl\": [True, False],\n \"with_boost_functional\": [True, False],\n \"with_boost_smart_ptr\": [True, False],\n \"with_boost_static\": [True, False],\n \"with_boostthreads\": [True, False],\n \"with_stdthreads\": [True, False],\n \"with_c_glib\": [True, False],\n \"with_cpp\": [True, False],\n \"with_java\": [True, False],\n \"with_python\": [True, False],\n \"with_haskell\": [True, False],\n \"with_plugin\": [True, False]\n }\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"with_zlib\": True,\n \"with_libevent\": True,\n \"with_openssl\": True,\n \"with_boost_functional\": False,\n \"with_boost_smart_ptr\": False,\n \"with_boost_static\": False,\n \"with_boostthreads\": False,\n \"with_stdthreads\": True,\n \"with_c_glib\": False,\n \"with_cpp\": True,\n \"with_java\": False,\n \"with_python\": False,\n \"with_haskell\": False,\n \"with_plugin\": False\n }\n _source_subfolder = \"source_subfolder\"\n _build_subfolder = \"build_subfolder\"\n\n _cmake = None\n\n def config_options(self):\n if self.settings.os == 'Windows':\n del self.options.fPIC\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = \"thrift-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def build(self):\n for p in self.conan_data[\"patches\"][self.version]:\n tools.patch(**p)\n for f in [\"Findflex.cmake\", \"Findbison.cmake\"]:\n if os.path.isfile(f):\n os.unlink(f)\n cmake = self._configure_cmake()\n cmake.build()\n\n def requirements(self):\n self.requires(\"boost/1.73.0\")\n if self.settings.os == 'Windows':\n self.requires(\"winflexbison/2.5.22\")\n else:\n self.requires(\"flex/2.6.4\")\n self.requires(\"bison/3.5.3\")\n\n if self.options.with_openssl:\n self.requires(\"openssl/1.1.1g\")\n if self.options.with_zlib:\n self.requires(\"zlib/1.2.11\")\n if self.options.with_libevent:\n self.requires(\"libevent/2.1.11\")\n\n def _configure_cmake(self):\n if self._cmake:\n return self._cmake\n self._cmake = CMake(self)\n for option, value in self.options.items():\n self._cmake.definitions[option.upper()] = value\n\n # Make thrift use correct thread lib (see repo/build/cmake/config.h.in)\n self._cmake.definitions[\"USE_STD_THREAD\"] = self.options.with_stdthreads\n self._cmake.definitions[\"USE_BOOST_THREAD\"] = self.options.with_boostthreads\n self._cmake.definitions[\"WITH_SHARED_LIB\"] = self.options.shared\n self._cmake.definitions[\"WITH_STATIC_LIB\"] = not self.options.shared\n self._cmake.definitions[\"BOOST_ROOT\"] = self.deps_cpp_info['boost'].rootpath\n self._cmake.definitions[\"BUILD_TESTING\"] = False\n self._cmake.definitions[\"BUILD_COMPILER\"] = True\n self._cmake.definitions[\"BUILD_LIBRARIES\"] = True\n self._cmake.definitions[\"BUILD_EXAMPLES\"] = 
False\n self._cmake.definitions[\"BUILD_TUTORIALS\"] = False\n\n # Make optional libs \"findable\"\n if self.options.with_openssl:\n self._cmake.definitions[\"OPENSSL_ROOT_DIR\"] = self.deps_cpp_info['openssl'].rootpath\n if self.options.with_zlib:\n self._cmake.definitions[\"ZLIB_ROOT\"] = self.deps_cpp_info['zlib'].rootpath\n if self.options.with_libevent:\n self._cmake.definitions[\"LIBEVENT_ROOT\"] = self.deps_cpp_info['libevent'].rootpath\n\n self._cmake.configure(build_folder=self._build_subfolder)\n return self._cmake\n\n def package(self):\n self.copy(pattern=\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n cmake = self._configure_cmake()\n cmake.install()\n # Copy generated headers from build tree\n build_source_dir = os.path.join(self._build_subfolder, self._source_subfolder)\n self.copy(pattern=\"*.h\", dst=\"include\", src=build_source_dir, keep_path=True)\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"cmake\"))\n tools.rmdir(os.path.join(self.package_folder, \"lib\", \"pkgconfig\"))\n\n def package_info(self):\n self.cpp_info.libs = tools.collect_libs(self)\n # Make sure libs are link in correct order. Important thing is that libthrift/thrift is last\n # (a little naive to sort, but libthrift/thrift should end up last since rest of the libs extend it with an abbrevation: 'thriftnb', 'thriftz')\n # The library that needs symbols must be first, then the library that resolves the symbols should come after.\n self.cpp_info.libs.sort(reverse = True)\n\n if self.settings.os == \"Windows\":\n # To avoid error C2589: '(' : illegal token on right side of '::'\n self.cpp_info.defines.append(\"NOMINMAX\")\n elif self.settings.os == \"Linux\":\n self.cpp_info.system_libs.extend([\"m\", \"pthread\"])\n", "path": "recipes/thrift/all/conanfile.py"}]} | 2,739 | 142 |
gh_patches_debug_17604 | rasdani/github-patches | git_diff | DataDog__dd-trace-py-4250 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot call sqlite3.backup(db) on a TracedSQLite object
Thanks for taking the time for reporting an issue!
Before reporting an issue on dd-trace-py, please be sure to provide all
necessary information.
If you're hitting a bug, make sure that you're using the latest version of this
library.
### Which version of dd-trace-py are you using?
1.5.0
### Which version of pip are you using?
21.1.1
_ddtrace requires pip>=18 to install one of our pre-built wheels_
### Which version of the libraries are you using?
You can copy/paste the output of `pip freeze` here.
```
ddtrace==1.5.0
```
### How can we reproduce your problem?
```
from ddtrace import config, patch_all
import sqlite3
config.env = "test" # the environment the application is in
config.service = "app" # name of your application
config.version = "v1" # version of your application
patch_all()
src = sqlite3.connect("1.db")
dst = sqlite3.connect("2.db")
with dst:
src.backup(dst, pages=1)
dst.close()
src.close()
```
### What is the result that you get?
The following TypeError
```
TypeError: backup() argument 1 must be sqlite3.Connection, not TracedSQLite
```
### What is the result that you expected?
The function should succeed without error.
</issue>
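Before the library code below, a note on the failure and a possible user-side workaround: CPython implements `Connection.backup()` in C and type-checks the target argument, so the traced proxy object is rejected even though it forwards attribute access to a real connection. Assuming the traced connection is a wrapt-style proxy that exposes the underlying `sqlite3.Connection` as `__wrapped__` (an assumption, not something stated in the report), a workaround sketch looks like this:

```python
# Application-level workaround sketch (not the library fix).
# Requires Python >= 3.7, where sqlite3.Connection.backup() exists.
def backup_traced(src, dst, **kwargs):
    # Unwrap the traced proxy so backup() receives a real sqlite3.Connection.
    real_dst = getattr(dst, "__wrapped__", dst)
    return src.backup(real_dst, **kwargs)
```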
<code>
[start of ddtrace/contrib/sqlite3/patch.py]
1 import os
2 import sqlite3
3 import sqlite3.dbapi2
4
5 from ddtrace import config
6 from ddtrace.vendor import wrapt
7
8 from ...contrib.dbapi import FetchTracedCursor
9 from ...contrib.dbapi import TracedConnection
10 from ...contrib.dbapi import TracedCursor
11 from ...internal.utils.formats import asbool
12 from ...pin import Pin
13
14
15 # Original connect method
16 _connect = sqlite3.connect
17
18 config._add(
19 "sqlite",
20 dict(
21 _default_service="sqlite",
22 _dbapi_span_name_prefix="sqlite",
23 trace_fetch_methods=asbool(os.getenv("DD_SQLITE_TRACE_FETCH_METHODS", default=False)),
24 ),
25 )
26
27
28 def patch():
29 wrapped = wrapt.FunctionWrapper(_connect, traced_connect)
30
31 setattr(sqlite3, "connect", wrapped)
32 setattr(sqlite3.dbapi2, "connect", wrapped)
33
34
35 def unpatch():
36 sqlite3.connect = _connect
37 sqlite3.dbapi2.connect = _connect
38
39
40 def traced_connect(func, _, args, kwargs):
41 conn = func(*args, **kwargs)
42 return patch_conn(conn)
43
44
45 def patch_conn(conn):
46 wrapped = TracedSQLite(conn)
47 Pin().onto(wrapped)
48 return wrapped
49
50
51 class TracedSQLiteCursor(TracedCursor):
52 def executemany(self, *args, **kwargs):
53 # DEV: SQLite3 Cursor.execute always returns back the cursor instance
54 super(TracedSQLiteCursor, self).executemany(*args, **kwargs)
55 return self
56
57 def execute(self, *args, **kwargs):
58 # DEV: SQLite3 Cursor.execute always returns back the cursor instance
59 super(TracedSQLiteCursor, self).execute(*args, **kwargs)
60 return self
61
62
63 class TracedSQLiteFetchCursor(TracedSQLiteCursor, FetchTracedCursor):
64 pass
65
66
67 class TracedSQLite(TracedConnection):
68 def __init__(self, conn, pin=None, cursor_cls=None):
69 if not cursor_cls:
70 # Do not trace `fetch*` methods by default
71 cursor_cls = TracedSQLiteFetchCursor if config.sqlite.trace_fetch_methods else TracedSQLiteCursor
72
73 super(TracedSQLite, self).__init__(conn, pin=pin, cfg=config.sqlite, cursor_cls=cursor_cls)
74
75 def execute(self, *args, **kwargs):
76 # sqlite has a few extra sugar functions
77 return self.cursor().execute(*args, **kwargs)
78
[end of ddtrace/contrib/sqlite3/patch.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ddtrace/contrib/sqlite3/patch.py b/ddtrace/contrib/sqlite3/patch.py
--- a/ddtrace/contrib/sqlite3/patch.py
+++ b/ddtrace/contrib/sqlite3/patch.py
@@ -1,6 +1,7 @@
import os
import sqlite3
import sqlite3.dbapi2
+import sys
from ddtrace import config
from ddtrace.vendor import wrapt
@@ -75,3 +76,13 @@
def execute(self, *args, **kwargs):
# sqlite has a few extra sugar functions
return self.cursor().execute(*args, **kwargs)
+
+ # backup was added in Python 3.7
+ if sys.version_info >= (3, 7, 0):
+
+ def backup(self, target, *args, **kwargs):
+ # sqlite3 checks the type of `target`, it cannot be a wrapped connection
+ # https://github.com/python/cpython/blob/4652093e1b816b78e9a585d671a807ce66427417/Modules/_sqlite/connection.c#L1897-L1899
+ if isinstance(target, TracedConnection):
+ target = target.__wrapped__
+ return self.__wrapped__.backup(target, *args, **kwargs)
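As a quick usage check of the patch above, the reproduction script from the report should now run unchanged on Python 3.7+ (the version guard in the patch), since `TracedSQLite.backup()` unwraps the traced target before delegating to the real connection:

```python
import sqlite3

from ddtrace import patch_all

patch_all()

src = sqlite3.connect("1.db")
dst = sqlite3.connect("2.db")
with dst:
    src.backup(dst, pages=1)  # dst is unwrapped to a plain sqlite3.Connection
dst.close()
src.close()
```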
| {"golden_diff": "diff --git a/ddtrace/contrib/sqlite3/patch.py b/ddtrace/contrib/sqlite3/patch.py\n--- a/ddtrace/contrib/sqlite3/patch.py\n+++ b/ddtrace/contrib/sqlite3/patch.py\n@@ -1,6 +1,7 @@\n import os\n import sqlite3\n import sqlite3.dbapi2\n+import sys\n \n from ddtrace import config\n from ddtrace.vendor import wrapt\n@@ -75,3 +76,13 @@\n def execute(self, *args, **kwargs):\n # sqlite has a few extra sugar functions\n return self.cursor().execute(*args, **kwargs)\n+\n+ # backup was added in Python 3.7\n+ if sys.version_info >= (3, 7, 0):\n+\n+ def backup(self, target, *args, **kwargs):\n+ # sqlite3 checks the type of `target`, it cannot be a wrapped connection\n+ # https://github.com/python/cpython/blob/4652093e1b816b78e9a585d671a807ce66427417/Modules/_sqlite/connection.c#L1897-L1899\n+ if isinstance(target, TracedConnection):\n+ target = target.__wrapped__\n+ return self.__wrapped__.backup(target, *args, **kwargs)\n", "issue": "Cannot call sqlite3.backup(db) on a TracedSQLite object\nThanks for taking the time for reporting an issue!\r\n\r\nBefore reporting an issue on dd-trace-py, please be sure to provide all\r\nnecessary information.\r\n\r\nIf you're hitting a bug, make sure that you're using the latest version of this\r\nlibrary.\r\n\r\n### Which version of dd-trace-py are you using?\r\n1.5.0\r\n### Which version of pip are you using?\r\n21.1.1\r\n_ddtrace requires pip>=18 to install one of our pre-built wheels_\r\n\r\n### Which version of the libraries are you using?\r\n\r\nYou can copy/paste the output of `pip freeze` here.\r\n\r\n```\r\nddtrace==1.5.0\r\n```\r\n\r\n### How can we reproduce your problem?\r\n\r\n```\r\nfrom ddtrace import config, patch_all\r\nimport sqlite3\r\n\r\nconfig.env = \"test\" # the environment the application is in\r\nconfig.service = \"app\" # name of your application\r\nconfig.version = \"v1\" # version of your application\r\npatch_all()\r\n\r\nsrc = sqlite3.connect(\"1.db\")\r\ndst = sqlite3.connect(\"2.db\")\r\nwith dst:\r\n src.backup(dst, pages=1)\r\ndst.close()\r\nsrc.close()\r\n```\r\n\r\n### What is the result that you get?\r\n\r\nThe following TypeError\r\n```\r\nTypeError: backup() argument 1 must be sqlite3.Connection, not TracedSQLite\r\n```\r\n\r\n### What is the result that you expected?\r\n\r\nThe function should succeed without error.\r\n\n", "before_files": [{"content": "import os\nimport sqlite3\nimport sqlite3.dbapi2\n\nfrom ddtrace import config\nfrom ddtrace.vendor import wrapt\n\nfrom ...contrib.dbapi import FetchTracedCursor\nfrom ...contrib.dbapi import TracedConnection\nfrom ...contrib.dbapi import TracedCursor\nfrom ...internal.utils.formats import asbool\nfrom ...pin import Pin\n\n\n# Original connect method\n_connect = sqlite3.connect\n\nconfig._add(\n \"sqlite\",\n dict(\n _default_service=\"sqlite\",\n _dbapi_span_name_prefix=\"sqlite\",\n trace_fetch_methods=asbool(os.getenv(\"DD_SQLITE_TRACE_FETCH_METHODS\", default=False)),\n ),\n)\n\n\ndef patch():\n wrapped = wrapt.FunctionWrapper(_connect, traced_connect)\n\n setattr(sqlite3, \"connect\", wrapped)\n setattr(sqlite3.dbapi2, \"connect\", wrapped)\n\n\ndef unpatch():\n sqlite3.connect = _connect\n sqlite3.dbapi2.connect = _connect\n\n\ndef traced_connect(func, _, args, kwargs):\n conn = func(*args, **kwargs)\n return patch_conn(conn)\n\n\ndef patch_conn(conn):\n wrapped = TracedSQLite(conn)\n Pin().onto(wrapped)\n return wrapped\n\n\nclass TracedSQLiteCursor(TracedCursor):\n def executemany(self, *args, **kwargs):\n # DEV: SQLite3 Cursor.execute always returns 
back the cursor instance\n super(TracedSQLiteCursor, self).executemany(*args, **kwargs)\n return self\n\n def execute(self, *args, **kwargs):\n # DEV: SQLite3 Cursor.execute always returns back the cursor instance\n super(TracedSQLiteCursor, self).execute(*args, **kwargs)\n return self\n\n\nclass TracedSQLiteFetchCursor(TracedSQLiteCursor, FetchTracedCursor):\n pass\n\n\nclass TracedSQLite(TracedConnection):\n def __init__(self, conn, pin=None, cursor_cls=None):\n if not cursor_cls:\n # Do not trace `fetch*` methods by default\n cursor_cls = TracedSQLiteFetchCursor if config.sqlite.trace_fetch_methods else TracedSQLiteCursor\n\n super(TracedSQLite, self).__init__(conn, pin=pin, cfg=config.sqlite, cursor_cls=cursor_cls)\n\n def execute(self, *args, **kwargs):\n # sqlite has a few extra sugar functions\n return self.cursor().execute(*args, **kwargs)\n", "path": "ddtrace/contrib/sqlite3/patch.py"}]} | 1,540 | 307 |
gh_patches_debug_25631 | rasdani/github-patches | git_diff | pytorch__vision-1149 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A small bug in ImageNet Dataset Implementation
Currently, all values (i.e., indices) in the `class_to_idx` and `wnid_to_idx` of `ImageNet` are 0.
The current code is as follows; `idcs` is wrong because it collects the index of every image instead of one index per class.
```python
idcs = [idx for _, idx in self.imgs]
self.wnids = self.classes
self.wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, self.wnids)}
self.classes = [wnid_to_classes[wnid] for wnid in self.wnids]
self.class_to_idx = {cls: idx
for clss, idx in zip(self.classes, idcs)
for cls in clss}
```
A simple fix will be,
```python
self.wnids = self.classes
self.wnid_to_idx = self.class_to_idx # ImageFolder handles it well
self.classes, self.class_to_idx = [], {}
for wnid, idx in self.wnid_to_idx.items():
classes = wnid_to_classes[wnid]
self.classes.append(classes)
for c in classes:
self.class_to_idx[c] = idx
```
</issue>
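The mechanism behind the all-zero indices is easy to demonstrate in isolation. In the toy snippet below (made-up paths, not the real dataset), `idcs` holds one label per image; zipping it with the class list therefore pairs every class name with the labels of the first few images, which for a class-sorted `ImageFolder` are all 0:

```python
# Toy reproduction of the zip() bug described above.
imgs = [("n01440764/a.jpg", 0), ("n01440764/b.jpg", 0), ("n01443537/c.jpg", 1)]
wnids = ["n01440764", "n01443537"]

idcs = [idx for _, idx in imgs]  # [0, 0, 1], one entry per image
wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, wnids)}
print(wnid_to_idx)  # {'n01440764': 0, 'n01443537': 0}
```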
<code>
[start of torchvision/datasets/imagenet.py]
1 from __future__ import print_function
2 import os
3 import shutil
4 import tempfile
5 import torch
6 from .folder import ImageFolder
7 from .utils import check_integrity, download_and_extract_archive, extract_archive
8
9 ARCHIVE_DICT = {
10 'train': {
11 'url': 'http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_train.tar',
12 'md5': '1d675b47d978889d74fa0da5fadfb00e',
13 },
14 'val': {
15 'url': 'http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_val.tar',
16 'md5': '29b22e2961454d5413ddabcf34fc5622',
17 },
18 'devkit': {
19 'url': 'http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_devkit_t12.tar.gz',
20 'md5': 'fa75699e90414af021442c21a62c3abf',
21 }
22 }
23
24
25 class ImageNet(ImageFolder):
26 """`ImageNet <http://image-net.org/>`_ 2012 Classification Dataset.
27
28 Args:
29 root (string): Root directory of the ImageNet Dataset.
30 split (string, optional): The dataset split, supports ``train``, or ``val``.
31 download (bool, optional): If true, downloads the dataset from the internet and
32 puts it in root directory. If dataset is already downloaded, it is not
33 downloaded again.
34 transform (callable, optional): A function/transform that takes in an PIL image
35 and returns a transformed version. E.g, ``transforms.RandomCrop``
36 target_transform (callable, optional): A function/transform that takes in the
37 target and transforms it.
38 loader (callable, optional): A function to load an image given its path.
39
40 Attributes:
41 classes (list): List of the class names.
42 class_to_idx (dict): Dict with items (class_name, class_index).
43 wnids (list): List of the WordNet IDs.
44 wnid_to_idx (dict): Dict with items (wordnet_id, class_index).
45 imgs (list): List of (image path, class_index) tuples
46 targets (list): The class_index value for each image in the dataset
47 """
48
49 def __init__(self, root, split='train', download=False, **kwargs):
50 root = self.root = os.path.expanduser(root)
51 self.split = self._verify_split(split)
52
53 if download:
54 self.download()
55 wnid_to_classes = self._load_meta_file()[0]
56
57 super(ImageNet, self).__init__(self.split_folder, **kwargs)
58 self.root = root
59
60 idcs = [idx for _, idx in self.imgs]
61 self.wnids = self.classes
62 self.wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, self.wnids)}
63 self.classes = [wnid_to_classes[wnid] for wnid in self.wnids]
64 self.class_to_idx = {cls: idx
65 for clss, idx in zip(self.classes, idcs)
66 for cls in clss}
67
68 def download(self):
69 if not check_integrity(self.meta_file):
70 tmp_dir = tempfile.mkdtemp()
71
72 archive_dict = ARCHIVE_DICT['devkit']
73 download_and_extract_archive(archive_dict['url'], self.root,
74 extract_root=tmp_dir,
75 md5=archive_dict['md5'])
76 devkit_folder = _splitexts(os.path.basename(archive_dict['url']))[0]
77 meta = parse_devkit(os.path.join(tmp_dir, devkit_folder))
78 self._save_meta_file(*meta)
79
80 shutil.rmtree(tmp_dir)
81
82 if not os.path.isdir(self.split_folder):
83 archive_dict = ARCHIVE_DICT[self.split]
84 download_and_extract_archive(archive_dict['url'], self.root,
85 extract_root=self.split_folder,
86 md5=archive_dict['md5'])
87
88 if self.split == 'train':
89 prepare_train_folder(self.split_folder)
90 elif self.split == 'val':
91 val_wnids = self._load_meta_file()[1]
92 prepare_val_folder(self.split_folder, val_wnids)
93 else:
94 msg = ("You set download=True, but a folder '{}' already exist in "
95 "the root directory. If you want to re-download or re-extract the "
96 "archive, delete the folder.")
97 print(msg.format(self.split))
98
99 @property
100 def meta_file(self):
101 return os.path.join(self.root, 'meta.bin')
102
103 def _load_meta_file(self):
104 if check_integrity(self.meta_file):
105 return torch.load(self.meta_file)
106 else:
107 raise RuntimeError("Meta file not found or corrupted.",
108 "You can use download=True to create it.")
109
110 def _save_meta_file(self, wnid_to_class, val_wnids):
111 torch.save((wnid_to_class, val_wnids), self.meta_file)
112
113 def _verify_split(self, split):
114 if split not in self.valid_splits:
115 msg = "Unknown split {} .".format(split)
116 msg += "Valid splits are {{}}.".format(", ".join(self.valid_splits))
117 raise ValueError(msg)
118 return split
119
120 @property
121 def valid_splits(self):
122 return 'train', 'val'
123
124 @property
125 def split_folder(self):
126 return os.path.join(self.root, self.split)
127
128 def extra_repr(self):
129 return "Split: {split}".format(**self.__dict__)
130
131
132 def parse_devkit(root):
133 idx_to_wnid, wnid_to_classes = parse_meta(root)
134 val_idcs = parse_val_groundtruth(root)
135 val_wnids = [idx_to_wnid[idx] for idx in val_idcs]
136 return wnid_to_classes, val_wnids
137
138
139 def parse_meta(devkit_root, path='data', filename='meta.mat'):
140 import scipy.io as sio
141
142 metafile = os.path.join(devkit_root, path, filename)
143 meta = sio.loadmat(metafile, squeeze_me=True)['synsets']
144 nums_children = list(zip(*meta))[4]
145 meta = [meta[idx] for idx, num_children in enumerate(nums_children)
146 if num_children == 0]
147 idcs, wnids, classes = list(zip(*meta))[:3]
148 classes = [tuple(clss.split(', ')) for clss in classes]
149 idx_to_wnid = {idx: wnid for idx, wnid in zip(idcs, wnids)}
150 wnid_to_classes = {wnid: clss for wnid, clss in zip(wnids, classes)}
151 return idx_to_wnid, wnid_to_classes
152
153
154 def parse_val_groundtruth(devkit_root, path='data',
155 filename='ILSVRC2012_validation_ground_truth.txt'):
156 with open(os.path.join(devkit_root, path, filename), 'r') as txtfh:
157 val_idcs = txtfh.readlines()
158 return [int(val_idx) for val_idx in val_idcs]
159
160
161 def prepare_train_folder(folder):
162 for archive in [os.path.join(folder, archive) for archive in os.listdir(folder)]:
163 extract_archive(archive, os.path.splitext(archive)[0], remove_finished=True)
164
165
166 def prepare_val_folder(folder, wnids):
167 img_files = sorted([os.path.join(folder, file) for file in os.listdir(folder)])
168
169 for wnid in set(wnids):
170 os.mkdir(os.path.join(folder, wnid))
171
172 for wnid, img_file in zip(wnids, img_files):
173 shutil.move(img_file, os.path.join(folder, wnid, os.path.basename(img_file)))
174
175
176 def _splitexts(root):
177 exts = []
178 ext = '.'
179 while ext:
180 root, ext = os.path.splitext(root)
181 exts.append(ext)
182 return root, ''.join(reversed(exts))
183
[end of torchvision/datasets/imagenet.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/torchvision/datasets/imagenet.py b/torchvision/datasets/imagenet.py
--- a/torchvision/datasets/imagenet.py
+++ b/torchvision/datasets/imagenet.py
@@ -38,7 +38,7 @@
loader (callable, optional): A function to load an image given its path.
Attributes:
- classes (list): List of the class names.
+ classes (list): List of the class name tuples.
class_to_idx (dict): Dict with items (class_name, class_index).
wnids (list): List of the WordNet IDs.
wnid_to_idx (dict): Dict with items (wordnet_id, class_index).
@@ -57,12 +57,11 @@
super(ImageNet, self).__init__(self.split_folder, **kwargs)
self.root = root
- idcs = [idx for _, idx in self.imgs]
self.wnids = self.classes
- self.wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, self.wnids)}
+ self.wnid_to_idx = self.class_to_idx
self.classes = [wnid_to_classes[wnid] for wnid in self.wnids]
self.class_to_idx = {cls: idx
- for clss, idx in zip(self.classes, idcs)
+ for idx, clss in enumerate(self.classes)
for cls in clss}
def download(self):
| {"golden_diff": "diff --git a/torchvision/datasets/imagenet.py b/torchvision/datasets/imagenet.py\n--- a/torchvision/datasets/imagenet.py\n+++ b/torchvision/datasets/imagenet.py\n@@ -38,7 +38,7 @@\n loader (callable, optional): A function to load an image given its path.\n \n Attributes:\n- classes (list): List of the class names.\n+ classes (list): List of the class name tuples.\n class_to_idx (dict): Dict with items (class_name, class_index).\n wnids (list): List of the WordNet IDs.\n wnid_to_idx (dict): Dict with items (wordnet_id, class_index).\n@@ -57,12 +57,11 @@\n super(ImageNet, self).__init__(self.split_folder, **kwargs)\n self.root = root\n \n- idcs = [idx for _, idx in self.imgs]\n self.wnids = self.classes\n- self.wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, self.wnids)}\n+ self.wnid_to_idx = self.class_to_idx\n self.classes = [wnid_to_classes[wnid] for wnid in self.wnids]\n self.class_to_idx = {cls: idx\n- for clss, idx in zip(self.classes, idcs)\n+ for idx, clss in enumerate(self.classes)\n for cls in clss}\n \n def download(self):\n", "issue": "A small bug in ImageNet Dataset Implementation\nCurrently, all values (i.e., index) in the `class_to_idx` and `wnid_to_idx` of `ImageNet` are 0.\r\nThe current code is as follows where `idcs` is wrong since it will fetch the indices of images instead of classes.\r\n```python\r\n idcs = [idx for _, idx in self.imgs]\r\n self.wnids = self.classes\r\n self.wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, self.wnids)}\r\n self.classes = [wnid_to_classes[wnid] for wnid in self.wnids]\r\n self.class_to_idx = {cls: idx\r\n for clss, idx in zip(self.classes, idcs)\r\n for cls in clss}\r\n```\r\nA simple fix will be,\r\n```python\r\n self.wnids = self.classes\r\n self.wnid_to_idx = self.class_to_idx # ImageFolder handles it well\r\n self.classes, self.class_to_idx = [], {}\r\n for wnid, idx in self.wnid_to_idx.items():\r\n classes = wnid_to_classes[wnid]\r\n self.classes.append(classes)\r\n for c in classes:\r\n self.class_to_idx[c] = idx\r\n```\n", "before_files": [{"content": "from __future__ import print_function\nimport os\nimport shutil\nimport tempfile\nimport torch\nfrom .folder import ImageFolder\nfrom .utils import check_integrity, download_and_extract_archive, extract_archive\n\nARCHIVE_DICT = {\n 'train': {\n 'url': 'http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_train.tar',\n 'md5': '1d675b47d978889d74fa0da5fadfb00e',\n },\n 'val': {\n 'url': 'http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_img_val.tar',\n 'md5': '29b22e2961454d5413ddabcf34fc5622',\n },\n 'devkit': {\n 'url': 'http://www.image-net.org/challenges/LSVRC/2012/nnoupb/ILSVRC2012_devkit_t12.tar.gz',\n 'md5': 'fa75699e90414af021442c21a62c3abf',\n }\n}\n\n\nclass ImageNet(ImageFolder):\n \"\"\"`ImageNet <http://image-net.org/>`_ 2012 Classification Dataset.\n\n Args:\n root (string): Root directory of the ImageNet Dataset.\n split (string, optional): The dataset split, supports ``train``, or ``val``.\n download (bool, optional): If true, downloads the dataset from the internet and\n puts it in root directory. If dataset is already downloaded, it is not\n downloaded again.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. 
E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n loader (callable, optional): A function to load an image given its path.\n\n Attributes:\n classes (list): List of the class names.\n class_to_idx (dict): Dict with items (class_name, class_index).\n wnids (list): List of the WordNet IDs.\n wnid_to_idx (dict): Dict with items (wordnet_id, class_index).\n imgs (list): List of (image path, class_index) tuples\n targets (list): The class_index value for each image in the dataset\n \"\"\"\n\n def __init__(self, root, split='train', download=False, **kwargs):\n root = self.root = os.path.expanduser(root)\n self.split = self._verify_split(split)\n\n if download:\n self.download()\n wnid_to_classes = self._load_meta_file()[0]\n\n super(ImageNet, self).__init__(self.split_folder, **kwargs)\n self.root = root\n\n idcs = [idx for _, idx in self.imgs]\n self.wnids = self.classes\n self.wnid_to_idx = {wnid: idx for idx, wnid in zip(idcs, self.wnids)}\n self.classes = [wnid_to_classes[wnid] for wnid in self.wnids]\n self.class_to_idx = {cls: idx\n for clss, idx in zip(self.classes, idcs)\n for cls in clss}\n\n def download(self):\n if not check_integrity(self.meta_file):\n tmp_dir = tempfile.mkdtemp()\n\n archive_dict = ARCHIVE_DICT['devkit']\n download_and_extract_archive(archive_dict['url'], self.root,\n extract_root=tmp_dir,\n md5=archive_dict['md5'])\n devkit_folder = _splitexts(os.path.basename(archive_dict['url']))[0]\n meta = parse_devkit(os.path.join(tmp_dir, devkit_folder))\n self._save_meta_file(*meta)\n\n shutil.rmtree(tmp_dir)\n\n if not os.path.isdir(self.split_folder):\n archive_dict = ARCHIVE_DICT[self.split]\n download_and_extract_archive(archive_dict['url'], self.root,\n extract_root=self.split_folder,\n md5=archive_dict['md5'])\n\n if self.split == 'train':\n prepare_train_folder(self.split_folder)\n elif self.split == 'val':\n val_wnids = self._load_meta_file()[1]\n prepare_val_folder(self.split_folder, val_wnids)\n else:\n msg = (\"You set download=True, but a folder '{}' already exist in \"\n \"the root directory. 
If you want to re-download or re-extract the \"\n \"archive, delete the folder.\")\n print(msg.format(self.split))\n\n @property\n def meta_file(self):\n return os.path.join(self.root, 'meta.bin')\n\n def _load_meta_file(self):\n if check_integrity(self.meta_file):\n return torch.load(self.meta_file)\n else:\n raise RuntimeError(\"Meta file not found or corrupted.\",\n \"You can use download=True to create it.\")\n\n def _save_meta_file(self, wnid_to_class, val_wnids):\n torch.save((wnid_to_class, val_wnids), self.meta_file)\n\n def _verify_split(self, split):\n if split not in self.valid_splits:\n msg = \"Unknown split {} .\".format(split)\n msg += \"Valid splits are {{}}.\".format(\", \".join(self.valid_splits))\n raise ValueError(msg)\n return split\n\n @property\n def valid_splits(self):\n return 'train', 'val'\n\n @property\n def split_folder(self):\n return os.path.join(self.root, self.split)\n\n def extra_repr(self):\n return \"Split: {split}\".format(**self.__dict__)\n\n\ndef parse_devkit(root):\n idx_to_wnid, wnid_to_classes = parse_meta(root)\n val_idcs = parse_val_groundtruth(root)\n val_wnids = [idx_to_wnid[idx] for idx in val_idcs]\n return wnid_to_classes, val_wnids\n\n\ndef parse_meta(devkit_root, path='data', filename='meta.mat'):\n import scipy.io as sio\n\n metafile = os.path.join(devkit_root, path, filename)\n meta = sio.loadmat(metafile, squeeze_me=True)['synsets']\n nums_children = list(zip(*meta))[4]\n meta = [meta[idx] for idx, num_children in enumerate(nums_children)\n if num_children == 0]\n idcs, wnids, classes = list(zip(*meta))[:3]\n classes = [tuple(clss.split(', ')) for clss in classes]\n idx_to_wnid = {idx: wnid for idx, wnid in zip(idcs, wnids)}\n wnid_to_classes = {wnid: clss for wnid, clss in zip(wnids, classes)}\n return idx_to_wnid, wnid_to_classes\n\n\ndef parse_val_groundtruth(devkit_root, path='data',\n filename='ILSVRC2012_validation_ground_truth.txt'):\n with open(os.path.join(devkit_root, path, filename), 'r') as txtfh:\n val_idcs = txtfh.readlines()\n return [int(val_idx) for val_idx in val_idcs]\n\n\ndef prepare_train_folder(folder):\n for archive in [os.path.join(folder, archive) for archive in os.listdir(folder)]:\n extract_archive(archive, os.path.splitext(archive)[0], remove_finished=True)\n\n\ndef prepare_val_folder(folder, wnids):\n img_files = sorted([os.path.join(folder, file) for file in os.listdir(folder)])\n\n for wnid in set(wnids):\n os.mkdir(os.path.join(folder, wnid))\n\n for wnid, img_file in zip(wnids, img_files):\n shutil.move(img_file, os.path.join(folder, wnid, os.path.basename(img_file)))\n\n\ndef _splitexts(root):\n exts = []\n ext = '.'\n while ext:\n root, ext = os.path.splitext(root)\n exts.append(ext)\n return root, ''.join(reversed(exts))\n", "path": "torchvision/datasets/imagenet.py"}]} | 3,049 | 329 |
gh_patches_debug_7048 | rasdani/github-patches | git_diff | airctic__icevision-821 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding Non Unique Names To ClassMap Must Be Illegal
Currently, `add_name` doesn't check whether the added `name` is a duplicate. This should be illegal, as class names must be unique.
https://github.com/airctic/icevision/blob/82f3c7322d5adbc41d6a1b7b9e4327eb32ad23c4/icevision/core/class_map.py#L47-L51
</issue>
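A minimal sketch of the guard the issue asks for: a drop-in replacement for the `add_name` method shown in the code section below, using a membership test against the name-to-id dict and raising instead of silently appending a duplicate. Treat it as an illustration of the idea, not the project's actual change:

```python
def add_name(self, name: str) -> int:
    # Reject duplicates: every class name must map to exactly one id.
    if name in self._class2id:
        raise ValueError(f"'{name}' already exists in this ClassMap")
    self._id2class.append(name)
    id = len(self._class2id)
    self._class2id[name] = id
    return id
```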
<code>
[start of icevision/core/class_map.py]
1 __all__ = ["ClassMap", "BACKGROUND"]
2
3 from icevision.imports import *
4
5 BACKGROUND = "background"
6
7
8 class ClassMap:
9 """Utility class for mapping between class name and id."""
10
11 def __init__(
12 self,
13 classes: Optional[Sequence[str]] = None,
14 background: Optional[str] = BACKGROUND,
15 ):
16 self._lock = True
17
18 self._id2class = copy(list(classes)) if classes else []
19 # insert background if required
20 self._background = background
21 if self._background is not None:
22 try:
23 self._id2class.remove(self._background)
24 except ValueError:
25 pass
26 # background is always index zero
27 self._id2class.insert(0, self._background)
28
29 self._class2id = {name: i for i, name in enumerate(self._id2class)}
30
31 @property
32 def num_classes(self):
33 return len(self)
34
35 def get_by_id(self, id: int) -> str:
36 return self._id2class[id]
37
38 def get_by_name(self, name: str) -> int:
39 try:
40 return self._class2id[name]
41 except KeyError as e:
42 if not self._lock:
43 return self.add_name(name)
44 else:
45 raise e
46
47 def add_name(self, name) -> int:
48 self._id2class.append(name)
49 id = len(self._class2id)
50 self._class2id[name] = id
51 return id
52
53 def lock(self):
54 self._lock = True
55 return self
56
57 def unlock(self):
58 self._lock = False
59 return self
60
61 def __eq__(self, other) -> bool:
62 if isinstance(other, ClassMap):
63 return self.__dict__ == other.__dict__
64 return False
65
66 def __len__(self):
67 return len(self._id2class)
68
69 def __repr__(self):
70 return f"<ClassMap: {self._class2id.__repr__()}>"
71
[end of icevision/core/class_map.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/icevision/core/class_map.py b/icevision/core/class_map.py
--- a/icevision/core/class_map.py
+++ b/icevision/core/class_map.py
@@ -44,7 +44,13 @@
else:
raise e
- def add_name(self, name) -> int:
+ def add_name(self, name: str) -> int:
+ # Raise error if trying to add duplicate value
+ if name in self._id2class:
+ raise ValueError(
+ f"'{name}' already exists in the ClassMap. You can only add new labels that are unique"
+ )
+
self._id2class.append(name)
id = len(self._class2id)
self._class2id[name] = id
| {"golden_diff": "diff --git a/icevision/core/class_map.py b/icevision/core/class_map.py\n--- a/icevision/core/class_map.py\n+++ b/icevision/core/class_map.py\n@@ -44,7 +44,13 @@\n else:\n raise e\n \n- def add_name(self, name) -> int:\n+ def add_name(self, name: str) -> int:\n+ # Raise error if trying to add duplicate value\n+ if name in self._id2class:\n+ raise ValueError(\n+ f\"'{name}' already exists in the ClassMap. You can only add new labels that are unique\"\n+ )\n+\n self._id2class.append(name)\n id = len(self._class2id)\n self._class2id[name] = id\n", "issue": "Adding Non Unique Names To ClassMap Must Be Illegal\nCurrently, `add_name` doesn't check if the added `name` is a duplicate or not. This should be illegal, as classes must be unique\r\n\r\nhttps://github.com/airctic/icevision/blob/82f3c7322d5adbc41d6a1b7b9e4327eb32ad23c4/icevision/core/class_map.py#L47-L51\n", "before_files": [{"content": "__all__ = [\"ClassMap\", \"BACKGROUND\"]\n\nfrom icevision.imports import *\n\nBACKGROUND = \"background\"\n\n\nclass ClassMap:\n \"\"\"Utility class for mapping between class name and id.\"\"\"\n\n def __init__(\n self,\n classes: Optional[Sequence[str]] = None,\n background: Optional[str] = BACKGROUND,\n ):\n self._lock = True\n\n self._id2class = copy(list(classes)) if classes else []\n # insert background if required\n self._background = background\n if self._background is not None:\n try:\n self._id2class.remove(self._background)\n except ValueError:\n pass\n # background is always index zero\n self._id2class.insert(0, self._background)\n\n self._class2id = {name: i for i, name in enumerate(self._id2class)}\n\n @property\n def num_classes(self):\n return len(self)\n\n def get_by_id(self, id: int) -> str:\n return self._id2class[id]\n\n def get_by_name(self, name: str) -> int:\n try:\n return self._class2id[name]\n except KeyError as e:\n if not self._lock:\n return self.add_name(name)\n else:\n raise e\n\n def add_name(self, name) -> int:\n self._id2class.append(name)\n id = len(self._class2id)\n self._class2id[name] = id\n return id\n\n def lock(self):\n self._lock = True\n return self\n\n def unlock(self):\n self._lock = False\n return self\n\n def __eq__(self, other) -> bool:\n if isinstance(other, ClassMap):\n return self.__dict__ == other.__dict__\n return False\n\n def __len__(self):\n return len(self._id2class)\n\n def __repr__(self):\n return f\"<ClassMap: {self._class2id.__repr__()}>\"\n", "path": "icevision/core/class_map.py"}]} | 1,216 | 172 |
gh_patches_debug_9985 | rasdani/github-patches | git_diff | cal-itp__benefits-1030 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Error page bug - Log out button too low
A clear and concise description of what the bug is.
## To Reproduce
Steps to reproduce the behavior:
1. Go through the Login.gov IAL2 flow with a user who is under age 65
2. See error
## Expected behavior
A clear and concise description of what you expected to happen.
## Screenshots
Tablet/Mobile widths are off:
<img width="485" alt="image" src="https://user-images.githubusercontent.com/3673236/190286767-dc5e2f26-3a48-44e7-a480-9751726938ea.png">
<img width="457" alt="image" src="https://user-images.githubusercontent.com/3673236/190287117-4c1072cb-271d-4b1a-b0c8-00f7dd958302.png">
Desktop is good:
<img width="1440" alt="image" src="https://user-images.githubusercontent.com/3673236/190287052-4c8a778a-7ccf-42dd-8042-c59a0b350308.png">
## Desktop (please complete the following information)
- OS: [e.g. iOS]
## Smartphone (please complete the following information)
- Device: [e.g. iPhone6]
## Additional context
Add any other context about the problem here.
</issue>
<code>
[start of benefits/enrollment/views.py]
1 """
2 The enrollment application: view definitions for the benefits enrollment flow.
3 """
4 import logging
5
6 from django.http import JsonResponse
7 from django.template.response import TemplateResponse
8 from django.urls import reverse
9 from django.utils.html import format_html
10 from django.utils.decorators import decorator_from_middleware
11 from django.utils.translation import pgettext, gettext as _
12
13 from benefits.core import models, session, viewmodels
14 from benefits.core.middleware import EligibleSessionRequired, VerifierSessionRequired, pageview_decorator
15 from benefits.core.views import ROUTE_HELP, ROUTE_LOGGED_OUT
16 from . import analytics, api, forms
17
18
19 ROUTE_INDEX = "enrollment:index"
20 ROUTE_RETRY = "enrollment:retry"
21 ROUTE_SUCCESS = "enrollment:success"
22 ROUTE_TOKEN = "enrollment:token"
23
24 TEMPLATE_INDEX = "enrollment/index.html"
25 TEMPLATE_RETRY = "enrollment/retry.html"
26 TEMPLATE_SUCCESS = "enrollment/success.html"
27
28
29 logger = logging.getLogger(__name__)
30
31
32 @decorator_from_middleware(EligibleSessionRequired)
33 def token(request):
34 """View handler for the enrollment auth token."""
35 if not session.enrollment_token_valid(request):
36 agency = session.agency(request)
37 response = api.Client(agency).access_token()
38 session.update(request, enrollment_token=response.access_token, enrollment_token_exp=response.expiry)
39
40 data = {"token": session.enrollment_token(request)}
41
42 return JsonResponse(data)
43
44
45 @decorator_from_middleware(EligibleSessionRequired)
46 def index(request):
47 """View handler for the enrollment landing page."""
48 session.update(request, origin=reverse(ROUTE_INDEX))
49
50 agency = session.agency(request)
51 verifier = session.verifier(request)
52
53 # POST back after payment processor form, process card token
54 if request.method == "POST":
55 form = forms.CardTokenizeSuccessForm(request.POST)
56 if not form.is_valid():
57 raise Exception("Invalid card token form")
58
59 eligibility = session.eligibility(request)
60 logger.debug(f"Session contains an {models.EligibilityType.__name__}")
61
62 logger.debug("Read tokenized card")
63 card_token = form.cleaned_data.get("card_token")
64
65 response = api.Client(agency).enroll(card_token, eligibility.group_id)
66 if response.success:
67 analytics.returned_success(request, eligibility.group_id)
68 return success(request)
69 else:
70 analytics.returned_error(request, response.message)
71 raise Exception(response.message)
72
73 # GET enrollment index, with button to initiate payment processor connection
74 else:
75 tokenize_button = "tokenize_card"
76 tokenize_retry_form = forms.CardTokenizeFailForm(ROUTE_RETRY)
77 tokenize_success_form = forms.CardTokenizeSuccessForm(auto_id=True, label_suffix="")
78
79 confirmed_eligibility_item = viewmodels.MediaItem(
80 icon=viewmodels.Icon("happybus", pgettext("image alt text", "core.icons.happybus")),
81 heading=_(verifier.eligibility_confirmed_item_heading),
82 details=_(verifier.eligibility_confirmed_item_details),
83 )
84
85 help_link = reverse(ROUTE_HELP)
86 link_card_item = viewmodels.MediaItem(
87 icon=viewmodels.Icon("bankcardcheck", pgettext("image alt text", "core.icons.bankcardcheck")),
88 heading=_("enrollment.pages.index.link_card_item.heading"),
89 details=[
90 format_html(_("enrollment.pages.index.link_card_item.p[0]%(link)s") % {"link": f"{help_link}#littlepay"}),
91 _("enrollment.pages.index.link_card_item.p[1]"),
92 ],
93 )
94
95 media = [confirmed_eligibility_item, link_card_item]
96
97 page = viewmodels.Page(
98 title=_("enrollment.pages.index.title"),
99 headline=_("enrollment.pages.index.headline"),
100 forms=[tokenize_retry_form, tokenize_success_form],
101 buttons=[
102 viewmodels.Button.primary(
103 text=_("enrollment.buttons.payment_partner"), id=tokenize_button, url=f"#{tokenize_button}"
104 ),
105 ],
106 )
107 context = {"media": media}
108 context.update(page.context_dict())
109
110 # add agency details
111 agency_vm = viewmodels.TransitAgency(agency)
112 context.update(agency_vm.context_dict())
113
114 # and payment processor details
115 processor_vm = viewmodels.PaymentProcessor(
116 model=agency.payment_processor,
117 access_token_url=reverse(ROUTE_TOKEN),
118 element_id=f"#{tokenize_button}",
119 color="#046b99",
120 name=f"{agency.long_name} {_('partnered with')} {agency.payment_processor.name}",
121 )
122 context.update(processor_vm.context_dict())
123 logger.warning(f"card_tokenize_url: {context['payment_processor'].card_tokenize_url}")
124
125 # the tokenize form URLs are injected to page-generated Javascript
126 context["forms"] = {
127 "tokenize_retry": reverse(tokenize_retry_form.action_url),
128 "tokenize_success": reverse(tokenize_success_form.action_url),
129 }
130
131 return TemplateResponse(request, TEMPLATE_INDEX, context)
132
133
134 @decorator_from_middleware(EligibleSessionRequired)
135 def retry(request):
136 """View handler for a recoverable failure condition."""
137 if request.method == "POST":
138 analytics.returned_retry(request)
139 form = forms.CardTokenizeFailForm(request.POST)
140 if form.is_valid():
141 agency = session.agency(request)
142 page = viewmodels.Page(
143 classes="no-image-mobile",
144 title=_("enrollment.pages.retry.title"),
145 icon=viewmodels.Icon("bankcardquestion", pgettext("image alt text", "core.icons.bankcardquestion")),
146 headline=_("enrollment.pages.retry.title"),
147 paragraphs=[_("enrollment.pages.retry.p[0]")],
148 buttons=viewmodels.Button.agency_contact_links(agency),
149 )
150 page.buttons.append(viewmodels.Button.primary(text=_("core.buttons.retry"), url=session.origin(request)))
151 return TemplateResponse(request, TEMPLATE_RETRY, page.context_dict())
152 else:
153 analytics.returned_error(request, "Invalid retry submission.")
154 raise Exception("Invalid retry submission.")
155 else:
156 analytics.returned_error(request, "This view method only supports POST.")
157 raise Exception("This view method only supports POST.")
158
159
160 @pageview_decorator
161 @decorator_from_middleware(VerifierSessionRequired)
162 def success(request):
163 """View handler for the final success page."""
164 request.path = "/enrollment/success"
165 session.update(request, origin=reverse(ROUTE_SUCCESS))
166
167 verifier = session.verifier(request)
168
169 page = viewmodels.Page(title=_("enrollment.pages.success.title"), headline=_("enrollment.pages.success.headline"))
170
171 if verifier.is_auth_required and session.logged_in(request):
172 # overwrite origin for a logged in user
173 # if they click the logout button, they are taken to the new route
174 session.update(request, origin=reverse(ROUTE_LOGGED_OUT))
175 page.buttons = [viewmodels.Button.logout()]
176
177 success_item = viewmodels.MediaItem(
178 icon=viewmodels.Icon("happybus", pgettext("image alt text", "core.icons.happybus")),
179 details=[
180 _(verifier.enrollment_success_confirm_item_details),
181 format_html(_("enrollment.pages.success.helplink%(link)s") % {"link": f"{reverse(ROUTE_HELP)}"}),
182 ],
183 )
184 media = [success_item]
185
186 if verifier.enrollment_success_expiry_item_heading or verifier.enrollment_success_expiry_item_details:
187 heading = (
188 _(verifier.enrollment_success_expiry_item_heading) if verifier.enrollment_success_expiry_item_heading else None
189 )
190 details = (
191 _(verifier.enrollment_success_expiry_item_details) if verifier.enrollment_success_expiry_item_details else None
192 )
193 expiry_item = viewmodels.MediaItem(
194 icon=viewmodels.Icon("calendarcheck", pgettext("image alt text", "core.icons.calendarcheck")),
195 heading=heading,
196 details=details,
197 )
198 media.insert(0, expiry_item)
199
200 context = {"media": media}
201 context.update(page.context_dict())
202
203 return TemplateResponse(request, TEMPLATE_SUCCESS, context)
204
[end of benefits/enrollment/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/benefits/enrollment/views.py b/benefits/enrollment/views.py
--- a/benefits/enrollment/views.py
+++ b/benefits/enrollment/views.py
@@ -140,7 +140,6 @@
if form.is_valid():
agency = session.agency(request)
page = viewmodels.Page(
- classes="no-image-mobile",
title=_("enrollment.pages.retry.title"),
icon=viewmodels.Icon("bankcardquestion", pgettext("image alt text", "core.icons.bankcardquestion")),
headline=_("enrollment.pages.retry.title"),
| {"golden_diff": "diff --git a/benefits/enrollment/views.py b/benefits/enrollment/views.py\n--- a/benefits/enrollment/views.py\n+++ b/benefits/enrollment/views.py\n@@ -140,7 +140,6 @@\n if form.is_valid():\n agency = session.agency(request)\n page = viewmodels.Page(\n- classes=\"no-image-mobile\",\n title=_(\"enrollment.pages.retry.title\"),\n icon=viewmodels.Icon(\"bankcardquestion\", pgettext(\"image alt text\", \"core.icons.bankcardquestion\")),\n headline=_(\"enrollment.pages.retry.title\"),\n", "issue": "Bug: Error page bug - Log out button too low\nA clear and concise description of what the bug is.\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Go through the Login.gov IAL2 flow with a user who is under age 65\r\n2. See error\r\n\r\n## Expected behavior\r\n\r\nA clear and concise description of what you expected to happen.\r\n\r\n## Screenshots\r\n\r\nTablet/Mobile widths are off:\r\n<img width=\"485\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/190286767-dc5e2f26-3a48-44e7-a480-9751726938ea.png\">\r\n<img width=\"457\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/190287117-4c1072cb-271d-4b1a-b0c8-00f7dd958302.png\">\r\n\r\nDesktop is good:\r\n<img width=\"1440\" alt=\"image\" src=\"https://user-images.githubusercontent.com/3673236/190287052-4c8a778a-7ccf-42dd-8042-c59a0b350308.png\">\r\n\r\n\r\n## Desktop (please complete the following information)\r\n\r\n- OS: [e.g. iOS]\r\n\r\n## Smartphone (please complete the following information)\r\n\r\n- Device: [e.g. iPhone6]\r\n\r\n## Additional context\r\n\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "\"\"\"\nThe enrollment application: view definitions for the benefits enrollment flow.\n\"\"\"\nimport logging\n\nfrom django.http import JsonResponse\nfrom django.template.response import TemplateResponse\nfrom django.urls import reverse\nfrom django.utils.html import format_html\nfrom django.utils.decorators import decorator_from_middleware\nfrom django.utils.translation import pgettext, gettext as _\n\nfrom benefits.core import models, session, viewmodels\nfrom benefits.core.middleware import EligibleSessionRequired, VerifierSessionRequired, pageview_decorator\nfrom benefits.core.views import ROUTE_HELP, ROUTE_LOGGED_OUT\nfrom . 
import analytics, api, forms\n\n\nROUTE_INDEX = \"enrollment:index\"\nROUTE_RETRY = \"enrollment:retry\"\nROUTE_SUCCESS = \"enrollment:success\"\nROUTE_TOKEN = \"enrollment:token\"\n\nTEMPLATE_INDEX = \"enrollment/index.html\"\nTEMPLATE_RETRY = \"enrollment/retry.html\"\nTEMPLATE_SUCCESS = \"enrollment/success.html\"\n\n\nlogger = logging.getLogger(__name__)\n\n\n@decorator_from_middleware(EligibleSessionRequired)\ndef token(request):\n \"\"\"View handler for the enrollment auth token.\"\"\"\n if not session.enrollment_token_valid(request):\n agency = session.agency(request)\n response = api.Client(agency).access_token()\n session.update(request, enrollment_token=response.access_token, enrollment_token_exp=response.expiry)\n\n data = {\"token\": session.enrollment_token(request)}\n\n return JsonResponse(data)\n\n\n@decorator_from_middleware(EligibleSessionRequired)\ndef index(request):\n \"\"\"View handler for the enrollment landing page.\"\"\"\n session.update(request, origin=reverse(ROUTE_INDEX))\n\n agency = session.agency(request)\n verifier = session.verifier(request)\n\n # POST back after payment processor form, process card token\n if request.method == \"POST\":\n form = forms.CardTokenizeSuccessForm(request.POST)\n if not form.is_valid():\n raise Exception(\"Invalid card token form\")\n\n eligibility = session.eligibility(request)\n logger.debug(f\"Session contains an {models.EligibilityType.__name__}\")\n\n logger.debug(\"Read tokenized card\")\n card_token = form.cleaned_data.get(\"card_token\")\n\n response = api.Client(agency).enroll(card_token, eligibility.group_id)\n if response.success:\n analytics.returned_success(request, eligibility.group_id)\n return success(request)\n else:\n analytics.returned_error(request, response.message)\n raise Exception(response.message)\n\n # GET enrollment index, with button to initiate payment processor connection\n else:\n tokenize_button = \"tokenize_card\"\n tokenize_retry_form = forms.CardTokenizeFailForm(ROUTE_RETRY)\n tokenize_success_form = forms.CardTokenizeSuccessForm(auto_id=True, label_suffix=\"\")\n\n confirmed_eligibility_item = viewmodels.MediaItem(\n icon=viewmodels.Icon(\"happybus\", pgettext(\"image alt text\", \"core.icons.happybus\")),\n heading=_(verifier.eligibility_confirmed_item_heading),\n details=_(verifier.eligibility_confirmed_item_details),\n )\n\n help_link = reverse(ROUTE_HELP)\n link_card_item = viewmodels.MediaItem(\n icon=viewmodels.Icon(\"bankcardcheck\", pgettext(\"image alt text\", \"core.icons.bankcardcheck\")),\n heading=_(\"enrollment.pages.index.link_card_item.heading\"),\n details=[\n format_html(_(\"enrollment.pages.index.link_card_item.p[0]%(link)s\") % {\"link\": f\"{help_link}#littlepay\"}),\n _(\"enrollment.pages.index.link_card_item.p[1]\"),\n ],\n )\n\n media = [confirmed_eligibility_item, link_card_item]\n\n page = viewmodels.Page(\n title=_(\"enrollment.pages.index.title\"),\n headline=_(\"enrollment.pages.index.headline\"),\n forms=[tokenize_retry_form, tokenize_success_form],\n buttons=[\n viewmodels.Button.primary(\n text=_(\"enrollment.buttons.payment_partner\"), id=tokenize_button, url=f\"#{tokenize_button}\"\n ),\n ],\n )\n context = {\"media\": media}\n context.update(page.context_dict())\n\n # add agency details\n agency_vm = viewmodels.TransitAgency(agency)\n context.update(agency_vm.context_dict())\n\n # and payment processor details\n processor_vm = viewmodels.PaymentProcessor(\n model=agency.payment_processor,\n access_token_url=reverse(ROUTE_TOKEN),\n 
element_id=f\"#{tokenize_button}\",\n color=\"#046b99\",\n name=f\"{agency.long_name} {_('partnered with')} {agency.payment_processor.name}\",\n )\n context.update(processor_vm.context_dict())\n logger.warning(f\"card_tokenize_url: {context['payment_processor'].card_tokenize_url}\")\n\n # the tokenize form URLs are injected to page-generated Javascript\n context[\"forms\"] = {\n \"tokenize_retry\": reverse(tokenize_retry_form.action_url),\n \"tokenize_success\": reverse(tokenize_success_form.action_url),\n }\n\n return TemplateResponse(request, TEMPLATE_INDEX, context)\n\n\n@decorator_from_middleware(EligibleSessionRequired)\ndef retry(request):\n \"\"\"View handler for a recoverable failure condition.\"\"\"\n if request.method == \"POST\":\n analytics.returned_retry(request)\n form = forms.CardTokenizeFailForm(request.POST)\n if form.is_valid():\n agency = session.agency(request)\n page = viewmodels.Page(\n classes=\"no-image-mobile\",\n title=_(\"enrollment.pages.retry.title\"),\n icon=viewmodels.Icon(\"bankcardquestion\", pgettext(\"image alt text\", \"core.icons.bankcardquestion\")),\n headline=_(\"enrollment.pages.retry.title\"),\n paragraphs=[_(\"enrollment.pages.retry.p[0]\")],\n buttons=viewmodels.Button.agency_contact_links(agency),\n )\n page.buttons.append(viewmodels.Button.primary(text=_(\"core.buttons.retry\"), url=session.origin(request)))\n return TemplateResponse(request, TEMPLATE_RETRY, page.context_dict())\n else:\n analytics.returned_error(request, \"Invalid retry submission.\")\n raise Exception(\"Invalid retry submission.\")\n else:\n analytics.returned_error(request, \"This view method only supports POST.\")\n raise Exception(\"This view method only supports POST.\")\n\n\n@pageview_decorator\n@decorator_from_middleware(VerifierSessionRequired)\ndef success(request):\n \"\"\"View handler for the final success page.\"\"\"\n request.path = \"/enrollment/success\"\n session.update(request, origin=reverse(ROUTE_SUCCESS))\n\n verifier = session.verifier(request)\n\n page = viewmodels.Page(title=_(\"enrollment.pages.success.title\"), headline=_(\"enrollment.pages.success.headline\"))\n\n if verifier.is_auth_required and session.logged_in(request):\n # overwrite origin for a logged in user\n # if they click the logout button, they are taken to the new route\n session.update(request, origin=reverse(ROUTE_LOGGED_OUT))\n page.buttons = [viewmodels.Button.logout()]\n\n success_item = viewmodels.MediaItem(\n icon=viewmodels.Icon(\"happybus\", pgettext(\"image alt text\", \"core.icons.happybus\")),\n details=[\n _(verifier.enrollment_success_confirm_item_details),\n format_html(_(\"enrollment.pages.success.helplink%(link)s\") % {\"link\": f\"{reverse(ROUTE_HELP)}\"}),\n ],\n )\n media = [success_item]\n\n if verifier.enrollment_success_expiry_item_heading or verifier.enrollment_success_expiry_item_details:\n heading = (\n _(verifier.enrollment_success_expiry_item_heading) if verifier.enrollment_success_expiry_item_heading else None\n )\n details = (\n _(verifier.enrollment_success_expiry_item_details) if verifier.enrollment_success_expiry_item_details else None\n )\n expiry_item = viewmodels.MediaItem(\n icon=viewmodels.Icon(\"calendarcheck\", pgettext(\"image alt text\", \"core.icons.calendarcheck\")),\n heading=heading,\n details=details,\n )\n media.insert(0, expiry_item)\n\n context = {\"media\": media}\n context.update(page.context_dict())\n\n return TemplateResponse(request, TEMPLATE_SUCCESS, context)\n", "path": "benefits/enrollment/views.py"}]} | 3,091 | 125 |
gh_patches_debug_54726 | rasdani/github-patches | git_diff | DDMAL__CantusDB-845 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"my sources"" page side panel
http://206.12.88.113/my-sources/ has a "created by" side panel. This is
1. not paginated
3. includes all the sources and not just the created ones (so it's both extra- long and also redundant).
Staging:
<img width="1106" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/3d11789e-6027-4358-8595-328e95e89d7b">
on production this only has the sources created on production, so it only has one source (I'm assuming the others will show up once we sort out the "Created by" info from OldCantus?
<img width="1160" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/f6e98d78-0f66-421c-aad9-2ede47400d88">
On OldCantus it looks like this:
<img width="981" alt="image" src="https://github.com/DDMAL/CantusDB/assets/67451875/15f4b995-d930-4645-9ca4-3befce6a868d">
</issue>
<code>
[start of django/cantusdb_project/main_app/views/user.py]
1 from django.urls import reverse
2 from django.db.models.aggregates import Count
3 from django.views.generic import DetailView
4 from django.contrib.auth import get_user_model, login as auth_login
5 from main_app.models import Source
6 from django.views.generic import ListView
7 from django.contrib.auth.mixins import LoginRequiredMixin
8 from django.db.models import Q
9 from django.core.paginator import Paginator
10 from django.contrib.auth.views import LogoutView, LoginView
11 from django.contrib import messages
12 from extra_views import SearchableListMixin
13 from django.http import HttpResponseRedirect
14 from django.core.exceptions import PermissionDenied
15
16
17 class UserDetailView(DetailView):
18 """Detail view for User model
19
20 Accessed by /users/<pk>
21 """
22
23 model = get_user_model()
24 context_object_name = "user"
25 template_name = "user_detail.html"
26
27 def get_context_data(self, **kwargs):
28 user = self.get_object()
29 # to begin, if the person viewing the site is not logged in,
30 # they should only be able to view the detail pages of indexers,
31 # and not the detail pages of run-of-the-mill users
32 viewing_user = self.request.user
33 if not (viewing_user.is_authenticated or user.is_indexer):
34 raise PermissionDenied()
35
36 context = super().get_context_data(**kwargs)
37 display_unpublished = viewing_user.is_authenticated
38 sort_by_siglum = lambda source: source.siglum
39 if display_unpublished:
40 context["inventoried_sources"] = sorted(
41 user.inventoried_sources.all(), key=sort_by_siglum
42 )
43 context["full_text_sources"] = sorted(
44 user.entered_full_text_for_sources.all(), key=sort_by_siglum
45 )
46 context["melody_sources"] = sorted(
47 user.entered_melody_for_sources.all(), key=sort_by_siglum
48 )
49 context["proofread_sources"] = sorted(
50 user.proofread_sources.all(), key=sort_by_siglum
51 )
52 context["edited_sources"] = sorted(
53 user.edited_sources.all(), key=sort_by_siglum
54 )
55 else:
56 context["inventoried_sources"] = sorted(
57 user.inventoried_sources.all().filter(published=True),
58 key=sort_by_siglum,
59 )
60 context["full_text_sources"] = sorted(
61 user.entered_full_text_for_sources.all().filter(published=True),
62 key=sort_by_siglum,
63 )
64 context["melody_sources"] = sorted(
65 user.entered_melody_for_sources.all().filter(published=True),
66 key=sort_by_siglum,
67 )
68 context["proofread_sources"] = sorted(
69 user.proofread_sources.all().filter(published=True), key=sort_by_siglum
70 )
71 context["edited_sources"] = sorted(
72 user.edited_sources.all().filter(published=True), key=sort_by_siglum
73 )
74
75 return context
76
77
78 class UserSourceListView(LoginRequiredMixin, ListView):
79 model = Source
80 context_object_name = "sources"
81 template_name = "user_source_list.html"
82 paginate_by = 100
83
84 def get_queryset(self):
85 return (
86 Source.objects.filter(
87 Q(current_editors=self.request.user)
88 | Q(created_by=self.request.user)
89 # | Q(inventoried_by=self.request.user)
90 # | Q(full_text_entered_by=self.request.user)
91 # | Q(melodies_entered_by=self.request.user)
92 # | Q(proofreaders=self.request.user)
93 # | Q(other_editors=self.request.user)
94 )
95 .order_by("-date_created")
96 .distinct()
97 )
98
99 def get_context_data(self, **kwargs):
100 context = super().get_context_data(**kwargs)
101
102 user_created_sources = (
103 Source.objects.filter(created_by=self.request.user)
104 .order_by("-date_created")
105 .distinct()
106 )
107 paginator = Paginator(user_created_sources, 10)
108 page_number = self.request.GET.get("page2")
109 page_obj = paginator.get_page(page_number)
110
111 context["user_created_sources_page_obj"] = page_obj
112 return context
113
114
115 class CustomLogoutView(LogoutView):
116 def get_next_page(self):
117 next_page = super().get_next_page()
118 messages.success(self.request, "You have successfully logged out!")
119 return next_page
120
121
122 class UserListView(LoginRequiredMixin, SearchableListMixin, ListView):
123 """A list of all User objects
124
125 This view is equivalent to the user list view on the old Cantus.
126 This includes all User objects on the old Cantus.
127 When passed a `?q=<query>` argument in the GET request, it will filter users
128 based on the fields defined in `search_fields` with the `icontains` lookup.
129
130 Accessed by /users/
131 """
132
133 model = get_user_model()
134 ordering = "full_name"
135 search_fields = ["full_name", "institution", "city", "country"]
136 paginate_by = 100
137 template_name = "user_list.html"
138 context_object_name = "users"
139
140
141 class IndexerListView(SearchableListMixin, ListView):
142 """A list of User objects shown to the public
143
144 This view replaces the indexer list view on the old Cantus.
145 The indexers are considered a subset of all User objects, the subset shown to the public.
146 This includes the User objects corresponding to Indexer objects on the old Cantus.
147 When passed a `?q=<query>` argument in the GET request, it will filter users
148 based on the fields defined in `search_fields` with the `icontains` lookup.
149
150 Accessed by /indexers/
151 """
152
153 model = get_user_model()
154 ordering = "full_name"
155 search_fields = ["full_name", "institution", "city", "country"]
156 paginate_by = 100
157 template_name = "indexer_list.html"
158 context_object_name = "indexers"
159
160 def get_queryset(self):
161 all_users = super().get_queryset()
162 indexers = all_users.filter(is_indexer=True)
163 display_unpublished = self.request.user.is_authenticated
164 if display_unpublished:
165 indexers = indexers.annotate(source_count=Count("inventoried_sources"))
166 # display those who have at least one source
167 return indexers.filter(source_count__gte=1)
168 else:
169 indexers = indexers.annotate(
170 source_count=Count(
171 "inventoried_sources", filter=Q(inventoried_sources__published=True)
172 )
173 )
174 # display those who have at least one published source
175 return indexers.filter(source_count__gte=1)
176
[end of django/cantusdb_project/main_app/views/user.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py
--- a/django/cantusdb_project/main_app/views/user.py
+++ b/django/cantusdb_project/main_app/views/user.py
@@ -104,7 +104,7 @@
.order_by("-date_created")
.distinct()
)
- paginator = Paginator(user_created_sources, 10)
+ paginator = Paginator(user_created_sources, 6)
page_number = self.request.GET.get("page2")
page_obj = paginator.get_page(page_number)
| {"golden_diff": "diff --git a/django/cantusdb_project/main_app/views/user.py b/django/cantusdb_project/main_app/views/user.py\n--- a/django/cantusdb_project/main_app/views/user.py\n+++ b/django/cantusdb_project/main_app/views/user.py\n@@ -104,7 +104,7 @@\n .order_by(\"-date_created\")\n .distinct()\n )\n- paginator = Paginator(user_created_sources, 10)\n+ paginator = Paginator(user_created_sources, 6)\n page_number = self.request.GET.get(\"page2\")\n page_obj = paginator.get_page(page_number)\n", "issue": "\"my sources\"\" page side panel\nhttp://206.12.88.113/my-sources/ has a \"created by\" side panel. This is\r\n1. not paginated\r\n3. includes all the sources and not just the created ones (so it's both extra- long and also redundant).\r\nStaging:\r\n<img width=\"1106\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/3d11789e-6027-4358-8595-328e95e89d7b\">\r\non production this only has the sources created on production, so it only has one source (I'm assuming the others will show up once we sort out the \"Created by\" info from OldCantus?\r\n<img width=\"1160\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/f6e98d78-0f66-421c-aad9-2ede47400d88\">\r\nOn OldCantus it looks like this:\r\n\r\n<img width=\"981\" alt=\"image\" src=\"https://github.com/DDMAL/CantusDB/assets/67451875/15f4b995-d930-4645-9ca4-3befce6a868d\">\r\n\r\n\n", "before_files": [{"content": "from django.urls import reverse\nfrom django.db.models.aggregates import Count\nfrom django.views.generic import DetailView\nfrom django.contrib.auth import get_user_model, login as auth_login\nfrom main_app.models import Source\nfrom django.views.generic import ListView\nfrom django.contrib.auth.mixins import LoginRequiredMixin\nfrom django.db.models import Q\nfrom django.core.paginator import Paginator\nfrom django.contrib.auth.views import LogoutView, LoginView\nfrom django.contrib import messages\nfrom extra_views import SearchableListMixin\nfrom django.http import HttpResponseRedirect\nfrom django.core.exceptions import PermissionDenied\n\n\nclass UserDetailView(DetailView):\n \"\"\"Detail view for User model\n\n Accessed by /users/<pk>\n \"\"\"\n\n model = get_user_model()\n context_object_name = \"user\"\n template_name = \"user_detail.html\"\n\n def get_context_data(self, **kwargs):\n user = self.get_object()\n # to begin, if the person viewing the site is not logged in,\n # they should only be able to view the detail pages of indexers,\n # and not the detail pages of run-of-the-mill users\n viewing_user = self.request.user\n if not (viewing_user.is_authenticated or user.is_indexer):\n raise PermissionDenied()\n\n context = super().get_context_data(**kwargs)\n display_unpublished = viewing_user.is_authenticated\n sort_by_siglum = lambda source: source.siglum\n if display_unpublished:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all(), key=sort_by_siglum\n )\n context[\"full_text_sources\"] = sorted(\n user.entered_full_text_for_sources.all(), key=sort_by_siglum\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all(), key=sort_by_siglum\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all(), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all(), key=sort_by_siglum\n )\n else:\n context[\"inventoried_sources\"] = sorted(\n user.inventoried_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"full_text_sources\"] = sorted(\n 
user.entered_full_text_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"melody_sources\"] = sorted(\n user.entered_melody_for_sources.all().filter(published=True),\n key=sort_by_siglum,\n )\n context[\"proofread_sources\"] = sorted(\n user.proofread_sources.all().filter(published=True), key=sort_by_siglum\n )\n context[\"edited_sources\"] = sorted(\n user.edited_sources.all().filter(published=True), key=sort_by_siglum\n )\n\n return context\n\n\nclass UserSourceListView(LoginRequiredMixin, ListView):\n model = Source\n context_object_name = \"sources\"\n template_name = \"user_source_list.html\"\n paginate_by = 100\n\n def get_queryset(self):\n return (\n Source.objects.filter(\n Q(current_editors=self.request.user)\n | Q(created_by=self.request.user)\n # | Q(inventoried_by=self.request.user)\n # | Q(full_text_entered_by=self.request.user)\n # | Q(melodies_entered_by=self.request.user)\n # | Q(proofreaders=self.request.user)\n # | Q(other_editors=self.request.user)\n )\n .order_by(\"-date_created\")\n .distinct()\n )\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n\n user_created_sources = (\n Source.objects.filter(created_by=self.request.user)\n .order_by(\"-date_created\")\n .distinct()\n )\n paginator = Paginator(user_created_sources, 10)\n page_number = self.request.GET.get(\"page2\")\n page_obj = paginator.get_page(page_number)\n\n context[\"user_created_sources_page_obj\"] = page_obj\n return context\n\n\nclass CustomLogoutView(LogoutView):\n def get_next_page(self):\n next_page = super().get_next_page()\n messages.success(self.request, \"You have successfully logged out!\")\n return next_page\n\n\nclass UserListView(LoginRequiredMixin, SearchableListMixin, ListView):\n \"\"\"A list of all User objects\n\n This view is equivalent to the user list view on the old Cantus.\n This includes all User objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /users/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"user_list.html\"\n context_object_name = \"users\"\n\n\nclass IndexerListView(SearchableListMixin, ListView):\n \"\"\"A list of User objects shown to the public\n\n This view replaces the indexer list view on the old Cantus.\n The indexers are considered a subset of all User objects, the subset shown to the public.\n This includes the User objects corresponding to Indexer objects on the old Cantus.\n When passed a `?q=<query>` argument in the GET request, it will filter users\n based on the fields defined in `search_fields` with the `icontains` lookup.\n\n Accessed by /indexers/\n \"\"\"\n\n model = get_user_model()\n ordering = \"full_name\"\n search_fields = [\"full_name\", \"institution\", \"city\", \"country\"]\n paginate_by = 100\n template_name = \"indexer_list.html\"\n context_object_name = \"indexers\"\n\n def get_queryset(self):\n all_users = super().get_queryset()\n indexers = all_users.filter(is_indexer=True)\n display_unpublished = self.request.user.is_authenticated\n if display_unpublished:\n indexers = indexers.annotate(source_count=Count(\"inventoried_sources\"))\n # display those who have at least one source\n return indexers.filter(source_count__gte=1)\n else:\n indexers = indexers.annotate(\n source_count=Count(\n 
\"inventoried_sources\", filter=Q(inventoried_sources__published=True)\n )\n )\n # display those who have at least one published source\n return indexers.filter(source_count__gte=1)\n", "path": "django/cantusdb_project/main_app/views/user.py"}]} | 2,733 | 135 |
gh_patches_debug_41815 | rasdani/github-patches | git_diff | mars-project__mars-2953 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `dask.persist` cannot work on dask-on-mars
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
`dask.persist` cannot work on dask-on-mars.
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```
In [1]: import mars
In [2]: mars.new_session()
Web service started at http://0.0.0.0:64573
Out[2]: <mars.deploy.oscar.session.SyncSession at 0x7fa0f8366d90>
In [3]: import dask
...: from mars.contrib.dask import mars_scheduler
In [4]: def inc(x):
...: return x + 1
...:
In [5]: dask_task = dask.delayed(inc)(1)
In [6]: dask_task.persist(scheduler=mars_scheduler)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-6-095d61a2acc0> in <module>
----> 1 dask_task.persist(scheduler=mars_scheduler)
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/base.py in persist(self, **kwargs)
259 dask.base.persist
260 """
--> 261 (result,) = persist(self, traverse=False, **kwargs)
262 return result
263
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/base.py in persist(traverse, optimize_graph, scheduler, *args, **kwargs)
831 postpersists.append((rebuild, a_keys, state))
832
--> 833 results = schedule(dsk, keys, **kwargs)
834 d = dict(zip(keys, results))
835 results2 = [r({k: d[k] for k in ks}, *s) for r, ks, s in postpersists]
~/Workspace/mars/mars/contrib/dask/scheduler.py in mars_scheduler(dsk, keys)
39 Computed values corresponding to the provided keys.
40 """
---> 41 res = reduce(mars_dask_get(dsk, keys)).execute().fetch()
42 if not isinstance(res, List):
43 return [[res]]
~/Workspace/mars/mars/contrib/dask/scheduler.py in mars_dask_get(dsk, keys)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
~/Workspace/mars/mars/contrib/dask/scheduler.py in <listcomp>(.0)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
~/Workspace/mars/mars/contrib/dask/scheduler.py in <listcomp>(.0)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/highlevelgraph.py in __getitem__(self, key)
737 pass
738
--> 739 raise KeyError(key)
740
741 def __len__(self) -> int:
KeyError: 'i'
```
</issue>
<code>
[start of mars/contrib/dask/scheduler.py]
1 # Copyright 1999-2021 Alibaba Group Holding Ltd.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from dask.core import istask, ishashable
16
17 from typing import List, Tuple
18 from .utils import reduce
19 from ...remote import spawn
20
21
22 def mars_scheduler(dsk: dict, keys: List[List[str]]):
23 """
24 A Dask-Mars scheduler
25
26 This scheduler is intended to be compatible with existing
27 dask user interface, no callbacks are implemented.
28
29 Parameters
30 ----------
31 dsk: Dict
32 Dask graph, represented as a task DAG dictionary.
33 keys: List[List[str]]
34 2d-list of Dask graph keys whose values we wish to compute and return.
35
36 Returns
37 -------
38 Object
39 Computed values corresponding to the provided keys.
40 """
41 res = reduce(mars_dask_get(dsk, keys)).execute().fetch()
42 if not isinstance(res, List):
43 return [[res]]
44 else:
45 return res
46
47
48 def mars_dask_get(dsk: dict, keys: List[List]):
49 """
50 A Dask-Mars convert function. This function will send the dask graph layers
51 to Mars Remote API, generating mars objects correspond to the provided keys.
52
53 Parameters
54 ----------
55 dsk: Dict
56 Dask graph, represented as a task DAG dictionary.
57 keys: List[List[str]]
58 2d-list of Dask graph keys whose values we wish to compute and return.
59
60 Returns
61 -------
62 Object
63 Spawned mars objects corresponding to the provided keys.
64 """
65
66 def _get_arg(a):
67 # if arg contains layer index or callable objs, handle it
68 if ishashable(a) and a in dsk.keys():
69 while ishashable(a) and a in dsk.keys():
70 a = dsk[a]
71 return _execute_task(a)
72 elif not isinstance(a, str) and hasattr(a, "__getitem__"):
73 if istask(
74 a
75 ): # TODO:Handle `SubgraphCallable`, which may contains dsk in it
76 return spawn(a[0], args=tuple(_get_arg(i) for i in a[1:]))
77 elif isinstance(a, dict):
78 return {k: _get_arg(v) for k, v in a.items()}
79 elif isinstance(a, List) or isinstance(a, Tuple):
80 return type(a)(_get_arg(i) for i in a)
81 return a
82
83 def _execute_task(task: tuple):
84 if not istask(task):
85 return _get_arg(task)
86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
87
88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
89
[end of mars/contrib/dask/scheduler.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mars/contrib/dask/scheduler.py b/mars/contrib/dask/scheduler.py
--- a/mars/contrib/dask/scheduler.py
+++ b/mars/contrib/dask/scheduler.py
@@ -14,12 +14,13 @@
from dask.core import istask, ishashable
-from typing import List, Tuple
+from typing import List, Tuple, Union
from .utils import reduce
from ...remote import spawn
+from ...deploy.oscar.session import execute
-def mars_scheduler(dsk: dict, keys: List[List[str]]):
+def mars_scheduler(dsk: dict, keys: Union[List[List[str]], List[str]]):
"""
A Dask-Mars scheduler
@@ -30,22 +31,29 @@
----------
dsk: Dict
Dask graph, represented as a task DAG dictionary.
- keys: List[List[str]]
- 2d-list of Dask graph keys whose values we wish to compute and return.
+ keys: Union[List[List[str]], List[str]]
+ 1d or 2d list of Dask graph keys whose values we wish to compute and return.
Returns
-------
Object
- Computed values corresponding to the provided keys.
+ Computed values corresponding to the provided keys with same dimension.
"""
- res = reduce(mars_dask_get(dsk, keys)).execute().fetch()
- if not isinstance(res, List):
- return [[res]]
- else:
- return res
+ if isinstance(keys, List) and not isinstance(keys[0], List): # 1d keys
+ task = execute(mars_dask_get(dsk, keys))
+ if not isinstance(task, List):
+ task = [task]
+ return map(lambda x: x.fetch(), task)
+ else: # 2d keys
+ res = execute(reduce(mars_dask_get(dsk, keys))).fetch()
+ if not isinstance(res, List):
+ return [[res]]
+ else:
+ return res
-def mars_dask_get(dsk: dict, keys: List[List]):
+
+def mars_dask_get(dsk: dict, keys: Union[List[List[str]], List[str]]):
"""
A Dask-Mars convert function. This function will send the dask graph layers
to Mars Remote API, generating mars objects correspond to the provided keys.
@@ -54,13 +62,13 @@
----------
dsk: Dict
Dask graph, represented as a task DAG dictionary.
- keys: List[List[str]]
- 2d-list of Dask graph keys whose values we wish to compute and return.
+ keys: Union[List[List[str]], List[str]]
+ 1d or 2d list of Dask graph keys whose values we wish to compute and return.
Returns
-------
Object
- Spawned mars objects corresponding to the provided keys.
+ Spawned mars objects corresponding to the provided keys with same dimension.
"""
def _get_arg(a):
@@ -68,7 +76,7 @@
if ishashable(a) and a in dsk.keys():
while ishashable(a) and a in dsk.keys():
a = dsk[a]
- return _execute_task(a)
+ return _spawn_task(a)
elif not isinstance(a, str) and hasattr(a, "__getitem__"):
if istask(
a
@@ -80,9 +88,14 @@
return type(a)(_get_arg(i) for i in a)
return a
- def _execute_task(task: tuple):
+ def _spawn_task(task: tuple):
if not istask(task):
return _get_arg(task)
return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))
- return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]
+ return [
+ [_spawn_task(dsk[k]) for k in keys_d]
+ if isinstance(keys_d, List)
+ else _spawn_task(dsk[keys_d])
+ for keys_d in keys
+ ]
| {"golden_diff": "diff --git a/mars/contrib/dask/scheduler.py b/mars/contrib/dask/scheduler.py\n--- a/mars/contrib/dask/scheduler.py\n+++ b/mars/contrib/dask/scheduler.py\n@@ -14,12 +14,13 @@\n \n from dask.core import istask, ishashable\n \n-from typing import List, Tuple\n+from typing import List, Tuple, Union\n from .utils import reduce\n from ...remote import spawn\n+from ...deploy.oscar.session import execute\n \n \n-def mars_scheduler(dsk: dict, keys: List[List[str]]):\n+def mars_scheduler(dsk: dict, keys: Union[List[List[str]], List[str]]):\n \"\"\"\n A Dask-Mars scheduler\n \n@@ -30,22 +31,29 @@\n ----------\n dsk: Dict\n Dask graph, represented as a task DAG dictionary.\n- keys: List[List[str]]\n- 2d-list of Dask graph keys whose values we wish to compute and return.\n+ keys: Union[List[List[str]], List[str]]\n+ 1d or 2d list of Dask graph keys whose values we wish to compute and return.\n \n Returns\n -------\n Object\n- Computed values corresponding to the provided keys.\n+ Computed values corresponding to the provided keys with same dimension.\n \"\"\"\n- res = reduce(mars_dask_get(dsk, keys)).execute().fetch()\n- if not isinstance(res, List):\n- return [[res]]\n- else:\n- return res\n \n+ if isinstance(keys, List) and not isinstance(keys[0], List): # 1d keys\n+ task = execute(mars_dask_get(dsk, keys))\n+ if not isinstance(task, List):\n+ task = [task]\n+ return map(lambda x: x.fetch(), task)\n+ else: # 2d keys\n+ res = execute(reduce(mars_dask_get(dsk, keys))).fetch()\n+ if not isinstance(res, List):\n+ return [[res]]\n+ else:\n+ return res\n \n-def mars_dask_get(dsk: dict, keys: List[List]):\n+\n+def mars_dask_get(dsk: dict, keys: Union[List[List[str]], List[str]]):\n \"\"\"\n A Dask-Mars convert function. This function will send the dask graph layers\n to Mars Remote API, generating mars objects correspond to the provided keys.\n@@ -54,13 +62,13 @@\n ----------\n dsk: Dict\n Dask graph, represented as a task DAG dictionary.\n- keys: List[List[str]]\n- 2d-list of Dask graph keys whose values we wish to compute and return.\n+ keys: Union[List[List[str]], List[str]]\n+ 1d or 2d list of Dask graph keys whose values we wish to compute and return.\n \n Returns\n -------\n Object\n- Spawned mars objects corresponding to the provided keys.\n+ Spawned mars objects corresponding to the provided keys with same dimension.\n \"\"\"\n \n def _get_arg(a):\n@@ -68,7 +76,7 @@\n if ishashable(a) and a in dsk.keys():\n while ishashable(a) and a in dsk.keys():\n a = dsk[a]\n- return _execute_task(a)\n+ return _spawn_task(a)\n elif not isinstance(a, str) and hasattr(a, \"__getitem__\"):\n if istask(\n a\n@@ -80,9 +88,14 @@\n return type(a)(_get_arg(i) for i in a)\n return a\n \n- def _execute_task(task: tuple):\n+ def _spawn_task(task: tuple):\n if not istask(task):\n return _get_arg(task)\n return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))\n \n- return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]\n+ return [\n+ [_spawn_task(dsk[k]) for k in keys_d]\n+ if isinstance(keys_d, List)\n+ else _spawn_task(dsk[keys_d])\n+ for keys_d in keys\n+ ]\n", "issue": "[BUG] `dask.persist` cannot work on dask-on-mars\n<!--\r\nThank you for your contribution!\r\n\r\nPlease review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.\r\n-->\r\n\r\n**Describe the bug**\r\n\r\n`dask.persist` cannot work on dask-on-mars.\r\n\r\n**To Reproduce**\r\nTo help us reproducing this bug, please provide information below:\r\n1. Your Python version\r\n2. 
The version of Mars you use\r\n3. Versions of crucial packages, such as numpy, scipy and pandas\r\n4. Full stack of the error.\r\n5. Minimized code to reproduce the error.\r\n\r\n```\r\nIn [1]: import mars\r\n\r\nIn [2]: mars.new_session()\r\nWeb service started at http://0.0.0.0:64573\r\nOut[2]: <mars.deploy.oscar.session.SyncSession at 0x7fa0f8366d90>\r\n\r\nIn [3]: import dask\r\n ...: from mars.contrib.dask import mars_scheduler\r\n\r\nIn [4]: def inc(x):\r\n ...: return x + 1\r\n ...: \r\n\r\nIn [5]: dask_task = dask.delayed(inc)(1)\r\n\r\nIn [6]: dask_task.persist(scheduler=mars_scheduler)\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-6-095d61a2acc0> in <module>\r\n----> 1 dask_task.persist(scheduler=mars_scheduler)\r\n\r\n~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/base.py in persist(self, **kwargs)\r\n 259 dask.base.persist\r\n 260 \"\"\"\r\n--> 261 (result,) = persist(self, traverse=False, **kwargs)\r\n 262 return result\r\n 263 \r\n\r\n~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/base.py in persist(traverse, optimize_graph, scheduler, *args, **kwargs)\r\n 831 postpersists.append((rebuild, a_keys, state))\r\n 832 \r\n--> 833 results = schedule(dsk, keys, **kwargs)\r\n 834 d = dict(zip(keys, results))\r\n 835 results2 = [r({k: d[k] for k in ks}, *s) for r, ks, s in postpersists]\r\n\r\n~/Workspace/mars/mars/contrib/dask/scheduler.py in mars_scheduler(dsk, keys)\r\n 39 Computed values corresponding to the provided keys.\r\n 40 \"\"\"\r\n---> 41 res = reduce(mars_dask_get(dsk, keys)).execute().fetch()\r\n 42 if not isinstance(res, List):\r\n 43 return [[res]]\r\n\r\n~/Workspace/mars/mars/contrib/dask/scheduler.py in mars_dask_get(dsk, keys)\r\n 86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))\r\n 87 \r\n---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]\r\n\r\n~/Workspace/mars/mars/contrib/dask/scheduler.py in <listcomp>(.0)\r\n 86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))\r\n 87 \r\n---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]\r\n\r\n~/Workspace/mars/mars/contrib/dask/scheduler.py in <listcomp>(.0)\r\n 86 return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))\r\n 87 \r\n---> 88 return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]\r\n\r\n~/miniconda3/envs/mars3.8/lib/python3.8/site-packages/dask/highlevelgraph.py in __getitem__(self, key)\r\n 737 pass\r\n 738 \r\n--> 739 raise KeyError(key)\r\n 740 \r\n 741 def __len__(self) -> int:\r\n\r\nKeyError: 'i'\r\n```\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 1999-2021 Alibaba Group Holding Ltd.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom dask.core import istask, ishashable\n\nfrom typing import List, Tuple\nfrom .utils import reduce\nfrom ...remote import spawn\n\n\ndef mars_scheduler(dsk: dict, keys: List[List[str]]):\n \"\"\"\n A Dask-Mars scheduler\n\n This 
scheduler is intended to be compatible with existing\n dask user interface, no callbacks are implemented.\n\n Parameters\n ----------\n dsk: Dict\n Dask graph, represented as a task DAG dictionary.\n keys: List[List[str]]\n 2d-list of Dask graph keys whose values we wish to compute and return.\n\n Returns\n -------\n Object\n Computed values corresponding to the provided keys.\n \"\"\"\n res = reduce(mars_dask_get(dsk, keys)).execute().fetch()\n if not isinstance(res, List):\n return [[res]]\n else:\n return res\n\n\ndef mars_dask_get(dsk: dict, keys: List[List]):\n \"\"\"\n A Dask-Mars convert function. This function will send the dask graph layers\n to Mars Remote API, generating mars objects correspond to the provided keys.\n\n Parameters\n ----------\n dsk: Dict\n Dask graph, represented as a task DAG dictionary.\n keys: List[List[str]]\n 2d-list of Dask graph keys whose values we wish to compute and return.\n\n Returns\n -------\n Object\n Spawned mars objects corresponding to the provided keys.\n \"\"\"\n\n def _get_arg(a):\n # if arg contains layer index or callable objs, handle it\n if ishashable(a) and a in dsk.keys():\n while ishashable(a) and a in dsk.keys():\n a = dsk[a]\n return _execute_task(a)\n elif not isinstance(a, str) and hasattr(a, \"__getitem__\"):\n if istask(\n a\n ): # TODO:Handle `SubgraphCallable`, which may contains dsk in it\n return spawn(a[0], args=tuple(_get_arg(i) for i in a[1:]))\n elif isinstance(a, dict):\n return {k: _get_arg(v) for k, v in a.items()}\n elif isinstance(a, List) or isinstance(a, Tuple):\n return type(a)(_get_arg(i) for i in a)\n return a\n\n def _execute_task(task: tuple):\n if not istask(task):\n return _get_arg(task)\n return spawn(task[0], args=tuple(_get_arg(a) for a in task[1:]))\n\n return [[_execute_task(dsk[k]) for k in keys_d] for keys_d in keys]\n", "path": "mars/contrib/dask/scheduler.py"}]} | 2,376 | 925 |
gh_patches_debug_42527 | rasdani/github-patches | git_diff | conan-io__conan-center-index-2077 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[request] cpp-taskflow/2.5.0
### Package Details
* Package Name/Version: **cpp-taskflow/2.5.0**
* Changelog: **https://taskflow.github.io/taskflow/release-2-5-0.html**
Please note that `cpp-taskflow` is renamed to `taskflow`. So new recipe shall be created.
P.S. Can you tell me please, why recipes is not updated automatically, when new library version is released? Lask of automation? Will it be fixed in future conan center pipeline?
</issue>
<code>
[start of recipes/taskflow/all/conanfile.py]
1 from conans import ConanFile, tools
2 from conans.errors import ConanInvalidConfiguration
3 from conans.model.version import Version
4 import os
5
6
7 class TaskflowConan(ConanFile):
8 name = "taskflow"
9 description = "A fast C++ header-only library to help you quickly write parallel programs with complex task dependencies."
10 topics = ("conan", "taskflow", "tasking", "parallelism")
11 url = "https://github.com/conan-io/conan-center-index"
12 homepage = "https://github.com/taskflow/taskflow"
13 license = "MIT"
14
15 no_copy_source = True
16
17 settings = "os", "compiler"
18
19 _source_subfolder = "source_subfolder"
20
21 def configure(self):
22 compiler = str(self.settings.compiler)
23 compiler_version = tools.Version(self.settings.compiler.version)
24 min_req_cppstd = "17" if tools.Version(self.version) <= "2.2.0" else "14"
25
26 if self.settings.compiler.cppstd:
27 tools.check_min_cppstd(self, min_req_cppstd)
28 else:
29 self.output.warn("%s recipe lacks information about the %s compiler"
30 " standard version support" % (self.name, compiler))
31
32 minimal_version = {
33 "17" : {
34 "Visual Studio": "16",
35 "gcc": "7.3",
36 "clang": "6.0",
37 "apple-clang": "10.0"
38 },
39 "14" : {
40 "Visual Studio": "15",
41 "gcc": "5",
42 "clang": "4.0",
43 "apple-clang": "8.0"
44 }
45 }
46
47 if compiler not in minimal_version[min_req_cppstd]:
48 self.output.info("%s requires a compiler that supports at least C++%s" % (self.name, min_req_cppstd))
49 return
50
51 # Exclude compilers not supported by taskflow
52 if compiler_version < minimal_version[min_req_cppstd][compiler]:
53 raise ConanInvalidConfiguration("%s requires a compiler that supports"
54 " at least C++%s. %s %s is not"
55 " supported." % (self.name, min_req_cppstd, compiler, Version(self.settings.compiler.version.value)))
56
57 def source(self):
58 tools.get(**self.conan_data["sources"][self.version])
59 extracted_dir = self.name + "-" + self.version
60 os.rename(extracted_dir, self._source_subfolder)
61
62 def package(self):
63 self.copy(pattern="LICENSE", dst="licenses", src=self._source_subfolder)
64 self.copy(pattern="*", dst="include/taskflow", src=os.path.join(self._source_subfolder, "taskflow"))
65
66 def package_id(self):
67 self.info.header_only()
68
69 def package_info(self):
70 if self.settings.os == "Linux":
71 self.cpp_info.system_libs.append("pthread")
72 if self.settings.compiler == "Visual Studio":
73 self.cpp_info.defines.append("_ENABLE_EXTENDED_ALIGNED_STORAGE")
74
75
[end of recipes/taskflow/all/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/recipes/taskflow/all/conanfile.py b/recipes/taskflow/all/conanfile.py
--- a/recipes/taskflow/all/conanfile.py
+++ b/recipes/taskflow/all/conanfile.py
@@ -1,8 +1,7 @@
-from conans import ConanFile, tools
-from conans.errors import ConanInvalidConfiguration
-from conans.model.version import Version
import os
+from conans import ConanFile, tools
+from conans.errors import ConanInvalidConfiguration
class TaskflowConan(ConanFile):
name = "taskflow"
@@ -16,43 +15,41 @@
settings = "os", "compiler"
- _source_subfolder = "source_subfolder"
+ @property
+ def _source_subfolder(self):
+ return "source_subfolder"
def configure(self):
- compiler = str(self.settings.compiler)
- compiler_version = tools.Version(self.settings.compiler.version)
- min_req_cppstd = "17" if tools.Version(self.version) <= "2.2.0" else "14"
+ minimal_cpp_standard = "17" if tools.Version(self.version) <= "2.2.0" else "14"
if self.settings.compiler.cppstd:
- tools.check_min_cppstd(self, min_req_cppstd)
- else:
- self.output.warn("%s recipe lacks information about the %s compiler"
- " standard version support" % (self.name, compiler))
+ tools.check_min_cppstd(self, minimal_cpp_standard)
minimal_version = {
- "17" : {
+ "17": {
"Visual Studio": "16",
"gcc": "7.3",
"clang": "6.0",
"apple-clang": "10.0"
},
- "14" : {
+ "14": {
"Visual Studio": "15",
"gcc": "5",
"clang": "4.0",
"apple-clang": "8.0"
}
- }
+ }[minimal_cpp_standard]
- if compiler not in minimal_version[min_req_cppstd]:
- self.output.info("%s requires a compiler that supports at least C++%s" % (self.name, min_req_cppstd))
+ compiler = str(self.settings.compiler)
+ if compiler not in minimal_version:
+ self.output.warn(
+ "%s recipe lacks information about the %s compiler standard version support" % (self.name, compiler))
+ self.output.warn(
+ "%s requires a compiler that supports at least C++%s" % (self.name, minimal_cpp_standard))
return
-
- # Exclude compilers not supported by taskflow
- if compiler_version < minimal_version[min_req_cppstd][compiler]:
- raise ConanInvalidConfiguration("%s requires a compiler that supports"
- " at least C++%s. %s %s is not"
- " supported." % (self.name, min_req_cppstd, compiler, Version(self.settings.compiler.version.value)))
+ version = tools.Version(self.settings.compiler.version)
+ if version < minimal_version[compiler]:
+ raise ConanInvalidConfiguration("%s requires a compiler that supports at least C++%s" % (self.name, minimal_cpp_standard))
def source(self):
tools.get(**self.conan_data["sources"][self.version])
@@ -61,7 +58,7 @@
def package(self):
self.copy(pattern="LICENSE", dst="licenses", src=self._source_subfolder)
- self.copy(pattern="*", dst="include/taskflow", src=os.path.join(self._source_subfolder, "taskflow"))
+ self.copy(pattern="*", dst=os.path.join("include", "taskflow"), src=os.path.join(self._source_subfolder, "taskflow"))
def package_id(self):
self.info.header_only()
@@ -71,4 +68,5 @@
self.cpp_info.system_libs.append("pthread")
if self.settings.compiler == "Visual Studio":
self.cpp_info.defines.append("_ENABLE_EXTENDED_ALIGNED_STORAGE")
-
+ self.cpp_info.names["cmake_find_package"] = "Taskflow"
+ self.cpp_info.names["cmake_find_package_multi"] = "Taskflow"
| {"golden_diff": "diff --git a/recipes/taskflow/all/conanfile.py b/recipes/taskflow/all/conanfile.py\n--- a/recipes/taskflow/all/conanfile.py\n+++ b/recipes/taskflow/all/conanfile.py\n@@ -1,8 +1,7 @@\n-from conans import ConanFile, tools\n-from conans.errors import ConanInvalidConfiguration\n-from conans.model.version import Version\n import os\n \n+from conans import ConanFile, tools\n+from conans.errors import ConanInvalidConfiguration\n \n class TaskflowConan(ConanFile):\n name = \"taskflow\"\n@@ -16,43 +15,41 @@\n \n settings = \"os\", \"compiler\"\n \n- _source_subfolder = \"source_subfolder\"\n+ @property\n+ def _source_subfolder(self):\n+ return \"source_subfolder\"\n \n def configure(self):\n- compiler = str(self.settings.compiler)\n- compiler_version = tools.Version(self.settings.compiler.version)\n- min_req_cppstd = \"17\" if tools.Version(self.version) <= \"2.2.0\" else \"14\"\n+ minimal_cpp_standard = \"17\" if tools.Version(self.version) <= \"2.2.0\" else \"14\"\n \n if self.settings.compiler.cppstd:\n- tools.check_min_cppstd(self, min_req_cppstd)\n- else:\n- self.output.warn(\"%s recipe lacks information about the %s compiler\"\n- \" standard version support\" % (self.name, compiler))\n+ tools.check_min_cppstd(self, minimal_cpp_standard)\n \n minimal_version = {\n- \"17\" : {\n+ \"17\": {\n \"Visual Studio\": \"16\",\n \"gcc\": \"7.3\",\n \"clang\": \"6.0\",\n \"apple-clang\": \"10.0\"\n },\n- \"14\" : {\n+ \"14\": {\n \"Visual Studio\": \"15\",\n \"gcc\": \"5\",\n \"clang\": \"4.0\",\n \"apple-clang\": \"8.0\"\n }\n- }\n+ }[minimal_cpp_standard]\n \n- if compiler not in minimal_version[min_req_cppstd]:\n- self.output.info(\"%s requires a compiler that supports at least C++%s\" % (self.name, min_req_cppstd))\n+ compiler = str(self.settings.compiler)\n+ if compiler not in minimal_version:\n+ self.output.warn(\n+ \"%s recipe lacks information about the %s compiler standard version support\" % (self.name, compiler))\n+ self.output.warn(\n+ \"%s requires a compiler that supports at least C++%s\" % (self.name, minimal_cpp_standard))\n return\n-\n- # Exclude compilers not supported by taskflow\n- if compiler_version < minimal_version[min_req_cppstd][compiler]:\n- raise ConanInvalidConfiguration(\"%s requires a compiler that supports\"\n- \" at least C++%s. 
%s %s is not\"\n- \" supported.\" % (self.name, min_req_cppstd, compiler, Version(self.settings.compiler.version.value)))\n+ version = tools.Version(self.settings.compiler.version)\n+ if version < minimal_version[compiler]:\n+ raise ConanInvalidConfiguration(\"%s requires a compiler that supports at least C++%s\" % (self.name, minimal_cpp_standard))\n \n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n@@ -61,7 +58,7 @@\n \n def package(self):\n self.copy(pattern=\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n- self.copy(pattern=\"*\", dst=\"include/taskflow\", src=os.path.join(self._source_subfolder, \"taskflow\"))\n+ self.copy(pattern=\"*\", dst=os.path.join(\"include\", \"taskflow\"), src=os.path.join(self._source_subfolder, \"taskflow\"))\n \n def package_id(self):\n self.info.header_only()\n@@ -71,4 +68,5 @@\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines.append(\"_ENABLE_EXTENDED_ALIGNED_STORAGE\")\n-\n+ self.cpp_info.names[\"cmake_find_package\"] = \"Taskflow\"\n+ self.cpp_info.names[\"cmake_find_package_multi\"] = \"Taskflow\"\n", "issue": "[request] cpp-taskflow/2.5.0\n### Package Details\r\n * Package Name/Version: **cpp-taskflow/2.5.0**\r\n * Changelog: **https://taskflow.github.io/taskflow/release-2-5-0.html**\r\n\r\n\r\nPlease note that `cpp-taskflow` is renamed to `taskflow`. So new recipe shall be created.\r\n\r\nP.S. Can you tell me please, why recipes is not updated automatically, when new library version is released? Lask of automation? Will it be fixed in future conan center pipeline?\r\n\n", "before_files": [{"content": "from conans import ConanFile, tools\nfrom conans.errors import ConanInvalidConfiguration\nfrom conans.model.version import Version\nimport os\n\n\nclass TaskflowConan(ConanFile):\n name = \"taskflow\"\n description = \"A fast C++ header-only library to help you quickly write parallel programs with complex task dependencies.\"\n topics = (\"conan\", \"taskflow\", \"tasking\", \"parallelism\")\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://github.com/taskflow/taskflow\"\n license = \"MIT\"\n\n no_copy_source = True\n\n settings = \"os\", \"compiler\"\n\n _source_subfolder = \"source_subfolder\"\n\n def configure(self):\n compiler = str(self.settings.compiler)\n compiler_version = tools.Version(self.settings.compiler.version)\n min_req_cppstd = \"17\" if tools.Version(self.version) <= \"2.2.0\" else \"14\"\n\n if self.settings.compiler.cppstd:\n tools.check_min_cppstd(self, min_req_cppstd)\n else:\n self.output.warn(\"%s recipe lacks information about the %s compiler\"\n \" standard version support\" % (self.name, compiler))\n\n minimal_version = {\n \"17\" : {\n \"Visual Studio\": \"16\",\n \"gcc\": \"7.3\",\n \"clang\": \"6.0\",\n \"apple-clang\": \"10.0\"\n },\n \"14\" : {\n \"Visual Studio\": \"15\",\n \"gcc\": \"5\",\n \"clang\": \"4.0\",\n \"apple-clang\": \"8.0\"\n }\n }\n\n if compiler not in minimal_version[min_req_cppstd]:\n self.output.info(\"%s requires a compiler that supports at least C++%s\" % (self.name, min_req_cppstd))\n return\n\n # Exclude compilers not supported by taskflow\n if compiler_version < minimal_version[min_req_cppstd][compiler]:\n raise ConanInvalidConfiguration(\"%s requires a compiler that supports\"\n \" at least C++%s. 
%s %s is not\"\n \" supported.\" % (self.name, min_req_cppstd, compiler, Version(self.settings.compiler.version.value)))\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def package(self):\n self.copy(pattern=\"LICENSE\", dst=\"licenses\", src=self._source_subfolder)\n self.copy(pattern=\"*\", dst=\"include/taskflow\", src=os.path.join(self._source_subfolder, \"taskflow\"))\n\n def package_id(self):\n self.info.header_only()\n\n def package_info(self):\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs.append(\"pthread\")\n if self.settings.compiler == \"Visual Studio\":\n self.cpp_info.defines.append(\"_ENABLE_EXTENDED_ALIGNED_STORAGE\")\n\n", "path": "recipes/taskflow/all/conanfile.py"}]} | 1,448 | 932 |
gh_patches_debug_3808 | rasdani/github-patches | git_diff | ManimCommunity__manim-2197 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Create a test for resolution configuration
## Enhancement proposal
A test for the `--resolution` configuration option. It should be placed [here](https://github.com/ManimCommunity/manim/blob/main/tests/test_scene_rendering/test_cli_flags.py) and [here](https://github.com/ManimCommunity/manim/blob/main/tests/test_scene_rendering/opengl/test_cli_flags_opengl.py) (the OpenGL test can be marked as `xfail` if setting resolution does not work properly on OpenGL). Test different resolutions as well as different ways of separating the resolutions (width and height separated by `;`, `,`, and `-`).
## Additional comments
<!-- Add further context that you think might be relevant. -->
</issue>
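For reference, a parametrized test along the requested lines could reuse `capture` and `get_video_metadata` from the module listed below. This is only a sketch: the `simple_scenes_path` fixture, the `SquareToCircle` scene name, and the exact CLI invocation are assumptions, not the real test-suite helpers.

```python
# Hypothetical sketch only -- fixture, scene name and CLI details are assumptions.
import pytest

from manim.utils.commands import capture, get_video_metadata


@pytest.mark.parametrize("separator", [";", ",", "-"])
@pytest.mark.parametrize("width, height", [(640, 480), (1280, 720)])
def test_resolution_flag(tmp_path, simple_scenes_path, separator, width, height):
    command = [
        "manim",
        str(simple_scenes_path),
        "SquareToCircle",
        "--media_dir", str(tmp_path),
        "--resolution", f"{width}{separator}{height}",
    ]
    _, err, exit_code = capture(command)
    assert exit_code == 0, err

    video = next(tmp_path.rglob("*.mp4"))  # find the rendered output wherever it landed
    meta = get_video_metadata(str(video))
    assert (int(meta["width"]), int(meta["height"])) == (width, height)
```

An OpenGL variant would presumably add `--renderer opengl` to the command and carry the `xfail` marker suggested above.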
<code>
[start of manim/utils/commands.py]
1 import json
2 import os
3 from subprocess import run
4 from typing import Any, Dict, List
5
6 __all__ = [
7 "capture",
8 "get_video_metadata",
9 "get_dir_layout",
10 ]
11
12
13 def capture(command, cwd=None, command_input=None):
14 p = run(command, cwd=cwd, input=command_input, capture_output=True, text=True)
15 out, err = p.stdout, p.stderr
16 return out, err, p.returncode
17
18
19 def get_video_metadata(path_to_video: str) -> Dict[str, Any]:
20 command = [
21 "ffprobe",
22 "-v",
23 "error",
24 "-select_streams",
25 "v:0",
26 "-show_entries",
27 "stream=width,height,nb_frames,duration,avg_frame_rate,codec_name",
28 "-print_format",
29 "json",
30 path_to_video,
31 ]
32 config, err, exitcode = capture(command)
33 assert exitcode == 0, f"FFprobe error: {err}"
34 return json.loads(config)["streams"][0]
35
36
37 def get_dir_layout(dirpath: str) -> List[str]:
38 """Get list of paths relative to dirpath of all files in dir and subdirs recursively."""
39 index_files: List[str] = []
40 for root, dirs, files in os.walk(dirpath):
41 for file in files:
42 index_files.append(f"{os.path.relpath(os.path.join(root, file), dirpath)}")
43 return index_files
44
[end of manim/utils/commands.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/manim/utils/commands.py b/manim/utils/commands.py
--- a/manim/utils/commands.py
+++ b/manim/utils/commands.py
@@ -27,7 +27,7 @@
"stream=width,height,nb_frames,duration,avg_frame_rate,codec_name",
"-print_format",
"json",
- path_to_video,
+ str(path_to_video),
]
config, err, exitcode = capture(command)
assert exitcode == 0, f"FFprobe error: {err}"
| {"golden_diff": "diff --git a/manim/utils/commands.py b/manim/utils/commands.py\n--- a/manim/utils/commands.py\n+++ b/manim/utils/commands.py\n@@ -27,7 +27,7 @@\n \"stream=width,height,nb_frames,duration,avg_frame_rate,codec_name\",\n \"-print_format\",\n \"json\",\n- path_to_video,\n+ str(path_to_video),\n ]\n config, err, exitcode = capture(command)\n assert exitcode == 0, f\"FFprobe error: {err}\"\n", "issue": "Create a test for resolution configuration\n## Enhancement proposal\r\nA test for the `--resolution` configuration option. It should be placed [here](https://github.com/ManimCommunity/manim/blob/main/tests/test_scene_rendering/test_cli_flags.py) and [here](https://github.com/ManimCommunity/manim/blob/main/tests/test_scene_rendering/opengl/test_cli_flags_opengl.py) (the OpenGL test can be marked as `xfail` if setting resolution does not work properly on OpenGL). Test different resolutions as well as different ways of separating the resolutions (width and height separated by `;`, `,`, and `-`).\r\n\r\n## Additional comments\r\n<!-- Add further context that you think might be relevant. -->\r\n\n", "before_files": [{"content": "import json\nimport os\nfrom subprocess import run\nfrom typing import Any, Dict, List\n\n__all__ = [\n \"capture\",\n \"get_video_metadata\",\n \"get_dir_layout\",\n]\n\n\ndef capture(command, cwd=None, command_input=None):\n p = run(command, cwd=cwd, input=command_input, capture_output=True, text=True)\n out, err = p.stdout, p.stderr\n return out, err, p.returncode\n\n\ndef get_video_metadata(path_to_video: str) -> Dict[str, Any]:\n command = [\n \"ffprobe\",\n \"-v\",\n \"error\",\n \"-select_streams\",\n \"v:0\",\n \"-show_entries\",\n \"stream=width,height,nb_frames,duration,avg_frame_rate,codec_name\",\n \"-print_format\",\n \"json\",\n path_to_video,\n ]\n config, err, exitcode = capture(command)\n assert exitcode == 0, f\"FFprobe error: {err}\"\n return json.loads(config)[\"streams\"][0]\n\n\ndef get_dir_layout(dirpath: str) -> List[str]:\n \"\"\"Get list of paths relative to dirpath of all files in dir and subdirs recursively.\"\"\"\n index_files: List[str] = []\n for root, dirs, files in os.walk(dirpath):\n for file in files:\n index_files.append(f\"{os.path.relpath(os.path.join(root, file), dirpath)}\")\n return index_files\n", "path": "manim/utils/commands.py"}]} | 1,076 | 119 |
gh_patches_debug_13600 | rasdani/github-patches | git_diff | scikit-image__scikit-image-2722 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs build fails with numpydoc 0.7.0
## Description
If you just pip install the latest numpydoc (0.7.0) and run 'make html' in doc/, you get:
```
reading sources... [ 2%] api/skimage.data
Exception occurred:
File "/opt/opensource/deb/py36/venv3/lib/python3.5/site-packages/numpydoc/docscrape.py", line 343, in _parse
raise ValueError(msg)
ValueError: The section Other Parameters appears twice in the docstring of None in None.
```
Installing numpydoc 0.6.0 gets past this problem.
I have no idea if this is a numpydoc or scikit-image bug really...
</issue>
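For context, numpydoc 0.7.0 refuses a docstring that declares the same section twice, and the `imread` docstring in the module listed below declares ``Other Parameters`` twice. The usual remedy is to fold the duplicated entries into a single section; an illustrative (not verbatim) corrected docstring:

```python
# Illustrative sketch only -- not the actual scikit-image source.
def imread(fname, as_grey=False, plugin=None, flatten=None, **plugin_args):
    """Load an image from file.

    Parameters
    ----------
    fname : string
        Image file name, e.g. ``test.jpg`` or URL.
    as_grey : bool
        If True, convert color images to grey-scale (64-bit floats).
    plugin : str
        Name of plugin to use.

    Other Parameters
    ----------------
    flatten : bool
        Backward compatible keyword, superseded by `as_grey`.
    plugin_args : keywords
        Passed to the given plugin.

    Returns
    -------
    img_array : ndarray
        The loaded image.
    """
```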
<code>
[start of skimage/io/_io.py]
1 from io import BytesIO
2
3 import numpy as np
4 import six
5
6 from ..io.manage_plugins import call_plugin
7 from ..color import rgb2grey
8 from .util import file_or_url_context
9 from ..exposure import is_low_contrast
10 from .._shared.utils import all_warnings, warn
11
12
13 __all__ = ['imread', 'imsave', 'imshow', 'show',
14 'imread_collection', 'imshow_collection']
15
16
17 def imread(fname, as_grey=False, plugin=None, flatten=None,
18 **plugin_args):
19 """Load an image from file.
20
21 Parameters
22 ----------
23 fname : string
24 Image file name, e.g. ``test.jpg`` or URL.
25 as_grey : bool
26 If True, convert color images to grey-scale (64-bit floats).
27 Images that are already in grey-scale format are not converted.
28 plugin : str
29 Name of plugin to use. By default, the different plugins are
30 tried (starting with the Python Imaging Library) until a suitable
31 candidate is found. If not given and fname is a tiff file, the
32 tifffile plugin will be used.
33
34 Other Parameters
35 ----------------
36 flatten : bool
37 Backward compatible keyword, superseded by `as_grey`.
38
39 Returns
40 -------
41 img_array : ndarray
42 The different color bands/channels are stored in the
43 third dimension, such that a grey-image is MxN, an
44 RGB-image MxNx3 and an RGBA-image MxNx4.
45
46 Other parameters
47 ----------------
48 plugin_args : keywords
49 Passed to the given plugin.
50
51 """
52 # Backward compatibility
53 if flatten is not None:
54 as_grey = flatten
55
56 if plugin is None and hasattr(fname, 'lower'):
57 if fname.lower().endswith(('.tiff', '.tif')):
58 plugin = 'tifffile'
59
60 with file_or_url_context(fname) as fname:
61 img = call_plugin('imread', fname, plugin=plugin, **plugin_args)
62
63 if not hasattr(img, 'ndim'):
64 return img
65
66 if img.ndim > 2:
67 if img.shape[-1] not in (3, 4) and img.shape[-3] in (3, 4):
68 img = np.swapaxes(img, -1, -3)
69 img = np.swapaxes(img, -2, -3)
70
71 if as_grey:
72 img = rgb2grey(img)
73
74 return img
75
76
77 def imread_collection(load_pattern, conserve_memory=True,
78 plugin=None, **plugin_args):
79 """
80 Load a collection of images.
81
82 Parameters
83 ----------
84 load_pattern : str or list
85 List of objects to load. These are usually filenames, but may
86 vary depending on the currently active plugin. See the docstring
87 for ``ImageCollection`` for the default behaviour of this parameter.
88 conserve_memory : bool, optional
89 If True, never keep more than one in memory at a specific
90 time. Otherwise, images will be cached once they are loaded.
91
92 Returns
93 -------
94 ic : ImageCollection
95 Collection of images.
96
97 Other parameters
98 ----------------
99 plugin_args : keywords
100 Passed to the given plugin.
101
102 """
103 return call_plugin('imread_collection', load_pattern, conserve_memory,
104 plugin=plugin, **plugin_args)
105
106
107 def imsave(fname, arr, plugin=None, **plugin_args):
108 """Save an image to file.
109
110 Parameters
111 ----------
112 fname : str
113 Target filename.
114 arr : ndarray of shape (M,N) or (M,N,3) or (M,N,4)
115 Image data.
116 plugin : str
117 Name of plugin to use. By default, the different plugins are
118 tried (starting with the Python Imaging Library) until a suitable
119 candidate is found. If not given and fname is a tiff file, the
120 tifffile plugin will be used.
121
122 Other parameters
123 ----------------
124 plugin_args : keywords
125 Passed to the given plugin.
126
127 Notes
128 -----
129 When saving a JPEG, the compression ratio may be controlled using the
130 ``quality`` keyword argument which is an integer with values in [1, 100]
131 where 1 is worst quality and smallest file size, and 100 is best quality and
132 largest file size (default 75). This is only available when using the PIL
133 and imageio plugins.
134 """
135 if plugin is None and hasattr(fname, 'lower'):
136 if fname.lower().endswith(('.tiff', '.tif')):
137 plugin = 'tifffile'
138 if is_low_contrast(arr):
139 warn('%s is a low contrast image' % fname)
140 return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)
141
142
143 def imshow(arr, plugin=None, **plugin_args):
144 """Display an image.
145
146 Parameters
147 ----------
148 arr : ndarray or str
149 Image data or name of image file.
150 plugin : str
151 Name of plugin to use. By default, the different plugins are
152 tried (starting with the Python Imaging Library) until a suitable
153 candidate is found.
154
155 Other parameters
156 ----------------
157 plugin_args : keywords
158 Passed to the given plugin.
159
160 """
161 if isinstance(arr, six.string_types):
162 arr = call_plugin('imread', arr, plugin=plugin)
163 return call_plugin('imshow', arr, plugin=plugin, **plugin_args)
164
165
166 def imshow_collection(ic, plugin=None, **plugin_args):
167 """Display a collection of images.
168
169 Parameters
170 ----------
171 ic : ImageCollection
172 Collection to display.
173 plugin : str
174 Name of plugin to use. By default, the different plugins are
175 tried until a suitable candidate is found.
176
177 Other parameters
178 ----------------
179 plugin_args : keywords
180 Passed to the given plugin.
181
182 """
183 return call_plugin('imshow_collection', ic, plugin=plugin, **plugin_args)
184
185
186 def show():
187 '''Display pending images.
188
189 Launch the event loop of the current gui plugin, and display all
190 pending images, queued via `imshow`. This is required when using
191 `imshow` from non-interactive scripts.
192
193 A call to `show` will block execution of code until all windows
194 have been closed.
195
196 Examples
197 --------
198 >>> import skimage.io as io
199
200 >>> for i in range(4):
201 ... ax_im = io.imshow(np.random.rand(50, 50))
202 >>> io.show() # doctest: +SKIP
203
204 '''
205 return call_plugin('_app_show')
206
[end of skimage/io/_io.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/io/_io.py b/skimage/io/_io.py
--- a/skimage/io/_io.py
+++ b/skimage/io/_io.py
@@ -36,6 +36,9 @@
flatten : bool
Backward compatible keyword, superseded by `as_grey`.
+ plugin_args : keywords
+ Passed to the given plugin.
+
Returns
-------
img_array : ndarray
@@ -43,11 +46,6 @@
third dimension, such that a grey-image is MxN, an
RGB-image MxNx3 and an RGBA-image MxNx4.
- Other parameters
- ----------------
- plugin_args : keywords
- Passed to the given plugin.
-
"""
# Backward compatibility
if flatten is not None:
| {"golden_diff": "diff --git a/skimage/io/_io.py b/skimage/io/_io.py\n--- a/skimage/io/_io.py\n+++ b/skimage/io/_io.py\n@@ -36,6 +36,9 @@\n flatten : bool\n Backward compatible keyword, superseded by `as_grey`.\n \n+ plugin_args : keywords\n+ Passed to the given plugin.\n+\n Returns\n -------\n img_array : ndarray\n@@ -43,11 +46,6 @@\n third dimension, such that a grey-image is MxN, an\n RGB-image MxNx3 and an RGBA-image MxNx4.\n \n- Other parameters\n- ----------------\n- plugin_args : keywords\n- Passed to the given plugin.\n-\n \"\"\"\n # Backward compatibility\n if flatten is not None:\n", "issue": "docs build fails with numpydoc 0.7.0\n## Description\r\n\r\nIf you just pip install numpydoc today and run 'make html' in doc you get:\r\n\r\n```\r\nreading sources... [ 2%] api/skimage.data \r\nException occurred:\r\n File \"/opt/opensource/deb/py36/venv3/lib/python3.5/site-packages/numpydoc/docscrape.py\", line 343, in _parse\r\n raise ValueError(msg)\r\nValueError: The section Other Parameters appears twice in the docstring of None in None.\r\n```\r\n\r\nInstalling numpydoc 0.6.0 gets past this problem.\r\n\r\nI have no idea if this is a numpydoc or scikit-image bug really...\r\n\n", "before_files": [{"content": "from io import BytesIO\n\nimport numpy as np\nimport six\n\nfrom ..io.manage_plugins import call_plugin\nfrom ..color import rgb2grey\nfrom .util import file_or_url_context\nfrom ..exposure import is_low_contrast\nfrom .._shared.utils import all_warnings, warn\n\n\n__all__ = ['imread', 'imsave', 'imshow', 'show',\n 'imread_collection', 'imshow_collection']\n\n\ndef imread(fname, as_grey=False, plugin=None, flatten=None,\n **plugin_args):\n \"\"\"Load an image from file.\n\n Parameters\n ----------\n fname : string\n Image file name, e.g. ``test.jpg`` or URL.\n as_grey : bool\n If True, convert color images to grey-scale (64-bit floats).\n Images that are already in grey-scale format are not converted.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried (starting with the Python Imaging Library) until a suitable\n candidate is found. If not given and fname is a tiff file, the\n tifffile plugin will be used.\n\n Other Parameters\n ----------------\n flatten : bool\n Backward compatible keyword, superseded by `as_grey`.\n\n Returns\n -------\n img_array : ndarray\n The different color bands/channels are stored in the\n third dimension, such that a grey-image is MxN, an\n RGB-image MxNx3 and an RGBA-image MxNx4.\n\n Other parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n # Backward compatibility\n if flatten is not None:\n as_grey = flatten\n\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n\n with file_or_url_context(fname) as fname:\n img = call_plugin('imread', fname, plugin=plugin, **plugin_args)\n\n if not hasattr(img, 'ndim'):\n return img\n\n if img.ndim > 2:\n if img.shape[-1] not in (3, 4) and img.shape[-3] in (3, 4):\n img = np.swapaxes(img, -1, -3)\n img = np.swapaxes(img, -2, -3)\n\n if as_grey:\n img = rgb2grey(img)\n\n return img\n\n\ndef imread_collection(load_pattern, conserve_memory=True,\n plugin=None, **plugin_args):\n \"\"\"\n Load a collection of images.\n\n Parameters\n ----------\n load_pattern : str or list\n List of objects to load. These are usually filenames, but may\n vary depending on the currently active plugin. 
See the docstring\n for ``ImageCollection`` for the default behaviour of this parameter.\n conserve_memory : bool, optional\n If True, never keep more than one in memory at a specific\n time. Otherwise, images will be cached once they are loaded.\n\n Returns\n -------\n ic : ImageCollection\n Collection of images.\n\n Other parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n return call_plugin('imread_collection', load_pattern, conserve_memory,\n plugin=plugin, **plugin_args)\n\n\ndef imsave(fname, arr, plugin=None, **plugin_args):\n \"\"\"Save an image to file.\n\n Parameters\n ----------\n fname : str\n Target filename.\n arr : ndarray of shape (M,N) or (M,N,3) or (M,N,4)\n Image data.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried (starting with the Python Imaging Library) until a suitable\n candidate is found. If not given and fname is a tiff file, the\n tifffile plugin will be used.\n\n Other parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n Notes\n -----\n When saving a JPEG, the compression ratio may be controlled using the\n ``quality`` keyword argument which is an integer with values in [1, 100]\n where 1 is worst quality and smallest file size, and 100 is best quality and\n largest file size (default 75). This is only available when using the PIL\n and imageio plugins.\n \"\"\"\n if plugin is None and hasattr(fname, 'lower'):\n if fname.lower().endswith(('.tiff', '.tif')):\n plugin = 'tifffile'\n if is_low_contrast(arr):\n warn('%s is a low contrast image' % fname)\n return call_plugin('imsave', fname, arr, plugin=plugin, **plugin_args)\n\n\ndef imshow(arr, plugin=None, **plugin_args):\n \"\"\"Display an image.\n\n Parameters\n ----------\n arr : ndarray or str\n Image data or name of image file.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried (starting with the Python Imaging Library) until a suitable\n candidate is found.\n\n Other parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n if isinstance(arr, six.string_types):\n arr = call_plugin('imread', arr, plugin=plugin)\n return call_plugin('imshow', arr, plugin=plugin, **plugin_args)\n\n\ndef imshow_collection(ic, plugin=None, **plugin_args):\n \"\"\"Display a collection of images.\n\n Parameters\n ----------\n ic : ImageCollection\n Collection to display.\n plugin : str\n Name of plugin to use. By default, the different plugins are\n tried until a suitable candidate is found.\n\n Other parameters\n ----------------\n plugin_args : keywords\n Passed to the given plugin.\n\n \"\"\"\n return call_plugin('imshow_collection', ic, plugin=plugin, **plugin_args)\n\n\ndef show():\n '''Display pending images.\n\n Launch the event loop of the current gui plugin, and display all\n pending images, queued via `imshow`. This is required when using\n `imshow` from non-interactive scripts.\n\n A call to `show` will block execution of code until all windows\n have been closed.\n\n Examples\n --------\n >>> import skimage.io as io\n\n >>> for i in range(4):\n ... ax_im = io.imshow(np.random.rand(50, 50))\n >>> io.show() # doctest: +SKIP\n\n '''\n return call_plugin('_app_show')\n", "path": "skimage/io/_io.py"}]} | 2,650 | 185 |
gh_patches_debug_21419 | rasdani/github-patches | git_diff | Mailu__Mailu-1088 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MAIL FROM: does-not-exist@[X.X.X.X] results in an admin exception stacktrace
If a client (read: malicious attacker) connects to Postfix and uses an IP address within brackets instead of a domain name for the sender, e.g.:
```
$ telnet mail.example.com 25
Connected to example.com
Escape character is '^]'.
220 mail.example.com ESMTP ready
EHLO dummy.example.com
250-mail.example.com
250 STARTTLS
MAIL FROM: does-not-exist@[116.203.165.200]
250 2.0.0 OK
RCPT TO: [email protected]
451 4.3.0 <does-not-exist@[116.203.165.200]>: Temporary lookup failure
```
we see the following stacktrace in the admin container:
```
[2019-07-27 14:26:27,015] ERROR in app: Exception on /internal/postfix/domain/[116.203.165.200] [GET]
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/idna/core.py", line 271, in alabel
ulabel(label)
File "/usr/lib/python3.7/site-packages/idna/core.py", line 307, in ulabel
check_label(label)
File "/usr/lib/python3.7/site-packages/idna/core.py", line 261, in check_label
raise InvalidCodepoint('Codepoint {0} at position {1} of {2} not allowed'.format(_unot(cp_value), pos+1, repr(label)))
idna.core.InvalidCodepoint: Codepoint U+005B at position 1 of '[116' not allowed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1127, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 669, in _init_compiled
param.append(processors[key](compiled_params[key]))
File "/usr/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 1182, in process
return process_param(value, dialect)
File "/app/mailu/models.py", line 30, in process_bind_param
return idna.encode(value).decode("ascii").lower()
File "/usr/lib/python3.7/site-packages/idna/core.py", line 361, in encode
s = alabel(label)
File "/usr/lib/python3.7/site-packages/idna/core.py", line 273, in alabel
raise IDNAError('The label {0} is not a valid A-label'.format(label))
idna.core.IDNAError: The label b'[116' is not a valid A-label
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/lib/python3.7/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/lib/python3.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/app/mailu/internal/views/postfix.py", line 9, in postfix_mailbox_domain
domain = models.Domain.query.get(domain_name) or \
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 924, in get
ident, loading.load_on_pk_identity)
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 1007, in _get_impl
return db_load_fn(self, primary_key_identity)
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/loading.py", line 250, in load_on_pk_identity
return q.one()
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2954, in one
ret = self.one_or_none()
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2924, in one_or_none
ret = list(self)
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 2995, in __iter__
return self._execute_and_instances(context)
File "/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py", line 3018, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1132, in _execute_context
None, None)
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 248, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1127, in _execute_context
context = constructor(dialect, self, conn, *args)
File "/usr/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 669, in _init_compiled
param.append(processors[key](compiled_params[key]))
File "/usr/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py", line 1182, in process
return process_param(value, dialect)
File "/app/mailu/models.py", line 30, in process_bind_param
return idna.encode(value).decode("ascii").lower()
File "/usr/lib/python3.7/site-packages/idna/core.py", line 361, in encode
s = alabel(label)
File "/usr/lib/python3.7/site-packages/idna/core.py", line 273, in alabel
raise IDNAError('The label {0} is not a valid A-label'.format(label))
sqlalchemy.exc.StatementError: (idna.core.IDNAError) The label b'[116' is not a valid A-label [SQL: 'SELECT domain.created_at AS domain_created_at, domain.updated_at AS domain_updated_at, domain.comment AS domain_comment, domain.name AS domain_name, domain.max_users AS domain_max_users, domain.max_aliases AS domain_max_aliases, domain.max_quota_bytes AS domain_max_quota_bytes, domain.signup_enabled AS domain_signup_enabled \nFROM domain \nWHERE domain.name = ?'] [parameters: [{'%(140312238936248 param)s': '[116.203.165.200]'}]]
10.36.0.0 - - [27/Jul/2019:14:26:27 +0000] "GET /internal/postfix/domain/%5B116.203.165.200%5D HTTP/1.1" 500 291 "-" "Python/3.7 aiohttp/3.5.4"
10.44.0.0 - - [27/Jul/2019:14:26:31 +0000] "GET /internal/auth/email HTTP/1.0" 200 0 "-" "-"
```
NOTE: the lookup result appears to be cached, so you need to use another IP address to reproduce the error multiple times.
I think we're missing some validation rules, because a similar (invalid) MAIL FROM with an IP address (but without brackets) is handled properly:
```
telnet mail.example.com 25
Trying ...
Connected to example.com.
Escape character is '^]'.
220 mail.example.com ESMTP ready
EHLO dummy.example.com
250-mail.example.com
250 STARTTLS
MAIL FROM: [email protected]
250 2.0.0 OK
RCPT TO: [email protected]
501 5.1.7 Bad sender address syntax
Connection closed by foreign host.
```
</issue>
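One way to keep RFC 5321 address literals such as `[116.203.165.200]` from ever reaching the IDNA-encoded database lookup is to short-circuit them in the internal views. The sketch below shows the guard idea for the domain endpoint only; the patch recorded further down applies the same check to the transport endpoint as well.

```python
# Sketch of the guard idea -- see the recorded patch below for the full fix.
import re

import flask

from mailu import models
from mailu.internal import internal

ADDRESS_LITERAL = re.compile(r"^\[.*\]$")   # e.g. "[116.203.165.200]"


@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
    if ADDRESS_LITERAL.match(domain_name):
        # Not a hosted domain name; let Postfix treat it as unknown.
        return flask.abort(404)
    domain = models.Domain.query.get(domain_name) or \
        models.Alternative.query.get(domain_name) or \
        flask.abort(404)
    return flask.jsonify(domain.name)
```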
<code>
[start of core/admin/mailu/internal/views/postfix.py]
1 from mailu import models
2 from mailu.internal import internal
3
4 import flask
5
6
7 @internal.route("/postfix/domain/<domain_name>")
8 def postfix_mailbox_domain(domain_name):
9 domain = models.Domain.query.get(domain_name) or \
10 models.Alternative.query.get(domain_name) or \
11 flask.abort(404)
12 return flask.jsonify(domain.name)
13
14
15 @internal.route("/postfix/mailbox/<path:email>")
16 def postfix_mailbox_map(email):
17 user = models.User.query.get(email) or flask.abort(404)
18 return flask.jsonify(user.email)
19
20
21 @internal.route("/postfix/alias/<path:alias>")
22 def postfix_alias_map(alias):
23 localpart, domain_name = models.Email.resolve_domain(alias)
24 if localpart is None:
25 return flask.jsonify(domain_name)
26 destination = models.Email.resolve_destination(localpart, domain_name)
27 return flask.jsonify(",".join(destination)) if destination else flask.abort(404)
28
29
30 @internal.route("/postfix/transport/<path:email>")
31 def postfix_transport(email):
32 if email == '*':
33 return flask.abort(404)
34 localpart, domain_name = models.Email.resolve_domain(email)
35 relay = models.Relay.query.get(domain_name) or flask.abort(404)
36 return flask.jsonify("smtp:[{}]".format(relay.smtp))
37
38
39 @internal.route("/postfix/sender/login/<path:sender>")
40 def postfix_sender_login(sender):
41 localpart, domain_name = models.Email.resolve_domain(sender)
42 if localpart is None:
43 return flask.abort(404)
44 destination = models.Email.resolve_destination(localpart, domain_name, True)
45 return flask.jsonify(",".join(destination)) if destination else flask.abort(404)
46
47
48 @internal.route("/postfix/sender/access/<path:sender>")
49 def postfix_sender_access(sender):
50 """ Simply reject any sender that pretends to be from a local domain
51 """
52 if not is_void_address(sender):
53 localpart, domain_name = models.Email.resolve_domain(sender)
54 return flask.jsonify("REJECT") if models.Domain.query.get(domain_name) else flask.abort(404)
55 else:
56 return flask.abort(404)
57
58
59 def is_void_address(email):
60 '''True if the email is void (null) email address.
61 '''
62 if email.startswith('<') and email.endswith('>'):
63 email = email[1:-1]
64 # Some MTAs use things like '<MAILER-DAEMON>' instead of '<>'; so let's
65 # consider void any such thing.
66 return '@' not in email
67
[end of core/admin/mailu/internal/views/postfix.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py
--- a/core/admin/mailu/internal/views/postfix.py
+++ b/core/admin/mailu/internal/views/postfix.py
@@ -2,10 +2,13 @@
from mailu.internal import internal
import flask
+import re
@internal.route("/postfix/domain/<domain_name>")
def postfix_mailbox_domain(domain_name):
+ if re.match("^\[.*\]$", domain_name):
+ return flask.abort(404)
domain = models.Domain.query.get(domain_name) or \
models.Alternative.query.get(domain_name) or \
flask.abort(404)
@@ -29,7 +32,7 @@
@internal.route("/postfix/transport/<path:email>")
def postfix_transport(email):
- if email == '*':
+ if email == '*' or re.match("(^|.*@)\[.*\]$", email):
return flask.abort(404)
localpart, domain_name = models.Email.resolve_domain(email)
relay = models.Relay.query.get(domain_name) or flask.abort(404)
| {"golden_diff": "diff --git a/core/admin/mailu/internal/views/postfix.py b/core/admin/mailu/internal/views/postfix.py\n--- a/core/admin/mailu/internal/views/postfix.py\n+++ b/core/admin/mailu/internal/views/postfix.py\n@@ -2,10 +2,13 @@\n from mailu.internal import internal\n \n import flask\n+import re\n \n \n @internal.route(\"/postfix/domain/<domain_name>\")\n def postfix_mailbox_domain(domain_name):\n+ if re.match(\"^\\[.*\\]$\", domain_name):\n+ return flask.abort(404)\n domain = models.Domain.query.get(domain_name) or \\\n models.Alternative.query.get(domain_name) or \\\n flask.abort(404)\n@@ -29,7 +32,7 @@\n \n @internal.route(\"/postfix/transport/<path:email>\")\n def postfix_transport(email):\n- if email == '*':\n+ if email == '*' or re.match(\"(^|.*@)\\[.*\\]$\", email):\n return flask.abort(404)\n localpart, domain_name = models.Email.resolve_domain(email)\n relay = models.Relay.query.get(domain_name) or flask.abort(404)\n", "issue": "MAIL FROM: does-not-exist@[X.X.X.X] results in an admin exeption stacktrace\nIf a client (read: malicious attacker) connects to Postfix and uses an IP address within brackets instead of a domain name for the sender, e.g.:\r\n\r\n```\r\n$ telnet mail.example.com 25\r\n\r\nConnected to example.com\r\nEscape character is '^]'.\r\n220 mail.example.com ESMTP ready\r\nEHLO dummy.example.com\r\n250-mail.example.com\r\n250 STARTTLS\r\nMAIL FROM: does-not-exist@[116.203.165.200]\r\n250 2.0.0 OK\r\nRCPT TO: [email protected]\r\n451 4.3.0 <does-not-exist@[116.203.165.200]>: Temporary lookup failure\r\n```\r\n\r\nwe see the following stracktrace in the admin container:\r\n\r\n```\r\n[2019-07-27 14:26:27,015] ERROR in app: Exception on /internal/postfix/domain/[116.203.165.200] [GET]\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 271, in alabel\r\n ulabel(label)\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 307, in ulabel\r\n check_label(label)\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 261, in check_label\r\n raise InvalidCodepoint('Codepoint {0} at position {1} of {2} not allowed'.format(_unot(cp_value), pos+1, repr(label)))\r\nidna.core.InvalidCodepoint: Codepoint U+005B at position 1 of '[116' not allowed\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py\", line 1127, in _execute_context\r\n context = constructor(dialect, self, conn, *args)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/default.py\", line 669, in _init_compiled\r\n param.append(processors[key](compiled_params[key]))\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py\", line 1182, in process\r\n return process_param(value, dialect)\r\n File \"/app/mailu/models.py\", line 30, in process_bind_param\r\n return idna.encode(value).decode(\"ascii\").lower()\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 361, in encode\r\n s = alabel(label)\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 273, in alabel\r\n raise IDNAError('The label {0} is not a valid A-label'.format(label))\r\nidna.core.IDNAError: The label b'[116' is not a valid A-label\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File 
\"/usr/lib/python3.7/site-packages/flask/app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1718, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/lib/python3.7/site-packages/flask/_compat.py\", line 35, in reraise\r\n raise value\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1813, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/usr/lib/python3.7/site-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/app/mailu/internal/views/postfix.py\", line 9, in postfix_mailbox_domain\r\n domain = models.Domain.query.get(domain_name) or \\\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py\", line 924, in get\r\n ident, loading.load_on_pk_identity)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py\", line 1007, in _get_impl\r\n return db_load_fn(self, primary_key_identity)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/loading.py\", line 250, in load_on_pk_identity\r\n return q.one()\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py\", line 2954, in one\r\n ret = self.one_or_none()\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py\", line 2924, in one_or_none\r\n ret = list(self)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py\", line 2995, in __iter__\r\n return self._execute_and_instances(context)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/orm/query.py\", line 3018, in _execute_and_instances\r\n result = conn.execute(querycontext.statement, self._params)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py\", line 948, in execute\r\n return meth(self, multiparams, params)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/sql/elements.py\", line 269, in _execute_on_connection\r\n return connection._execute_clauseelement(self, multiparams, params)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py\", line 1060, in _execute_clauseelement\r\n compiled_sql, distilled_params\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py\", line 1132, in _execute_context\r\n None, None)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py\", line 1413, in _handle_dbapi_exception\r\n exc_info\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/util/compat.py\", line 265, in raise_from_cause\r\n reraise(type(exception), exception, tb=exc_tb, cause=cause)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/util/compat.py\", line 248, in reraise\r\n raise value.with_traceback(tb)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/base.py\", line 1127, in _execute_context\r\n context = constructor(dialect, self, conn, *args)\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/engine/default.py\", line 669, in _init_compiled\r\n param.append(processors[key](compiled_params[key]))\r\n File \"/usr/lib/python3.7/site-packages/sqlalchemy/sql/type_api.py\", line 1182, in process\r\n return process_param(value, dialect)\r\n File \"/app/mailu/models.py\", line 30, in process_bind_param\r\n return idna.encode(value).decode(\"ascii\").lower()\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 361, in encode\r\n s = alabel(label)\r\n File \"/usr/lib/python3.7/site-packages/idna/core.py\", line 273, in alabel\r\n raise IDNAError('The label {0} is not a valid 
A-label'.format(label))\r\nsqlalchemy.exc.StatementError: (idna.core.IDNAError) The label b'[116' is not a valid A-label [SQL: 'SELECT domain.created_at AS domain_created_at, domain.updated_at AS domain_updated_at, domain.comment AS domain_comment, domain.name AS domain_name, domain.max_users AS domain_max_users, domain.max_aliases AS domain_max_aliases, domain.max_quota_bytes AS domain_max_quota_bytes, domain.signup_enabled AS domain_signup_enabled \\nFROM domain \\nWHERE domain.name = ?'] [parameters: [{'%(140312238936248 param)s': '[116.203.165.200]'}]]\r\n10.36.0.0 - - [27/Jul/2019:14:26:27 +0000] \"GET /internal/postfix/domain/%5B116.203.165.200%5D HTTP/1.1\" 500 291 \"-\" \"Python/3.7 aiohttp/3.5.4\"\r\n10.44.0.0 - - [27/Jul/2019:14:26:31 +0000] \"GET /internal/auth/email HTTP/1.0\" 200 0 \"-\" \"-\"\r\n```\r\n\r\nNOTE: the lookup is cached I think and you need to use another IP address to reproduce the error multiple times.\r\n\r\nI think we're missing some validation rules, because trying a similar (invalid) MAIL FROM with an IP (but without brackets) is handled properly:\r\n\r\n```\r\ntelnet mail.example.com 25\r\nTrying ...\r\nConnected to example.com.\r\nEscape character is '^]'.\r\n220 mail.example.com ESMTP ready\r\nEHLO dummy.example.com\r\n250-mail.example.com\r\n250 STARTTLS\r\nMAIL FROM: [email protected]\r\n250 2.0.0 OK\r\nRCPT TO: [email protected]\r\n501 5.1.7 Bad sender address syntax\r\nConnection closed by foreign host.\r\n```\n", "before_files": [{"content": "from mailu import models\nfrom mailu.internal import internal\n\nimport flask\n\n\[email protected](\"/postfix/domain/<domain_name>\")\ndef postfix_mailbox_domain(domain_name):\n domain = models.Domain.query.get(domain_name) or \\\n models.Alternative.query.get(domain_name) or \\\n flask.abort(404)\n return flask.jsonify(domain.name)\n\n\[email protected](\"/postfix/mailbox/<path:email>\")\ndef postfix_mailbox_map(email):\n user = models.User.query.get(email) or flask.abort(404)\n return flask.jsonify(user.email)\n\n\[email protected](\"/postfix/alias/<path:alias>\")\ndef postfix_alias_map(alias):\n localpart, domain_name = models.Email.resolve_domain(alias)\n if localpart is None:\n return flask.jsonify(domain_name)\n destination = models.Email.resolve_destination(localpart, domain_name)\n return flask.jsonify(\",\".join(destination)) if destination else flask.abort(404)\n\n\[email protected](\"/postfix/transport/<path:email>\")\ndef postfix_transport(email):\n if email == '*':\n return flask.abort(404)\n localpart, domain_name = models.Email.resolve_domain(email)\n relay = models.Relay.query.get(domain_name) or flask.abort(404)\n return flask.jsonify(\"smtp:[{}]\".format(relay.smtp))\n\n\[email protected](\"/postfix/sender/login/<path:sender>\")\ndef postfix_sender_login(sender):\n localpart, domain_name = models.Email.resolve_domain(sender)\n if localpart is None:\n return flask.abort(404)\n destination = models.Email.resolve_destination(localpart, domain_name, True)\n return flask.jsonify(\",\".join(destination)) if destination else flask.abort(404)\n\n\[email protected](\"/postfix/sender/access/<path:sender>\")\ndef postfix_sender_access(sender):\n \"\"\" Simply reject any sender that pretends to be from a local domain\n \"\"\"\n if not is_void_address(sender):\n localpart, domain_name = models.Email.resolve_domain(sender)\n return flask.jsonify(\"REJECT\") if models.Domain.query.get(domain_name) else flask.abort(404)\n else:\n return flask.abort(404)\n\n\ndef is_void_address(email):\n '''True if the email 
is void (null) email address.\n '''\n if email.startswith('<') and email.endswith('>'):\n email = email[1:-1]\n # Some MTAs use things like '<MAILER-DAEMON>' instead of '<>'; so let's\n # consider void any such thing.\n return '@' not in email\n", "path": "core/admin/mailu/internal/views/postfix.py"}]} | 3,428 | 252 |
gh_patches_debug_28244 | rasdani/github-patches | git_diff | scrapy__scrapy-3383 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Contracts for FormRequest
At the moment there's no way to create a contract for a callback that is passed a response from `FormRequest`.
How about something simple like this (`ContractsManager.from_method`):
```python
request = FormRequest(**kwargs) if 'formdata' in kwargs else Request(**kwargs)
```
</issue>
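To make the suggestion concrete, the request construction in `ContractsManager.from_method` could pick the class from the kwargs the contracts produce, roughly as sketched below. Note that the patch recorded further down takes a more general route, letting each contract declare a `request_cls` instead.

```python
# Sketch of the issue's suggestion -- not the final upstream implementation.
from scrapy.http import FormRequest, Request


def build_request(kwargs):
    """Use FormRequest when a contract injected form data, else a plain Request."""
    request_cls = FormRequest if "formdata" in kwargs else Request
    return request_cls(**kwargs)
```

Inside `from_method`, `request = Request(**kwargs)` would then become `request = build_request(kwargs)`, with `kwargs["formdata"]` supplied by a form-aware contract through `adjust_request_args`.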
<code>
[start of scrapy/contracts/__init__.py]
1 import sys
2 import re
3 from functools import wraps
4 from inspect import getmembers
5 from unittest import TestCase
6
7 from scrapy.http import Request
8 from scrapy.utils.spider import iterate_spider_output
9 from scrapy.utils.python import get_spec
10
11
12 class ContractsManager(object):
13 contracts = {}
14
15 def __init__(self, contracts):
16 for contract in contracts:
17 self.contracts[contract.name] = contract
18
19 def tested_methods_from_spidercls(self, spidercls):
20 methods = []
21 for key, value in getmembers(spidercls):
22 if (callable(value) and value.__doc__ and
23 re.search(r'^\s*@', value.__doc__, re.MULTILINE)):
24 methods.append(key)
25
26 return methods
27
28 def extract_contracts(self, method):
29 contracts = []
30 for line in method.__doc__.split('\n'):
31 line = line.strip()
32
33 if line.startswith('@'):
34 name, args = re.match(r'@(\w+)\s*(.*)', line).groups()
35 args = re.split(r'\s+', args)
36
37 contracts.append(self.contracts[name](method, *args))
38
39 return contracts
40
41 def from_spider(self, spider, results):
42 requests = []
43 for method in self.tested_methods_from_spidercls(type(spider)):
44 bound_method = spider.__getattribute__(method)
45 requests.append(self.from_method(bound_method, results))
46
47 return requests
48
49 def from_method(self, method, results):
50 contracts = self.extract_contracts(method)
51 if contracts:
52 # calculate request args
53 args, kwargs = get_spec(Request.__init__)
54 kwargs['callback'] = method
55 for contract in contracts:
56 kwargs = contract.adjust_request_args(kwargs)
57
58 # create and prepare request
59 args.remove('self')
60 if set(args).issubset(set(kwargs)):
61 request = Request(**kwargs)
62
63 # execute pre and post hooks in order
64 for contract in reversed(contracts):
65 request = contract.add_pre_hook(request, results)
66 for contract in contracts:
67 request = contract.add_post_hook(request, results)
68
69 self._clean_req(request, method, results)
70 return request
71
72 def _clean_req(self, request, method, results):
73 """ stop the request from returning objects and records any errors """
74
75 cb = request.callback
76
77 @wraps(cb)
78 def cb_wrapper(response):
79 try:
80 output = cb(response)
81 output = list(iterate_spider_output(output))
82 except:
83 case = _create_testcase(method, 'callback')
84 results.addError(case, sys.exc_info())
85
86 def eb_wrapper(failure):
87 case = _create_testcase(method, 'errback')
88 exc_info = failure.type, failure.value, failure.getTracebackObject()
89 results.addError(case, exc_info)
90
91 request.callback = cb_wrapper
92 request.errback = eb_wrapper
93
94
95 class Contract(object):
96 """ Abstract class for contracts """
97
98 def __init__(self, method, *args):
99 self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)
100 self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name)
101 self.args = args
102
103 def add_pre_hook(self, request, results):
104 if hasattr(self, 'pre_process'):
105 cb = request.callback
106
107 @wraps(cb)
108 def wrapper(response):
109 try:
110 results.startTest(self.testcase_pre)
111 self.pre_process(response)
112 results.stopTest(self.testcase_pre)
113 except AssertionError:
114 results.addFailure(self.testcase_pre, sys.exc_info())
115 except Exception:
116 results.addError(self.testcase_pre, sys.exc_info())
117 else:
118 results.addSuccess(self.testcase_pre)
119 finally:
120 return list(iterate_spider_output(cb(response)))
121
122 request.callback = wrapper
123
124 return request
125
126 def add_post_hook(self, request, results):
127 if hasattr(self, 'post_process'):
128 cb = request.callback
129
130 @wraps(cb)
131 def wrapper(response):
132 output = list(iterate_spider_output(cb(response)))
133 try:
134 results.startTest(self.testcase_post)
135 self.post_process(output)
136 results.stopTest(self.testcase_post)
137 except AssertionError:
138 results.addFailure(self.testcase_post, sys.exc_info())
139 except Exception:
140 results.addError(self.testcase_post, sys.exc_info())
141 else:
142 results.addSuccess(self.testcase_post)
143 finally:
144 return output
145
146 request.callback = wrapper
147
148 return request
149
150 def adjust_request_args(self, args):
151 return args
152
153
154 def _create_testcase(method, desc):
155 spider = method.__self__.name
156
157 class ContractTestCase(TestCase):
158 def __str__(_self):
159 return "[%s] %s (%s)" % (spider, method.__name__, desc)
160
161 name = '%s_%s' % (spider, method.__name__)
162 setattr(ContractTestCase, name, lambda x: x)
163 return ContractTestCase(name)
164
[end of scrapy/contracts/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py
--- a/scrapy/contracts/__init__.py
+++ b/scrapy/contracts/__init__.py
@@ -49,16 +49,22 @@
def from_method(self, method, results):
contracts = self.extract_contracts(method)
if contracts:
+ request_cls = Request
+ for contract in contracts:
+ if contract.request_cls is not None:
+ request_cls = contract.request_cls
+
# calculate request args
- args, kwargs = get_spec(Request.__init__)
+ args, kwargs = get_spec(request_cls.__init__)
kwargs['callback'] = method
for contract in contracts:
kwargs = contract.adjust_request_args(kwargs)
- # create and prepare request
args.remove('self')
+
+ # check if all positional arguments are defined in kwargs
if set(args).issubset(set(kwargs)):
- request = Request(**kwargs)
+ request = request_cls(**kwargs)
# execute pre and post hooks in order
for contract in reversed(contracts):
@@ -94,6 +100,7 @@
class Contract(object):
""" Abstract class for contracts """
+ request_cls = None
def __init__(self, method, *args):
self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)
| {"golden_diff": "diff --git a/scrapy/contracts/__init__.py b/scrapy/contracts/__init__.py\n--- a/scrapy/contracts/__init__.py\n+++ b/scrapy/contracts/__init__.py\n@@ -49,16 +49,22 @@\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n+ request_cls = Request\n+ for contract in contracts:\n+ if contract.request_cls is not None:\n+ request_cls = contract.request_cls\n+\n # calculate request args\n- args, kwargs = get_spec(Request.__init__)\n+ args, kwargs = get_spec(request_cls.__init__)\n kwargs['callback'] = method\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n \n- # create and prepare request\n args.remove('self')\n+\n+ # check if all positional arguments are defined in kwargs\n if set(args).issubset(set(kwargs)):\n- request = Request(**kwargs)\n+ request = request_cls(**kwargs)\n \n # execute pre and post hooks in order\n for contract in reversed(contracts):\n@@ -94,6 +100,7 @@\n \n class Contract(object):\n \"\"\" Abstract class for contracts \"\"\"\n+ request_cls = None\n \n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)\n", "issue": "Contracts for FormRequest\nAt the moment there's no way to create a contract for a callback that is passed a response from `FormRequest`.\r\n\r\nHow about something simple like this (`ContractsManager.from_method`):\r\n```python\r\nrequest = FormRequest(**kwargs) if 'formdata' in kwargs else Request(**kwargs)\r\n```\n", "before_files": [{"content": "import sys\nimport re\nfrom functools import wraps\nfrom inspect import getmembers\nfrom unittest import TestCase\n\nfrom scrapy.http import Request\nfrom scrapy.utils.spider import iterate_spider_output\nfrom scrapy.utils.python import get_spec\n\n\nclass ContractsManager(object):\n contracts = {}\n\n def __init__(self, contracts):\n for contract in contracts:\n self.contracts[contract.name] = contract\n\n def tested_methods_from_spidercls(self, spidercls):\n methods = []\n for key, value in getmembers(spidercls):\n if (callable(value) and value.__doc__ and\n re.search(r'^\\s*@', value.__doc__, re.MULTILINE)):\n methods.append(key)\n\n return methods\n\n def extract_contracts(self, method):\n contracts = []\n for line in method.__doc__.split('\\n'):\n line = line.strip()\n\n if line.startswith('@'):\n name, args = re.match(r'@(\\w+)\\s*(.*)', line).groups()\n args = re.split(r'\\s+', args)\n\n contracts.append(self.contracts[name](method, *args))\n\n return contracts\n\n def from_spider(self, spider, results):\n requests = []\n for method in self.tested_methods_from_spidercls(type(spider)):\n bound_method = spider.__getattribute__(method)\n requests.append(self.from_method(bound_method, results))\n\n return requests\n\n def from_method(self, method, results):\n contracts = self.extract_contracts(method)\n if contracts:\n # calculate request args\n args, kwargs = get_spec(Request.__init__)\n kwargs['callback'] = method\n for contract in contracts:\n kwargs = contract.adjust_request_args(kwargs)\n\n # create and prepare request\n args.remove('self')\n if set(args).issubset(set(kwargs)):\n request = Request(**kwargs)\n\n # execute pre and post hooks in order\n for contract in reversed(contracts):\n request = contract.add_pre_hook(request, results)\n for contract in contracts:\n request = contract.add_post_hook(request, results)\n\n self._clean_req(request, method, results)\n return request\n\n def _clean_req(self, request, method, results):\n \"\"\" stop the request 
from returning objects and records any errors \"\"\"\n\n cb = request.callback\n\n @wraps(cb)\n def cb_wrapper(response):\n try:\n output = cb(response)\n output = list(iterate_spider_output(output))\n except:\n case = _create_testcase(method, 'callback')\n results.addError(case, sys.exc_info())\n\n def eb_wrapper(failure):\n case = _create_testcase(method, 'errback')\n exc_info = failure.type, failure.value, failure.getTracebackObject()\n results.addError(case, exc_info)\n\n request.callback = cb_wrapper\n request.errback = eb_wrapper\n\n\nclass Contract(object):\n \"\"\" Abstract class for contracts \"\"\"\n\n def __init__(self, method, *args):\n self.testcase_pre = _create_testcase(method, '@%s pre-hook' % self.name)\n self.testcase_post = _create_testcase(method, '@%s post-hook' % self.name)\n self.args = args\n\n def add_pre_hook(self, request, results):\n if hasattr(self, 'pre_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n try:\n results.startTest(self.testcase_pre)\n self.pre_process(response)\n results.stopTest(self.testcase_pre)\n except AssertionError:\n results.addFailure(self.testcase_pre, sys.exc_info())\n except Exception:\n results.addError(self.testcase_pre, sys.exc_info())\n else:\n results.addSuccess(self.testcase_pre)\n finally:\n return list(iterate_spider_output(cb(response)))\n\n request.callback = wrapper\n\n return request\n\n def add_post_hook(self, request, results):\n if hasattr(self, 'post_process'):\n cb = request.callback\n\n @wraps(cb)\n def wrapper(response):\n output = list(iterate_spider_output(cb(response)))\n try:\n results.startTest(self.testcase_post)\n self.post_process(output)\n results.stopTest(self.testcase_post)\n except AssertionError:\n results.addFailure(self.testcase_post, sys.exc_info())\n except Exception:\n results.addError(self.testcase_post, sys.exc_info())\n else:\n results.addSuccess(self.testcase_post)\n finally:\n return output\n\n request.callback = wrapper\n\n return request\n\n def adjust_request_args(self, args):\n return args\n\n\ndef _create_testcase(method, desc):\n spider = method.__self__.name\n\n class ContractTestCase(TestCase):\n def __str__(_self):\n return \"[%s] %s (%s)\" % (spider, method.__name__, desc)\n\n name = '%s_%s' % (spider, method.__name__)\n setattr(ContractTestCase, name, lambda x: x)\n return ContractTestCase(name)\n", "path": "scrapy/contracts/__init__.py"}]} | 2,073 | 311 |
gh_patches_debug_18067 | rasdani/github-patches | git_diff | ivy-llc__ivy-13637 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
beta: add the `beta` sampling function to the JAX frontend `random` module.
</issue>
<code>
[start of ivy/functional/frontends/jax/random.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import (
4 to_ivy_arrays_and_back,
5 handle_jax_dtype,
6 )
7
8
9 @to_ivy_arrays_and_back
10 def PRNGKey(seed):
11 return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)
12
13
14 @handle_jax_dtype
15 @to_ivy_arrays_and_back
16 def uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):
17 return ivy.random_uniform(
18 low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])
19 )
20
21
22 @handle_jax_dtype
23 @to_ivy_arrays_and_back
24 def normal(key, shape=(), dtype=None):
25 return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))
26
[end of ivy/functional/frontends/jax/random.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py
--- a/ivy/functional/frontends/jax/random.py
+++ b/ivy/functional/frontends/jax/random.py
@@ -1,5 +1,6 @@
# local
import ivy
+from ivy.func_wrapper import with_unsupported_dtypes
from ivy.functional.frontends.jax.func_wrapper import (
to_ivy_arrays_and_back,
handle_jax_dtype,
@@ -23,3 +24,24 @@
@to_ivy_arrays_and_back
def normal(key, shape=(), dtype=None):
return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))
+
+
+def _get_seed(key):
+ key1, key2 = int(key[0]), int(key[1])
+ return ivy.to_scalar(int("".join(map(str, [key1, key2]))))
+
+
+@handle_jax_dtype
+@to_ivy_arrays_and_back
+@with_unsupported_dtypes(
+ {
+ "0.3.14 and below": (
+ "float16",
+ "bfloat16",
+ )
+ },
+ "jax",
+)
+def beta(key, a, b, shape=None, dtype=None):
+ seed = _get_seed(key)
+ return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/random.py b/ivy/functional/frontends/jax/random.py\n--- a/ivy/functional/frontends/jax/random.py\n+++ b/ivy/functional/frontends/jax/random.py\n@@ -1,5 +1,6 @@\n # local\n import ivy\n+from ivy.func_wrapper import with_unsupported_dtypes\n from ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n@@ -23,3 +24,24 @@\n @to_ivy_arrays_and_back\n def normal(key, shape=(), dtype=None):\n return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))\n+\n+\n+def _get_seed(key):\n+ key1, key2 = int(key[0]), int(key[1])\n+ return ivy.to_scalar(int(\"\".join(map(str, [key1, key2]))))\n+\n+\n+@handle_jax_dtype\n+@to_ivy_arrays_and_back\n+@with_unsupported_dtypes(\n+ {\n+ \"0.3.14 and below\": (\n+ \"float16\",\n+ \"bfloat16\",\n+ )\n+ },\n+ \"jax\",\n+)\n+def beta(key, a, b, shape=None, dtype=None):\n+ seed = _get_seed(key)\n+ return ivy.beta(a, b, shape=shape, dtype=dtype, seed=seed)\n", "issue": "beta\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_jax_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef PRNGKey(seed):\n return ivy.array([0, seed % 4294967295 - (seed // 4294967295)], dtype=ivy.int64)\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef uniform(key, shape=(), dtype=None, minval=0.0, maxval=1.0):\n return ivy.random_uniform(\n low=minval, high=maxval, shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1])\n )\n\n\n@handle_jax_dtype\n@to_ivy_arrays_and_back\ndef normal(key, shape=(), dtype=None):\n return ivy.random_normal(shape=shape, dtype=dtype, seed=ivy.to_scalar(key[1]))\n", "path": "ivy/functional/frontends/jax/random.py"}]} | 808 | 327 |
gh_patches_debug_28361 | rasdani/github-patches | git_diff | getsentry__sentry-41334 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip-compile and CUSTOM_COMPILE_COMMAND
### Environment
SaaS (https://sentry.io/)
### Version
_No response_
### Steps to Reproduce
JFYI
https://github.com/getsentry/sentry/blob/2e36d4d3a3926e9efe06287e59d27be1c40300fa/tools/freeze_requirements.py#L13
There is a `CUSTOM_COMPILE_COMMAND` environment variable to [customize](https://github.com/jazzband/pip-tools#configuration) the header:
```
$ CUSTOM_COMPILE_COMMAND="make freeze-requirements" pip-compile
#
# This file is autogenerated by pip-compile with python 3.11
# To update, run:
#
# make freeze-requirements
#
....
```
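
Since `tools/freeze_requirements.py` below already invokes `pip-compile` through `subprocess.run`, the same header customization could be applied by setting the variable in the child process environment. A minimal sketch, assuming the wrapper is invoked as `make freeze-requirements`; the file names are illustrative:

```python
import os
import subprocess


def compile_with_custom_header(requirements_in: str, output: str) -> None:
    """Run pip-compile so the generated header names the wrapper command."""
    env = os.environ.copy()
    # pip-tools reads CUSTOM_COMPILE_COMMAND and echoes it in the file header.
    env["CUSTOM_COMPILE_COMMAND"] = "make freeze-requirements"
    subprocess.run(
        ["pip-compile", "--quiet", requirements_in, "-o", output],
        check=True,
        env=env,
    )


if __name__ == "__main__":
    compile_with_custom_header("requirements-base.txt", "requirements-frozen.txt")
```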
### Expected Result
N/A
### Actual Result
N/A
</issue>
<code>
[start of tools/freeze_requirements.py]
1 from __future__ import annotations
2
3 import argparse
4 from concurrent.futures import Future, ThreadPoolExecutor
5 from os.path import abspath
6 from subprocess import CalledProcessError, run
7 from typing import Sequence
8
9 from tools.lib import gitroot
10
11
12 def worker(args: tuple[str, ...]) -> None:
13 # pip-compile doesn't let you customize the header, so we write
14 # one ourselves. However, pip-compile needs -o DEST otherwise
15 # it will bump >= pins even if they're satisfied. So, we need to
16 # unfortunately rewrite the whole file.
17 dest = args[-1]
18 try:
19 run(args, check=True, capture_output=True)
20 except CalledProcessError as e:
21 raise e
22
23 with open(dest, "rb+") as f:
24 content = f.read()
25 f.seek(0, 0)
26 f.write(
27 b"""# DO NOT MODIFY. This file was generated with `make freeze-requirements`.
28
29 """
30 + content
31 )
32
33
34 def check_futures(futures: list[Future[None]]) -> int:
35 rc = 0
36 for future in futures:
37 try:
38 future.result()
39 except CalledProcessError as e:
40 rc = 1
41 print(
42 f"""`{e.cmd}` returned code {e.returncode}
43
44 stdout:
45 {e.stdout.decode()}
46
47 stderr:
48 {e.stderr.decode()}
49 """
50 )
51 return rc
52
53
54 def main(argv: Sequence[str] | None = None) -> int:
55 parser = argparse.ArgumentParser()
56 parser.parse_args(argv)
57
58 base_path = abspath(gitroot())
59
60 base_cmd = (
61 "pip-compile",
62 "--allow-unsafe",
63 "--no-annotate",
64 "--no-header",
65 "--quiet",
66 "--strip-extras",
67 "--index-url=https://pypi.devinfra.sentry.io/simple",
68 )
69
70 executor = ThreadPoolExecutor(max_workers=2)
71 futures = [
72 executor.submit(
73 worker,
74 (
75 *base_cmd,
76 f"{base_path}/requirements-base.txt",
77 f"{base_path}/requirements-getsentry.txt",
78 "-o",
79 f"{base_path}/requirements-frozen.txt",
80 ),
81 ),
82 executor.submit(
83 worker,
84 (
85 *base_cmd,
86 f"{base_path}/requirements-base.txt",
87 f"{base_path}/requirements-getsentry.txt",
88 f"{base_path}/requirements-dev.txt",
89 "-o",
90 f"{base_path}/requirements-dev-frozen.txt",
91 ),
92 ),
93 ]
94
95 rc = check_futures(futures)
96 executor.shutdown()
97 return rc
98
99
100 if __name__ == "__main__":
101 raise SystemExit(main())
102
[end of tools/freeze_requirements.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/tools/freeze_requirements.py b/tools/freeze_requirements.py
--- a/tools/freeze_requirements.py
+++ b/tools/freeze_requirements.py
@@ -1,6 +1,7 @@
from __future__ import annotations
import argparse
+import os
from concurrent.futures import Future, ThreadPoolExecutor
from os.path import abspath
from subprocess import CalledProcessError, run
@@ -10,25 +11,10 @@
def worker(args: tuple[str, ...]) -> None:
- # pip-compile doesn't let you customize the header, so we write
- # one ourselves. However, pip-compile needs -o DEST otherwise
- # it will bump >= pins even if they're satisfied. So, we need to
- # unfortunately rewrite the whole file.
- dest = args[-1]
- try:
- run(args, check=True, capture_output=True)
- except CalledProcessError as e:
- raise e
-
- with open(dest, "rb+") as f:
- content = f.read()
- f.seek(0, 0)
- f.write(
- b"""# DO NOT MODIFY. This file was generated with `make freeze-requirements`.
+ env = os.environ.copy()
+ env["CUSTOM_COMPILE_COMMAND"] = "make freeze-requirements"
-"""
- + content
- )
+ run(args, check=True, capture_output=True, env=env)
def check_futures(futures: list[Future[None]]) -> int:
@@ -61,7 +47,6 @@
"pip-compile",
"--allow-unsafe",
"--no-annotate",
- "--no-header",
"--quiet",
"--strip-extras",
"--index-url=https://pypi.devinfra.sentry.io/simple",
| {"golden_diff": "diff --git a/tools/freeze_requirements.py b/tools/freeze_requirements.py\n--- a/tools/freeze_requirements.py\n+++ b/tools/freeze_requirements.py\n@@ -1,6 +1,7 @@\n from __future__ import annotations\n \n import argparse\n+import os\n from concurrent.futures import Future, ThreadPoolExecutor\n from os.path import abspath\n from subprocess import CalledProcessError, run\n@@ -10,25 +11,10 @@\n \n \n def worker(args: tuple[str, ...]) -> None:\n- # pip-compile doesn't let you customize the header, so we write\n- # one ourselves. However, pip-compile needs -o DEST otherwise\n- # it will bump >= pins even if they're satisfied. So, we need to\n- # unfortunately rewrite the whole file.\n- dest = args[-1]\n- try:\n- run(args, check=True, capture_output=True)\n- except CalledProcessError as e:\n- raise e\n-\n- with open(dest, \"rb+\") as f:\n- content = f.read()\n- f.seek(0, 0)\n- f.write(\n- b\"\"\"# DO NOT MODIFY. This file was generated with `make freeze-requirements`.\n+ env = os.environ.copy()\n+ env[\"CUSTOM_COMPILE_COMMAND\"] = \"make freeze-requirements\"\n \n-\"\"\"\n- + content\n- )\n+ run(args, check=True, capture_output=True, env=env)\n \n \n def check_futures(futures: list[Future[None]]) -> int:\n@@ -61,7 +47,6 @@\n \"pip-compile\",\n \"--allow-unsafe\",\n \"--no-annotate\",\n- \"--no-header\",\n \"--quiet\",\n \"--strip-extras\",\n \"--index-url=https://pypi.devinfra.sentry.io/simple\",\n", "issue": "pip-compile and CUSTOM_COMPILE_COMMAND\n### Environment\r\n\r\nSaaS (https://sentry.io/)\r\n\r\n### Version\r\n\r\n_No response_\r\n\r\n### Steps to Reproduce\r\n\r\nJFYI\r\n\r\nhttps://github.com/getsentry/sentry/blob/2e36d4d3a3926e9efe06287e59d27be1c40300fa/tools/freeze_requirements.py#L13\r\n\r\nThere is `CUSTOM_COMPILE_COMMAND` env to [customize](https://github.com/jazzband/pip-tools#configuration) the header:\r\n\r\n\r\n```\r\n$ CUSTOM_COMPILE_COMMAND=\"make freeze-requirements\" pip-compile\r\n#\r\n# This file is autogenerated by pip-compile with python 3.11\r\n# To update, run:\r\n#\r\n# make freeze-requirements\r\n#\r\n....\r\n```\r\n\r\n### Expected Result\r\n\r\nN/A\r\n\r\n### Actual Result\r\n\r\nN/A\n", "before_files": [{"content": "from __future__ import annotations\n\nimport argparse\nfrom concurrent.futures import Future, ThreadPoolExecutor\nfrom os.path import abspath\nfrom subprocess import CalledProcessError, run\nfrom typing import Sequence\n\nfrom tools.lib import gitroot\n\n\ndef worker(args: tuple[str, ...]) -> None:\n # pip-compile doesn't let you customize the header, so we write\n # one ourselves. However, pip-compile needs -o DEST otherwise\n # it will bump >= pins even if they're satisfied. So, we need to\n # unfortunately rewrite the whole file.\n dest = args[-1]\n try:\n run(args, check=True, capture_output=True)\n except CalledProcessError as e:\n raise e\n\n with open(dest, \"rb+\") as f:\n content = f.read()\n f.seek(0, 0)\n f.write(\n b\"\"\"# DO NOT MODIFY. 
This file was generated with `make freeze-requirements`.\n\n\"\"\"\n + content\n )\n\n\ndef check_futures(futures: list[Future[None]]) -> int:\n rc = 0\n for future in futures:\n try:\n future.result()\n except CalledProcessError as e:\n rc = 1\n print(\n f\"\"\"`{e.cmd}` returned code {e.returncode}\n\nstdout:\n{e.stdout.decode()}\n\nstderr:\n{e.stderr.decode()}\n\"\"\"\n )\n return rc\n\n\ndef main(argv: Sequence[str] | None = None) -> int:\n parser = argparse.ArgumentParser()\n parser.parse_args(argv)\n\n base_path = abspath(gitroot())\n\n base_cmd = (\n \"pip-compile\",\n \"--allow-unsafe\",\n \"--no-annotate\",\n \"--no-header\",\n \"--quiet\",\n \"--strip-extras\",\n \"--index-url=https://pypi.devinfra.sentry.io/simple\",\n )\n\n executor = ThreadPoolExecutor(max_workers=2)\n futures = [\n executor.submit(\n worker,\n (\n *base_cmd,\n f\"{base_path}/requirements-base.txt\",\n f\"{base_path}/requirements-getsentry.txt\",\n \"-o\",\n f\"{base_path}/requirements-frozen.txt\",\n ),\n ),\n executor.submit(\n worker,\n (\n *base_cmd,\n f\"{base_path}/requirements-base.txt\",\n f\"{base_path}/requirements-getsentry.txt\",\n f\"{base_path}/requirements-dev.txt\",\n \"-o\",\n f\"{base_path}/requirements-dev-frozen.txt\",\n ),\n ),\n ]\n\n rc = check_futures(futures)\n executor.shutdown()\n return rc\n\n\nif __name__ == \"__main__\":\n raise SystemExit(main())\n", "path": "tools/freeze_requirements.py"}]} | 1,499 | 395 |
gh_patches_debug_9494 | rasdani/github-patches | git_diff | quantopian__zipline-382 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make ta-lib optional
`ta-lib` is causing us many installation problems. We should thus make it optional.
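
A common way to do this with setuptools is to move the dependency out of `install_requires` and into `extras_require`, so only users who ask for it install it. A minimal sketch (not zipline's actual `setup.py`; the extra name and package list are illustrative):

```python
from setuptools import setup, find_packages

setup(
    name="example-package",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "numpy",
        "pandas",
    ],
    extras_require={
        # Installed only on request: pip install "example-package[talib]"
        "talib": ["TA-Lib"],
    },
)
```

A plain `pip install example-package` then skips the troublesome build, while `pip install "example-package[talib]"` pulls it in explicitly.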
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 #
3 # Copyright 2013 Quantopian, Inc.
4 #
5 # Licensed under the Apache License, Version 2.0 (the "License");
6 # you may not use this file except in compliance with the License.
7 # You may obtain a copy of the License at
8 #
9 # http://www.apache.org/licenses/LICENSE-2.0
10 #
11 # Unless required by applicable law or agreed to in writing, software
12 # distributed under the License is distributed on an "AS IS" BASIS,
13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14 # See the License for the specific language governing permissions and
15 # limitations under the License.
16 import sys
17
18 from setuptools import setup, find_packages
19
20 LONG_DESCRIPTION = None
21 README_MARKDOWN = None
22
23 with open('README.md') as markdown_source:
24 README_MARKDOWN = markdown_source.read()
25
26 if 'upload' in sys.argv:
27 # Converts the README.md file to ReST, since PyPI uses ReST for formatting,
28 # This allows to have one canonical README file, being the README.md
29 # The conversion only needs to be done on upload.
30 # Otherwise, the pandoc import and errors that are thrown when
31 # pandoc are both overhead and a source of confusion for general
32 # usage/installation.
33 import pandoc
34 pandoc.core.PANDOC_PATH = 'pandoc'
35 doc = pandoc.Document()
36 doc.markdown = README_MARKDOWN
37 LONG_DESCRIPTION = doc.rst
38 else:
39 # If pandoc isn't installed, e.g. when downloading from pip,
40 # just use the regular README.
41 LONG_DESCRIPTION = README_MARKDOWN
42
43 setup(
44 name='zipline',
45 version='0.7.0',
46 description='A backtester for financial algorithms.',
47 author='Quantopian Inc.',
48 author_email='[email protected]',
49 packages=find_packages(),
50 scripts=['scripts/run_algo.py'],
51 long_description=LONG_DESCRIPTION,
52 license='Apache 2.0',
53 classifiers=[
54 'Development Status :: 4 - Beta',
55 'License :: OSI Approved :: Apache Software License',
56 'Natural Language :: English',
57 'Programming Language :: Python',
58 'Programming Language :: Python :: 2.7',
59 'Programming Language :: Python :: 3.3',
60 'Operating System :: OS Independent',
61 'Intended Audience :: Science/Research',
62 'Topic :: Office/Business :: Financial',
63 'Topic :: Scientific/Engineering :: Information Analysis',
64 'Topic :: System :: Distributed Computing',
65 ],
66 install_requires=[
67 'Logbook',
68 'pytz',
69 'requests',
70 'numpy',
71 'pandas',
72 'six'
73 ],
74 url="https://github.com/quantopian/zipline"
75 )
76
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,6 +1,6 @@
#!/usr/bin/env python
#
-# Copyright 2013 Quantopian, Inc.
+# Copyright 2014 Quantopian, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -71,5 +71,8 @@
'pandas',
'six'
],
+ extras_require = {
+ 'talib': ["talib"],
+ },
url="https://github.com/quantopian/zipline"
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,6 +1,6 @@\n #!/usr/bin/env python\n #\n-# Copyright 2013 Quantopian, Inc.\n+# Copyright 2014 Quantopian, Inc.\n #\n # Licensed under the Apache License, Version 2.0 (the \"License\");\n # you may not use this file except in compliance with the License.\n@@ -71,5 +71,8 @@\n 'pandas',\n 'six'\n ],\n+ extras_require = {\n+ 'talib': [\"talib\"],\n+ },\n url=\"https://github.com/quantopian/zipline\"\n )\n", "issue": "Make ta-lib optional\n`ta-lib` is causing us many installation problems. We should thus make it optional.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n#\n# Copyright 2013 Quantopian, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nimport sys\n\nfrom setuptools import setup, find_packages\n\nLONG_DESCRIPTION = None\nREADME_MARKDOWN = None\n\nwith open('README.md') as markdown_source:\n README_MARKDOWN = markdown_source.read()\n\nif 'upload' in sys.argv:\n # Converts the README.md file to ReST, since PyPI uses ReST for formatting,\n # This allows to have one canonical README file, being the README.md\n # The conversion only needs to be done on upload.\n # Otherwise, the pandoc import and errors that are thrown when\n # pandoc are both overhead and a source of confusion for general\n # usage/installation.\n import pandoc\n pandoc.core.PANDOC_PATH = 'pandoc'\n doc = pandoc.Document()\n doc.markdown = README_MARKDOWN\n LONG_DESCRIPTION = doc.rst\nelse:\n # If pandoc isn't installed, e.g. when downloading from pip,\n # just use the regular README.\n LONG_DESCRIPTION = README_MARKDOWN\n\nsetup(\n name='zipline',\n version='0.7.0',\n description='A backtester for financial algorithms.',\n author='Quantopian Inc.',\n author_email='[email protected]',\n packages=find_packages(),\n scripts=['scripts/run_algo.py'],\n long_description=LONG_DESCRIPTION,\n license='Apache 2.0',\n classifiers=[\n 'Development Status :: 4 - Beta',\n 'License :: OSI Approved :: Apache Software License',\n 'Natural Language :: English',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3.3',\n 'Operating System :: OS Independent',\n 'Intended Audience :: Science/Research',\n 'Topic :: Office/Business :: Financial',\n 'Topic :: Scientific/Engineering :: Information Analysis',\n 'Topic :: System :: Distributed Computing',\n ],\n install_requires=[\n 'Logbook',\n 'pytz',\n 'requests',\n 'numpy',\n 'pandas',\n 'six'\n ],\n url=\"https://github.com/quantopian/zipline\"\n)\n", "path": "setup.py"}]} | 1,284 | 153 |
gh_patches_debug_14618 | rasdani/github-patches | git_diff | Kinto__kinto-1312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PostgreSQL Cache: Deadlock
```
(psycopg2.extensions.TransactionRollbackError) deadlock detected
DETAIL: Process 12750 waits for ShareLock on transaction 632693475; blocked by process 10080.
Process 10080 waits for ShareLock on transaction 632693479; blocked by process 12750.
HINT: See server log for query details.
CONTEXT: while deleting tuple (426,1) in relation "cache"
[SQL: 'DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;']
```
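
The deadlock occurs because two connections purge expired rows at the same time and acquire row locks in different orders. One common mitigation is to lock the expired keys in a deterministic order with `FOR UPDATE` before deleting them. A rough sketch, assuming the `cache` table and `ttl` column from the traceback and any connection object with an `execute` method:

```python
# Lock expired rows in key order before deleting them, so concurrent purges
# always acquire row locks in the same order and cannot deadlock each other.
PURGE_EXPIRED = """
DELETE FROM cache c
USING (
    SELECT key
    FROM cache
    WHERE ttl IS NOT NULL AND now() > ttl
    ORDER BY key ASC
    FOR UPDATE
) expired
WHERE expired.key = c.key;
"""


def purge_expired(conn) -> None:
    """Run the ordered purge inside the caller's transaction."""
    conn.execute(PURGE_EXPIRED)
```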
</issue>
<code>
[start of kinto/core/cache/postgresql/__init__.py]
1 import logging
2 import os
3
4 from kinto.core.cache import CacheBase
5 from kinto.core.storage.postgresql.client import create_from_config
6 from kinto.core.utils import json
7
8
9 logger = logging.getLogger(__name__)
10
11
12 class Cache(CacheBase):
13 """Cache backend using PostgreSQL.
14
15 Enable in configuration::
16
17 kinto.cache_backend = kinto.core.cache.postgresql
18
19 Database location URI can be customized::
20
21 kinto.cache_url = postgres://user:[email protected]:5432/dbname
22
23 Alternatively, username and password could also rely on system user ident
24 or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).
25
26 .. note::
27
28 Some tables and indices are created when ``kinto migrate`` is run.
29 This requires some privileges on the database, or some error will
30 be raised.
31
32 **Alternatively**, the schema can be initialized outside the
33 python application, using the SQL file located in
34 :file:`kinto/core/cache/postgresql/schema.sql`. This allows to
35 distinguish schema manipulation privileges from schema usage.
36
37
38 A connection pool is enabled by default::
39
40 kinto.cache_pool_size = 10
41 kinto.cache_maxoverflow = 10
42 kinto.cache_max_backlog = -1
43 kinto.cache_pool_recycle = -1
44 kinto.cache_pool_timeout = 30
45 kinto.cache_poolclass =
46 kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog
47
48 The ``max_backlog`` limits the number of threads that can be in the queue
49 waiting for a connection. Once this limit has been reached, any further
50 attempts to acquire a connection will be rejected immediately, instead of
51 locking up all threads by keeping them waiting in the queue.
52
53 See `dedicated section in SQLAlchemy documentation
54 <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_
55 for default values and behaviour.
56
57 .. note::
58
59 Using a `dedicated connection pool <http://pgpool.net>`_ is still
60 recommended to allow load balancing, replication or limit the number
61 of connections used in a multi-process deployment.
62
63 :noindex:
64 """ # NOQA
65 def __init__(self, client, *args, **kwargs):
66 super().__init__(*args, **kwargs)
67 self.client = client
68
69 def initialize_schema(self, dry_run=False):
70 # Check if cache table exists.
71 query = """
72 SELECT 1
73 FROM information_schema.tables
74 WHERE table_name = 'cache';
75 """
76 with self.client.connect(readonly=True) as conn:
77 result = conn.execute(query)
78 if result.rowcount > 0:
79 logger.info("PostgreSQL cache schema is up-to-date.")
80 return
81
82 # Create schema
83 here = os.path.abspath(os.path.dirname(__file__))
84 sql_file = os.path.join(here, 'schema.sql')
85
86 if dry_run:
87 logger.info("Create cache schema from '{}'".format(sql_file))
88 return
89
90 # Since called outside request, force commit.
91 with open(sql_file) as f:
92 schema = f.read()
93 with self.client.connect(force_commit=True) as conn:
94 conn.execute(schema)
95 logger.info('Created PostgreSQL cache tables')
96
97 def flush(self):
98 query = """
99 DELETE FROM cache;
100 """
101 # Since called outside request (e.g. tests), force commit.
102 with self.client.connect(force_commit=True) as conn:
103 conn.execute(query)
104 logger.debug('Flushed PostgreSQL cache tables')
105
106 def ttl(self, key):
107 query = """
108 SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl
109 FROM cache
110 WHERE key = :key
111 AND ttl IS NOT NULL;
112 """
113 with self.client.connect(readonly=True) as conn:
114 result = conn.execute(query, dict(key=self.prefix + key))
115 if result.rowcount > 0:
116 return result.fetchone()['ttl']
117 return -1
118
119 def expire(self, key, ttl):
120 query = """
121 UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;
122 """
123 with self.client.connect() as conn:
124 conn.execute(query, dict(ttl=ttl, key=self.prefix + key))
125
126 def set(self, key, value, ttl):
127 if isinstance(value, bytes):
128 raise TypeError("a string-like object is required, not 'bytes'")
129
130 query = """
131 INSERT INTO cache (key, value, ttl)
132 VALUES (:key, :value, sec2ttl(:ttl))
133 ON CONFLICT (key) DO UPDATE
134 SET value = :value,
135 ttl = sec2ttl(:ttl);
136 """
137 value = json.dumps(value)
138 with self.client.connect() as conn:
139 conn.execute(query, dict(key=self.prefix + key,
140 value=value, ttl=ttl))
141
142 def get(self, key):
143 purge = "DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;"
144 query = "SELECT value FROM cache WHERE key = :key;"
145 with self.client.connect() as conn:
146 conn.execute(purge)
147 result = conn.execute(query, dict(key=self.prefix + key))
148 if result.rowcount > 0:
149 value = result.fetchone()['value']
150 return json.loads(value)
151
152 def delete(self, key):
153 query = "DELETE FROM cache WHERE key = :key RETURNING value;"
154 with self.client.connect() as conn:
155 result = conn.execute(query, dict(key=self.prefix + key))
156 if result.rowcount > 0:
157 value = result.fetchone()['value']
158 return json.loads(value)
159 return None
160
161
162 def load_from_config(config):
163 settings = config.get_settings()
164 client = create_from_config(config, prefix='cache_', with_transaction=False)
165 return Cache(client=client, cache_prefix=settings['cache_prefix'])
166
[end of kinto/core/cache/postgresql/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/cache/postgresql/__init__.py b/kinto/core/cache/postgresql/__init__.py
--- a/kinto/core/cache/postgresql/__init__.py
+++ b/kinto/core/cache/postgresql/__init__.py
@@ -140,8 +140,17 @@
value=value, ttl=ttl))
def get(self, key):
- purge = "DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;"
- query = "SELECT value FROM cache WHERE key = :key;"
+ purge = """
+ DELETE FROM cache c
+ USING (
+ SELECT key
+ FROM cache
+ WHERE ttl IS NOT NULL AND now() > ttl
+ ORDER BY key ASC
+ FOR UPDATE
+ ) del
+ WHERE del.key = c.key;"""
+ query = "SELECT value FROM cache WHERE key = :key AND now() < ttl;"
with self.client.connect() as conn:
conn.execute(purge)
result = conn.execute(query, dict(key=self.prefix + key))
| {"golden_diff": "diff --git a/kinto/core/cache/postgresql/__init__.py b/kinto/core/cache/postgresql/__init__.py\n--- a/kinto/core/cache/postgresql/__init__.py\n+++ b/kinto/core/cache/postgresql/__init__.py\n@@ -140,8 +140,17 @@\n value=value, ttl=ttl))\n \n def get(self, key):\n- purge = \"DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;\"\n- query = \"SELECT value FROM cache WHERE key = :key;\"\n+ purge = \"\"\"\n+ DELETE FROM cache c\n+ USING (\n+ SELECT key\n+ FROM cache\n+ WHERE ttl IS NOT NULL AND now() > ttl\n+ ORDER BY key ASC\n+ FOR UPDATE\n+ ) del\n+ WHERE del.key = c.key;\"\"\"\n+ query = \"SELECT value FROM cache WHERE key = :key AND now() < ttl;\"\n with self.client.connect() as conn:\n conn.execute(purge)\n result = conn.execute(query, dict(key=self.prefix + key))\n", "issue": "PostgreSQL Cache: Deadlock\n```\r\n(psycopg2.extensions.TransactionRollbackError) deadlock detected\r\nDETAIL: Process 12750 waits for ShareLock on transaction 632693475; blocked by process 10080.\r\nProcess 10080 waits for ShareLock on transaction 632693479; blocked by process 12750.\r\nHINT: See server log for query details.\r\nCONTEXT: while deleting tuple (426,1) in relation \"cache\"\r\n [SQL: 'DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;']\r\n```\n", "before_files": [{"content": "import logging\nimport os\n\nfrom kinto.core.cache import CacheBase\nfrom kinto.core.storage.postgresql.client import create_from_config\nfrom kinto.core.utils import json\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass Cache(CacheBase):\n \"\"\"Cache backend using PostgreSQL.\n\n Enable in configuration::\n\n kinto.cache_backend = kinto.core.cache.postgresql\n\n Database location URI can be customized::\n\n kinto.cache_url = postgres://user:[email protected]:5432/dbname\n\n Alternatively, username and password could also rely on system user ident\n or even specified in :file:`~/.pgpass` (*see PostgreSQL documentation*).\n\n .. note::\n\n Some tables and indices are created when ``kinto migrate`` is run.\n This requires some privileges on the database, or some error will\n be raised.\n\n **Alternatively**, the schema can be initialized outside the\n python application, using the SQL file located in\n :file:`kinto/core/cache/postgresql/schema.sql`. This allows to\n distinguish schema manipulation privileges from schema usage.\n\n\n A connection pool is enabled by default::\n\n kinto.cache_pool_size = 10\n kinto.cache_maxoverflow = 10\n kinto.cache_max_backlog = -1\n kinto.cache_pool_recycle = -1\n kinto.cache_pool_timeout = 30\n kinto.cache_poolclass =\n kinto.core.storage.postgresql.pool.QueuePoolWithMaxBacklog\n\n The ``max_backlog`` limits the number of threads that can be in the queue\n waiting for a connection. Once this limit has been reached, any further\n attempts to acquire a connection will be rejected immediately, instead of\n locking up all threads by keeping them waiting in the queue.\n\n See `dedicated section in SQLAlchemy documentation\n <http://docs.sqlalchemy.org/en/rel_1_0/core/engines.html>`_\n for default values and behaviour.\n\n .. 
note::\n\n Using a `dedicated connection pool <http://pgpool.net>`_ is still\n recommended to allow load balancing, replication or limit the number\n of connections used in a multi-process deployment.\n\n :noindex:\n \"\"\" # NOQA\n def __init__(self, client, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.client = client\n\n def initialize_schema(self, dry_run=False):\n # Check if cache table exists.\n query = \"\"\"\n SELECT 1\n FROM information_schema.tables\n WHERE table_name = 'cache';\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query)\n if result.rowcount > 0:\n logger.info(\"PostgreSQL cache schema is up-to-date.\")\n return\n\n # Create schema\n here = os.path.abspath(os.path.dirname(__file__))\n sql_file = os.path.join(here, 'schema.sql')\n\n if dry_run:\n logger.info(\"Create cache schema from '{}'\".format(sql_file))\n return\n\n # Since called outside request, force commit.\n with open(sql_file) as f:\n schema = f.read()\n with self.client.connect(force_commit=True) as conn:\n conn.execute(schema)\n logger.info('Created PostgreSQL cache tables')\n\n def flush(self):\n query = \"\"\"\n DELETE FROM cache;\n \"\"\"\n # Since called outside request (e.g. tests), force commit.\n with self.client.connect(force_commit=True) as conn:\n conn.execute(query)\n logger.debug('Flushed PostgreSQL cache tables')\n\n def ttl(self, key):\n query = \"\"\"\n SELECT EXTRACT(SECOND FROM (ttl - now())) AS ttl\n FROM cache\n WHERE key = :key\n AND ttl IS NOT NULL;\n \"\"\"\n with self.client.connect(readonly=True) as conn:\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n return result.fetchone()['ttl']\n return -1\n\n def expire(self, key, ttl):\n query = \"\"\"\n UPDATE cache SET ttl = sec2ttl(:ttl) WHERE key = :key;\n \"\"\"\n with self.client.connect() as conn:\n conn.execute(query, dict(ttl=ttl, key=self.prefix + key))\n\n def set(self, key, value, ttl):\n if isinstance(value, bytes):\n raise TypeError(\"a string-like object is required, not 'bytes'\")\n\n query = \"\"\"\n INSERT INTO cache (key, value, ttl)\n VALUES (:key, :value, sec2ttl(:ttl))\n ON CONFLICT (key) DO UPDATE\n SET value = :value,\n ttl = sec2ttl(:ttl);\n \"\"\"\n value = json.dumps(value)\n with self.client.connect() as conn:\n conn.execute(query, dict(key=self.prefix + key,\n value=value, ttl=ttl))\n\n def get(self, key):\n purge = \"DELETE FROM cache WHERE ttl IS NOT NULL AND now() > ttl;\"\n query = \"SELECT value FROM cache WHERE key = :key;\"\n with self.client.connect() as conn:\n conn.execute(purge)\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n value = result.fetchone()['value']\n return json.loads(value)\n\n def delete(self, key):\n query = \"DELETE FROM cache WHERE key = :key RETURNING value;\"\n with self.client.connect() as conn:\n result = conn.execute(query, dict(key=self.prefix + key))\n if result.rowcount > 0:\n value = result.fetchone()['value']\n return json.loads(value)\n return None\n\n\ndef load_from_config(config):\n settings = config.get_settings()\n client = create_from_config(config, prefix='cache_', with_transaction=False)\n return Cache(client=client, cache_prefix=settings['cache_prefix'])\n", "path": "kinto/core/cache/postgresql/__init__.py"}]} | 2,348 | 232 |
gh_patches_debug_50307 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3345 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider konagrill is broken
During the global build at 2021-06-16-14-42-20, spider **konagrill** failed with **0 features** and **24 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/konagrill.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/konagrill.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/konagrill.geojson))
Spider konagrill is broken
During the global build at 2021-10-13-14-42-23, spider **konagrill** failed with **0 features** and **24 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/logs/konagrill.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/konagrill.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/konagrill.geojson))
</issue>
<code>
[start of locations/spiders/kona_grill.py]
1 # -*- coding: utf-8 -*-
2 import json
3 import scrapy
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8 STATES = [
9 "AL", "AK", "AZ", "AR", "CA", "CO", "CT", "DC", "DE", "FL",
10 "GA", "HI", "ID", "IL", "IN", "IA", "KS", "KY", "LA", "ME",
11 "MD", "MA", "MI", "MN", "MS", "MO", "MT", "NE", "NV", "NH",
12 "NJ", "NM", "NY", "NC", "ND", "OH", "OK", "OR", "PA", "RI",
13 "SC", "SD", "TN", "TX", "UT", "VT", "VA", "WA", "WV", "WI",
14 "WY",
15 ]
16
17 WEEKDAYS = ["Mo", "Tu", "We", "Th", "Fr", "Sa", "Su"]
18
19
20 class KonaGrillSpider(scrapy.Spider):
21 download_delay = 0.2
22 name = "konagrill"
23 item_attributes = {"brand": "Kona Grill", "brand_wikidata": "Q6428706"}
24 allowed_domains = ["konagrill.com"]
25
26 def start_requests(self):
27 url_by_state = "https://www.konagrill.com/ajax/getlocationsbystate"
28 headers = {"content-type": "application/x-www-form-urlencoded"}
29
30 # Get store id per state
31 for state in STATES:
32 yield scrapy.http.Request(
33 url_by_state,
34 method="POST",
35 body="state={}".format(state),
36 callback=self.parse,
37 headers=headers,
38 )
39
40 def parse(self, response):
41 store_data = json.loads(response.text)
42 url_location_details = "https://www.konagrill.com/ajax/getlocationdetails"
43 headers = {"content-type": "application/x-www-form-urlencoded"}
44 store_ids = []
45
46 if not store_data.get("data"):
47 return
48
49 store_ids += [s.get("id") for _, s in store_data.get("data").items()]
50
51 # Get store details
52 for i in store_ids:
53 yield scrapy.http.Request(
54 url_location_details,
55 method="POST",
56 body="id={}".format(i),
57 callback=self.parse_store,
58 headers=headers,
59 )
60
61 def parse_store(self, response):
62 response_data = json.loads(response.text)
63 if not response_data.get("data"):
64 return
65
66 store = response_data.get("data")
67 dh = store.get("dininghours")
68 # Data is inconsistent some keys were found with a trailing space
69 opening_hours = self.parse_hours(dh.get("dining hours") or dh.get("dining hours "))
70 properties = {
71 "addr_full": store.get("address"),
72 "city": store.get("city"),
73 "extras": {"email": store.get("email"),},
74 "lat": store.get("latitude"),
75 "lon": store.get("longitude"),
76 "name": store.get("title"),
77 "opening_hours": opening_hours,
78 "phone": store.get("phone_number"),
79 "postcode": store.get("zip"),
80 "ref": store.get("id"),
81 "state": store.get("state"),
82 "website": store.get("order_online_url"),
83 }
84
85 yield GeojsonPointItem(**properties)
86
87 def parse_hours(self, hours):
88 oh = OpeningHours()
89
90 for t in hours:
91 days = self.parse_days(t.get("days"))
92 open_time, close_time = t.get("hours").split("-")
93 ot = open_time.strip()
94 ct = close_time.strip()
95 for day in days:
96 oh.add_range(day=day, open_time=ot, close_time=ct, time_format="%I%p")
97
98 return oh.as_opening_hours()
99
100 def parse_days(self, days):
101 """Parse day ranges and returns a list of days it represent
102 The following formats are considered:
103 - Single day, e.g. "Mon", "Monday"
104 - Range, e.g. "Mon-Fri", "Tue-Sund", "Sat-Sunday"
105 - Two days, e.g. "Sat & Sun", "Friday & Su"
106
107 Returns a list with the weekdays
108 """
109 parsed_days = []
110
111 # Range
112 # Produce a list of weekdays between two days e.g. su-sa, mo-th, etc.
113 if "-" in days:
114 d = days.split("-")
115 r = [i.strip()[:2] for i in d]
116 s = WEEKDAYS.index(r[0].title())
117 e = WEEKDAYS.index(r[1].title())
118 if s <= e:
119 return WEEKDAYS[s : e + 1]
120 else:
121 return WEEKDAYS[s:] + WEEKDAYS[: e + 1]
122 # Two days
123 if "&" in days:
124 d = days.split("&")
125 return [i.strip()[:2].title() for i in d]
126 # Single days
127 else:
128 return [days.strip()[:2].title()]
129
[end of locations/spiders/kona_grill.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/kona_grill.py b/locations/spiders/kona_grill.py
--- a/locations/spiders/kona_grill.py
+++ b/locations/spiders/kona_grill.py
@@ -88,6 +88,10 @@
oh = OpeningHours()
for t in hours:
+ # Some day entries contain invalid week data, e.g. "Brunch"
+ # "Brunch" is a special dining hour that is contained in regular hours, ignore it
+ if "Brunch" in t.get("days"):
+ continue
days = self.parse_days(t.get("days"))
open_time, close_time = t.get("hours").split("-")
ot = open_time.strip()
| {"golden_diff": "diff --git a/locations/spiders/kona_grill.py b/locations/spiders/kona_grill.py\n--- a/locations/spiders/kona_grill.py\n+++ b/locations/spiders/kona_grill.py\n@@ -88,6 +88,10 @@\n oh = OpeningHours()\n \n for t in hours:\n+ # Some day entries contain invalid week data, e.g. \"Brunch\"\n+ # \"Brunch\" is a special dining hour that is contained in regular hours, ignore it\n+ if \"Brunch\" in t.get(\"days\"):\n+ continue\n days = self.parse_days(t.get(\"days\"))\n open_time, close_time = t.get(\"hours\").split(\"-\")\n ot = open_time.strip()\n", "issue": "Spider konagrill is broken\nDuring the global build at 2021-06-16-14-42-20, spider **konagrill** failed with **0 features** and **24 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/logs/konagrill.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/konagrill.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-16-14-42-20/output/konagrill.geojson))\nSpider konagrill is broken\nDuring the global build at 2021-10-13-14-42-23, spider **konagrill** failed with **0 features** and **24 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/logs/konagrill.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/konagrill.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-10-13-14-42-23/output/konagrill.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\nSTATES = [\n \"AL\", \"AK\", \"AZ\", \"AR\", \"CA\", \"CO\", \"CT\", \"DC\", \"DE\", \"FL\",\n \"GA\", \"HI\", \"ID\", \"IL\", \"IN\", \"IA\", \"KS\", \"KY\", \"LA\", \"ME\",\n \"MD\", \"MA\", \"MI\", \"MN\", \"MS\", \"MO\", \"MT\", \"NE\", \"NV\", \"NH\",\n \"NJ\", \"NM\", \"NY\", \"NC\", \"ND\", \"OH\", \"OK\", \"OR\", \"PA\", \"RI\",\n \"SC\", \"SD\", \"TN\", \"TX\", \"UT\", \"VT\", \"VA\", \"WA\", \"WV\", \"WI\",\n \"WY\",\n]\n\nWEEKDAYS = [\"Mo\", \"Tu\", \"We\", \"Th\", \"Fr\", \"Sa\", \"Su\"]\n\n\nclass KonaGrillSpider(scrapy.Spider):\n download_delay = 0.2\n name = \"konagrill\"\n item_attributes = {\"brand\": \"Kona Grill\", \"brand_wikidata\": \"Q6428706\"}\n allowed_domains = [\"konagrill.com\"]\n\n def start_requests(self):\n url_by_state = \"https://www.konagrill.com/ajax/getlocationsbystate\"\n headers = {\"content-type\": \"application/x-www-form-urlencoded\"}\n\n # Get store id per state\n for state in STATES:\n yield scrapy.http.Request(\n url_by_state,\n method=\"POST\",\n body=\"state={}\".format(state),\n callback=self.parse,\n headers=headers,\n )\n\n def parse(self, response):\n store_data = json.loads(response.text)\n url_location_details = \"https://www.konagrill.com/ajax/getlocationdetails\"\n headers = {\"content-type\": \"application/x-www-form-urlencoded\"}\n store_ids = []\n\n if not store_data.get(\"data\"):\n return\n\n store_ids += [s.get(\"id\") for _, s in store_data.get(\"data\").items()]\n\n # Get store details\n for i in store_ids:\n yield scrapy.http.Request(\n url_location_details,\n method=\"POST\",\n body=\"id={}\".format(i),\n callback=self.parse_store,\n headers=headers,\n )\n\n def parse_store(self, response):\n response_data = json.loads(response.text)\n if not response_data.get(\"data\"):\n return\n\n store = response_data.get(\"data\")\n dh = store.get(\"dininghours\")\n 
# Data is inconsistent some keys were found with a trailing space\n opening_hours = self.parse_hours(dh.get(\"dining hours\") or dh.get(\"dining hours \"))\n properties = {\n \"addr_full\": store.get(\"address\"),\n \"city\": store.get(\"city\"),\n \"extras\": {\"email\": store.get(\"email\"),},\n \"lat\": store.get(\"latitude\"),\n \"lon\": store.get(\"longitude\"),\n \"name\": store.get(\"title\"),\n \"opening_hours\": opening_hours,\n \"phone\": store.get(\"phone_number\"),\n \"postcode\": store.get(\"zip\"),\n \"ref\": store.get(\"id\"),\n \"state\": store.get(\"state\"),\n \"website\": store.get(\"order_online_url\"),\n }\n\n yield GeojsonPointItem(**properties)\n\n def parse_hours(self, hours):\n oh = OpeningHours()\n\n for t in hours:\n days = self.parse_days(t.get(\"days\"))\n open_time, close_time = t.get(\"hours\").split(\"-\")\n ot = open_time.strip()\n ct = close_time.strip()\n for day in days:\n oh.add_range(day=day, open_time=ot, close_time=ct, time_format=\"%I%p\")\n\n return oh.as_opening_hours()\n\n def parse_days(self, days):\n \"\"\"Parse day ranges and returns a list of days it represent\n The following formats are considered:\n - Single day, e.g. \"Mon\", \"Monday\"\n - Range, e.g. \"Mon-Fri\", \"Tue-Sund\", \"Sat-Sunday\"\n - Two days, e.g. \"Sat & Sun\", \"Friday & Su\"\n\n Returns a list with the weekdays\n \"\"\"\n parsed_days = []\n\n # Range\n # Produce a list of weekdays between two days e.g. su-sa, mo-th, etc.\n if \"-\" in days:\n d = days.split(\"-\")\n r = [i.strip()[:2] for i in d]\n s = WEEKDAYS.index(r[0].title())\n e = WEEKDAYS.index(r[1].title())\n if s <= e:\n return WEEKDAYS[s : e + 1]\n else:\n return WEEKDAYS[s:] + WEEKDAYS[: e + 1]\n # Two days\n if \"&\" in days:\n d = days.split(\"&\")\n return [i.strip()[:2].title() for i in d]\n # Single days\n else:\n return [days.strip()[:2].title()]\n", "path": "locations/spiders/kona_grill.py"}]} | 2,315 | 164 |
gh_patches_debug_3571 | rasdani/github-patches | git_diff | nerfstudio-project__nerfstudio-2076 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Doc Description Wrong
Hello,
I find the Python docstring of [get_depth_image_from_path](https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/data/utils/data_utils.py) is wrong about the return tensor shape:
it should be `[height, width, 1]`, not `[width, height, 1]`.
A simple verification confirms this (a sketch of the check is below).
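
A minimal version of that check; the file path is a placeholder and any single-channel depth image works:

```python
import cv2
import numpy as np
import torch

# Placeholder path; substitute any 16-bit depth image.
image = cv2.imread("depth_0000.png", cv2.IMREAD_ANYDEPTH).astype(np.float64)
tensor = torch.from_numpy(image[:, :, np.newaxis])

# OpenCV/NumPy arrays are indexed (row, column) = (height, width),
# so the resulting tensor shape is [height, width, 1].
print(tensor.shape)
```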
</issue>
<code>
[start of nerfstudio/data/utils/data_utils.py]
1 # Copyright 2022 the Regents of the University of California, Nerfstudio Team and contributors. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Utility functions to allow easy re-use of common operations across dataloaders"""
16 from pathlib import Path
17 from typing import List, Tuple, Union
18
19 import cv2
20 import numpy as np
21 import torch
22 from PIL import Image
23
24
25 def get_image_mask_tensor_from_path(filepath: Path, scale_factor: float = 1.0) -> torch.Tensor:
26 """
27 Utility function to read a mask image from the given path and return a boolean tensor
28 """
29 pil_mask = Image.open(filepath)
30 if scale_factor != 1.0:
31 width, height = pil_mask.size
32 newsize = (int(width * scale_factor), int(height * scale_factor))
33 pil_mask = pil_mask.resize(newsize, resample=Image.NEAREST)
34 mask_tensor = torch.from_numpy(np.array(pil_mask)).unsqueeze(-1).bool()
35 if len(mask_tensor.shape) != 3:
36 raise ValueError("The mask image should have 1 channel")
37 return mask_tensor
38
39
40 def get_semantics_and_mask_tensors_from_path(
41 filepath: Path, mask_indices: Union[List, torch.Tensor], scale_factor: float = 1.0
42 ) -> Tuple[torch.Tensor, torch.Tensor]:
43 """
44 Utility function to read segmentation from the given filepath
45 If no mask is required - use mask_indices = []
46 """
47 if isinstance(mask_indices, List):
48 mask_indices = torch.tensor(mask_indices, dtype=torch.int64).view(1, 1, -1)
49 pil_image = Image.open(filepath)
50 if scale_factor != 1.0:
51 width, height = pil_image.size
52 newsize = (int(width * scale_factor), int(height * scale_factor))
53 pil_image = pil_image.resize(newsize, resample=Image.NEAREST)
54 semantics = torch.from_numpy(np.array(pil_image, dtype="int64"))[..., None]
55 mask = torch.sum(semantics == mask_indices, dim=-1, keepdim=True) == 0
56 return semantics, mask
57
58
59 def get_depth_image_from_path(
60 filepath: Path,
61 height: int,
62 width: int,
63 scale_factor: float,
64 interpolation: int = cv2.INTER_NEAREST,
65 ) -> torch.Tensor:
66 """Loads, rescales and resizes depth images.
67 Filepath points to a 16-bit or 32-bit depth image, or a numpy array `*.npy`.
68
69 Args:
70 filepath: Path to depth image.
71 height: Target depth image height.
72 width: Target depth image width.
73 scale_factor: Factor by which to scale depth image.
74 interpolation: Depth value interpolation for resizing.
75
76 Returns:
77 Depth image torch tensor with shape [width, height, 1].
78 """
79 if filepath.suffix == ".npy":
80 image = np.load(filepath) * scale_factor
81 image = cv2.resize(image, (width, height), interpolation=interpolation)
82 else:
83 image = cv2.imread(str(filepath.absolute()), cv2.IMREAD_ANYDEPTH)
84 image = image.astype(np.float64) * scale_factor
85 image = cv2.resize(image, (width, height), interpolation=interpolation)
86 return torch.from_numpy(image[:, :, np.newaxis])
87
[end of nerfstudio/data/utils/data_utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/nerfstudio/data/utils/data_utils.py b/nerfstudio/data/utils/data_utils.py
--- a/nerfstudio/data/utils/data_utils.py
+++ b/nerfstudio/data/utils/data_utils.py
@@ -74,7 +74,7 @@
interpolation: Depth value interpolation for resizing.
Returns:
- Depth image torch tensor with shape [width, height, 1].
+ Depth image torch tensor with shape [height, width, 1].
"""
if filepath.suffix == ".npy":
image = np.load(filepath) * scale_factor
| {"golden_diff": "diff --git a/nerfstudio/data/utils/data_utils.py b/nerfstudio/data/utils/data_utils.py\n--- a/nerfstudio/data/utils/data_utils.py\n+++ b/nerfstudio/data/utils/data_utils.py\n@@ -74,7 +74,7 @@\n interpolation: Depth value interpolation for resizing.\n \n Returns:\n- Depth image torch tensor with shape [width, height, 1].\n+ Depth image torch tensor with shape [height, width, 1].\n \"\"\"\n if filepath.suffix == \".npy\":\n image = np.load(filepath) * scale_factor\n", "issue": "Doc Description Wrong\nHello, \r\n\r\nI find python doc in [get_depth_image_from_path](https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/data/utils/data_utils.py) is wrong about the return tensor shape, \r\n\r\nit should be [height, width, 1] not [width, height, 1].\r\n\r\n\r\n\r\na simple verification:\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2022 the Regents of the University of California, Nerfstudio Team and contributors. All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Utility functions to allow easy re-use of common operations across dataloaders\"\"\"\nfrom pathlib import Path\nfrom typing import List, Tuple, Union\n\nimport cv2\nimport numpy as np\nimport torch\nfrom PIL import Image\n\n\ndef get_image_mask_tensor_from_path(filepath: Path, scale_factor: float = 1.0) -> torch.Tensor:\n \"\"\"\n Utility function to read a mask image from the given path and return a boolean tensor\n \"\"\"\n pil_mask = Image.open(filepath)\n if scale_factor != 1.0:\n width, height = pil_mask.size\n newsize = (int(width * scale_factor), int(height * scale_factor))\n pil_mask = pil_mask.resize(newsize, resample=Image.NEAREST)\n mask_tensor = torch.from_numpy(np.array(pil_mask)).unsqueeze(-1).bool()\n if len(mask_tensor.shape) != 3:\n raise ValueError(\"The mask image should have 1 channel\")\n return mask_tensor\n\n\ndef get_semantics_and_mask_tensors_from_path(\n filepath: Path, mask_indices: Union[List, torch.Tensor], scale_factor: float = 1.0\n) -> Tuple[torch.Tensor, torch.Tensor]:\n \"\"\"\n Utility function to read segmentation from the given filepath\n If no mask is required - use mask_indices = []\n \"\"\"\n if isinstance(mask_indices, List):\n mask_indices = torch.tensor(mask_indices, dtype=torch.int64).view(1, 1, -1)\n pil_image = Image.open(filepath)\n if scale_factor != 1.0:\n width, height = pil_image.size\n newsize = (int(width * scale_factor), int(height * scale_factor))\n pil_image = pil_image.resize(newsize, resample=Image.NEAREST)\n semantics = torch.from_numpy(np.array(pil_image, dtype=\"int64\"))[..., None]\n mask = torch.sum(semantics == mask_indices, dim=-1, keepdim=True) == 0\n return semantics, mask\n\n\ndef get_depth_image_from_path(\n filepath: Path,\n height: int,\n width: int,\n scale_factor: float,\n interpolation: int = cv2.INTER_NEAREST,\n) -> torch.Tensor:\n \"\"\"Loads, rescales and resizes depth images.\n Filepath points to a 16-bit or 32-bit depth image, or a numpy array `*.npy`.\n\n Args:\n filepath: Path to 
depth image.\n height: Target depth image height.\n width: Target depth image width.\n scale_factor: Factor by which to scale depth image.\n interpolation: Depth value interpolation for resizing.\n\n Returns:\n Depth image torch tensor with shape [width, height, 1].\n \"\"\"\n if filepath.suffix == \".npy\":\n image = np.load(filepath) * scale_factor\n image = cv2.resize(image, (width, height), interpolation=interpolation)\n else:\n image = cv2.imread(str(filepath.absolute()), cv2.IMREAD_ANYDEPTH)\n image = image.astype(np.float64) * scale_factor\n image = cv2.resize(image, (width, height), interpolation=interpolation)\n return torch.from_numpy(image[:, :, np.newaxis])\n", "path": "nerfstudio/data/utils/data_utils.py"}]} | 1,731 | 128 |
gh_patches_debug_30107 | rasdani/github-patches | git_diff | getsentry__sentry-python-432 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Enforce functools.wraps on all monkeypatches
I'm using [`celery-once`](https://github.com/cameronmaske/celery-once) to make sure some tasks are only run one at a time. The lock is acquired in `apply_async` and released in `after_return`.
When I use the `CeleryIntegration` I'm not seeing any of the locks being released after the task has finished running. I'm not very familiar with the inner workings of celery, but any ideas what's going on?
</issue>
<code>
[start of sentry_sdk/integrations/celery.py]
1 from __future__ import absolute_import
2
3 import sys
4
5 from celery.exceptions import ( # type: ignore
6 SoftTimeLimitExceeded,
7 Retry,
8 Ignore,
9 Reject,
10 )
11
12 from sentry_sdk.hub import Hub
13 from sentry_sdk.utils import capture_internal_exceptions, event_from_exception
14 from sentry_sdk.tracing import Span
15 from sentry_sdk._compat import reraise
16 from sentry_sdk.integrations import Integration
17 from sentry_sdk.integrations.logging import ignore_logger
18 from sentry_sdk._types import MYPY
19
20 if MYPY:
21 from typing import Any
22
23
24 CELERY_CONTROL_FLOW_EXCEPTIONS = (Retry, Ignore, Reject)
25
26
27 class CeleryIntegration(Integration):
28 identifier = "celery"
29
30 def __init__(self, propagate_traces=True):
31 # type: (bool) -> None
32 self.propagate_traces = propagate_traces
33
34 @staticmethod
35 def setup_once():
36 # type: () -> None
37 import celery.app.trace as trace # type: ignore
38
39 old_build_tracer = trace.build_tracer
40
41 def sentry_build_tracer(name, task, *args, **kwargs):
42 if not getattr(task, "_sentry_is_patched", False):
43 # Need to patch both methods because older celery sometimes
44 # short-circuits to task.run if it thinks it's safe.
45 task.__call__ = _wrap_task_call(task, task.__call__)
46 task.run = _wrap_task_call(task, task.run)
47 task.apply_async = _wrap_apply_async(task, task.apply_async)
48
49 # `build_tracer` is apparently called for every task
50 # invocation. Can't wrap every celery task for every invocation
51 # or we will get infinitely nested wrapper functions.
52 task._sentry_is_patched = True
53
54 return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))
55
56 trace.build_tracer = sentry_build_tracer
57
58 _patch_worker_exit()
59
60 # This logger logs every status of every task that ran on the worker.
61 # Meaning that every task's breadcrumbs are full of stuff like "Task
62 # <foo> raised unexpected <bar>".
63 ignore_logger("celery.worker.job")
64
65
66 def _wrap_apply_async(task, f):
67 def apply_async(*args, **kwargs):
68 hub = Hub.current
69 integration = hub.get_integration(CeleryIntegration)
70 if integration is not None and integration.propagate_traces:
71 headers = None
72 for key, value in hub.iter_trace_propagation_headers():
73 if headers is None:
74 headers = dict(kwargs.get("headers") or {})
75 headers[key] = value
76 if headers is not None:
77 kwargs["headers"] = headers
78 return f(*args, **kwargs)
79
80 return apply_async
81
82
83 def _wrap_tracer(task, f):
84 # Need to wrap tracer for pushing the scope before prerun is sent, and
85 # popping it after postrun is sent.
86 #
87 # This is the reason we don't use signals for hooking in the first place.
88 # Also because in Celery 3, signal dispatch returns early if one handler
89 # crashes.
90 def _inner(*args, **kwargs):
91 hub = Hub.current
92 if hub.get_integration(CeleryIntegration) is None:
93 return f(*args, **kwargs)
94
95 with hub.push_scope() as scope:
96 scope._name = "celery"
97 scope.clear_breadcrumbs()
98 scope.add_event_processor(_make_event_processor(task, *args, **kwargs))
99
100 span = Span.continue_from_headers(args[3].get("headers") or {})
101 span.transaction = "unknown celery task"
102
103 with capture_internal_exceptions():
104 # Celery task objects are not a thing to be trusted. Even
105 # something such as attribute access can fail.
106 span.transaction = task.name
107
108 with hub.span(span):
109 return f(*args, **kwargs)
110
111 return _inner
112
113
114 def _wrap_task_call(task, f):
115 # Need to wrap task call because the exception is caught before we get to
116 # see it. Also celery's reported stacktrace is untrustworthy.
117 def _inner(*args, **kwargs):
118 try:
119 return f(*args, **kwargs)
120 except Exception:
121 exc_info = sys.exc_info()
122 with capture_internal_exceptions():
123 _capture_exception(task, exc_info)
124 reraise(*exc_info)
125
126 return _inner
127
128
129 def _make_event_processor(task, uuid, args, kwargs, request=None):
130 def event_processor(event, hint):
131 with capture_internal_exceptions():
132 extra = event.setdefault("extra", {})
133 extra["celery-job"] = {
134 "task_name": task.name,
135 "args": args,
136 "kwargs": kwargs,
137 }
138
139 if "exc_info" in hint:
140 with capture_internal_exceptions():
141 if issubclass(hint["exc_info"][0], SoftTimeLimitExceeded):
142 event["fingerprint"] = [
143 "celery",
144 "SoftTimeLimitExceeded",
145 getattr(task, "name", task),
146 ]
147
148 return event
149
150 return event_processor
151
152
153 def _capture_exception(task, exc_info):
154 hub = Hub.current
155
156 if hub.get_integration(CeleryIntegration) is None:
157 return
158 if isinstance(exc_info[1], CELERY_CONTROL_FLOW_EXCEPTIONS):
159 return
160 if hasattr(task, "throws") and isinstance(exc_info[1], task.throws):
161 return
162
163 # If an integration is there, a client has to be there.
164 client = hub.client # type: Any
165
166 event, hint = event_from_exception(
167 exc_info,
168 client_options=client.options,
169 mechanism={"type": "celery", "handled": False},
170 )
171
172 hub.capture_event(event, hint=hint)
173
174
175 def _patch_worker_exit():
176 # Need to flush queue before worker shutdown because a crashing worker will
177 # call os._exit
178 from billiard.pool import Worker # type: ignore
179
180 old_workloop = Worker.workloop
181
182 def sentry_workloop(*args, **kwargs):
183 try:
184 return old_workloop(*args, **kwargs)
185 finally:
186 with capture_internal_exceptions():
187 hub = Hub.current
188 if hub.get_integration(CeleryIntegration) is not None:
189 hub.flush()
190
191 Worker.workloop = sentry_workloop
192
[end of sentry_sdk/integrations/celery.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py
--- a/sentry_sdk/integrations/celery.py
+++ b/sentry_sdk/integrations/celery.py
@@ -1,5 +1,6 @@
from __future__ import absolute_import
+import functools
import sys
from celery.exceptions import ( # type: ignore
@@ -64,6 +65,7 @@
def _wrap_apply_async(task, f):
+ @functools.wraps(f)
def apply_async(*args, **kwargs):
hub = Hub.current
integration = hub.get_integration(CeleryIntegration)
@@ -87,6 +89,7 @@
# This is the reason we don't use signals for hooking in the first place.
# Also because in Celery 3, signal dispatch returns early if one handler
# crashes.
+ @functools.wraps(f)
def _inner(*args, **kwargs):
hub = Hub.current
if hub.get_integration(CeleryIntegration) is None:
@@ -114,6 +117,11 @@
def _wrap_task_call(task, f):
# Need to wrap task call because the exception is caught before we get to
# see it. Also celery's reported stacktrace is untrustworthy.
+
+ # functools.wraps is important here because celery-once looks at this
+ # method's name.
+ # https://github.com/getsentry/sentry-python/issues/421
+ @functools.wraps(f)
def _inner(*args, **kwargs):
try:
return f(*args, **kwargs)
| {"golden_diff": "diff --git a/sentry_sdk/integrations/celery.py b/sentry_sdk/integrations/celery.py\n--- a/sentry_sdk/integrations/celery.py\n+++ b/sentry_sdk/integrations/celery.py\n@@ -1,5 +1,6 @@\n from __future__ import absolute_import\n \n+import functools\n import sys\n \n from celery.exceptions import ( # type: ignore\n@@ -64,6 +65,7 @@\n \n \n def _wrap_apply_async(task, f):\n+ @functools.wraps(f)\n def apply_async(*args, **kwargs):\n hub = Hub.current\n integration = hub.get_integration(CeleryIntegration)\n@@ -87,6 +89,7 @@\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n+ @functools.wraps(f)\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n@@ -114,6 +117,11 @@\n def _wrap_task_call(task, f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. Also celery's reported stacktrace is untrustworthy.\n+\n+ # functools.wraps is important here because celery-once looks at this\n+ # method's name.\n+ # https://github.com/getsentry/sentry-python/issues/421\n+ @functools.wraps(f)\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n", "issue": "Enforce functools.wraps on all monkeypatches\nI'm using [`celery-once`](https://github.com/cameronmaske/celery-once) to make sure some tasks are only run one at a time. The lock is acquired in `apply_async` and released in `after_return`. \r\n\r\nWhen I use the `CeleryIntegration` I'm not seeing any of the locks being released after the task has finished running. I'm not very familiar with the inner workings of celery, but any ideas what's going on?\r\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport sys\n\nfrom celery.exceptions import ( # type: ignore\n SoftTimeLimitExceeded,\n Retry,\n Ignore,\n Reject,\n)\n\nfrom sentry_sdk.hub import Hub\nfrom sentry_sdk.utils import capture_internal_exceptions, event_from_exception\nfrom sentry_sdk.tracing import Span\nfrom sentry_sdk._compat import reraise\nfrom sentry_sdk.integrations import Integration\nfrom sentry_sdk.integrations.logging import ignore_logger\nfrom sentry_sdk._types import MYPY\n\nif MYPY:\n from typing import Any\n\n\nCELERY_CONTROL_FLOW_EXCEPTIONS = (Retry, Ignore, Reject)\n\n\nclass CeleryIntegration(Integration):\n identifier = \"celery\"\n\n def __init__(self, propagate_traces=True):\n # type: (bool) -> None\n self.propagate_traces = propagate_traces\n\n @staticmethod\n def setup_once():\n # type: () -> None\n import celery.app.trace as trace # type: ignore\n\n old_build_tracer = trace.build_tracer\n\n def sentry_build_tracer(name, task, *args, **kwargs):\n if not getattr(task, \"_sentry_is_patched\", False):\n # Need to patch both methods because older celery sometimes\n # short-circuits to task.run if it thinks it's safe.\n task.__call__ = _wrap_task_call(task, task.__call__)\n task.run = _wrap_task_call(task, task.run)\n task.apply_async = _wrap_apply_async(task, task.apply_async)\n\n # `build_tracer` is apparently called for every task\n # invocation. 
Can't wrap every celery task for every invocation\n # or we will get infinitely nested wrapper functions.\n task._sentry_is_patched = True\n\n return _wrap_tracer(task, old_build_tracer(name, task, *args, **kwargs))\n\n trace.build_tracer = sentry_build_tracer\n\n _patch_worker_exit()\n\n # This logger logs every status of every task that ran on the worker.\n # Meaning that every task's breadcrumbs are full of stuff like \"Task\n # <foo> raised unexpected <bar>\".\n ignore_logger(\"celery.worker.job\")\n\n\ndef _wrap_apply_async(task, f):\n def apply_async(*args, **kwargs):\n hub = Hub.current\n integration = hub.get_integration(CeleryIntegration)\n if integration is not None and integration.propagate_traces:\n headers = None\n for key, value in hub.iter_trace_propagation_headers():\n if headers is None:\n headers = dict(kwargs.get(\"headers\") or {})\n headers[key] = value\n if headers is not None:\n kwargs[\"headers\"] = headers\n return f(*args, **kwargs)\n\n return apply_async\n\n\ndef _wrap_tracer(task, f):\n # Need to wrap tracer for pushing the scope before prerun is sent, and\n # popping it after postrun is sent.\n #\n # This is the reason we don't use signals for hooking in the first place.\n # Also because in Celery 3, signal dispatch returns early if one handler\n # crashes.\n def _inner(*args, **kwargs):\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is None:\n return f(*args, **kwargs)\n\n with hub.push_scope() as scope:\n scope._name = \"celery\"\n scope.clear_breadcrumbs()\n scope.add_event_processor(_make_event_processor(task, *args, **kwargs))\n\n span = Span.continue_from_headers(args[3].get(\"headers\") or {})\n span.transaction = \"unknown celery task\"\n\n with capture_internal_exceptions():\n # Celery task objects are not a thing to be trusted. Even\n # something such as attribute access can fail.\n span.transaction = task.name\n\n with hub.span(span):\n return f(*args, **kwargs)\n\n return _inner\n\n\ndef _wrap_task_call(task, f):\n # Need to wrap task call because the exception is caught before we get to\n # see it. 
Also celery's reported stacktrace is untrustworthy.\n def _inner(*args, **kwargs):\n try:\n return f(*args, **kwargs)\n except Exception:\n exc_info = sys.exc_info()\n with capture_internal_exceptions():\n _capture_exception(task, exc_info)\n reraise(*exc_info)\n\n return _inner\n\n\ndef _make_event_processor(task, uuid, args, kwargs, request=None):\n def event_processor(event, hint):\n with capture_internal_exceptions():\n extra = event.setdefault(\"extra\", {})\n extra[\"celery-job\"] = {\n \"task_name\": task.name,\n \"args\": args,\n \"kwargs\": kwargs,\n }\n\n if \"exc_info\" in hint:\n with capture_internal_exceptions():\n if issubclass(hint[\"exc_info\"][0], SoftTimeLimitExceeded):\n event[\"fingerprint\"] = [\n \"celery\",\n \"SoftTimeLimitExceeded\",\n getattr(task, \"name\", task),\n ]\n\n return event\n\n return event_processor\n\n\ndef _capture_exception(task, exc_info):\n hub = Hub.current\n\n if hub.get_integration(CeleryIntegration) is None:\n return\n if isinstance(exc_info[1], CELERY_CONTROL_FLOW_EXCEPTIONS):\n return\n if hasattr(task, \"throws\") and isinstance(exc_info[1], task.throws):\n return\n\n # If an integration is there, a client has to be there.\n client = hub.client # type: Any\n\n event, hint = event_from_exception(\n exc_info,\n client_options=client.options,\n mechanism={\"type\": \"celery\", \"handled\": False},\n )\n\n hub.capture_event(event, hint=hint)\n\n\ndef _patch_worker_exit():\n # Need to flush queue before worker shutdown because a crashing worker will\n # call os._exit\n from billiard.pool import Worker # type: ignore\n\n old_workloop = Worker.workloop\n\n def sentry_workloop(*args, **kwargs):\n try:\n return old_workloop(*args, **kwargs)\n finally:\n with capture_internal_exceptions():\n hub = Hub.current\n if hub.get_integration(CeleryIntegration) is not None:\n hub.flush()\n\n Worker.workloop = sentry_workloop\n", "path": "sentry_sdk/integrations/celery.py"}]} | 2,520 | 372 |
gh_patches_debug_10752 | rasdani/github-patches | git_diff | pypa__pip-3398 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip 8.0 returns pre-release files for pip list -o without --pre
When running `pip list -o` pip 8.0 is listing pre-release files even though previously it did not. I suspect that this might be because of some of the refactors done by @xavfernandez.
</issue>
<code>
[start of pip/commands/list.py]
1 from __future__ import absolute_import
2
3 import logging
4 import warnings
5
6 from pip.basecommand import Command
7 from pip.exceptions import CommandError
8 from pip.index import PackageFinder
9 from pip.utils import (
10 get_installed_distributions, dist_is_editable)
11 from pip.utils.deprecation import RemovedInPip10Warning
12 from pip.cmdoptions import make_option_group, index_group
13
14
15 logger = logging.getLogger(__name__)
16
17
18 class ListCommand(Command):
19 """
20 List installed packages, including editables.
21
22 Packages are listed in a case-insensitive sorted order.
23 """
24 name = 'list'
25 usage = """
26 %prog [options]"""
27 summary = 'List installed packages.'
28
29 def __init__(self, *args, **kw):
30 super(ListCommand, self).__init__(*args, **kw)
31
32 cmd_opts = self.cmd_opts
33
34 cmd_opts.add_option(
35 '-o', '--outdated',
36 action='store_true',
37 default=False,
38 help='List outdated packages')
39 cmd_opts.add_option(
40 '-u', '--uptodate',
41 action='store_true',
42 default=False,
43 help='List uptodate packages')
44 cmd_opts.add_option(
45 '-e', '--editable',
46 action='store_true',
47 default=False,
48 help='List editable projects.')
49 cmd_opts.add_option(
50 '-l', '--local',
51 action='store_true',
52 default=False,
53 help=('If in a virtualenv that has global access, do not list '
54 'globally-installed packages.'),
55 )
56 self.cmd_opts.add_option(
57 '--user',
58 dest='user',
59 action='store_true',
60 default=False,
61 help='Only output packages installed in user-site.')
62
63 cmd_opts.add_option(
64 '--pre',
65 action='store_true',
66 default=False,
67 help=("Include pre-release and development versions. By default, "
68 "pip only finds stable versions."),
69 )
70
71 index_opts = make_option_group(index_group, self.parser)
72
73 self.parser.insert_option_group(0, index_opts)
74 self.parser.insert_option_group(0, cmd_opts)
75
76 def _build_package_finder(self, options, index_urls, session):
77 """
78 Create a package finder appropriate to this list command.
79 """
80 return PackageFinder(
81 find_links=options.find_links,
82 index_urls=index_urls,
83 allow_all_prereleases=options.pre,
84 trusted_hosts=options.trusted_hosts,
85 process_dependency_links=options.process_dependency_links,
86 session=session,
87 )
88
89 def run(self, options, args):
90 if options.allow_external:
91 warnings.warn(
92 "--allow-external has been deprecated and will be removed in "
93 "the future. Due to changes in the repository protocol, it no "
94 "longer has any effect.",
95 RemovedInPip10Warning,
96 )
97
98 if options.allow_all_external:
99 warnings.warn(
100 "--allow-all-external has been deprecated and will be removed "
101 "in the future. Due to changes in the repository protocol, it "
102 "no longer has any effect.",
103 RemovedInPip10Warning,
104 )
105
106 if options.allow_unverified:
107 warnings.warn(
108 "--allow-unverified has been deprecated and will be removed "
109 "in the future. Due to changes in the repository protocol, it "
110 "no longer has any effect.",
111 RemovedInPip10Warning,
112 )
113 if options.outdated and options.uptodate:
114 raise CommandError(
115 "Options --outdated and --uptodate cannot be combined.")
116
117 if options.outdated:
118 self.run_outdated(options)
119 elif options.uptodate:
120 self.run_uptodate(options)
121 else:
122 self.run_listing(options)
123
124 def run_outdated(self, options):
125 for dist, latest_version, typ in sorted(
126 self.find_packages_latest_versions(options),
127 key=lambda p: p[0].project_name.lower()):
128 if latest_version > dist.parsed_version:
129 logger.info(
130 '%s - Latest: %s [%s]',
131 self.output_package(dist), latest_version, typ,
132 )
133
134 def find_packages_latest_versions(self, options):
135 index_urls = [options.index_url] + options.extra_index_urls
136 if options.no_index:
137 logger.info('Ignoring indexes: %s', ','.join(index_urls))
138 index_urls = []
139
140 dependency_links = []
141 for dist in get_installed_distributions(
142 local_only=options.local,
143 user_only=options.user,
144 editables_only=options.editable):
145 if dist.has_metadata('dependency_links.txt'):
146 dependency_links.extend(
147 dist.get_metadata_lines('dependency_links.txt'),
148 )
149
150 with self._build_session(options) as session:
151 finder = self._build_package_finder(options, index_urls, session)
152 finder.add_dependency_links(dependency_links)
153
154 installed_packages = get_installed_distributions(
155 local_only=options.local,
156 user_only=options.user,
157 editables_only=options.editable,
158 )
159 for dist in installed_packages:
160 typ = 'unknown'
161 all_candidates = finder.find_all_candidates(dist.key)
162 if not all_candidates:
163 continue
164 best_candidate = max(all_candidates,
165 key=finder._candidate_sort_key)
166 remote_version = best_candidate.version
167 if best_candidate.location.is_wheel:
168 typ = 'wheel'
169 else:
170 typ = 'sdist'
171 yield dist, remote_version, typ
172
173 def run_listing(self, options):
174 installed_packages = get_installed_distributions(
175 local_only=options.local,
176 user_only=options.user,
177 editables_only=options.editable,
178 )
179 self.output_package_listing(installed_packages)
180
181 def output_package(self, dist):
182 if dist_is_editable(dist):
183 return '%s (%s, %s)' % (
184 dist.project_name,
185 dist.version,
186 dist.location,
187 )
188 else:
189 return '%s (%s)' % (dist.project_name, dist.version)
190
191 def output_package_listing(self, installed_packages):
192 installed_packages = sorted(
193 installed_packages,
194 key=lambda dist: dist.project_name.lower(),
195 )
196 for dist in installed_packages:
197 logger.info(self.output_package(dist))
198
199 def run_uptodate(self, options):
200 uptodate = []
201 for dist, version, typ in self.find_packages_latest_versions(options):
202 if dist.parsed_version == version:
203 uptodate.append(dist)
204 self.output_package_listing(uptodate)
205
[end of pip/commands/list.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pip/commands/list.py b/pip/commands/list.py
--- a/pip/commands/list.py
+++ b/pip/commands/list.py
@@ -159,6 +159,11 @@
for dist in installed_packages:
typ = 'unknown'
all_candidates = finder.find_all_candidates(dist.key)
+ if not options.pre:
+ # Remove prereleases
+ all_candidates = [candidate for candidate in all_candidates
+ if not candidate.version.is_prerelease]
+
if not all_candidates:
continue
best_candidate = max(all_candidates,
| {"golden_diff": "diff --git a/pip/commands/list.py b/pip/commands/list.py\n--- a/pip/commands/list.py\n+++ b/pip/commands/list.py\n@@ -159,6 +159,11 @@\n for dist in installed_packages:\n typ = 'unknown'\n all_candidates = finder.find_all_candidates(dist.key)\n+ if not options.pre:\n+ # Remove prereleases\n+ all_candidates = [candidate for candidate in all_candidates\n+ if not candidate.version.is_prerelease]\n+\n if not all_candidates:\n continue\n best_candidate = max(all_candidates,\n", "issue": "pip 8.0 returns pre-release files for pip list -o without --pre\nWhen running `pip list -o` pip 8.0 is listing pre-release files even though previously it did not. I suspect that this might be because of some of the refactors done by @xavfernandez.\n\n", "before_files": [{"content": "from __future__ import absolute_import\n\nimport logging\nimport warnings\n\nfrom pip.basecommand import Command\nfrom pip.exceptions import CommandError\nfrom pip.index import PackageFinder\nfrom pip.utils import (\n get_installed_distributions, dist_is_editable)\nfrom pip.utils.deprecation import RemovedInPip10Warning\nfrom pip.cmdoptions import make_option_group, index_group\n\n\nlogger = logging.getLogger(__name__)\n\n\nclass ListCommand(Command):\n \"\"\"\n List installed packages, including editables.\n\n Packages are listed in a case-insensitive sorted order.\n \"\"\"\n name = 'list'\n usage = \"\"\"\n %prog [options]\"\"\"\n summary = 'List installed packages.'\n\n def __init__(self, *args, **kw):\n super(ListCommand, self).__init__(*args, **kw)\n\n cmd_opts = self.cmd_opts\n\n cmd_opts.add_option(\n '-o', '--outdated',\n action='store_true',\n default=False,\n help='List outdated packages')\n cmd_opts.add_option(\n '-u', '--uptodate',\n action='store_true',\n default=False,\n help='List uptodate packages')\n cmd_opts.add_option(\n '-e', '--editable',\n action='store_true',\n default=False,\n help='List editable projects.')\n cmd_opts.add_option(\n '-l', '--local',\n action='store_true',\n default=False,\n help=('If in a virtualenv that has global access, do not list '\n 'globally-installed packages.'),\n )\n self.cmd_opts.add_option(\n '--user',\n dest='user',\n action='store_true',\n default=False,\n help='Only output packages installed in user-site.')\n\n cmd_opts.add_option(\n '--pre',\n action='store_true',\n default=False,\n help=(\"Include pre-release and development versions. By default, \"\n \"pip only finds stable versions.\"),\n )\n\n index_opts = make_option_group(index_group, self.parser)\n\n self.parser.insert_option_group(0, index_opts)\n self.parser.insert_option_group(0, cmd_opts)\n\n def _build_package_finder(self, options, index_urls, session):\n \"\"\"\n Create a package finder appropriate to this list command.\n \"\"\"\n return PackageFinder(\n find_links=options.find_links,\n index_urls=index_urls,\n allow_all_prereleases=options.pre,\n trusted_hosts=options.trusted_hosts,\n process_dependency_links=options.process_dependency_links,\n session=session,\n )\n\n def run(self, options, args):\n if options.allow_external:\n warnings.warn(\n \"--allow-external has been deprecated and will be removed in \"\n \"the future. Due to changes in the repository protocol, it no \"\n \"longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.allow_all_external:\n warnings.warn(\n \"--allow-all-external has been deprecated and will be removed \"\n \"in the future. 
Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n\n if options.allow_unverified:\n warnings.warn(\n \"--allow-unverified has been deprecated and will be removed \"\n \"in the future. Due to changes in the repository protocol, it \"\n \"no longer has any effect.\",\n RemovedInPip10Warning,\n )\n if options.outdated and options.uptodate:\n raise CommandError(\n \"Options --outdated and --uptodate cannot be combined.\")\n\n if options.outdated:\n self.run_outdated(options)\n elif options.uptodate:\n self.run_uptodate(options)\n else:\n self.run_listing(options)\n\n def run_outdated(self, options):\n for dist, latest_version, typ in sorted(\n self.find_packages_latest_versions(options),\n key=lambda p: p[0].project_name.lower()):\n if latest_version > dist.parsed_version:\n logger.info(\n '%s - Latest: %s [%s]',\n self.output_package(dist), latest_version, typ,\n )\n\n def find_packages_latest_versions(self, options):\n index_urls = [options.index_url] + options.extra_index_urls\n if options.no_index:\n logger.info('Ignoring indexes: %s', ','.join(index_urls))\n index_urls = []\n\n dependency_links = []\n for dist in get_installed_distributions(\n local_only=options.local,\n user_only=options.user,\n editables_only=options.editable):\n if dist.has_metadata('dependency_links.txt'):\n dependency_links.extend(\n dist.get_metadata_lines('dependency_links.txt'),\n )\n\n with self._build_session(options) as session:\n finder = self._build_package_finder(options, index_urls, session)\n finder.add_dependency_links(dependency_links)\n\n installed_packages = get_installed_distributions(\n local_only=options.local,\n user_only=options.user,\n editables_only=options.editable,\n )\n for dist in installed_packages:\n typ = 'unknown'\n all_candidates = finder.find_all_candidates(dist.key)\n if not all_candidates:\n continue\n best_candidate = max(all_candidates,\n key=finder._candidate_sort_key)\n remote_version = best_candidate.version\n if best_candidate.location.is_wheel:\n typ = 'wheel'\n else:\n typ = 'sdist'\n yield dist, remote_version, typ\n\n def run_listing(self, options):\n installed_packages = get_installed_distributions(\n local_only=options.local,\n user_only=options.user,\n editables_only=options.editable,\n )\n self.output_package_listing(installed_packages)\n\n def output_package(self, dist):\n if dist_is_editable(dist):\n return '%s (%s, %s)' % (\n dist.project_name,\n dist.version,\n dist.location,\n )\n else:\n return '%s (%s)' % (dist.project_name, dist.version)\n\n def output_package_listing(self, installed_packages):\n installed_packages = sorted(\n installed_packages,\n key=lambda dist: dist.project_name.lower(),\n )\n for dist in installed_packages:\n logger.info(self.output_package(dist))\n\n def run_uptodate(self, options):\n uptodate = []\n for dist, version, typ in self.find_packages_latest_versions(options):\n if dist.parsed_version == version:\n uptodate.append(dist)\n self.output_package_listing(uptodate)\n", "path": "pip/commands/list.py"}]} | 2,461 | 131 |
gh_patches_debug_9412 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-3099 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow_remote=false does not prevent remote access
##### Steps to reproduce the problem:
1. Run mitmproxy on a publicly routable host, with default configuration including `listen_host=""`, `listen_port=8080`, and `allow_remote=false`.
2. From a host on a different network, send a request through that instance of mitmproxy, e.g. with `curl --proxy http://your-host.example:8080`
The default `allow_remote=false` should prevent this request from succeeding. However, it is served by mitmproxy just fine.
##### Any other comments? What have you tried so far?
I have a laptop sitting in the “DMZ” of a home router, which is globally IPv4 routable. I also have a VPS which is globally IPv4 routable. Both the laptop and the VPS are running Ubuntu 16.04 “Xenial Xerus”. I can reproduce the problem with mitmproxy running on the VPS and curl on the laptop, as well as vice-versa.
Both tcpdump and mitmproxy’s own Details pane show the request as originating from a remote network.
I only noticed this because I saw strange flows in a mitmproxy instance that I spun up on the laptop.
##### System information
Mitmproxy: 3.0.3 binary
Python: 3.5.2
OpenSSL: OpenSSL 1.1.0g 2 Nov 2017
Platform: Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid
</issue>
<code>
[start of mitmproxy/addons/allowremote.py]
1 import ipaddress
2 from mitmproxy import ctx
3
4
5 class AllowRemote:
6 def load(self, loader):
7 loader.add_option(
8 "allow_remote", bool, False,
9 """
10 Allow remote clients to connect to proxy. If set to false,
11 client will not be able to connect to proxy unless it is on the same network
12 or the proxyauth option is set
13 """
14 )
15
16 def clientconnect(self, layer):
17 address = layer.client_conn.address
18
19 accept_connection = (
20 ctx.options.allow_remote or
21 ipaddress.ip_address(address[0]).is_private or
22 ctx.options.proxyauth is not None
23 )
24
25 if not accept_connection:
26 layer.reply.kill()
27 ctx.log.warn("Client connection was killed because allow_remote option is set to false, "
28 "client IP was not a private IP and proxyauth was not set.\n"
29 "To allow remote connections set allow_remote option to true or set proxyauth option.")
30
[end of mitmproxy/addons/allowremote.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/addons/allowremote.py b/mitmproxy/addons/allowremote.py
--- a/mitmproxy/addons/allowremote.py
+++ b/mitmproxy/addons/allowremote.py
@@ -14,11 +14,13 @@
)
def clientconnect(self, layer):
- address = layer.client_conn.address
+ address = ipaddress.ip_address(layer.client_conn.address[0])
+ if isinstance(address, ipaddress.IPv6Address):
+ address = address.ipv4_mapped or address
accept_connection = (
ctx.options.allow_remote or
- ipaddress.ip_address(address[0]).is_private or
+ ipaddress.ip_address(address).is_private or
ctx.options.proxyauth is not None
)
| {"golden_diff": "diff --git a/mitmproxy/addons/allowremote.py b/mitmproxy/addons/allowremote.py\n--- a/mitmproxy/addons/allowremote.py\n+++ b/mitmproxy/addons/allowremote.py\n@@ -14,11 +14,13 @@\n )\n \n def clientconnect(self, layer):\n- address = layer.client_conn.address\n+ address = ipaddress.ip_address(layer.client_conn.address[0])\n+ if isinstance(address, ipaddress.IPv6Address):\n+ address = address.ipv4_mapped or address\n \n accept_connection = (\n ctx.options.allow_remote or\n- ipaddress.ip_address(address[0]).is_private or\n+ ipaddress.ip_address(address).is_private or\n ctx.options.proxyauth is not None\n )\n", "issue": "allow_remote=false does not prevent remote access\n##### Steps to reproduce the problem:\r\n\r\n1. Run mitmproxy on a publicly routable host, with default configuration including `listen_host=\"\"`, `listen_port=8080`, and `allow_remote=false`.\r\n2. From a host on a different network, send a request through that instance of mitmproxy, e.g. with `curl --proxy http://your-host.example:8080`\r\n\r\nThe default `allow_remote=false` should prevent this request from succeeding. However, it is served by mitmproxy just fine.\r\n\r\n##### Any other comments? What have you tried so far?\r\n\r\nI have a laptop sitting in the \u201cDMZ\u201d of a home router, which is globally IPv4 routable. I also have a VPS which is globally IPv4 routable. Both the laptop and the VPS are running Ubuntu 16.04 \u201cXenial Xerus\u201d. I can reproduce the problem with mitmproxy running on the VPS and curl on the laptop, as well as vice-versa.\r\n\r\nBoth tcpdump and mitmproxy\u2019s own Details pane show the request as originating from a remote network.\r\n\r\nI only noticed this because I saw strange flows in a mitmproxy instance that I spun up on the laptop.\r\n\r\n##### System information\r\n\r\nMitmproxy: 3.0.3 binary\r\nPython: 3.5.2\r\nOpenSSL: OpenSSL 1.1.0g 2 Nov 2017\r\nPlatform: Linux-4.4.0-116-generic-x86_64-with-debian-stretch-sid\r\n\n", "before_files": [{"content": "import ipaddress\nfrom mitmproxy import ctx\n\n\nclass AllowRemote:\n def load(self, loader):\n loader.add_option(\n \"allow_remote\", bool, False,\n \"\"\"\n Allow remote clients to connect to proxy. If set to false,\n client will not be able to connect to proxy unless it is on the same network\n or the proxyauth option is set\n \"\"\"\n )\n\n def clientconnect(self, layer):\n address = layer.client_conn.address\n\n accept_connection = (\n ctx.options.allow_remote or\n ipaddress.ip_address(address[0]).is_private or\n ctx.options.proxyauth is not None\n )\n\n if not accept_connection:\n layer.reply.kill()\n ctx.log.warn(\"Client connection was killed because allow_remote option is set to false, \"\n \"client IP was not a private IP and proxyauth was not set.\\n\"\n \"To allow remote connections set allow_remote option to true or set proxyauth option.\")\n", "path": "mitmproxy/addons/allowremote.py"}]} | 1,144 | 170 |
gh_patches_debug_10238 | rasdani/github-patches | git_diff | quantumlib__Cirq-4003 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
optimized_for_sycamore fails when you measure more than 16 qubits
```python
def thing():
q = cirq.LineQubit.range(17)
c = cirq.Circuit(cirq.H.on_each(*q), cirq.measure(*q, key='z'))
c = cg.optimized_for_sycamore(c)
return c
thing()
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-44-accaace1183a> in <module>
4 c = cg.optimized_for_sycamore(c)
5 return c
----> 6 thing()
<ipython-input-44-accaace1183a> in thing()
2 q = cirq.LineQubit.range(17)
3 c = cirq.Circuit(cirq.H.on_each(*q), cirq.measure(*q, key='z'))
----> 4 c = cg.optimized_for_sycamore(c)
5 return c
6 thing()
~/ecc/cirq/cirq/google/optimizers/optimize_for_sycamore.py in optimized_for_sycamore(circuit, new_device, qubit_map, optimizer_type, tolerance, tabulation_resolution)
159 opts = _OPTIMIZER_TYPES[optimizer_type](tolerance=tolerance, tabulation=tabulation)
160 for optimizer in opts:
--> 161 optimizer(copy)
162
163 return circuits.Circuit(
~/ecc/cirq/cirq/optimizers/drop_negligible.py in optimize_circuit(self, circuit)
37 for moment_index, moment in enumerate(circuit):
38 for op in moment.operations:
---> 39 if op is not None and protocols.trace_distance_bound(op) <= self.tolerance:
40 deletions.append((moment_index, op))
41 circuit.batch_remove(deletions)
~/ecc/cirq/cirq/protocols/trace_distance_bound.py in trace_distance_bound(val)
71
72 for strat in strats:
---> 73 result = strat(val)
74 if result is None:
75 break
~/ecc/cirq/cirq/protocols/trace_distance_bound.py in _strat_distance_from_unitary(val)
96 def _strat_distance_from_unitary(val: Any) -> Optional[float]:
97 """Attempts to compute a value's trace_distance_bound from its unitary."""
---> 98 u = unitary_protocol.unitary(val, default=None)
99
100 if u is None:
~/ecc/cirq/cirq/protocols/unitary_protocol.py in unitary(val, default)
127 ]
128 for strat in strats:
--> 129 result = strat(val)
130 if result is None:
131 break
~/ecc/cirq/cirq/protocols/unitary_protocol.py in _strat_unitary_from_apply_unitary(val)
173
174 # Apply unitary effect to an identity matrix.
--> 175 state = qis.eye_tensor(val_qid_shape, dtype=np.complex128)
176 buffer = np.empty_like(state)
177 result = method(ApplyUnitaryArgs(state, buffer, range(len(val_qid_shape))))
~/ecc/cirq/cirq/qis/states.py in eye_tensor(half_shape, dtype)
669 """
670 identity = np.eye(np.prod(half_shape, dtype=int), dtype=dtype)
--> 671 identity.shape = half_shape * 2
672 return identity
ValueError: maximum supported dimension for an ndarray is 32, found 34
```
</issue>
<code>
[start of cirq/optimizers/drop_negligible.py]
1 # Copyright 2018 The Cirq Developers
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """An optimization pass that removes operations with tiny effects."""
16
17 from typing import List, Tuple, TYPE_CHECKING
18
19 from cirq import protocols
20 from cirq.circuits import circuit as _circuit
21
22 if TYPE_CHECKING:
23 from cirq import ops
24
25
26 class DropNegligible:
27 """An optimization pass that removes operations with tiny effects."""
28
29 def __init__(self, tolerance: float = 1e-8) -> None:
30 self.tolerance = tolerance
31
32 def __call__(self, circuit: _circuit.Circuit):
33 self.optimize_circuit(circuit)
34
35 def optimize_circuit(self, circuit: _circuit.Circuit) -> None:
36 deletions: List[Tuple[int, ops.Operation]] = []
37 for moment_index, moment in enumerate(circuit):
38 for op in moment.operations:
39 if op is not None and protocols.trace_distance_bound(op) <= self.tolerance:
40 deletions.append((moment_index, op))
41 circuit.batch_remove(deletions)
42
[end of cirq/optimizers/drop_negligible.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cirq/optimizers/drop_negligible.py b/cirq/optimizers/drop_negligible.py
--- a/cirq/optimizers/drop_negligible.py
+++ b/cirq/optimizers/drop_negligible.py
@@ -36,6 +36,8 @@
deletions: List[Tuple[int, ops.Operation]] = []
for moment_index, moment in enumerate(circuit):
for op in moment.operations:
- if op is not None and protocols.trace_distance_bound(op) <= self.tolerance:
+ if protocols.is_measurement(op):
+ continue
+ if protocols.trace_distance_bound(op) <= self.tolerance:
deletions.append((moment_index, op))
circuit.batch_remove(deletions)
| {"golden_diff": "diff --git a/cirq/optimizers/drop_negligible.py b/cirq/optimizers/drop_negligible.py\n--- a/cirq/optimizers/drop_negligible.py\n+++ b/cirq/optimizers/drop_negligible.py\n@@ -36,6 +36,8 @@\n deletions: List[Tuple[int, ops.Operation]] = []\n for moment_index, moment in enumerate(circuit):\n for op in moment.operations:\n- if op is not None and protocols.trace_distance_bound(op) <= self.tolerance:\n+ if protocols.is_measurement(op):\n+ continue\n+ if protocols.trace_distance_bound(op) <= self.tolerance:\n deletions.append((moment_index, op))\n circuit.batch_remove(deletions)\n", "issue": "optimized_for_sycamore fails when you measure more than 16 qubits\n```python\r\ndef thing():\r\n q = cirq.LineQubit.range(17)\r\n c = cirq.Circuit(cirq.H.on_each(*q), cirq.measure(*q, key='z'))\r\n c = cg.optimized_for_sycamore(c)\r\n return c\r\nthing() \r\n```\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\n<ipython-input-44-accaace1183a> in <module>\r\n 4 c = cg.optimized_for_sycamore(c)\r\n 5 return c\r\n----> 6 thing()\r\n\r\n<ipython-input-44-accaace1183a> in thing()\r\n 2 q = cirq.LineQubit.range(17)\r\n 3 c = cirq.Circuit(cirq.H.on_each(*q), cirq.measure(*q, key='z'))\r\n----> 4 c = cg.optimized_for_sycamore(c)\r\n 5 return c\r\n 6 thing()\r\n\r\n~/ecc/cirq/cirq/google/optimizers/optimize_for_sycamore.py in optimized_for_sycamore(circuit, new_device, qubit_map, optimizer_type, tolerance, tabulation_resolution)\r\n 159 opts = _OPTIMIZER_TYPES[optimizer_type](tolerance=tolerance, tabulation=tabulation)\r\n 160 for optimizer in opts:\r\n--> 161 optimizer(copy)\r\n 162 \r\n 163 return circuits.Circuit(\r\n\r\n~/ecc/cirq/cirq/optimizers/drop_negligible.py in optimize_circuit(self, circuit)\r\n 37 for moment_index, moment in enumerate(circuit):\r\n 38 for op in moment.operations:\r\n---> 39 if op is not None and protocols.trace_distance_bound(op) <= self.tolerance:\r\n 40 deletions.append((moment_index, op))\r\n 41 circuit.batch_remove(deletions)\r\n\r\n~/ecc/cirq/cirq/protocols/trace_distance_bound.py in trace_distance_bound(val)\r\n 71 \r\n 72 for strat in strats:\r\n---> 73 result = strat(val)\r\n 74 if result is None:\r\n 75 break\r\n\r\n~/ecc/cirq/cirq/protocols/trace_distance_bound.py in _strat_distance_from_unitary(val)\r\n 96 def _strat_distance_from_unitary(val: Any) -> Optional[float]:\r\n 97 \"\"\"Attempts to compute a value's trace_distance_bound from its unitary.\"\"\"\r\n---> 98 u = unitary_protocol.unitary(val, default=None)\r\n 99 \r\n 100 if u is None:\r\n\r\n~/ecc/cirq/cirq/protocols/unitary_protocol.py in unitary(val, default)\r\n 127 ]\r\n 128 for strat in strats:\r\n--> 129 result = strat(val)\r\n 130 if result is None:\r\n 131 break\r\n\r\n~/ecc/cirq/cirq/protocols/unitary_protocol.py in _strat_unitary_from_apply_unitary(val)\r\n 173 \r\n 174 # Apply unitary effect to an identity matrix.\r\n--> 175 state = qis.eye_tensor(val_qid_shape, dtype=np.complex128)\r\n 176 buffer = np.empty_like(state)\r\n 177 result = method(ApplyUnitaryArgs(state, buffer, range(len(val_qid_shape))))\r\n\r\n~/ecc/cirq/cirq/qis/states.py in eye_tensor(half_shape, dtype)\r\n 669 \"\"\"\r\n 670 identity = np.eye(np.prod(half_shape, dtype=int), dtype=dtype)\r\n--> 671 identity.shape = half_shape * 2\r\n 672 return identity\r\n\r\nValueError: maximum supported dimension for an ndarray is 32, found 34\r\n```\n", "before_files": [{"content": "# Copyright 2018 The Cirq Developers\n#\n# Licensed under the 
Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"An optimization pass that removes operations with tiny effects.\"\"\"\n\nfrom typing import List, Tuple, TYPE_CHECKING\n\nfrom cirq import protocols\nfrom cirq.circuits import circuit as _circuit\n\nif TYPE_CHECKING:\n from cirq import ops\n\n\nclass DropNegligible:\n \"\"\"An optimization pass that removes operations with tiny effects.\"\"\"\n\n def __init__(self, tolerance: float = 1e-8) -> None:\n self.tolerance = tolerance\n\n def __call__(self, circuit: _circuit.Circuit):\n self.optimize_circuit(circuit)\n\n def optimize_circuit(self, circuit: _circuit.Circuit) -> None:\n deletions: List[Tuple[int, ops.Operation]] = []\n for moment_index, moment in enumerate(circuit):\n for op in moment.operations:\n if op is not None and protocols.trace_distance_bound(op) <= self.tolerance:\n deletions.append((moment_index, op))\n circuit.batch_remove(deletions)\n", "path": "cirq/optimizers/drop_negligible.py"}]} | 1,855 | 163 |
gh_patches_debug_32634 | rasdani/github-patches | git_diff | huggingface__trl-555 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
StackLLAMA examples uses sentiment model as reward model
The script to train the StackLLAMA model uses a sentiment pipeline for training the reward model while it should use the model that was trained before.
https://github.com/lvwerra/trl/blob/main/examples/stack_llama/scripts/rl_training.py#L216
</issue>
<code>
[start of examples/research_projects/stack_llama/scripts/rl_training.py]
1 # coding=utf-8
2 # Copyright 2022 The HuggingFace Inc. team. All rights reserved.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 from dataclasses import dataclass, field
16 from typing import Optional
17
18 import torch
19 from accelerate import Accelerator
20 from datasets import load_dataset
21 from peft import LoraConfig
22 from tqdm import tqdm
23 from transformers import Adafactor, AutoTokenizer, HfArgumentParser, pipeline
24
25 from trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer, set_seed
26 from trl.core import LengthSampler
27
28
29 tqdm.pandas()
30
31
32 @dataclass
33 class ScriptArguments:
34 """
35 The name of the Casual LM model we wish to fine with PPO
36 """
37
38 # NOTE: gpt2 models use Conv1D instead of Linear layers which are not yet supported in 8 bit mode
39 # models like gpt-neo* models are more suitable.
40 model_name: Optional[str] = field(default="", metadata={"help": "the model name"})
41 tokenizer_name: Optional[str] = field(default="", metadata={"help": "the tokenizer name"})
42 reward_model_name: Optional[str] = field(default="", metadata={"help": "the reward model name"})
43 log_with: Optional[str] = field(default=None, metadata={"help": "use 'wandb' to log with wandb"})
44 learning_rate: Optional[float] = field(default=1.41e-5, metadata={"help": "the learning rate"})
45 output_max_length: Optional[int] = field(default=128, metadata={"help": "maximum length for generation"})
46 mini_batch_size: Optional[int] = field(default=1, metadata={"help": "the PPO minibatch size"})
47 batch_size: Optional[int] = field(default=32, metadata={"help": "the batch size"})
48 ppo_epochs: Optional[int] = field(default=4, metadata={"help": "the number of ppo epochs"})
49 gradient_accumulation_steps: Optional[int] = field(
50 default=4, metadata={"help": "the number of gradient accumulation steps"}
51 )
52 adafactor: Optional[bool] = field(default=False, metadata={"help": "whether to use the adafactor optimizer"})
53 early_stopping: Optional[bool] = field(default=False, metadata={"help": "whether to early stop"})
54 target_kl: Optional[float] = field(default=0.1, metadata={"help": "kl target for early stopping"})
55 reward_baseline: Optional[float] = field(
56 default=0.0,
57 metadata={"help": "a baseline value that is subtracted from the reward"},
58 )
59 batched_gen: Optional[bool] = field(default=False, metadata={"help": "whether to use the batched text gen"})
60 save_freq: Optional[int] = field(default=None, metadata={"help": "n steps to save the model"})
61 output_dir: Optional[str] = field(default="runs/", metadata={"help": "n steps to save the model"})
62 seed: Optional[int] = field(default=0, metadata={"help": "the seed"})
63 steps: Optional[int] = field(default=20000, metadata={"help": "number of epochs"})
64 init_kl_coef: Optional[float] = field(
65 default=0.2,
66 metadata={"help": "Initial KL penalty coefficient (used for adaptive and linear control)"},
67 )
68
69 adap_kl_ctrl: Optional[bool] = field(default=True, metadata={"help": "Use adaptive KL control, otherwise linear"})
70
71
72 parser = HfArgumentParser(ScriptArguments)
73 script_args: ScriptArguments = parser.parse_args_into_dataclasses()[0]
74 reward_model_name = script_args.reward_model_name
75 dataset_name = "lvwerra/stack-exchange-paired"
76 config = PPOConfig(
77 steps=script_args.steps,
78 model_name=script_args.model_name,
79 learning_rate=script_args.learning_rate,
80 log_with=script_args.log_with,
81 batch_size=script_args.batch_size,
82 mini_batch_size=script_args.mini_batch_size,
83 gradient_accumulation_steps=script_args.gradient_accumulation_steps,
84 optimize_cuda_cache=True,
85 early_stopping=script_args.early_stopping,
86 target_kl=script_args.target_kl,
87 ppo_epochs=script_args.ppo_epochs,
88 seed=script_args.seed,
89 init_kl_coef=script_args.init_kl_coef,
90 adap_kl_ctrl=script_args.adap_kl_ctrl,
91 )
92
93 train_dataset = load_dataset("lvwerra/stack-exchange-paired", data_dir="data/rl", split="train")
94 train_dataset = train_dataset.select(range(100000))
95 # We then define the arguments to pass to the sentiment analysis pipeline.
96 # We set `return_all_scores` to True to get the sentiment score for each token.
97 sent_kwargs = {
98 "return_all_scores": True,
99 "function_to_apply": "none",
100 "batch_size": 16,
101 "truncation": True,
102 }
103
104 tokenizer = AutoTokenizer.from_pretrained(script_args.tokenizer_name)
105 # GPT-2 tokenizer has a pad token, but it is not eos_token by default. We need to set it to eos_token.
106 # only for this model.
107
108 if getattr(tokenizer, "pad_token", None) is None:
109 tokenizer.pad_token = tokenizer.eos_token
110
111
112 # Below is an example function to build the dataset. In our case, we use the IMDB dataset
113 # from the `datasets` library. One should customize this function to train the model on
114 # its own dataset.
115 def build_dataset(
116 tokenizer,
117 dataset_name="lvwerra/stack-exchange-paired",
118 ):
119 """
120 Build dataset for training. This builds the dataset from `load_dataset`, one should
121 customize this function to train the model on its own dataset.
122
123 Args:
124 dataset_name (`str`):
125 The name of the dataset to be loaded.
126
127 Returns:
128 dataloader (`torch.utils.data.DataLoader`):
129 The dataloader for the dataset.
130 """
131
132 # load imdb with datasets
133 ds = load_dataset(dataset_name, data_dir="data/rl", split="train")
134 original_columns = ds.column_names
135 num_proc = 24
136
137 def preprocess_function(examples):
138 new_examples = {
139 "query": [],
140 "input_ids": [],
141 }
142 for question in examples["question"]:
143 query = "Question: " + question + "\n\nAnswer: "
144 tokenized_question = tokenizer(query, truncation=True)
145 new_examples["query"].append(query)
146 new_examples["input_ids"].append(tokenized_question["input_ids"])
147
148 return new_examples
149
150 ds = train_dataset.map(
151 preprocess_function,
152 batched=True,
153 num_proc=num_proc,
154 remove_columns=original_columns,
155 )
156 ds = ds.filter(lambda x: len(x["input_ids"]) < 512, batched=False)
157
158 ds.set_format(type="torch")
159 return ds
160
161
162 # We retrieve the dataloader by calling the `build_dataset` function.
163 dataset = build_dataset(tokenizer)
164
165
166 def collator(data):
167 return dict((key, [d[key] for d in data]) for key in data[0])
168
169
170 # set seed before initializing value head for deterministic eval
171 set_seed(config.seed)
172
173 # Now let's build the model, the reference model, and the tokenizer.
174 current_device = Accelerator().local_process_index
175
176 lora_config = LoraConfig(
177 r=16,
178 lora_alpha=32,
179 lora_dropout=0.05,
180 bias="none",
181 task_type="CAUSAL_LM",
182 )
183 model = AutoModelForCausalLMWithValueHead.from_pretrained(
184 config.model_name,
185 load_in_8bit=True,
186 device_map={"": current_device},
187 peft_config=lora_config,
188 )
189
190 optimizer = None
191 if script_args.adafactor:
192 optimizer = Adafactor(
193 filter(lambda p: p.requires_grad, model.parameters()),
194 scale_parameter=False,
195 relative_step=False,
196 warmup_init=False,
197 lr=config.learning_rate,
198 )
199 # We then build the PPOTrainer, passing the model, the reference model, the tokenizer
200 ppo_trainer = PPOTrainer(
201 config,
202 model,
203 ref_model=None,
204 tokenizer=tokenizer,
205 dataset=dataset,
206 data_collator=collator,
207 optimizer=optimizer,
208 )
209
210 # We then build the sentiment analysis pipeline, passing the model name and the
211 # sentiment analysis pipeline arguments. Let's also make sure to set the device
212 # to the same device as the PPOTrainer.
213 device = ppo_trainer.accelerator.device
214 if ppo_trainer.accelerator.num_processes == 1:
215 device = 0 if torch.cuda.is_available() else "cpu" # to avoid a ` pipeline` bug
216 sentiment_pipe = pipeline(
217 "sentiment-analysis",
218 model=reward_model_name,
219 device_map={"": current_device},
220 model_kwargs={"load_in_8bit": True},
221 tokenizer=tokenizer,
222 return_token_type_ids=False,
223 )
224
225 # We then define the arguments to pass to the `generate` function. These arguments
226 # are passed to the `generate` function of the PPOTrainer, which is a wrapper around
227 # the `generate` function of the trained model.
228 generation_kwargs = {
229 # "min_length": -1,
230 "top_k": 0.0,
231 "top_p": 1.0,
232 "do_sample": True,
233 "pad_token_id": tokenizer.pad_token_id,
234 "eos_token_id": 100_000,
235 }
236 output_min_length = 32
237 output_max_length = script_args.output_max_length
238 output_length_sampler = LengthSampler(output_min_length, output_max_length)
239
240 for epoch, batch in tqdm(enumerate(ppo_trainer.dataloader)):
241 if epoch >= config.total_ppo_epochs:
242 break
243
244 question_tensors = batch["input_ids"]
245
246 response_tensors = ppo_trainer.generate(
247 question_tensors,
248 return_prompt=False,
249 length_sampler=output_length_sampler,
250 **generation_kwargs,
251 )
252 batch["response"] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)
253
254 # Compute sentiment score
255 texts = [q + r for q, r in zip(batch["query"], batch["response"])]
256 pipe_outputs = sentiment_pipe(texts, **sent_kwargs)
257 rewards = [torch.tensor(output[0]["score"] - script_args.reward_baseline) for output in pipe_outputs]
258
259 # Run PPO step
260 stats = ppo_trainer.step(question_tensors, response_tensors, rewards)
261 ppo_trainer.log_stats(stats, batch, rewards)
262
263 if script_args.save_freq and epoch and epoch % script_args.save_freq == 0:
264 ppo_trainer.save_pretrained(script_args.output_dir + f"step_{epoch}")
265
[end of examples/research_projects/stack_llama/scripts/rl_training.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/examples/research_projects/stack_llama/scripts/rl_training.py b/examples/research_projects/stack_llama/scripts/rl_training.py
--- a/examples/research_projects/stack_llama/scripts/rl_training.py
+++ b/examples/research_projects/stack_llama/scripts/rl_training.py
@@ -92,6 +92,7 @@
train_dataset = load_dataset("lvwerra/stack-exchange-paired", data_dir="data/rl", split="train")
train_dataset = train_dataset.select(range(100000))
+
# We then define the arguments to pass to the sentiment analysis pipeline.
# We set `return_all_scores` to True to get the sentiment score for each token.
sent_kwargs = {
@@ -207,9 +208,9 @@
optimizer=optimizer,
)
-# We then build the sentiment analysis pipeline, passing the model name and the
-# sentiment analysis pipeline arguments. Let's also make sure to set the device
-# to the same device as the PPOTrainer.
+# We then build the sentiment analysis pipeline using our reward model, passing the
+# model name and the sentiment analysis pipeline arguments. Let's also make sure to
+# set the device to the same device as the PPOTrainer.
device = ppo_trainer.accelerator.device
if ppo_trainer.accelerator.num_processes == 1:
device = 0 if torch.cuda.is_available() else "cpu" # to avoid a ` pipeline` bug
@@ -251,7 +252,7 @@
)
batch["response"] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)
- # Compute sentiment score
+ # Compute reward score (using the sentiment analysis pipeline)
texts = [q + r for q, r in zip(batch["query"], batch["response"])]
pipe_outputs = sentiment_pipe(texts, **sent_kwargs)
rewards = [torch.tensor(output[0]["score"] - script_args.reward_baseline) for output in pipe_outputs]
| {"golden_diff": "diff --git a/examples/research_projects/stack_llama/scripts/rl_training.py b/examples/research_projects/stack_llama/scripts/rl_training.py\n--- a/examples/research_projects/stack_llama/scripts/rl_training.py\n+++ b/examples/research_projects/stack_llama/scripts/rl_training.py\n@@ -92,6 +92,7 @@\n \n train_dataset = load_dataset(\"lvwerra/stack-exchange-paired\", data_dir=\"data/rl\", split=\"train\")\n train_dataset = train_dataset.select(range(100000))\n+\n # We then define the arguments to pass to the sentiment analysis pipeline.\n # We set `return_all_scores` to True to get the sentiment score for each token.\n sent_kwargs = {\n@@ -207,9 +208,9 @@\n optimizer=optimizer,\n )\n \n-# We then build the sentiment analysis pipeline, passing the model name and the\n-# sentiment analysis pipeline arguments. Let's also make sure to set the device\n-# to the same device as the PPOTrainer.\n+# We then build the sentiment analysis pipeline using our reward model, passing the\n+# model name and the sentiment analysis pipeline arguments. Let's also make sure to\n+# set the device to the same device as the PPOTrainer.\n device = ppo_trainer.accelerator.device\n if ppo_trainer.accelerator.num_processes == 1:\n device = 0 if torch.cuda.is_available() else \"cpu\" # to avoid a ` pipeline` bug\n@@ -251,7 +252,7 @@\n )\n batch[\"response\"] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)\n \n- # Compute sentiment score\n+ # Compute reward score (using the sentiment analysis pipeline)\n texts = [q + r for q, r in zip(batch[\"query\"], batch[\"response\"])]\n pipe_outputs = sentiment_pipe(texts, **sent_kwargs)\n rewards = [torch.tensor(output[0][\"score\"] - script_args.reward_baseline) for output in pipe_outputs]\n", "issue": "StackLLAMA examples uses sentiment model as reward model\nThe script to train the StackLLAMA model uses a sentiment pipeline for training the reward model while it should use the model that was trained before. \r\n\r\nhttps://github.com/lvwerra/trl/blob/main/examples/stack_llama/scripts/rl_training.py#L216\n", "before_files": [{"content": "# coding=utf-8\n# Copyright 2022 The HuggingFace Inc. team. 
All rights reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom dataclasses import dataclass, field\nfrom typing import Optional\n\nimport torch\nfrom accelerate import Accelerator\nfrom datasets import load_dataset\nfrom peft import LoraConfig\nfrom tqdm import tqdm\nfrom transformers import Adafactor, AutoTokenizer, HfArgumentParser, pipeline\n\nfrom trl import AutoModelForCausalLMWithValueHead, PPOConfig, PPOTrainer, set_seed\nfrom trl.core import LengthSampler\n\n\ntqdm.pandas()\n\n\n@dataclass\nclass ScriptArguments:\n \"\"\"\n The name of the Casual LM model we wish to fine with PPO\n \"\"\"\n\n # NOTE: gpt2 models use Conv1D instead of Linear layers which are not yet supported in 8 bit mode\n # models like gpt-neo* models are more suitable.\n model_name: Optional[str] = field(default=\"\", metadata={\"help\": \"the model name\"})\n tokenizer_name: Optional[str] = field(default=\"\", metadata={\"help\": \"the tokenizer name\"})\n reward_model_name: Optional[str] = field(default=\"\", metadata={\"help\": \"the reward model name\"})\n log_with: Optional[str] = field(default=None, metadata={\"help\": \"use 'wandb' to log with wandb\"})\n learning_rate: Optional[float] = field(default=1.41e-5, metadata={\"help\": \"the learning rate\"})\n output_max_length: Optional[int] = field(default=128, metadata={\"help\": \"maximum length for generation\"})\n mini_batch_size: Optional[int] = field(default=1, metadata={\"help\": \"the PPO minibatch size\"})\n batch_size: Optional[int] = field(default=32, metadata={\"help\": \"the batch size\"})\n ppo_epochs: Optional[int] = field(default=4, metadata={\"help\": \"the number of ppo epochs\"})\n gradient_accumulation_steps: Optional[int] = field(\n default=4, metadata={\"help\": \"the number of gradient accumulation steps\"}\n )\n adafactor: Optional[bool] = field(default=False, metadata={\"help\": \"whether to use the adafactor optimizer\"})\n early_stopping: Optional[bool] = field(default=False, metadata={\"help\": \"whether to early stop\"})\n target_kl: Optional[float] = field(default=0.1, metadata={\"help\": \"kl target for early stopping\"})\n reward_baseline: Optional[float] = field(\n default=0.0,\n metadata={\"help\": \"a baseline value that is subtracted from the reward\"},\n )\n batched_gen: Optional[bool] = field(default=False, metadata={\"help\": \"whether to use the batched text gen\"})\n save_freq: Optional[int] = field(default=None, metadata={\"help\": \"n steps to save the model\"})\n output_dir: Optional[str] = field(default=\"runs/\", metadata={\"help\": \"n steps to save the model\"})\n seed: Optional[int] = field(default=0, metadata={\"help\": \"the seed\"})\n steps: Optional[int] = field(default=20000, metadata={\"help\": \"number of epochs\"})\n init_kl_coef: Optional[float] = field(\n default=0.2,\n metadata={\"help\": \"Initial KL penalty coefficient (used for adaptive and linear control)\"},\n )\n\n adap_kl_ctrl: Optional[bool] = field(default=True, metadata={\"help\": \"Use adaptive KL control, otherwise 
linear\"})\n\n\nparser = HfArgumentParser(ScriptArguments)\nscript_args: ScriptArguments = parser.parse_args_into_dataclasses()[0]\nreward_model_name = script_args.reward_model_name\ndataset_name = \"lvwerra/stack-exchange-paired\"\nconfig = PPOConfig(\n steps=script_args.steps,\n model_name=script_args.model_name,\n learning_rate=script_args.learning_rate,\n log_with=script_args.log_with,\n batch_size=script_args.batch_size,\n mini_batch_size=script_args.mini_batch_size,\n gradient_accumulation_steps=script_args.gradient_accumulation_steps,\n optimize_cuda_cache=True,\n early_stopping=script_args.early_stopping,\n target_kl=script_args.target_kl,\n ppo_epochs=script_args.ppo_epochs,\n seed=script_args.seed,\n init_kl_coef=script_args.init_kl_coef,\n adap_kl_ctrl=script_args.adap_kl_ctrl,\n)\n\ntrain_dataset = load_dataset(\"lvwerra/stack-exchange-paired\", data_dir=\"data/rl\", split=\"train\")\ntrain_dataset = train_dataset.select(range(100000))\n# We then define the arguments to pass to the sentiment analysis pipeline.\n# We set `return_all_scores` to True to get the sentiment score for each token.\nsent_kwargs = {\n \"return_all_scores\": True,\n \"function_to_apply\": \"none\",\n \"batch_size\": 16,\n \"truncation\": True,\n}\n\ntokenizer = AutoTokenizer.from_pretrained(script_args.tokenizer_name)\n# GPT-2 tokenizer has a pad token, but it is not eos_token by default. We need to set it to eos_token.\n# only for this model.\n\nif getattr(tokenizer, \"pad_token\", None) is None:\n tokenizer.pad_token = tokenizer.eos_token\n\n\n# Below is an example function to build the dataset. In our case, we use the IMDB dataset\n# from the `datasets` library. One should customize this function to train the model on\n# its own dataset.\ndef build_dataset(\n tokenizer,\n dataset_name=\"lvwerra/stack-exchange-paired\",\n):\n \"\"\"\n Build dataset for training. 
This builds the dataset from `load_dataset`, one should\n customize this function to train the model on its own dataset.\n\n Args:\n dataset_name (`str`):\n The name of the dataset to be loaded.\n\n Returns:\n dataloader (`torch.utils.data.DataLoader`):\n The dataloader for the dataset.\n \"\"\"\n\n # load imdb with datasets\n ds = load_dataset(dataset_name, data_dir=\"data/rl\", split=\"train\")\n original_columns = ds.column_names\n num_proc = 24\n\n def preprocess_function(examples):\n new_examples = {\n \"query\": [],\n \"input_ids\": [],\n }\n for question in examples[\"question\"]:\n query = \"Question: \" + question + \"\\n\\nAnswer: \"\n tokenized_question = tokenizer(query, truncation=True)\n new_examples[\"query\"].append(query)\n new_examples[\"input_ids\"].append(tokenized_question[\"input_ids\"])\n\n return new_examples\n\n ds = train_dataset.map(\n preprocess_function,\n batched=True,\n num_proc=num_proc,\n remove_columns=original_columns,\n )\n ds = ds.filter(lambda x: len(x[\"input_ids\"]) < 512, batched=False)\n\n ds.set_format(type=\"torch\")\n return ds\n\n\n# We retrieve the dataloader by calling the `build_dataset` function.\ndataset = build_dataset(tokenizer)\n\n\ndef collator(data):\n return dict((key, [d[key] for d in data]) for key in data[0])\n\n\n# set seed before initializing value head for deterministic eval\nset_seed(config.seed)\n\n# Now let's build the model, the reference model, and the tokenizer.\ncurrent_device = Accelerator().local_process_index\n\nlora_config = LoraConfig(\n r=16,\n lora_alpha=32,\n lora_dropout=0.05,\n bias=\"none\",\n task_type=\"CAUSAL_LM\",\n)\nmodel = AutoModelForCausalLMWithValueHead.from_pretrained(\n config.model_name,\n load_in_8bit=True,\n device_map={\"\": current_device},\n peft_config=lora_config,\n)\n\noptimizer = None\nif script_args.adafactor:\n optimizer = Adafactor(\n filter(lambda p: p.requires_grad, model.parameters()),\n scale_parameter=False,\n relative_step=False,\n warmup_init=False,\n lr=config.learning_rate,\n )\n# We then build the PPOTrainer, passing the model, the reference model, the tokenizer\nppo_trainer = PPOTrainer(\n config,\n model,\n ref_model=None,\n tokenizer=tokenizer,\n dataset=dataset,\n data_collator=collator,\n optimizer=optimizer,\n)\n\n# We then build the sentiment analysis pipeline, passing the model name and the\n# sentiment analysis pipeline arguments. Let's also make sure to set the device\n# to the same device as the PPOTrainer.\ndevice = ppo_trainer.accelerator.device\nif ppo_trainer.accelerator.num_processes == 1:\n device = 0 if torch.cuda.is_available() else \"cpu\" # to avoid a ` pipeline` bug\nsentiment_pipe = pipeline(\n \"sentiment-analysis\",\n model=reward_model_name,\n device_map={\"\": current_device},\n model_kwargs={\"load_in_8bit\": True},\n tokenizer=tokenizer,\n return_token_type_ids=False,\n)\n\n# We then define the arguments to pass to the `generate` function. 
These arguments\n# are passed to the `generate` function of the PPOTrainer, which is a wrapper around\n# the `generate` function of the trained model.\ngeneration_kwargs = {\n # \"min_length\": -1,\n \"top_k\": 0.0,\n \"top_p\": 1.0,\n \"do_sample\": True,\n \"pad_token_id\": tokenizer.pad_token_id,\n \"eos_token_id\": 100_000,\n}\noutput_min_length = 32\noutput_max_length = script_args.output_max_length\noutput_length_sampler = LengthSampler(output_min_length, output_max_length)\n\nfor epoch, batch in tqdm(enumerate(ppo_trainer.dataloader)):\n if epoch >= config.total_ppo_epochs:\n break\n\n question_tensors = batch[\"input_ids\"]\n\n response_tensors = ppo_trainer.generate(\n question_tensors,\n return_prompt=False,\n length_sampler=output_length_sampler,\n **generation_kwargs,\n )\n batch[\"response\"] = tokenizer.batch_decode(response_tensors, skip_special_tokens=True)\n\n # Compute sentiment score\n texts = [q + r for q, r in zip(batch[\"query\"], batch[\"response\"])]\n pipe_outputs = sentiment_pipe(texts, **sent_kwargs)\n rewards = [torch.tensor(output[0][\"score\"] - script_args.reward_baseline) for output in pipe_outputs]\n\n # Run PPO step\n stats = ppo_trainer.step(question_tensors, response_tensors, rewards)\n ppo_trainer.log_stats(stats, batch, rewards)\n\n if script_args.save_freq and epoch and epoch % script_args.save_freq == 0:\n ppo_trainer.save_pretrained(script_args.output_dir + f\"step_{epoch}\")\n", "path": "examples/research_projects/stack_llama/scripts/rl_training.py"}]} | 3,762 | 434 |