problem_id stringlengths 18-22 | source stringclasses 1 value | task_type stringclasses 1 value | in_source_id stringlengths 13-58 | prompt stringlengths 1.71k-18.9k | golden_diff stringlengths 145-5.13k | verification_info stringlengths 465-23.6k | num_tokens_prompt int64 556-4.1k | num_tokens_diff int64 47-1.02k |
---|---|---|---|---|---|---|---|---|
gh_patches_debug_50456
|
rasdani/github-patches
|
git_diff
|
jupyterhub__jupyterhub-263
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Single user server launch is broken
I think that #261 broke the launching of the single user server. I am seeing the following errors in the nbgrader tests:
```
Traceback (most recent call last):
File "/Users/jhamrick/.virtualenvs/nbgrader/bin/jupyterhub-singleuser", line 6, in <module>
exec(compile(open(__file__).read(), __file__, 'exec'))
File "/Users/jhamrick/project/tools/jupyterhub/scripts/jupyterhub-singleuser", line 4, in <module>
main()
File "/Users/jhamrick/project/tools/jupyterhub/jupyterhub/singleuser.py", line 221, in main
return SingleUserNotebookApp.launch_instance()
File "/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/IPython/config/application.py", line 573, in launch_instance
app.initialize(argv)
File "<string>", line 2, in initialize
File "/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/IPython/config/application.py", line 75, in catch_config_error
return method(app, *args, **kwargs)
File "/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/IPython/html/notebookapp.py", line 1015, in initialize
self.init_webapp()
File "/Users/jhamrick/project/tools/jupyterhub/jupyterhub/singleuser.py", line 191, in init_webapp
s['user'] = self.user
File "/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/traitlets/traitlets.py", line 438, in __get__
% (self.name, obj))
traitlets.traitlets.TraitError: No default value found for None trait of <jupyterhub.singleuser.SingleUserNotebookApp object at 0x102953b00>
```
If I revert to the version of jupyterhub prior to that PR, this error does not occur. @epifanio reported on gitter seeing the same thing as well, so I don't think it's isolated to nbgrader.
Given the error message, I suspect this has to do with ipython/traitlets#39 and/or ipython/traitlets#40 though I haven't actually tested it. I tried giving the `user` trait a default value but it did not seem to fix the error. I will try to do a bit more debugging, but I fear I don't really understand the internals of traitlets well enough to know exactly what's going on here.
Ping @takluyver and @minrk ?
</issue>
<code>
[start of jupyterhub/singleuser.py]
1 #!/usr/bin/env python3
2 """Extend regular notebook server to be aware of multiuser things."""
3
4 # Copyright (c) Jupyter Development Team.
5 # Distributed under the terms of the Modified BSD License.
6
7 import os
8 try:
9 from urllib.parse import quote
10 except ImportError:
11 # PY2 Compat
12 from urllib import quote
13
14 import requests
15 from jinja2 import ChoiceLoader, FunctionLoader
16
17 from tornado import ioloop
18 from tornado.web import HTTPError
19
20 from traitlets import (
21 Integer,
22 Unicode,
23 CUnicode,
24 )
25
26 from IPython.html.notebookapp import NotebookApp, aliases as notebook_aliases
27 from IPython.html.auth.login import LoginHandler
28 from IPython.html.auth.logout import LogoutHandler
29
30 from IPython.html.utils import url_path_join
31
32
33 from distutils.version import LooseVersion as V
34
35 import IPython
36 if V(IPython.__version__) < V('3.0'):
37 raise ImportError("JupyterHub Requires IPython >= 3.0, found %s" % IPython.__version__)
38
39 # Define two methods to attach to AuthenticatedHandler,
40 # which authenticate via the central auth server.
41
42 class JupyterHubLoginHandler(LoginHandler):
43 @staticmethod
44 def login_available(settings):
45 return True
46
47 @staticmethod
48 def verify_token(self, cookie_name, encrypted_cookie):
49 """method for token verification"""
50 cookie_cache = self.settings['cookie_cache']
51 if encrypted_cookie in cookie_cache:
52 # we've seen this token before, don't ask upstream again
53 return cookie_cache[encrypted_cookie]
54
55 hub_api_url = self.settings['hub_api_url']
56 hub_api_key = self.settings['hub_api_key']
57 r = requests.get(url_path_join(
58 hub_api_url, "authorizations/cookie", cookie_name, quote(encrypted_cookie, safe=''),
59 ),
60 headers = {'Authorization' : 'token %s' % hub_api_key},
61 )
62 if r.status_code == 404:
63 data = None
64 elif r.status_code == 403:
65 self.log.error("I don't have permission to verify cookies, my auth token may have expired: [%i] %s", r.status_code, r.reason)
66 raise HTTPError(500, "Permission failure checking authorization, I may need to be restarted")
67 elif r.status_code >= 500:
68 self.log.error("Upstream failure verifying auth token: [%i] %s", r.status_code, r.reason)
69 raise HTTPError(502, "Failed to check authorization (upstream problem)")
70 elif r.status_code >= 400:
71 self.log.warn("Failed to check authorization: [%i] %s", r.status_code, r.reason)
72 raise HTTPError(500, "Failed to check authorization")
73 else:
74 data = r.json()
75 cookie_cache[encrypted_cookie] = data
76 return data
77
78 @staticmethod
79 def get_user(self):
80 """alternative get_current_user to query the central server"""
81 # only allow this to be called once per handler
82 # avoids issues if an error is raised,
83 # since this may be called again when trying to render the error page
84 if hasattr(self, '_cached_user'):
85 return self._cached_user
86
87 self._cached_user = None
88 my_user = self.settings['user']
89 encrypted_cookie = self.get_cookie(self.cookie_name)
90 if encrypted_cookie:
91 auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)
92 if not auth_data:
93 # treat invalid token the same as no token
94 return None
95 user = auth_data['name']
96 if user == my_user:
97 self._cached_user = user
98 return user
99 else:
100 return None
101 else:
102 self.log.debug("No token cookie")
103 return None
104
105
106 class JupyterHubLogoutHandler(LogoutHandler):
107 def get(self):
108 self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))
109
110
111 # register new hub related command-line aliases
112 aliases = dict(notebook_aliases)
113 aliases.update({
114 'user' : 'SingleUserNotebookApp.user',
115 'cookie-name': 'SingleUserNotebookApp.cookie_name',
116 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',
117 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',
118 'base-url': 'SingleUserNotebookApp.base_url',
119 })
120
121 page_template = """
122 {% extends "templates/page.html" %}
123
124 {% block header_buttons %}
125 {{super()}}
126
127 <a href='{{hub_control_panel_url}}'
128 class='btn btn-default btn-sm navbar-btn pull-right'
129 style='margin-right: 4px; margin-left: 2px;'
130 >
131 Control Panel</a>
132 {% endblock %}
133 """
134
135 class SingleUserNotebookApp(NotebookApp):
136 """A Subclass of the regular NotebookApp that is aware of the parent multiuser context."""
137 user = CUnicode(config=True)
138 def _user_changed(self, name, old, new):
139 self.log.name = new
140 cookie_name = Unicode(config=True)
141 hub_prefix = Unicode(config=True)
142 hub_api_url = Unicode(config=True)
143 aliases = aliases
144 open_browser = False
145 trust_xheaders = True
146 login_handler_class = JupyterHubLoginHandler
147 logout_handler_class = JupyterHubLogoutHandler
148
149 cookie_cache_lifetime = Integer(
150 config=True,
151 default_value=300,
152 allow_none=True,
153 help="""
154 Time, in seconds, that we cache a validated cookie before requiring
155 revalidation with the hub.
156 """,
157 )
158
159 def _log_datefmt_default(self):
160 """Exclude date from default date format"""
161 return "%Y-%m-%d %H:%M:%S"
162
163 def _log_format_default(self):
164 """override default log format to include time"""
165 return "%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s"
166
167 def _confirm_exit(self):
168 # disable the exit confirmation for background notebook processes
169 ioloop.IOLoop.instance().stop()
170
171 def _clear_cookie_cache(self):
172 self.log.debug("Clearing cookie cache")
173 self.tornado_settings['cookie_cache'].clear()
174
175 def start(self):
176 # Start a PeriodicCallback to clear cached cookies. This forces us to
177 # revalidate our user with the Hub at least every
178 # `cookie_cache_lifetime` seconds.
179 if self.cookie_cache_lifetime:
180 ioloop.PeriodicCallback(
181 self._clear_cookie_cache,
182 self.cookie_cache_lifetime * 1e3,
183 ).start()
184 super(SingleUserNotebookApp, self).start()
185
186 def init_webapp(self):
187 # load the hub related settings into the tornado settings dict
188 env = os.environ
189 s = self.tornado_settings
190 s['cookie_cache'] = {}
191 s['user'] = self.user
192 s['hub_api_key'] = env.pop('JPY_API_TOKEN')
193 s['hub_prefix'] = self.hub_prefix
194 s['cookie_name'] = self.cookie_name
195 s['login_url'] = self.hub_prefix
196 s['hub_api_url'] = self.hub_api_url
197
198 super(SingleUserNotebookApp, self).init_webapp()
199 self.patch_templates()
200
201 def patch_templates(self):
202 """Patch page templates to add Hub-related buttons"""
203 env = self.web_app.settings['jinja2_env']
204
205 env.globals['hub_control_panel_url'] = \
206 url_path_join(self.hub_prefix, 'home')
207
208 # patch jinja env loading to modify page template
209 def get_page(name):
210 if name == 'page.html':
211 return page_template
212
213 orig_loader = env.loader
214 env.loader = ChoiceLoader([
215 FunctionLoader(get_page),
216 orig_loader,
217 ])
218
219
220 def main():
221 return SingleUserNotebookApp.launch_instance()
222
223
224 if __name__ == "__main__":
225 main()
226
[end of jupyterhub/singleuser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py
--- a/jupyterhub/singleuser.py
+++ b/jupyterhub/singleuser.py
@@ -17,7 +17,7 @@
from tornado import ioloop
from tornado.web import HTTPError
-from traitlets import (
+from IPython.utils.traitlets import (
Integer,
Unicode,
CUnicode,
|
{"golden_diff": "diff --git a/jupyterhub/singleuser.py b/jupyterhub/singleuser.py\n--- a/jupyterhub/singleuser.py\n+++ b/jupyterhub/singleuser.py\n@@ -17,7 +17,7 @@\n from tornado import ioloop\n from tornado.web import HTTPError\n \n-from traitlets import (\n+from IPython.utils.traitlets import (\n Integer,\n Unicode,\n CUnicode,\n", "issue": "Single user server launch is broken\nI think that #261 broke the launching of the single user server. I am seeing the following errors in the nbgrader tests:\n\n```\nTraceback (most recent call last):\n File \"/Users/jhamrick/.virtualenvs/nbgrader/bin/jupyterhub-singleuser\", line 6, in <module>\n exec(compile(open(__file__).read(), __file__, 'exec'))\n File \"/Users/jhamrick/project/tools/jupyterhub/scripts/jupyterhub-singleuser\", line 4, in <module>\n main()\n File \"/Users/jhamrick/project/tools/jupyterhub/jupyterhub/singleuser.py\", line 221, in main\n return SingleUserNotebookApp.launch_instance()\n File \"/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/IPython/config/application.py\", line 573, in launch_instance\n app.initialize(argv)\n File \"<string>\", line 2, in initialize\n File \"/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/IPython/config/application.py\", line 75, in catch_config_error\n return method(app, *args, **kwargs)\n File \"/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/IPython/html/notebookapp.py\", line 1015, in initialize\n self.init_webapp()\n File \"/Users/jhamrick/project/tools/jupyterhub/jupyterhub/singleuser.py\", line 191, in init_webapp\n s['user'] = self.user\n File \"/Users/jhamrick/.virtualenvs/nbgrader/lib/python3.4/site-packages/traitlets/traitlets.py\", line 438, in __get__\n % (self.name, obj))\ntraitlets.traitlets.TraitError: No default value found for None trait of <jupyterhub.singleuser.SingleUserNotebookApp object at 0x102953b00>\n```\n\nIf I revert to the version of jupyterhub prior to that PR, this error does not occur. @epifanio reported on gitter seeing the same thing as well, so I don't think it's isolated to nbgrader.\n\nGiven the error message, I suspect this has to do with ipython/traitlets#39 and/or ipython/traitlets#40 though I haven't actually tested it. I tried giving the `user` trait a default value but it did not seem to fix the error. 
I will try to do a bit more debugging, but I fear I don't really understand the internals of traitlets well enough to know exactly what's going on here.\n\nPing @takluyver and @minrk ?\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Extend regular notebook server to be aware of multiuser things.\"\"\"\n\n# Copyright (c) Jupyter Development Team.\n# Distributed under the terms of the Modified BSD License.\n\nimport os\ntry:\n from urllib.parse import quote\nexcept ImportError:\n # PY2 Compat\n from urllib import quote\n\nimport requests\nfrom jinja2 import ChoiceLoader, FunctionLoader\n\nfrom tornado import ioloop\nfrom tornado.web import HTTPError\n\nfrom traitlets import (\n Integer,\n Unicode,\n CUnicode,\n)\n\nfrom IPython.html.notebookapp import NotebookApp, aliases as notebook_aliases\nfrom IPython.html.auth.login import LoginHandler\nfrom IPython.html.auth.logout import LogoutHandler\n\nfrom IPython.html.utils import url_path_join\n\n\nfrom distutils.version import LooseVersion as V\n\nimport IPython\nif V(IPython.__version__) < V('3.0'):\n raise ImportError(\"JupyterHub Requires IPython >= 3.0, found %s\" % IPython.__version__)\n\n# Define two methods to attach to AuthenticatedHandler,\n# which authenticate via the central auth server.\n\nclass JupyterHubLoginHandler(LoginHandler):\n @staticmethod\n def login_available(settings):\n return True\n \n @staticmethod\n def verify_token(self, cookie_name, encrypted_cookie):\n \"\"\"method for token verification\"\"\"\n cookie_cache = self.settings['cookie_cache']\n if encrypted_cookie in cookie_cache:\n # we've seen this token before, don't ask upstream again\n return cookie_cache[encrypted_cookie]\n \n hub_api_url = self.settings['hub_api_url']\n hub_api_key = self.settings['hub_api_key']\n r = requests.get(url_path_join(\n hub_api_url, \"authorizations/cookie\", cookie_name, quote(encrypted_cookie, safe=''),\n ),\n headers = {'Authorization' : 'token %s' % hub_api_key},\n )\n if r.status_code == 404:\n data = None\n elif r.status_code == 403:\n self.log.error(\"I don't have permission to verify cookies, my auth token may have expired: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(500, \"Permission failure checking authorization, I may need to be restarted\")\n elif r.status_code >= 500:\n self.log.error(\"Upstream failure verifying auth token: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(502, \"Failed to check authorization (upstream problem)\")\n elif r.status_code >= 400:\n self.log.warn(\"Failed to check authorization: [%i] %s\", r.status_code, r.reason)\n raise HTTPError(500, \"Failed to check authorization\")\n else:\n data = r.json()\n cookie_cache[encrypted_cookie] = data\n return data\n \n @staticmethod\n def get_user(self):\n \"\"\"alternative get_current_user to query the central server\"\"\"\n # only allow this to be called once per handler\n # avoids issues if an error is raised,\n # since this may be called again when trying to render the error page\n if hasattr(self, '_cached_user'):\n return self._cached_user\n \n self._cached_user = None\n my_user = self.settings['user']\n encrypted_cookie = self.get_cookie(self.cookie_name)\n if encrypted_cookie:\n auth_data = JupyterHubLoginHandler.verify_token(self, self.cookie_name, encrypted_cookie)\n if not auth_data:\n # treat invalid token the same as no token\n return None\n user = auth_data['name']\n if user == my_user:\n self._cached_user = user\n return user\n else:\n return None\n else:\n self.log.debug(\"No token cookie\")\n return 
None\n\n\nclass JupyterHubLogoutHandler(LogoutHandler):\n def get(self):\n self.redirect(url_path_join(self.settings['hub_prefix'], 'logout'))\n\n\n# register new hub related command-line aliases\naliases = dict(notebook_aliases)\naliases.update({\n 'user' : 'SingleUserNotebookApp.user',\n 'cookie-name': 'SingleUserNotebookApp.cookie_name',\n 'hub-prefix': 'SingleUserNotebookApp.hub_prefix',\n 'hub-api-url': 'SingleUserNotebookApp.hub_api_url',\n 'base-url': 'SingleUserNotebookApp.base_url',\n})\n\npage_template = \"\"\"\n{% extends \"templates/page.html\" %}\n\n{% block header_buttons %}\n{{super()}}\n\n<a href='{{hub_control_panel_url}}'\n class='btn btn-default btn-sm navbar-btn pull-right'\n style='margin-right: 4px; margin-left: 2px;'\n>\nControl Panel</a>\n{% endblock %}\n\"\"\"\n\nclass SingleUserNotebookApp(NotebookApp):\n \"\"\"A Subclass of the regular NotebookApp that is aware of the parent multiuser context.\"\"\"\n user = CUnicode(config=True)\n def _user_changed(self, name, old, new):\n self.log.name = new\n cookie_name = Unicode(config=True)\n hub_prefix = Unicode(config=True)\n hub_api_url = Unicode(config=True)\n aliases = aliases\n open_browser = False\n trust_xheaders = True\n login_handler_class = JupyterHubLoginHandler\n logout_handler_class = JupyterHubLogoutHandler\n\n cookie_cache_lifetime = Integer(\n config=True,\n default_value=300,\n allow_none=True,\n help=\"\"\"\n Time, in seconds, that we cache a validated cookie before requiring\n revalidation with the hub.\n \"\"\",\n )\n\n def _log_datefmt_default(self):\n \"\"\"Exclude date from default date format\"\"\"\n return \"%Y-%m-%d %H:%M:%S\"\n\n def _log_format_default(self):\n \"\"\"override default log format to include time\"\"\"\n return \"%(color)s[%(levelname)1.1s %(asctime)s.%(msecs).03d %(name)s %(module)s:%(lineno)d]%(end_color)s %(message)s\"\n\n def _confirm_exit(self):\n # disable the exit confirmation for background notebook processes\n ioloop.IOLoop.instance().stop()\n\n def _clear_cookie_cache(self):\n self.log.debug(\"Clearing cookie cache\")\n self.tornado_settings['cookie_cache'].clear()\n \n def start(self):\n # Start a PeriodicCallback to clear cached cookies. This forces us to\n # revalidate our user with the Hub at least every\n # `cookie_cache_lifetime` seconds.\n if self.cookie_cache_lifetime:\n ioloop.PeriodicCallback(\n self._clear_cookie_cache,\n self.cookie_cache_lifetime * 1e3,\n ).start()\n super(SingleUserNotebookApp, self).start()\n \n def init_webapp(self):\n # load the hub related settings into the tornado settings dict\n env = os.environ\n s = self.tornado_settings\n s['cookie_cache'] = {}\n s['user'] = self.user\n s['hub_api_key'] = env.pop('JPY_API_TOKEN')\n s['hub_prefix'] = self.hub_prefix\n s['cookie_name'] = self.cookie_name\n s['login_url'] = self.hub_prefix\n s['hub_api_url'] = self.hub_api_url\n \n super(SingleUserNotebookApp, self).init_webapp()\n self.patch_templates()\n \n def patch_templates(self):\n \"\"\"Patch page templates to add Hub-related buttons\"\"\"\n env = self.web_app.settings['jinja2_env']\n \n env.globals['hub_control_panel_url'] = \\\n url_path_join(self.hub_prefix, 'home')\n \n # patch jinja env loading to modify page template\n def get_page(name):\n if name == 'page.html':\n return page_template\n \n orig_loader = env.loader\n env.loader = ChoiceLoader([\n FunctionLoader(get_page),\n orig_loader,\n ])\n\n\ndef main():\n return SingleUserNotebookApp.launch_instance()\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "jupyterhub/singleuser.py"}]}
| 3,453 | 89 |
gh_patches_debug_774
|
rasdani/github-patches
|
git_diff
|
getredash__redash-2501
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Non blocking widget refresh indicator
When refreshing a dashboard widget the previous results are hidden by the refresh animation. This can be an issue when refreshing a dashboard frequently, as you might happen to see the spinner for long period of times.
To solve this we can keep showing the old data until new one is available, while showing some indication that refresh is in progress.
Is the following animation enough?

After refreshing a dashboard, widgets become draggable even when not in edit mode
</issue>
<code>
[start of redash/handlers/widgets.py]
1 import json
2
3 from flask import request
4 from redash import models
5 from redash.handlers.base import BaseResource
6 from redash.permissions import (require_access,
7 require_object_modify_permission,
8 require_permission, view_only)
9
10
11 class WidgetListResource(BaseResource):
12 @require_permission('edit_dashboard')
13 def post(self):
14 """
15 Add a widget to a dashboard.
16
17 :<json number dashboard_id: The ID for the dashboard being added to
18 :<json visualization_id: The ID of the visualization to put in this widget
19 :<json object options: Widget options
20 :<json string text: Text box contents
21 :<json number width: Width for widget display
22
23 :>json object widget: The created widget
24 """
25 widget_properties = request.get_json(force=True)
26 dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)
27 require_object_modify_permission(dashboard, self.current_user)
28
29 widget_properties['options'] = json.dumps(widget_properties['options'])
30 widget_properties.pop('id', None)
31 widget_properties['dashboard'] = dashboard
32
33 visualization_id = widget_properties.pop('visualization_id')
34 if visualization_id:
35 visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)
36 require_access(visualization.query_rel.groups, self.current_user, view_only)
37 else:
38 visualization = None
39
40 widget_properties['visualization'] = visualization
41
42 widget = models.Widget(**widget_properties)
43 models.db.session.add(widget)
44 models.db.session.commit()
45
46 models.db.session.commit()
47 return {'widget': widget.to_dict()}
48
49
50 class WidgetResource(BaseResource):
51 @require_permission('edit_dashboard')
52 def post(self, widget_id):
53 """
54 Updates a widget in a dashboard.
55 This method currently handles Text Box widgets only.
56
57 :param number widget_id: The ID of the widget to modify
58
59 :<json string text: The new contents of the text box
60 """
61 widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
62 require_object_modify_permission(widget.dashboard, self.current_user)
63 widget_properties = request.get_json(force=True)
64 widget.text = widget_properties['text']
65 widget.options = json.dumps(widget_properties['options'])
66 models.db.session.commit()
67 return widget.to_dict()
68
69 @require_permission('edit_dashboard')
70 def delete(self, widget_id):
71 """
72 Remove a widget from a dashboard.
73
74 :param number widget_id: ID of widget to remove
75 """
76 widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)
77 require_object_modify_permission(widget.dashboard, self.current_user)
78 models.db.session.delete(widget)
79 models.db.session.commit()
80
[end of redash/handlers/widgets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py
--- a/redash/handlers/widgets.py
+++ b/redash/handlers/widgets.py
@@ -44,7 +44,7 @@
models.db.session.commit()
models.db.session.commit()
- return {'widget': widget.to_dict()}
+ return widget.to_dict()
class WidgetResource(BaseResource):
|
{"golden_diff": "diff --git a/redash/handlers/widgets.py b/redash/handlers/widgets.py\n--- a/redash/handlers/widgets.py\n+++ b/redash/handlers/widgets.py\n@@ -44,7 +44,7 @@\n models.db.session.commit()\n \n models.db.session.commit()\n- return {'widget': widget.to_dict()}\n+ return widget.to_dict()\n \n \n class WidgetResource(BaseResource):\n", "issue": "Non blocking widget refresh indicator\nWhen refreshing a dashboard widget the previous results are hidden by the refresh animation. This can be an issue when refreshing a dashboard frequently, as you might happen to see the spinner for long period of times.\r\n\r\nTo solve this we can keep showing the old data until new one is available, while showing some indication that refresh is in progress.\r\n\r\nIs the following animation enough?\r\n\r\n\nAfter refreshing a dashboard, widgets become draggable even when not in edit mode\n\n", "before_files": [{"content": "import json\n\nfrom flask import request\nfrom redash import models\nfrom redash.handlers.base import BaseResource\nfrom redash.permissions import (require_access,\n require_object_modify_permission,\n require_permission, view_only)\n\n\nclass WidgetListResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self):\n \"\"\"\n Add a widget to a dashboard.\n\n :<json number dashboard_id: The ID for the dashboard being added to\n :<json visualization_id: The ID of the visualization to put in this widget\n :<json object options: Widget options\n :<json string text: Text box contents\n :<json number width: Width for widget display\n\n :>json object widget: The created widget\n \"\"\"\n widget_properties = request.get_json(force=True)\n dashboard = models.Dashboard.get_by_id_and_org(widget_properties.pop('dashboard_id'), self.current_org)\n require_object_modify_permission(dashboard, self.current_user)\n\n widget_properties['options'] = json.dumps(widget_properties['options'])\n widget_properties.pop('id', None)\n widget_properties['dashboard'] = dashboard\n\n visualization_id = widget_properties.pop('visualization_id')\n if visualization_id:\n visualization = models.Visualization.get_by_id_and_org(visualization_id, self.current_org)\n require_access(visualization.query_rel.groups, self.current_user, view_only)\n else:\n visualization = None\n\n widget_properties['visualization'] = visualization\n\n widget = models.Widget(**widget_properties)\n models.db.session.add(widget)\n models.db.session.commit()\n\n models.db.session.commit()\n return {'widget': widget.to_dict()}\n\n\nclass WidgetResource(BaseResource):\n @require_permission('edit_dashboard')\n def post(self, widget_id):\n \"\"\"\n Updates a widget in a dashboard.\n This method currently handles Text Box widgets only.\n\n :param number widget_id: The ID of the widget to modify\n\n :<json string text: The new contents of the text box\n \"\"\"\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_object_modify_permission(widget.dashboard, self.current_user)\n widget_properties = request.get_json(force=True)\n widget.text = widget_properties['text']\n widget.options = json.dumps(widget_properties['options'])\n models.db.session.commit()\n return widget.to_dict()\n\n @require_permission('edit_dashboard')\n def delete(self, widget_id):\n \"\"\"\n Remove a widget from a dashboard.\n\n :param number widget_id: ID of widget to remove\n \"\"\"\n widget = models.Widget.get_by_id_and_org(widget_id, self.current_org)\n require_object_modify_permission(widget.dashboard, self.current_user)\n 
models.db.session.delete(widget)\n models.db.session.commit()\n", "path": "redash/handlers/widgets.py"}]}
| 1,382 | 89 |
gh_patches_debug_845
|
rasdani/github-patches
|
git_diff
|
PrefectHQ__prefect-2056
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AuthorizationError when watching logs from CLI
When running with `prefect run cloud --logs`, after a few minutes I see the following error:
```
prefect.utilities.exceptions.AuthorizationError: [{'message': 'AuthenticationError', 'locations': [], 'path': ['flow_run'], 'extensions': {'code': 'UNAUTHENTICATED'}}]
```
The run itself succeeds but the logs stop at that point, so I guess the token is initially valid but just expires...?
cc @joshmeek @cicdw
</issue>
<code>
[start of src/prefect/cli/run.py]
1 import json
2 import time
3
4 import click
5 from tabulate import tabulate
6
7 from prefect.client import Client
8 from prefect.utilities.graphql import EnumValue, with_args
9
10
11 @click.group(hidden=True)
12 def run():
13 """
14 Run Prefect flows.
15
16 \b
17 Usage:
18 $ prefect run [STORAGE/PLATFORM]
19
20 \b
21 Arguments:
22 cloud Run flows in Prefect Cloud
23
24 \b
25 Examples:
26 $ prefect run cloud --name Test-Flow --project My-Project
27 Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9
28
29 \b
30 $ prefect run cloud --name Test-Flow --project My-Project --watch
31 Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9
32 Scheduled -> Submitted -> Running -> Success
33 """
34
35
36 @run.command(hidden=True)
37 @click.option(
38 "--name", "-n", required=True, help="The name of a flow to run.", hidden=True
39 )
40 @click.option(
41 "--project",
42 "-p",
43 required=True,
44 help="The project that contains the flow.",
45 hidden=True,
46 )
47 @click.option("--version", "-v", type=int, help="A flow version to run.", hidden=True)
48 @click.option(
49 "--parameters-file",
50 "-pf",
51 help="A parameters JSON file.",
52 hidden=True,
53 type=click.Path(exists=True),
54 )
55 @click.option(
56 "--parameters-string", "-ps", help="A parameters JSON string.", hidden=True
57 )
58 @click.option("--run-name", "-rn", help="A name to assign for this run.", hidden=True)
59 @click.option(
60 "--watch",
61 "-w",
62 is_flag=True,
63 help="Watch current state of the flow run.",
64 hidden=True,
65 )
66 @click.option(
67 "--logs", "-l", is_flag=True, help="Live logs of the flow run.", hidden=True
68 )
69 def cloud(
70 name, project, version, parameters_file, parameters_string, run_name, watch, logs
71 ):
72 """
73 Run a registered flow in Prefect Cloud.
74
75 \b
76 Options:
77 --name, -n TEXT The name of a flow to run [required]
78 --project, -p TEXT The name of a project that contains the flow [required]
79 --version, -v INTEGER A flow version to run
80 --parameters-file, -pf FILE PATH A filepath of a JSON file containing parameters
81 --parameters-string, -ps TEXT A string of JSON parameters
82 --run-name, -rn TEXT A name to assign for this run
83 --watch, -w Watch current state of the flow run, stream output to stdout
84 --logs, -l Get logs of the flow run, stream output to stdout
85
86 \b
87 If both `--parameters-file` and `--parameters-string` are provided then the values passed
88 in through the string will override the values provided from the file.
89
90 \b
91 e.g.
92 File contains: {"a": 1, "b": 2}
93 String: '{"a": 3}'
94 Parameters passed to the flow run: {"a": 3, "b": 2}
95 """
96
97 if watch and logs:
98 click.secho(
99 "Streaming state and logs not currently supported together.", fg="red"
100 )
101 return
102
103 query = {
104 "query": {
105 with_args(
106 "flow",
107 {
108 "where": {
109 "_and": {
110 "name": {"_eq": name},
111 "version": {"_eq": version},
112 "project": {"name": {"_eq": project}},
113 }
114 },
115 "order_by": {
116 "name": EnumValue("asc"),
117 "version": EnumValue("desc"),
118 },
119 "distinct_on": EnumValue("name"),
120 },
121 ): {"id": True}
122 }
123 }
124
125 client = Client()
126 result = client.graphql(query)
127
128 flow_data = result.data.flow
129
130 if flow_data:
131 flow_id = flow_data[0].id
132 else:
133 click.secho("{} not found".format(name), fg="red")
134 return
135
136 # Load parameters from file if provided
137 file_params = {}
138 if parameters_file:
139 with open(parameters_file) as params_file:
140 file_params = json.load(params_file)
141
142 # Load parameters from string if provided
143 string_params = {}
144 if parameters_string:
145 string_params = json.loads(parameters_string)
146
147 flow_run_id = client.create_flow_run(
148 flow_id=flow_id, parameters={**file_params, **string_params}, run_name=run_name
149 )
150 click.echo("Flow Run ID: {}".format(flow_run_id))
151
152 if watch:
153 current_states = []
154 while True:
155 query = {
156 "query": {
157 with_args("flow_run_by_pk", {"id": flow_run_id}): {
158 with_args(
159 "states",
160 {"order_by": {EnumValue("timestamp"): EnumValue("asc")}},
161 ): {"state": True, "timestamp": True}
162 }
163 }
164 }
165
166 result = client.graphql(query)
167
168 # Filter through retrieved states and output in order
169 for state_index in result.data.flow_run_by_pk.states:
170 state = state_index.state
171 if state not in current_states:
172 if state != "Success" and state != "Failed":
173 click.echo("{} -> ".format(state), nl=False)
174 else:
175 click.echo(state)
176 return
177
178 current_states.append(state)
179
180 time.sleep(3)
181
182 if logs:
183 all_logs = []
184
185 log_query = {
186 with_args(
187 "logs", {"order_by": {EnumValue("timestamp"): EnumValue("asc")}}
188 ): {"timestamp": True, "message": True, "level": True},
189 "start_time": True,
190 }
191
192 query = {
193 "query": {
194 with_args(
195 "flow_run",
196 {
197 "where": {"id": {"_eq": flow_run_id}},
198 "order_by": {EnumValue("start_time"): EnumValue("desc")},
199 },
200 ): log_query
201 }
202 }
203
204 while True:
205 result = Client().graphql(query)
206
207 flow_run = result.data.flow_run
208 if not flow_run:
209 click.secho("{} not found".format(flow_run_id), fg="red")
210 return
211
212 new_run = flow_run[0]
213 logs = new_run.logs
214 output = []
215
216 for i in logs:
217 if [i.timestamp, i.level, i.message] not in all_logs:
218
219 if not len(all_logs):
220 click.echo(
221 tabulate(
222 [[i.timestamp, i.level, i.message]],
223 headers=["TIMESTAMP", "LEVEL", "MESSAGE"],
224 tablefmt="plain",
225 numalign="left",
226 stralign="left",
227 )
228 )
229 all_logs.append([i.timestamp, i.level, i.message])
230 continue
231
232 output.append([i.timestamp, i.level, i.message])
233 all_logs.append([i.timestamp, i.level, i.message])
234
235 if output:
236 click.echo(
237 tabulate(output, tablefmt="plain", numalign="left", stralign="left")
238 )
239
240 # Check if state is either Success or Failed, exit if it is
241 pk_query = {
242 "query": {
243 with_args("flow_run_by_pk", {"id": flow_run_id}): {"state": True}
244 }
245 }
246 result = client.graphql(pk_query)
247
248 if (
249 result.data.flow_run_by_pk.state == "Success"
250 or result.data.flow_run_by_pk.state == "Failed"
251 ):
252 return
253
254 time.sleep(3)
255
[end of src/prefect/cli/run.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/prefect/cli/run.py b/src/prefect/cli/run.py
--- a/src/prefect/cli/run.py
+++ b/src/prefect/cli/run.py
@@ -202,7 +202,7 @@
}
while True:
- result = Client().graphql(query)
+ result = client.graphql(query)
flow_run = result.data.flow_run
if not flow_run:
|
{"golden_diff": "diff --git a/src/prefect/cli/run.py b/src/prefect/cli/run.py\n--- a/src/prefect/cli/run.py\n+++ b/src/prefect/cli/run.py\n@@ -202,7 +202,7 @@\n }\n \n while True:\n- result = Client().graphql(query)\n+ result = client.graphql(query)\n \n flow_run = result.data.flow_run\n if not flow_run:\n", "issue": "AuthorizationError when watching logs from CLI\nWhen running with `prefect run cloud --logs`, after a few minutes I see the following error:\r\n```\r\nprefect.utilities.exceptions.AuthorizationError: [{'message': 'AuthenticationError', 'locations': [], 'path': ['flow_run'], 'extensions': {'code': 'UNAUTHENTICATED'}}]\r\n```\r\nThe run itself succeeds but the logs stop at that point, so I guess the token is initially valid but just expires...?\r\n\r\ncc @joshmeek @cicdw \n", "before_files": [{"content": "import json\nimport time\n\nimport click\nfrom tabulate import tabulate\n\nfrom prefect.client import Client\nfrom prefect.utilities.graphql import EnumValue, with_args\n\n\[email protected](hidden=True)\ndef run():\n \"\"\"\n Run Prefect flows.\n\n \\b\n Usage:\n $ prefect run [STORAGE/PLATFORM]\n\n \\b\n Arguments:\n cloud Run flows in Prefect Cloud\n\n \\b\n Examples:\n $ prefect run cloud --name Test-Flow --project My-Project\n Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9\n\n \\b\n $ prefect run cloud --name Test-Flow --project My-Project --watch\n Flow Run ID: 2ba3rrfd-411c-4d99-bb2a-f64a6dea78f9\n Scheduled -> Submitted -> Running -> Success\n \"\"\"\n\n\[email protected](hidden=True)\[email protected](\n \"--name\", \"-n\", required=True, help=\"The name of a flow to run.\", hidden=True\n)\[email protected](\n \"--project\",\n \"-p\",\n required=True,\n help=\"The project that contains the flow.\",\n hidden=True,\n)\[email protected](\"--version\", \"-v\", type=int, help=\"A flow version to run.\", hidden=True)\[email protected](\n \"--parameters-file\",\n \"-pf\",\n help=\"A parameters JSON file.\",\n hidden=True,\n type=click.Path(exists=True),\n)\[email protected](\n \"--parameters-string\", \"-ps\", help=\"A parameters JSON string.\", hidden=True\n)\[email protected](\"--run-name\", \"-rn\", help=\"A name to assign for this run.\", hidden=True)\[email protected](\n \"--watch\",\n \"-w\",\n is_flag=True,\n help=\"Watch current state of the flow run.\",\n hidden=True,\n)\[email protected](\n \"--logs\", \"-l\", is_flag=True, help=\"Live logs of the flow run.\", hidden=True\n)\ndef cloud(\n name, project, version, parameters_file, parameters_string, run_name, watch, logs\n):\n \"\"\"\n Run a registered flow in Prefect Cloud.\n\n \\b\n Options:\n --name, -n TEXT The name of a flow to run [required]\n --project, -p TEXT The name of a project that contains the flow [required]\n --version, -v INTEGER A flow version to run\n --parameters-file, -pf FILE PATH A filepath of a JSON file containing parameters\n --parameters-string, -ps TEXT A string of JSON parameters\n --run-name, -rn TEXT A name to assign for this run\n --watch, -w Watch current state of the flow run, stream output to stdout\n --logs, -l Get logs of the flow run, stream output to stdout\n\n \\b\n If both `--parameters-file` and `--parameters-string` are provided then the values passed\n in through the string will override the values provided from the file.\n\n \\b\n e.g.\n File contains: {\"a\": 1, \"b\": 2}\n String: '{\"a\": 3}'\n Parameters passed to the flow run: {\"a\": 3, \"b\": 2}\n \"\"\"\n\n if watch and logs:\n click.secho(\n \"Streaming state and logs not currently supported together.\", 
fg=\"red\"\n )\n return\n\n query = {\n \"query\": {\n with_args(\n \"flow\",\n {\n \"where\": {\n \"_and\": {\n \"name\": {\"_eq\": name},\n \"version\": {\"_eq\": version},\n \"project\": {\"name\": {\"_eq\": project}},\n }\n },\n \"order_by\": {\n \"name\": EnumValue(\"asc\"),\n \"version\": EnumValue(\"desc\"),\n },\n \"distinct_on\": EnumValue(\"name\"),\n },\n ): {\"id\": True}\n }\n }\n\n client = Client()\n result = client.graphql(query)\n\n flow_data = result.data.flow\n\n if flow_data:\n flow_id = flow_data[0].id\n else:\n click.secho(\"{} not found\".format(name), fg=\"red\")\n return\n\n # Load parameters from file if provided\n file_params = {}\n if parameters_file:\n with open(parameters_file) as params_file:\n file_params = json.load(params_file)\n\n # Load parameters from string if provided\n string_params = {}\n if parameters_string:\n string_params = json.loads(parameters_string)\n\n flow_run_id = client.create_flow_run(\n flow_id=flow_id, parameters={**file_params, **string_params}, run_name=run_name\n )\n click.echo(\"Flow Run ID: {}\".format(flow_run_id))\n\n if watch:\n current_states = []\n while True:\n query = {\n \"query\": {\n with_args(\"flow_run_by_pk\", {\"id\": flow_run_id}): {\n with_args(\n \"states\",\n {\"order_by\": {EnumValue(\"timestamp\"): EnumValue(\"asc\")}},\n ): {\"state\": True, \"timestamp\": True}\n }\n }\n }\n\n result = client.graphql(query)\n\n # Filter through retrieved states and output in order\n for state_index in result.data.flow_run_by_pk.states:\n state = state_index.state\n if state not in current_states:\n if state != \"Success\" and state != \"Failed\":\n click.echo(\"{} -> \".format(state), nl=False)\n else:\n click.echo(state)\n return\n\n current_states.append(state)\n\n time.sleep(3)\n\n if logs:\n all_logs = []\n\n log_query = {\n with_args(\n \"logs\", {\"order_by\": {EnumValue(\"timestamp\"): EnumValue(\"asc\")}}\n ): {\"timestamp\": True, \"message\": True, \"level\": True},\n \"start_time\": True,\n }\n\n query = {\n \"query\": {\n with_args(\n \"flow_run\",\n {\n \"where\": {\"id\": {\"_eq\": flow_run_id}},\n \"order_by\": {EnumValue(\"start_time\"): EnumValue(\"desc\")},\n },\n ): log_query\n }\n }\n\n while True:\n result = Client().graphql(query)\n\n flow_run = result.data.flow_run\n if not flow_run:\n click.secho(\"{} not found\".format(flow_run_id), fg=\"red\")\n return\n\n new_run = flow_run[0]\n logs = new_run.logs\n output = []\n\n for i in logs:\n if [i.timestamp, i.level, i.message] not in all_logs:\n\n if not len(all_logs):\n click.echo(\n tabulate(\n [[i.timestamp, i.level, i.message]],\n headers=[\"TIMESTAMP\", \"LEVEL\", \"MESSAGE\"],\n tablefmt=\"plain\",\n numalign=\"left\",\n stralign=\"left\",\n )\n )\n all_logs.append([i.timestamp, i.level, i.message])\n continue\n\n output.append([i.timestamp, i.level, i.message])\n all_logs.append([i.timestamp, i.level, i.message])\n\n if output:\n click.echo(\n tabulate(output, tablefmt=\"plain\", numalign=\"left\", stralign=\"left\")\n )\n\n # Check if state is either Success or Failed, exit if it is\n pk_query = {\n \"query\": {\n with_args(\"flow_run_by_pk\", {\"id\": flow_run_id}): {\"state\": True}\n }\n }\n result = client.graphql(pk_query)\n\n if (\n result.data.flow_run_by_pk.state == \"Success\"\n or result.data.flow_run_by_pk.state == \"Failed\"\n ):\n return\n\n time.sleep(3)\n", "path": "src/prefect/cli/run.py"}]}
| 3,021 | 95 |
gh_patches_debug_34407
|
rasdani/github-patches
|
git_diff
|
svthalia__concrexit-2726
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Liked photos API endpoint
To add an overview of liked photos to ThaliApp, we need a new endpoint for liked photos.
I think it would be best to have `api/v2/photos/photos/` with `liked` boolean GET filter. It will need to do some filtering to prevent photos that are not published in an album from being returned.
</issue>
<code>
[start of website/photos/api/v2/urls.py]
1 """Photos app API v2 urls."""
2 from django.urls import include, path
3
4 from photos.api.v2.views import AlbumDetailView, AlbumListView, PhotoLikeView
5
6 app_name = "photos"
7
8 urlpatterns = [
9 path(
10 "photos/",
11 include(
12 [
13 path("albums/", AlbumListView.as_view(), name="album-list"),
14 path(
15 "albums/<slug:slug>/",
16 AlbumDetailView.as_view(),
17 name="album-detail",
18 ),
19 path(
20 "photos/<int:pk>/like/", PhotoLikeView.as_view(), name="photo-like"
21 ),
22 ]
23 ),
24 ),
25 ]
26
[end of website/photos/api/v2/urls.py]
[start of website/photos/api/v2/views.py]
1 from django.db.models import Count, Prefetch, Q
2
3 from oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope
4 from rest_framework import filters, status
5 from rest_framework.exceptions import PermissionDenied
6 from rest_framework.generics import ListAPIView, RetrieveAPIView
7 from rest_framework.response import Response
8 from rest_framework.views import APIView
9
10 from photos import services
11 from photos.api.v2.serializers.album import AlbumListSerializer, AlbumSerializer
12 from photos.models import Album, Like, Photo
13
14
15 class AlbumListView(ListAPIView):
16 """Returns an overview of all albums."""
17
18 serializer_class = AlbumListSerializer
19 queryset = Album.objects.filter(hidden=False)
20 permission_classes = [
21 IsAuthenticatedOrTokenHasScope,
22 ]
23 required_scopes = ["photos:read"]
24 filter_backends = (filters.SearchFilter,)
25 search_fields = ("title", "date", "slug")
26
27
28 class AlbumDetailView(RetrieveAPIView):
29 """Returns the details of an album."""
30
31 serializer_class = AlbumSerializer
32 permission_classes = [
33 IsAuthenticatedOrTokenHasScope,
34 ]
35 required_scopes = ["photos:read"]
36 lookup_field = "slug"
37
38 def retrieve(self, request, *args, **kwargs):
39 if not services.is_album_accessible(request, self.get_object()):
40 raise PermissionDenied
41 return super().retrieve(request, *args, **kwargs)
42
43 def get_queryset(self):
44 photos = Photo.objects.select_properties("num_likes")
45 if self.request.member:
46 photos = photos.annotate(
47 member_likes=Count("likes", filter=Q(likes__member=self.request.member))
48 )
49 return Album.objects.filter(hidden=False).prefetch_related(
50 Prefetch("photo_set", queryset=photos)
51 )
52
53
54 class PhotoLikeView(APIView):
55 permission_classes = [IsAuthenticatedOrTokenHasScope]
56 required_scopes = ["photos:read"]
57
58 def get(self, request, **kwargs):
59 photo_id = kwargs.get("pk")
60 try:
61 photo = Photo.objects.filter(album__hidden=False).get(pk=photo_id)
62 except Photo.DoesNotExist:
63 return Response(status=status.HTTP_404_NOT_FOUND)
64
65 return Response(
66 {
67 "liked": photo.likes.filter(member=request.member).exists(),
68 "num_likes": photo.num_likes,
69 },
70 status=status.HTTP_200_OK,
71 )
72
73 def post(self, request, **kwargs):
74 photo_id = kwargs.get("pk")
75 try:
76 photo = Photo.objects.filter(album__hidden=False).get(pk=photo_id)
77 except Photo.DoesNotExist:
78 return Response(status=status.HTTP_404_NOT_FOUND)
79
80 _, created = Like.objects.get_or_create(photo=photo, member=request.member)
81
82 if created:
83 return Response(
84 {
85 "liked": photo.likes.filter(member=request.member).exists(),
86 "num_likes": photo.num_likes,
87 },
88 status=status.HTTP_201_CREATED,
89 )
90 return Response(
91 {
92 "liked": photo.likes.filter(member=request.member).exists(),
93 "num_likes": photo.num_likes,
94 },
95 status=status.HTTP_200_OK,
96 )
97
98 def delete(self, request, **kwargs):
99 photo_id = kwargs.get("pk")
100 try:
101 photo = Photo.objects.filter(album__hidden=False).get(pk=photo_id)
102 except Photo.DoesNotExist:
103 return Response(status=status.HTTP_404_NOT_FOUND)
104
105 try:
106 like = Like.objects.filter(photo__album__hidden=False).get(
107 member=request.member, photo__pk=photo_id
108 )
109 except Like.DoesNotExist:
110 return Response(
111 {
112 "liked": False,
113 "num_likes": photo.num_likes,
114 },
115 status=status.HTTP_204_NO_CONTENT,
116 )
117
118 like.delete()
119
120 return Response(
121 {
122 "liked": False,
123 "num_likes": photo.num_likes,
124 },
125 status=status.HTTP_202_ACCEPTED,
126 )
127
[end of website/photos/api/v2/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/website/photos/api/v2/urls.py b/website/photos/api/v2/urls.py
--- a/website/photos/api/v2/urls.py
+++ b/website/photos/api/v2/urls.py
@@ -1,7 +1,12 @@
"""Photos app API v2 urls."""
from django.urls import include, path
-from photos.api.v2.views import AlbumDetailView, AlbumListView, PhotoLikeView
+from photos.api.v2.views import (
+ AlbumDetailView,
+ AlbumListView,
+ LikedPhotosListView,
+ PhotoLikeView,
+)
app_name = "photos"
@@ -19,6 +24,9 @@
path(
"photos/<int:pk>/like/", PhotoLikeView.as_view(), name="photo-like"
),
+ path(
+ "photos/liked/", LikedPhotosListView.as_view(), name="liked-photos"
+ ),
]
),
),
diff --git a/website/photos/api/v2/views.py b/website/photos/api/v2/views.py
--- a/website/photos/api/v2/views.py
+++ b/website/photos/api/v2/views.py
@@ -8,7 +8,11 @@
from rest_framework.views import APIView
from photos import services
-from photos.api.v2.serializers.album import AlbumListSerializer, AlbumSerializer
+from photos.api.v2.serializers.album import (
+ AlbumListSerializer,
+ AlbumSerializer,
+ PhotoListSerializer,
+)
from photos.models import Album, Like, Photo
@@ -51,6 +55,35 @@
)
+class LikedPhotosListView(ListAPIView):
+ """Returns the details the liked album."""
+
+ serializer_class = PhotoListSerializer
+ permission_classes = [
+ IsAuthenticatedOrTokenHasScope,
+ ]
+ required_scopes = ["photos:read"]
+
+ def get(self, request, *args, **kwargs):
+ if not self.request.member:
+ return Response(
+ data={
+ "detail": "You need to be a member in order to view your liked photos."
+ },
+ status=status.HTTP_403_FORBIDDEN,
+ )
+ return self.list(request, *args, **kwargs)
+
+ def get_queryset(self):
+ return (
+ Photo.objects.filter(likes__member=self.request.member, album__hidden=False)
+ .annotate(
+ member_likes=Count("likes", filter=Q(likes__member=self.request.member))
+ )
+ .select_properties("num_likes")
+ )
+
+
class PhotoLikeView(APIView):
permission_classes = [IsAuthenticatedOrTokenHasScope]
required_scopes = ["photos:read"]
|
{"golden_diff": "diff --git a/website/photos/api/v2/urls.py b/website/photos/api/v2/urls.py\n--- a/website/photos/api/v2/urls.py\n+++ b/website/photos/api/v2/urls.py\n@@ -1,7 +1,12 @@\n \"\"\"Photos app API v2 urls.\"\"\"\n from django.urls import include, path\n \n-from photos.api.v2.views import AlbumDetailView, AlbumListView, PhotoLikeView\n+from photos.api.v2.views import (\n+ AlbumDetailView,\n+ AlbumListView,\n+ LikedPhotosListView,\n+ PhotoLikeView,\n+)\n \n app_name = \"photos\"\n \n@@ -19,6 +24,9 @@\n path(\n \"photos/<int:pk>/like/\", PhotoLikeView.as_view(), name=\"photo-like\"\n ),\n+ path(\n+ \"photos/liked/\", LikedPhotosListView.as_view(), name=\"liked-photos\"\n+ ),\n ]\n ),\n ),\ndiff --git a/website/photos/api/v2/views.py b/website/photos/api/v2/views.py\n--- a/website/photos/api/v2/views.py\n+++ b/website/photos/api/v2/views.py\n@@ -8,7 +8,11 @@\n from rest_framework.views import APIView\n \n from photos import services\n-from photos.api.v2.serializers.album import AlbumListSerializer, AlbumSerializer\n+from photos.api.v2.serializers.album import (\n+ AlbumListSerializer,\n+ AlbumSerializer,\n+ PhotoListSerializer,\n+)\n from photos.models import Album, Like, Photo\n \n \n@@ -51,6 +55,35 @@\n )\n \n \n+class LikedPhotosListView(ListAPIView):\n+ \"\"\"Returns the details the liked album.\"\"\"\n+\n+ serializer_class = PhotoListSerializer\n+ permission_classes = [\n+ IsAuthenticatedOrTokenHasScope,\n+ ]\n+ required_scopes = [\"photos:read\"]\n+\n+ def get(self, request, *args, **kwargs):\n+ if not self.request.member:\n+ return Response(\n+ data={\n+ \"detail\": \"You need to be a member in order to view your liked photos.\"\n+ },\n+ status=status.HTTP_403_FORBIDDEN,\n+ )\n+ return self.list(request, *args, **kwargs)\n+\n+ def get_queryset(self):\n+ return (\n+ Photo.objects.filter(likes__member=self.request.member, album__hidden=False)\n+ .annotate(\n+ member_likes=Count(\"likes\", filter=Q(likes__member=self.request.member))\n+ )\n+ .select_properties(\"num_likes\")\n+ )\n+\n+\n class PhotoLikeView(APIView):\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"photos:read\"]\n", "issue": "Liked photos API endpoint\nTo add an overview of liked photos to ThaliApp, we need a new endpoint for liked photos.\r\n\r\nI think it would be best to have `api/v2/photos/photos/` with `liked` boolean GET filter. 
It will need to do some filtering to prevent photos that are not published in an album from being returned.\n", "before_files": [{"content": "\"\"\"Photos app API v2 urls.\"\"\"\nfrom django.urls import include, path\n\nfrom photos.api.v2.views import AlbumDetailView, AlbumListView, PhotoLikeView\n\napp_name = \"photos\"\n\nurlpatterns = [\n path(\n \"photos/\",\n include(\n [\n path(\"albums/\", AlbumListView.as_view(), name=\"album-list\"),\n path(\n \"albums/<slug:slug>/\",\n AlbumDetailView.as_view(),\n name=\"album-detail\",\n ),\n path(\n \"photos/<int:pk>/like/\", PhotoLikeView.as_view(), name=\"photo-like\"\n ),\n ]\n ),\n ),\n]\n", "path": "website/photos/api/v2/urls.py"}, {"content": "from django.db.models import Count, Prefetch, Q\n\nfrom oauth2_provider.contrib.rest_framework import IsAuthenticatedOrTokenHasScope\nfrom rest_framework import filters, status\nfrom rest_framework.exceptions import PermissionDenied\nfrom rest_framework.generics import ListAPIView, RetrieveAPIView\nfrom rest_framework.response import Response\nfrom rest_framework.views import APIView\n\nfrom photos import services\nfrom photos.api.v2.serializers.album import AlbumListSerializer, AlbumSerializer\nfrom photos.models import Album, Like, Photo\n\n\nclass AlbumListView(ListAPIView):\n \"\"\"Returns an overview of all albums.\"\"\"\n\n serializer_class = AlbumListSerializer\n queryset = Album.objects.filter(hidden=False)\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"photos:read\"]\n filter_backends = (filters.SearchFilter,)\n search_fields = (\"title\", \"date\", \"slug\")\n\n\nclass AlbumDetailView(RetrieveAPIView):\n \"\"\"Returns the details of an album.\"\"\"\n\n serializer_class = AlbumSerializer\n permission_classes = [\n IsAuthenticatedOrTokenHasScope,\n ]\n required_scopes = [\"photos:read\"]\n lookup_field = \"slug\"\n\n def retrieve(self, request, *args, **kwargs):\n if not services.is_album_accessible(request, self.get_object()):\n raise PermissionDenied\n return super().retrieve(request, *args, **kwargs)\n\n def get_queryset(self):\n photos = Photo.objects.select_properties(\"num_likes\")\n if self.request.member:\n photos = photos.annotate(\n member_likes=Count(\"likes\", filter=Q(likes__member=self.request.member))\n )\n return Album.objects.filter(hidden=False).prefetch_related(\n Prefetch(\"photo_set\", queryset=photos)\n )\n\n\nclass PhotoLikeView(APIView):\n permission_classes = [IsAuthenticatedOrTokenHasScope]\n required_scopes = [\"photos:read\"]\n\n def get(self, request, **kwargs):\n photo_id = kwargs.get(\"pk\")\n try:\n photo = Photo.objects.filter(album__hidden=False).get(pk=photo_id)\n except Photo.DoesNotExist:\n return Response(status=status.HTTP_404_NOT_FOUND)\n\n return Response(\n {\n \"liked\": photo.likes.filter(member=request.member).exists(),\n \"num_likes\": photo.num_likes,\n },\n status=status.HTTP_200_OK,\n )\n\n def post(self, request, **kwargs):\n photo_id = kwargs.get(\"pk\")\n try:\n photo = Photo.objects.filter(album__hidden=False).get(pk=photo_id)\n except Photo.DoesNotExist:\n return Response(status=status.HTTP_404_NOT_FOUND)\n\n _, created = Like.objects.get_or_create(photo=photo, member=request.member)\n\n if created:\n return Response(\n {\n \"liked\": photo.likes.filter(member=request.member).exists(),\n \"num_likes\": photo.num_likes,\n },\n status=status.HTTP_201_CREATED,\n )\n return Response(\n {\n \"liked\": photo.likes.filter(member=request.member).exists(),\n \"num_likes\": photo.num_likes,\n },\n 
status=status.HTTP_200_OK,\n )\n\n def delete(self, request, **kwargs):\n photo_id = kwargs.get(\"pk\")\n try:\n photo = Photo.objects.filter(album__hidden=False).get(pk=photo_id)\n except Photo.DoesNotExist:\n return Response(status=status.HTTP_404_NOT_FOUND)\n\n try:\n like = Like.objects.filter(photo__album__hidden=False).get(\n member=request.member, photo__pk=photo_id\n )\n except Like.DoesNotExist:\n return Response(\n {\n \"liked\": False,\n \"num_likes\": photo.num_likes,\n },\n status=status.HTTP_204_NO_CONTENT,\n )\n\n like.delete()\n\n return Response(\n {\n \"liked\": False,\n \"num_likes\": photo.num_likes,\n },\n status=status.HTTP_202_ACCEPTED,\n )\n", "path": "website/photos/api/v2/views.py"}]}
| 1,916 | 588 |
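The core of the patch above is a single queryset: photos the requesting member has liked, with hidden albums excluded. A minimal standalone sketch of that filter (Django ORM; `Photo` and the `likes__member` relation are taken from the diff, while the `liked_photos_for` helper name is made up for illustration):

```python
from django.db.models import Count, Q

from photos.models import Photo  # model referenced in the patch


def liked_photos_for(member):
    # Only photos this member liked, never photos from hidden albums,
    # annotated so a serializer can tell whether this member liked each one.
    # The real view additionally chains .select_properties("num_likes").
    return (
        Photo.objects.filter(likes__member=member, album__hidden=False)
        .annotate(member_likes=Count("likes", filter=Q(likes__member=member)))
    )
```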
gh_patches_debug_2593
|
rasdani/github-patches
|
git_diff
|
secdev__scapy-3167
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Outdated Automotive Documentation
Reminder for myself.
Outdated:
https://github.com/secdev/scapy/blob/1aa0d8a849f7b102d18a3f65986e272aec5f518a/doc/scapy/layers/automotive.rst#L75-L85
SOME/IP:
https://github.com/secdev/scapy/blob/1aa0d8a849f7b102d18a3f65986e272aec5f518a/doc/scapy/layers/automotive.rst#L1011-L1030
Mentioned by @WebLabInt via gitter:
```Hi, I m having a problem creating a basic SOME IP service discovery following the example provided https://scapy.readthedocs.io/en/latest/layers/automotive.html?highlight=some%20ip#creating-a-some-ip-sd-message. The SOME IP package is working perfectly, however, the SD packet is not formed correctly thus not recognized as a SD packet by Wireshark and the SOME IP version is not correct. I did a capture with Wireshark reporting those issues http://fuiing.com/share/SD%20prob.png . I will be great if you can support me on this issue, thank you for making Scapy open source, it's really a great tool, have a great day ```
</issue>
<code>
[start of doc/scapy/conf.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Scapy documentation build configuration file, created by
4 # sphinx-quickstart on Wed Mar 07 19:02:35 2018.
5 #
6 # This file is execfile()d with the current directory set to its
7 # containing dir.
8 #
9 # Note that not all possible configuration values are present in this
10 # autogenerated file.
11 #
12 # All configuration values have a default; values that are commented out
13 # serve to show the default.
14
15 import datetime
16
17 # If extensions (or modules to document with autodoc) are in another directory,
18 # add these directories to sys.path here. If the directory is relative to the
19 # documentation root, use os.path.abspath to make it absolute, like shown here.
20 #
21 import os
22 import sys
23 sys.path.insert(0, os.path.abspath('../../'))
24 sys.path.append(os.path.abspath('_ext'))
25
26
27 # -- General configuration ------------------------------------------------
28
29 # If your documentation needs a minimal Sphinx version, state it here.
30 #
31 needs_sphinx = '3.0.0'
32
33 # Add any Sphinx extension module names here, as strings. They can be
34 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
35 # ones.
36 extensions = [
37 'sphinx.ext.autodoc',
38 'sphinx.ext.napoleon',
39 'sphinx.ext.todo',
40 'sphinx.ext.linkcode',
41 'scapy_doc'
42 ]
43
44 # Autodoc configuration
45 autodoc_inherit_docstrings = False
46 autodoc_default_options = {
47 'undoc-members': True
48 }
49
50 # Enable the todo module
51 todo_include_todos = True
52
53 # Linkcode resolver
54 from linkcode_res import linkcode_resolve
55
56 # Add any paths that contain templates here, relative to this directory.
57 templates_path = ['_templates']
58
59 # The suffix(es) of source filenames.
60 # You can specify multiple suffix as a list of string:
61 #
62 # source_suffix = ['.rst', '.md']
63 source_suffix = '.rst'
64
65 # The master toctree document.
66 master_doc = 'index'
67
68 # General information about the project.
69 project = 'Scapy'
70 year = datetime.datetime.now().year
71 copyright = '2008-%s Philippe Biondi and the Scapy community' % year
72
73 # The version info for the project you're documenting, acts as replacement for
74 # |version| and |release|, also used in various other places throughout the
75 # built documents.
76 from scapy import VERSION, VERSION_MAIN
77 # The short X.Y version.
78 release = VERSION_MAIN
79 # The full version, including alpha/beta/rc tags.
80 version = VERSION
81
82 # The language for content autogenerated by Sphinx. Refer to documentation
83 # for a list of supported languages.
84 #
85 # This is also used if you do content translation via gettext catalogs.
86 # Usually you set "language" from the command line for these cases.
87 language = None
88
89 # List of patterns, relative to source directory, that match files and
90 # directories to ignore when looking for source files.
91 # This patterns also effect to html_static_path and html_extra_path
92 exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']
93
94 # The name of the Pygments (syntax highlighting) style to use.
95 pygments_style = 'sphinx'
96
97 # If true, `todo` and `todoList` produce output, else they produce nothing.
98 todo_include_todos = False
99
100
101 # -- Options for HTML output ----------------------------------------------
102
103 # The theme to use for HTML and HTML Help pages. See the documentation for
104 # a list of builtin themes.
105 #
106 html_theme = 'sphinx_rtd_theme'
107
108 # Theme options are theme-specific and customize the look and feel of a theme
109 # further. For a list of options available for each theme, see the
110 # documentation.
111 #
112 # html_theme_options = {}
113
114 # Add any paths that contain custom static files (such as style sheets) here,
115 # relative to this directory. They are copied after the builtin static files,
116 # so a file named "default.css" will overwrite the builtin "default.css".
117 html_static_path = ['_static']
118
119 # Custom sidebar templates, must be a dictionary that maps document names
120 # to template names.
121 #
122 # This is required for the alabaster theme
123 # refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
124 html_sidebars = {
125 '**': [
126 'relations.html', # needs 'show_related': True theme option to display
127 'searchbox.html',
128 ]
129 }
130
131 # Make :manpage directive work on HTML output.
132 manpages_url = 'https://manpages.debian.org/{path}'
133
134 # -- Options for HTMLHelp output ------------------------------------------
135
136 # Output file base name for HTML help builder.
137 htmlhelp_basename = 'Scapydoc'
138
139
140 # -- Options for LaTeX output ---------------------------------------------
141
142 latex_elements = {
143 # The paper size ('letterpaper' or 'a4paper').
144 #
145 'papersize': 'a4paper',
146
147 # The font size ('10pt', '11pt' or '12pt').
148 #
149 'pointsize': '11pt',
150
151 # Additional stuff for the LaTeX preamble.
152 #
153 # 'preamble': '',
154
155 # Latex figure (float) alignment
156 #
157 # 'figure_align': 'htbp',
158 }
159
160 # Grouping the document tree into LaTeX files. List of tuples
161 # (source start file, target name, title,
162 # author, documentclass [howto, manual, or own class]).
163 latex_documents = [
164 ('index', 'Scapy.tex', 'Scapy Documentation',
165 'Philippe Biondi and the Scapy community', 'manual'),
166 ]
167
168
169 # -- Options for manual page output ---------------------------------------
170
171 # One entry per manual page. List of tuples
172 # (source start file, name, description, authors, manual section).
173 man_pages = [
174 (master_doc, 'scapy', 'Scapy Documentation',
175 ['Philippe Biondi and the Scapy community'], 1)
176 ]
177
178
179 # -- Options for Texinfo output -------------------------------------------
180
181 # Grouping the document tree into Texinfo files. List of tuples
182 # (source start file, target name, title, author,
183 # dir menu entry, description, category)
184 texinfo_documents = [
185 (master_doc, 'Scapy', 'Scapy Documentation',
186 'Philippe Biondi and the Scapy community', 'Scapy',
187 '',
188 'Miscellaneous'),
189 ]
190
[end of doc/scapy/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/doc/scapy/conf.py b/doc/scapy/conf.py
--- a/doc/scapy/conf.py
+++ b/doc/scapy/conf.py
@@ -97,6 +97,9 @@
# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False
+# Enable codeauthor and sectionauthor directives
+show_authors = True
+
# -- Options for HTML output ----------------------------------------------
|
{"golden_diff": "diff --git a/doc/scapy/conf.py b/doc/scapy/conf.py\n--- a/doc/scapy/conf.py\n+++ b/doc/scapy/conf.py\n@@ -97,6 +97,9 @@\n # If true, `todo` and `todoList` produce output, else they produce nothing.\n todo_include_todos = False\n \n+# Enable codeauthor and sectionauthor directives\n+show_authors = True\n+\n \n # -- Options for HTML output ----------------------------------------------\n", "issue": "Outdated Automotive Documentation\nReminder for myself.\r\n\r\nOutdated:\r\nhttps://github.com/secdev/scapy/blob/1aa0d8a849f7b102d18a3f65986e272aec5f518a/doc/scapy/layers/automotive.rst#L75-L85\r\n\r\nSOME/IP:\r\nhttps://github.com/secdev/scapy/blob/1aa0d8a849f7b102d18a3f65986e272aec5f518a/doc/scapy/layers/automotive.rst#L1011-L1030\r\nMentioned by @WebLabInt via gitter:\r\n```Hi, I m having a problem creating a basic SOME IP service discovery following the example provided https://scapy.readthedocs.io/en/latest/layers/automotive.html?highlight=some%20ip#creating-a-some-ip-sd-message. The SOME IP package is working perfectly, however, the SD packet is not formed correctly thus not recognized as a SD packet by Wireshark and the SOME IP version is not correct. I did a capture with Wireshark reporting those issues http://fuiing.com/share/SD%20prob.png . I will be great if you can support me on this issue, thank you for making Scapy open source, it's really a great tool, have a great day ```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Scapy documentation build configuration file, created by\n# sphinx-quickstart on Wed Mar 07 19:02:35 2018.\n#\n# This file is execfile()d with the current directory set to its\n# containing dir.\n#\n# Note that not all possible configuration values are present in this\n# autogenerated file.\n#\n# All configuration values have a default; values that are commented out\n# serve to show the default.\n\nimport datetime\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath('../../'))\nsys.path.append(os.path.abspath('_ext'))\n\n\n# -- General configuration ------------------------------------------------\n\n# If your documentation needs a minimal Sphinx version, state it here.\n#\nneeds_sphinx = '3.0.0'\n\n# Add any Sphinx extension module names here, as strings. 
They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n 'sphinx.ext.autodoc',\n 'sphinx.ext.napoleon',\n 'sphinx.ext.todo',\n 'sphinx.ext.linkcode',\n 'scapy_doc'\n]\n\n# Autodoc configuration\nautodoc_inherit_docstrings = False\nautodoc_default_options = {\n 'undoc-members': True\n}\n\n# Enable the todo module\ntodo_include_todos = True\n\n# Linkcode resolver\nfrom linkcode_res import linkcode_resolve\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# The suffix(es) of source filenames.\n# You can specify multiple suffix as a list of string:\n#\n# source_suffix = ['.rst', '.md']\nsource_suffix = '.rst'\n\n# The master toctree document.\nmaster_doc = 'index'\n\n# General information about the project.\nproject = 'Scapy'\nyear = datetime.datetime.now().year\ncopyright = '2008-%s Philippe Biondi and the Scapy community' % year\n\n# The version info for the project you're documenting, acts as replacement for\n# |version| and |release|, also used in various other places throughout the\n# built documents.\nfrom scapy import VERSION, VERSION_MAIN\n# The short X.Y version.\nrelease = VERSION_MAIN\n# The full version, including alpha/beta/rc tags.\nversion = VERSION\n\n# The language for content autogenerated by Sphinx. Refer to documentation\n# for a list of supported languages.\n#\n# This is also used if you do content translation via gettext catalogs.\n# Usually you set \"language\" from the command line for these cases.\nlanguage = None\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This patterns also effect to html_static_path and html_extra_path\nexclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']\n\n# The name of the Pygments (syntax highlighting) style to use.\npygments_style = 'sphinx'\n\n# If true, `todo` and `todoList` produce output, else they produce nothing.\ntodo_include_todos = False\n\n\n# -- Options for HTML output ----------------------------------------------\n\n# The theme to use for HTML and HTML Help pages. See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'sphinx_rtd_theme'\n\n# Theme options are theme-specific and customize the look and feel of a theme\n# further. For a list of options available for each theme, see the\n# documentation.\n#\n# html_theme_options = {}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. 
They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\n# Custom sidebar templates, must be a dictionary that maps document names\n# to template names.\n#\n# This is required for the alabaster theme\n# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars\nhtml_sidebars = {\n '**': [\n 'relations.html', # needs 'show_related': True theme option to display\n 'searchbox.html',\n ]\n}\n\n# Make :manpage directive work on HTML output.\nmanpages_url = 'https://manpages.debian.org/{path}'\n\n# -- Options for HTMLHelp output ------------------------------------------\n\n# Output file base name for HTML help builder.\nhtmlhelp_basename = 'Scapydoc'\n\n\n# -- Options for LaTeX output ---------------------------------------------\n\nlatex_elements = {\n # The paper size ('letterpaper' or 'a4paper').\n #\n 'papersize': 'a4paper',\n\n # The font size ('10pt', '11pt' or '12pt').\n #\n 'pointsize': '11pt',\n\n # Additional stuff for the LaTeX preamble.\n #\n # 'preamble': '',\n\n # Latex figure (float) alignment\n #\n # 'figure_align': 'htbp',\n}\n\n# Grouping the document tree into LaTeX files. List of tuples\n# (source start file, target name, title,\n# author, documentclass [howto, manual, or own class]).\nlatex_documents = [\n ('index', 'Scapy.tex', 'Scapy Documentation',\n 'Philippe Biondi and the Scapy community', 'manual'),\n]\n\n\n# -- Options for manual page output ---------------------------------------\n\n# One entry per manual page. List of tuples\n# (source start file, name, description, authors, manual section).\nman_pages = [\n (master_doc, 'scapy', 'Scapy Documentation',\n ['Philippe Biondi and the Scapy community'], 1)\n]\n\n\n# -- Options for Texinfo output -------------------------------------------\n\n# Grouping the document tree into Texinfo files. List of tuples\n# (source start file, target name, title, author,\n# dir menu entry, description, category)\ntexinfo_documents = [\n (master_doc, 'Scapy', 'Scapy Documentation',\n 'Philippe Biondi and the Scapy community', 'Scapy',\n '',\n 'Miscellaneous'),\n]\n", "path": "doc/scapy/conf.py"}]}
| 2,696 | 97 |
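The fix itself is a one-line Sphinx setting; a short sketch of what it enables, assuming standard Sphinx behaviour (the example author is invented):

```python
# In Sphinx's conf.py: render codeauthor/sectionauthor directives,
# which Sphinx hides unless this flag is set.
show_authors = True

# With the flag on, .rst sources can credit maintainers, e.g.:
#   .. sectionauthor:: Jane Doe <jane@example.org>
#   .. codeauthor:: Jane Doe <jane@example.org>
```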
gh_patches_debug_665
|
rasdani/github-patches
|
git_diff
|
scikit-image__scikit-image-1741
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
peak_local_max Incorrect output type
This [function](http://scikit-image.org/docs/dev/api/skimage.feature.html#peak-local-max) is returning a `list` instead of an `ndarray` if no peaks are detected.
I traced the problem to this [line](https://github.com/scikit-image/scikit-image/blob/master/skimage/feature/peak.py#L122). However, I still have to check whether there is another case (beyond this line) that produces an incorrect output.
I will work on it this weekend and submit a pull request or a code snippet here.
</issue>
<code>
[start of skimage/feature/peak.py]
1 import numpy as np
2 import scipy.ndimage as ndi
3 from ..filters import rank_order
4
5
6 def peak_local_max(image, min_distance=10, threshold_abs=0, threshold_rel=0.1,
7 exclude_border=True, indices=True, num_peaks=np.inf,
8 footprint=None, labels=None):
9 """
10 Find peaks in an image, and return them as coordinates or a boolean array.
11
12 Peaks are the local maxima in a region of `2 * min_distance + 1`
13 (i.e. peaks are separated by at least `min_distance`).
14
15 NOTE: If peaks are flat (i.e. multiple adjacent pixels have identical
16 intensities), the coordinates of all such pixels are returned.
17
18 Parameters
19 ----------
20 image : ndarray of floats
21 Input image.
22 min_distance : int
23 Minimum number of pixels separating peaks in a region of `2 *
24 min_distance + 1` (i.e. peaks are separated by at least
25 `min_distance`). If `exclude_border` is True, this value also excludes
26 a border `min_distance` from the image boundary.
27 To find the maximum number of peaks, use `min_distance=1`.
28 threshold_abs : float
29 Minimum intensity of peaks.
30 threshold_rel : float
31 Minimum intensity of peaks calculated as `max(image) * threshold_rel`.
32 exclude_border : bool
33 If True, `min_distance` excludes peaks from the border of the image as
34 well as from each other.
35 indices : bool
36 If True, the output will be an array representing peak coordinates.
37 If False, the output will be a boolean array shaped as `image.shape`
38 with peaks present at True elements.
39 num_peaks : int
40 Maximum number of peaks. When the number of peaks exceeds `num_peaks`,
41 return `num_peaks` peaks based on highest peak intensity.
42 footprint : ndarray of bools, optional
43 If provided, `footprint == 1` represents the local region within which
44 to search for peaks at every point in `image`. Overrides
45 `min_distance`, except for border exclusion if `exclude_border=True`.
46 labels : ndarray of ints, optional
47 If provided, each unique region `labels == value` represents a unique
48 region to search for peaks. Zero is reserved for background.
49
50 Returns
51 -------
52 output : ndarray or ndarray of bools
53
54 * If `indices = True` : (row, column, ...) coordinates of peaks.
55 * If `indices = False` : Boolean array shaped like `image`, with peaks
56 represented by True values.
57
58 Notes
59 -----
60 The peak local maximum function returns the coordinates of local peaks
61 (maxima) in a image. A maximum filter is used for finding local maxima.
62 This operation dilates the original image. After comparison between
63 dilated and original image, peak_local_max function returns the
64 coordinates of peaks where dilated image = original.
65
66 Examples
67 --------
68 >>> img1 = np.zeros((7, 7))
69 >>> img1[3, 4] = 1
70 >>> img1[3, 2] = 1.5
71 >>> img1
72 array([[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
73 [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
74 [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
75 [ 0. , 0. , 1.5, 0. , 1. , 0. , 0. ],
76 [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
77 [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],
78 [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ]])
79
80 >>> peak_local_max(img1, min_distance=1)
81 array([[3, 2],
82 [3, 4]])
83
84 >>> peak_local_max(img1, min_distance=2)
85 array([[3, 2]])
86
87 >>> img2 = np.zeros((20, 20, 20))
88 >>> img2[10, 10, 10] = 1
89 >>> peak_local_max(img2, exclude_border=False)
90 array([[10, 10, 10]])
91
92 """
93 out = np.zeros_like(image, dtype=np.bool)
94 # In the case of labels, recursively build and return an output
95 # operating on each label separately
96 if labels is not None:
97 label_values = np.unique(labels)
98 # Reorder label values to have consecutive integers (no gaps)
99 if np.any(np.diff(label_values) != 1):
100 mask = labels >= 1
101 labels[mask] = 1 + rank_order(labels[mask])[0].astype(labels.dtype)
102 labels = labels.astype(np.int32)
103
104 # New values for new ordering
105 label_values = np.unique(labels)
106 for label in label_values[label_values != 0]:
107 maskim = (labels == label)
108 out += peak_local_max(image * maskim, min_distance=min_distance,
109 threshold_abs=threshold_abs,
110 threshold_rel=threshold_rel,
111 exclude_border=exclude_border,
112 indices=False, num_peaks=np.inf,
113 footprint=footprint, labels=None)
114
115 if indices is True:
116 return np.transpose(out.nonzero())
117 else:
118 return out.astype(np.bool)
119
120 if np.all(image == image.flat[0]):
121 if indices is True:
122 return []
123 else:
124 return out
125
126 image = image.copy()
127 # Non maximum filter
128 if footprint is not None:
129 image_max = ndi.maximum_filter(image, footprint=footprint,
130 mode='constant')
131 else:
132 size = 2 * min_distance + 1
133 image_max = ndi.maximum_filter(image, size=size, mode='constant')
134 mask = (image == image_max)
135 image *= mask
136
137 if exclude_border:
138 # zero out the image borders
139 for i in range(image.ndim):
140 image = image.swapaxes(0, i)
141 image[:min_distance] = 0
142 image[-min_distance:] = 0
143 image = image.swapaxes(0, i)
144
145 # find top peak candidates above a threshold
146 peak_threshold = max(np.max(image.ravel()) * threshold_rel, threshold_abs)
147
148 # get coordinates of peaks
149 coordinates = np.argwhere(image > peak_threshold)
150
151 if coordinates.shape[0] > num_peaks:
152 intensities = image.flat[np.ravel_multi_index(coordinates.transpose(),image.shape)]
153 idx_maxsort = np.argsort(intensities)[::-1]
154 coordinates = coordinates[idx_maxsort][:num_peaks]
155
156 if indices is True:
157 return coordinates
158 else:
159 nd_indices = tuple(coordinates.T)
160 out[nd_indices] = True
161 return out
162
[end of skimage/feature/peak.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/skimage/feature/peak.py b/skimage/feature/peak.py
--- a/skimage/feature/peak.py
+++ b/skimage/feature/peak.py
@@ -119,7 +119,7 @@
if np.all(image == image.flat[0]):
if indices is True:
- return []
+ return np.empty((0, 2), np.int)
else:
return out
|
{"golden_diff": "diff --git a/skimage/feature/peak.py b/skimage/feature/peak.py\n--- a/skimage/feature/peak.py\n+++ b/skimage/feature/peak.py\n@@ -119,7 +119,7 @@\n \n if np.all(image == image.flat[0]):\n if indices is True:\n- return []\n+ return np.empty((0, 2), np.int)\n else:\n return out\n", "issue": "peak_local_max Incorrect output type \nThis [function](http://scikit-image.org/docs/dev/api/skimage.feature.html#peak-local-max) is returning a `list` instead of an `ndarray` if no peaks are detected.\n\nI traced the problem till this [line](https://github.com/scikit-image/scikit-image/blob/master/skimage/feature/peak.py#L122). However, I have to check if there is other case (beyond this line) that produces an incorrect output.\n\nI will work on it this weekend and submit a pull-request or a code snippet here\n\n", "before_files": [{"content": "import numpy as np\nimport scipy.ndimage as ndi\nfrom ..filters import rank_order\n\n\ndef peak_local_max(image, min_distance=10, threshold_abs=0, threshold_rel=0.1,\n exclude_border=True, indices=True, num_peaks=np.inf,\n footprint=None, labels=None):\n \"\"\"\n Find peaks in an image, and return them as coordinates or a boolean array.\n\n Peaks are the local maxima in a region of `2 * min_distance + 1`\n (i.e. peaks are separated by at least `min_distance`).\n\n NOTE: If peaks are flat (i.e. multiple adjacent pixels have identical\n intensities), the coordinates of all such pixels are returned.\n\n Parameters\n ----------\n image : ndarray of floats\n Input image.\n min_distance : int\n Minimum number of pixels separating peaks in a region of `2 *\n min_distance + 1` (i.e. peaks are separated by at least\n `min_distance`). If `exclude_border` is True, this value also excludes\n a border `min_distance` from the image boundary.\n To find the maximum number of peaks, use `min_distance=1`.\n threshold_abs : float\n Minimum intensity of peaks.\n threshold_rel : float\n Minimum intensity of peaks calculated as `max(image) * threshold_rel`.\n exclude_border : bool\n If True, `min_distance` excludes peaks from the border of the image as\n well as from each other.\n indices : bool\n If True, the output will be an array representing peak coordinates.\n If False, the output will be a boolean array shaped as `image.shape`\n with peaks present at True elements.\n num_peaks : int\n Maximum number of peaks. When the number of peaks exceeds `num_peaks`,\n return `num_peaks` peaks based on highest peak intensity.\n footprint : ndarray of bools, optional\n If provided, `footprint == 1` represents the local region within which\n to search for peaks at every point in `image`. Overrides\n `min_distance`, except for border exclusion if `exclude_border=True`.\n labels : ndarray of ints, optional\n If provided, each unique region `labels == value` represents a unique\n region to search for peaks. Zero is reserved for background.\n\n Returns\n -------\n output : ndarray or ndarray of bools\n\n * If `indices = True` : (row, column, ...) coordinates of peaks.\n * If `indices = False` : Boolean array shaped like `image`, with peaks\n represented by True values.\n\n Notes\n -----\n The peak local maximum function returns the coordinates of local peaks\n (maxima) in a image. A maximum filter is used for finding local maxima.\n This operation dilates the original image. 
After comparison between\n dilated and original image, peak_local_max function returns the\n coordinates of peaks where dilated image = original.\n\n Examples\n --------\n >>> img1 = np.zeros((7, 7))\n >>> img1[3, 4] = 1\n >>> img1[3, 2] = 1.5\n >>> img1\n array([[ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 0. , 1.5, 0. , 1. , 0. , 0. ],\n [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ],\n [ 0. , 0. , 0. , 0. , 0. , 0. , 0. ]])\n\n >>> peak_local_max(img1, min_distance=1)\n array([[3, 2],\n [3, 4]])\n\n >>> peak_local_max(img1, min_distance=2)\n array([[3, 2]])\n\n >>> img2 = np.zeros((20, 20, 20))\n >>> img2[10, 10, 10] = 1\n >>> peak_local_max(img2, exclude_border=False)\n array([[10, 10, 10]])\n\n \"\"\"\n out = np.zeros_like(image, dtype=np.bool)\n # In the case of labels, recursively build and return an output\n # operating on each label separately\n if labels is not None:\n label_values = np.unique(labels)\n # Reorder label values to have consecutive integers (no gaps)\n if np.any(np.diff(label_values) != 1):\n mask = labels >= 1\n labels[mask] = 1 + rank_order(labels[mask])[0].astype(labels.dtype)\n labels = labels.astype(np.int32)\n\n # New values for new ordering\n label_values = np.unique(labels)\n for label in label_values[label_values != 0]:\n maskim = (labels == label)\n out += peak_local_max(image * maskim, min_distance=min_distance,\n threshold_abs=threshold_abs,\n threshold_rel=threshold_rel,\n exclude_border=exclude_border,\n indices=False, num_peaks=np.inf,\n footprint=footprint, labels=None)\n\n if indices is True:\n return np.transpose(out.nonzero())\n else:\n return out.astype(np.bool)\n\n if np.all(image == image.flat[0]):\n if indices is True:\n return []\n else:\n return out\n\n image = image.copy()\n # Non maximum filter\n if footprint is not None:\n image_max = ndi.maximum_filter(image, footprint=footprint,\n mode='constant')\n else:\n size = 2 * min_distance + 1\n image_max = ndi.maximum_filter(image, size=size, mode='constant')\n mask = (image == image_max)\n image *= mask\n\n if exclude_border:\n # zero out the image borders\n for i in range(image.ndim):\n image = image.swapaxes(0, i)\n image[:min_distance] = 0\n image[-min_distance:] = 0\n image = image.swapaxes(0, i)\n\n # find top peak candidates above a threshold\n peak_threshold = max(np.max(image.ravel()) * threshold_rel, threshold_abs)\n\n # get coordinates of peaks\n coordinates = np.argwhere(image > peak_threshold)\n\n if coordinates.shape[0] > num_peaks:\n intensities = image.flat[np.ravel_multi_index(coordinates.transpose(),image.shape)]\n idx_maxsort = np.argsort(intensities)[::-1]\n coordinates = coordinates[idx_maxsort][:num_peaks]\n\n if indices is True:\n return coordinates\n else:\n nd_indices = tuple(coordinates.T)\n out[nd_indices] = True\n return out\n", "path": "skimage/feature/peak.py"}]}
| 2,653 | 101 |
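The behavioural change in this patch is easiest to see side by side; a small sketch assuming NumPy (the patch writes the dtype as `np.int`, which was an alias of the built-in `int`):

```python
import numpy as np

# Pre-patch: a flat image (no peaks) made peak_local_max return a plain list.
old_result = []

# Post-patch: the empty case keeps the usual (N, 2) coordinate-array shape.
new_result = np.empty((0, 2), int)

print(new_result.shape)   # (0, 2), consistent with the non-empty case
print(new_result[:, 0])   # empty column of row indices, still valid indexing
# The same slice on old_result raises TypeError: list indices must be
# integers or slices, not tuple -- which is why callers expecting an
# ndarray break on the list return value.
```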
gh_patches_debug_38309
|
rasdani/github-patches
|
git_diff
|
tornadoweb__tornado-2562
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update release notes and set version to 5.0b1
</issue>
<code>
[start of docs/conf.py]
1 # Ensure we get the local copy of tornado instead of what's on the standard path
2 import os
3 import sys
4 import time
5 sys.path.insert(0, os.path.abspath(".."))
6 import tornado
7
8 master_doc = "index"
9
10 project = "Tornado"
11 copyright = "2009-%s, The Tornado Authors" % time.strftime("%Y")
12
13 version = release = tornado.version
14
15 extensions = [
16 "sphinx.ext.autodoc",
17 "sphinx.ext.coverage",
18 "sphinx.ext.doctest",
19 "sphinx.ext.intersphinx",
20 "sphinx.ext.viewcode",
21 ]
22
23 primary_domain = 'py'
24 default_role = 'py:obj'
25
26 autodoc_member_order = "bysource"
27 autoclass_content = "both"
28 autodoc_inherit_docstrings = False
29
30 # Without this line sphinx includes a copy of object.__init__'s docstring
31 # on any class that doesn't define __init__.
32 # https://bitbucket.org/birkenfeld/sphinx/issue/1337/autoclass_content-both-uses-object__init__
33 autodoc_docstring_signature = False
34
35 coverage_skip_undoc_in_source = True
36 coverage_ignore_modules = [
37 "tornado.platform.asyncio",
38 "tornado.platform.caresresolver",
39 "tornado.platform.twisted",
40 ]
41 # I wish this could go in a per-module file...
42 coverage_ignore_classes = [
43 # tornado.gen
44 "Runner",
45
46 # tornado.web
47 "ChunkedTransferEncoding",
48 "GZipContentEncoding",
49 "OutputTransform",
50 "TemplateModule",
51 "url",
52
53 # tornado.websocket
54 "WebSocketProtocol",
55 "WebSocketProtocol13",
56 "WebSocketProtocol76",
57 ]
58
59 coverage_ignore_functions = [
60 # various modules
61 "doctests",
62 "main",
63
64 # tornado.escape
65 # parse_qs_bytes should probably be documented but it's complicated by
66 # having different implementations between py2 and py3.
67 "parse_qs_bytes",
68
69 # tornado.gen
70 "Multi",
71 ]
72
73 html_favicon = 'favicon.ico'
74
75 latex_documents = [
76 ('index', 'tornado.tex', 'Tornado Documentation', 'The Tornado Authors', 'manual', False),
77 ]
78
79 intersphinx_mapping = {
80 'python': ('https://docs.python.org/3.6/', None),
81 }
82
83 on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
84
85 # On RTD we can't import sphinx_rtd_theme, but it will be applied by
86 # default anyway. This block will use the same theme when building locally
87 # as on RTD.
88 if not on_rtd:
89 import sphinx_rtd_theme
90 html_theme = 'sphinx_rtd_theme'
91 html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
92
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -1,14 +1,14 @@
# Ensure we get the local copy of tornado instead of what's on the standard path
import os
import sys
-import time
+
sys.path.insert(0, os.path.abspath(".."))
import tornado
master_doc = "index"
project = "Tornado"
-copyright = "2009-%s, The Tornado Authors" % time.strftime("%Y")
+copyright = "The Tornado Authors"
version = release = tornado.version
@@ -20,8 +20,8 @@
"sphinx.ext.viewcode",
]
-primary_domain = 'py'
-default_role = 'py:obj'
+primary_domain = "py"
+default_role = "py:obj"
autodoc_member_order = "bysource"
autoclass_content = "both"
@@ -42,14 +42,12 @@
coverage_ignore_classes = [
# tornado.gen
"Runner",
-
# tornado.web
"ChunkedTransferEncoding",
"GZipContentEncoding",
"OutputTransform",
"TemplateModule",
"url",
-
# tornado.websocket
"WebSocketProtocol",
"WebSocketProtocol13",
@@ -60,32 +58,36 @@
# various modules
"doctests",
"main",
-
# tornado.escape
# parse_qs_bytes should probably be documented but it's complicated by
# having different implementations between py2 and py3.
"parse_qs_bytes",
-
# tornado.gen
"Multi",
]
-html_favicon = 'favicon.ico'
+html_favicon = "favicon.ico"
latex_documents = [
- ('index', 'tornado.tex', 'Tornado Documentation', 'The Tornado Authors', 'manual', False),
+ (
+ "index",
+ "tornado.tex",
+ "Tornado Documentation",
+ "The Tornado Authors",
+ "manual",
+ False,
+ )
]
-intersphinx_mapping = {
- 'python': ('https://docs.python.org/3.6/', None),
-}
+intersphinx_mapping = {"python": ("https://docs.python.org/3.6/", None)}
-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'
+on_rtd = os.environ.get("READTHEDOCS", None) == "True"
# On RTD we can't import sphinx_rtd_theme, but it will be applied by
# default anyway. This block will use the same theme when building locally
# as on RTD.
if not on_rtd:
import sphinx_rtd_theme
- html_theme = 'sphinx_rtd_theme'
+
+ html_theme = "sphinx_rtd_theme"
html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
|
{"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -1,14 +1,14 @@\n # Ensure we get the local copy of tornado instead of what's on the standard path\n import os\n import sys\n-import time\n+\n sys.path.insert(0, os.path.abspath(\"..\"))\n import tornado\n \n master_doc = \"index\"\n \n project = \"Tornado\"\n-copyright = \"2009-%s, The Tornado Authors\" % time.strftime(\"%Y\")\n+copyright = \"The Tornado Authors\"\n \n version = release = tornado.version\n \n@@ -20,8 +20,8 @@\n \"sphinx.ext.viewcode\",\n ]\n \n-primary_domain = 'py'\n-default_role = 'py:obj'\n+primary_domain = \"py\"\n+default_role = \"py:obj\"\n \n autodoc_member_order = \"bysource\"\n autoclass_content = \"both\"\n@@ -42,14 +42,12 @@\n coverage_ignore_classes = [\n # tornado.gen\n \"Runner\",\n-\n # tornado.web\n \"ChunkedTransferEncoding\",\n \"GZipContentEncoding\",\n \"OutputTransform\",\n \"TemplateModule\",\n \"url\",\n-\n # tornado.websocket\n \"WebSocketProtocol\",\n \"WebSocketProtocol13\",\n@@ -60,32 +58,36 @@\n # various modules\n \"doctests\",\n \"main\",\n-\n # tornado.escape\n # parse_qs_bytes should probably be documented but it's complicated by\n # having different implementations between py2 and py3.\n \"parse_qs_bytes\",\n-\n # tornado.gen\n \"Multi\",\n ]\n \n-html_favicon = 'favicon.ico'\n+html_favicon = \"favicon.ico\"\n \n latex_documents = [\n- ('index', 'tornado.tex', 'Tornado Documentation', 'The Tornado Authors', 'manual', False),\n+ (\n+ \"index\",\n+ \"tornado.tex\",\n+ \"Tornado Documentation\",\n+ \"The Tornado Authors\",\n+ \"manual\",\n+ False,\n+ )\n ]\n \n-intersphinx_mapping = {\n- 'python': ('https://docs.python.org/3.6/', None),\n-}\n+intersphinx_mapping = {\"python\": (\"https://docs.python.org/3.6/\", None)}\n \n-on_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n+on_rtd = os.environ.get(\"READTHEDOCS\", None) == \"True\"\n \n # On RTD we can't import sphinx_rtd_theme, but it will be applied by\n # default anyway. 
This block will use the same theme when building locally\n # as on RTD.\n if not on_rtd:\n import sphinx_rtd_theme\n- html_theme = 'sphinx_rtd_theme'\n+\n+ html_theme = \"sphinx_rtd_theme\"\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n", "issue": "Update release notes and set version to 5.0b1\n\n", "before_files": [{"content": "# Ensure we get the local copy of tornado instead of what's on the standard path\nimport os\nimport sys\nimport time\nsys.path.insert(0, os.path.abspath(\"..\"))\nimport tornado\n\nmaster_doc = \"index\"\n\nproject = \"Tornado\"\ncopyright = \"2009-%s, The Tornado Authors\" % time.strftime(\"%Y\")\n\nversion = release = tornado.version\n\nextensions = [\n \"sphinx.ext.autodoc\",\n \"sphinx.ext.coverage\",\n \"sphinx.ext.doctest\",\n \"sphinx.ext.intersphinx\",\n \"sphinx.ext.viewcode\",\n]\n\nprimary_domain = 'py'\ndefault_role = 'py:obj'\n\nautodoc_member_order = \"bysource\"\nautoclass_content = \"both\"\nautodoc_inherit_docstrings = False\n\n# Without this line sphinx includes a copy of object.__init__'s docstring\n# on any class that doesn't define __init__.\n# https://bitbucket.org/birkenfeld/sphinx/issue/1337/autoclass_content-both-uses-object__init__\nautodoc_docstring_signature = False\n\ncoverage_skip_undoc_in_source = True\ncoverage_ignore_modules = [\n \"tornado.platform.asyncio\",\n \"tornado.platform.caresresolver\",\n \"tornado.platform.twisted\",\n]\n# I wish this could go in a per-module file...\ncoverage_ignore_classes = [\n # tornado.gen\n \"Runner\",\n\n # tornado.web\n \"ChunkedTransferEncoding\",\n \"GZipContentEncoding\",\n \"OutputTransform\",\n \"TemplateModule\",\n \"url\",\n\n # tornado.websocket\n \"WebSocketProtocol\",\n \"WebSocketProtocol13\",\n \"WebSocketProtocol76\",\n]\n\ncoverage_ignore_functions = [\n # various modules\n \"doctests\",\n \"main\",\n\n # tornado.escape\n # parse_qs_bytes should probably be documented but it's complicated by\n # having different implementations between py2 and py3.\n \"parse_qs_bytes\",\n\n # tornado.gen\n \"Multi\",\n]\n\nhtml_favicon = 'favicon.ico'\n\nlatex_documents = [\n ('index', 'tornado.tex', 'Tornado Documentation', 'The Tornado Authors', 'manual', False),\n]\n\nintersphinx_mapping = {\n 'python': ('https://docs.python.org/3.6/', None),\n}\n\non_rtd = os.environ.get('READTHEDOCS', None) == 'True'\n\n# On RTD we can't import sphinx_rtd_theme, but it will be applied by\n# default anyway. This block will use the same theme when building locally\n# as on RTD.\nif not on_rtd:\n import sphinx_rtd_theme\n html_theme = 'sphinx_rtd_theme'\n html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]\n", "path": "docs/conf.py"}]}
| 1,337 | 641 |
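Almost all of this patch is mechanical reformatting; the functional part reduces to the copyright line, sketched here for quick comparison (the old value is copied from the diff):

```python
# Before: rebuilt the year on every docs build, requiring `import time`.
# copyright = "2009-%s, The Tornado Authors" % time.strftime("%Y")

# After: a static string, so the time import can be dropped.
copyright = "The Tornado Authors"
```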
gh_patches_debug_15664
|
rasdani/github-patches
|
git_diff
|
getredash__redash-909
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error on adding modules to python datasource
I'm trying to add a module to a python datasource, but it's failing with this traceback
```
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1475, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python2.7/dist-packages/flask/app.py", line 1461, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 477, in wrapper
resp = resource(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask_login.py", line 792, in decorated_view
return func(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask/views.py", line 84, in view
return self.dispatch_request(*args, **kwargs)
File "/opt/redash/redash.0.9.2.b1536/redash/handlers/base.py", line 19, in dispatch_request
return super(BaseResource, self).dispatch_request(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py", line 587, in dispatch_request
resp = meth(*args, **kwargs)
File "/opt/redash/redash.0.9.2.b1536/redash/permissions.py", line 40, in decorated
return fn(*args, **kwargs)
File "/opt/redash/redash.0.9.2.b1536/redash/handlers/data_sources.py", line 38, in post
data_source.options.update(req['options'])
File "/opt/redash/redash.0.9.2.b1536/redash/utils/configuration.py", line 56, in update
if k in self.schema['secret'] and v == SECRET_PLACEHOLDER:
KeyError: 'secret'
```
</issue>
<code>
[start of redash/utils/configuration.py]
1 import json
2 import jsonschema
3 from jsonschema import ValidationError
4
5 SECRET_PLACEHOLDER = '--------'
6
7
8 class ConfigurationContainer(object):
9 def __init__(self, config, schema=None):
10 self._config = config
11 self.set_schema(schema)
12
13 def set_schema(self, schema):
14 self._schema = schema
15
16 @property
17 def schema(self):
18 if self._schema is None:
19 raise RuntimeError("Schema missing.")
20
21 return self._schema
22
23 def is_valid(self):
24 try:
25 self.validate()
26 except (ValidationError, ValueError):
27 return False
28
29 return True
30
31 def validate(self):
32 jsonschema.validate(self._config, self._schema)
33
34 def to_json(self):
35 return json.dumps(self._config)
36
37 def iteritems(self):
38 return self._config.iteritems()
39
40 def to_dict(self, mask_secrets=False):
41 if (mask_secrets is False or 'secret' not in self.schema):
42 return self._config
43
44 config = self._config.copy()
45 for key in config:
46 if key in self.schema['secret']:
47 config[key] = SECRET_PLACEHOLDER
48
49 return config
50
51 def update(self, new_config):
52 jsonschema.validate(new_config, self.schema)
53
54 config = {}
55 for k, v in new_config.iteritems():
56 if k in self.schema['secret'] and v == SECRET_PLACEHOLDER:
57 config[k] = self[k]
58 else:
59 config[k] = v
60
61 self._config = config
62
63 def get(self, *args, **kwargs):
64 return self._config.get(*args, **kwargs)
65
66 def __getitem__(self, item):
67 if item in self._config:
68 return self._config[item]
69
70 raise KeyError(item)
71
72 def __contains__(self, item):
73 return item in self._config
74
75 @classmethod
76 def from_json(cls, config_in_json):
77 return cls(json.loads(config_in_json))
78
[end of redash/utils/configuration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/redash/utils/configuration.py b/redash/utils/configuration.py
--- a/redash/utils/configuration.py
+++ b/redash/utils/configuration.py
@@ -38,7 +38,7 @@
return self._config.iteritems()
def to_dict(self, mask_secrets=False):
- if (mask_secrets is False or 'secret' not in self.schema):
+ if mask_secrets is False or 'secret' not in self.schema:
return self._config
config = self._config.copy()
@@ -53,7 +53,7 @@
config = {}
for k, v in new_config.iteritems():
- if k in self.schema['secret'] and v == SECRET_PLACEHOLDER:
+ if k in self.schema.get('secret', []) and v == SECRET_PLACEHOLDER:
config[k] = self[k]
else:
config[k] = v
|
{"golden_diff": "diff --git a/redash/utils/configuration.py b/redash/utils/configuration.py\n--- a/redash/utils/configuration.py\n+++ b/redash/utils/configuration.py\n@@ -38,7 +38,7 @@\n return self._config.iteritems()\n \n def to_dict(self, mask_secrets=False):\n- if (mask_secrets is False or 'secret' not in self.schema):\n+ if mask_secrets is False or 'secret' not in self.schema:\n return self._config\n \n config = self._config.copy()\n@@ -53,7 +53,7 @@\n \n config = {}\n for k, v in new_config.iteritems():\n- if k in self.schema['secret'] and v == SECRET_PLACEHOLDER:\n+ if k in self.schema.get('secret', []) and v == SECRET_PLACEHOLDER:\n config[k] = self[k]\n else:\n config[k] = v\n", "issue": "Error on adding modules to python datasource\nI'm trying to add a module to a python datasource, but it's failing with this traceback\n\n```\nTraceback (most recent call last):\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1475, in full_dispatch_request\n rv = self.dispatch_request()\n File \"/usr/local/lib/python2.7/dist-packages/flask/app.py\", line 1461, in dispatch_request\n return self.view_functions[rule.endpoint](**req.view_args)\n File \"/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py\", line 477, in wrapper\n resp = resource(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/flask_login.py\", line 792, in decorated_view\n return func(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/flask/views.py\", line 84, in view\n return self.dispatch_request(*args, **kwargs)\n File \"/opt/redash/redash.0.9.2.b1536/redash/handlers/base.py\", line 19, in dispatch_request\n return super(BaseResource, self).dispatch_request(*args, **kwargs)\n File \"/usr/local/lib/python2.7/dist-packages/flask_restful/__init__.py\", line 587, in dispatch_request\n resp = meth(*args, **kwargs)\n File \"/opt/redash/redash.0.9.2.b1536/redash/permissions.py\", line 40, in decorated\n return fn(*args, **kwargs)\n File \"/opt/redash/redash.0.9.2.b1536/redash/handlers/data_sources.py\", line 38, in post\n data_source.options.update(req['options'])\n File \"/opt/redash/redash.0.9.2.b1536/redash/utils/configuration.py\", line 56, in update\n if k in self.schema['secret'] and v == SECRET_PLACEHOLDER:\nKeyError: 'secret'\n```\n\n", "before_files": [{"content": "import json\nimport jsonschema\nfrom jsonschema import ValidationError\n\nSECRET_PLACEHOLDER = '--------'\n\n\nclass ConfigurationContainer(object):\n def __init__(self, config, schema=None):\n self._config = config\n self.set_schema(schema)\n\n def set_schema(self, schema):\n self._schema = schema\n\n @property\n def schema(self):\n if self._schema is None:\n raise RuntimeError(\"Schema missing.\")\n\n return self._schema\n\n def is_valid(self):\n try:\n self.validate()\n except (ValidationError, ValueError):\n return False\n\n return True\n\n def validate(self):\n jsonschema.validate(self._config, self._schema)\n\n def to_json(self):\n return json.dumps(self._config)\n\n def iteritems(self):\n return self._config.iteritems()\n\n def to_dict(self, mask_secrets=False):\n if (mask_secrets is False or 'secret' not in self.schema):\n return self._config\n\n config = self._config.copy()\n for key in config:\n if key in self.schema['secret']:\n config[key] = SECRET_PLACEHOLDER\n\n return config\n\n def update(self, new_config):\n jsonschema.validate(new_config, self.schema)\n\n config = {}\n for k, v in new_config.iteritems():\n if k in self.schema['secret'] and v == SECRET_PLACEHOLDER:\n config[k] = self[k]\n 
else:\n config[k] = v\n\n self._config = config\n\n def get(self, *args, **kwargs):\n return self._config.get(*args, **kwargs)\n\n def __getitem__(self, item):\n if item in self._config:\n return self._config[item]\n\n raise KeyError(item)\n\n def __contains__(self, item):\n return item in self._config\n\n @classmethod\n def from_json(cls, config_in_json):\n return cls(json.loads(config_in_json))\n", "path": "redash/utils/configuration.py"}]}
| 1,571 | 193 |
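The traceback bottoms out in `update()`, where `self.schema['secret']` assumes every schema declares a `secret` list. A compact restatement of the patched logic outside the class (the `merge_config` name and plain dict arguments are illustrative; the real code lives in `ConfigurationContainer.update` and uses `iteritems()` on Python 2):

```python
SECRET_PLACEHOLDER = '--------'


def merge_config(new_config, old_config, schema):
    config = {}
    for k, v in new_config.items():
        # schema.get('secret', []) is the fix: schemas that declare no secret
        # fields (such as the Python data source from the issue) no longer
        # raise KeyError.
        if k in schema.get('secret', []) and v == SECRET_PLACEHOLDER:
            config[k] = old_config[k]  # keep the previously stored secret
        else:
            config[k] = v
    return config
```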
gh_patches_debug_4079
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-9112
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Edition Editing: language autocomplete is slow
### Problem
According to [this](https://internetarchive.slack.com/archives/C0119PRDV46/p1713446825373169?thread_ts=1713436300.354359&cid=C0119PRDV46) thread, it is very slow:
<img width="455" alt="Screenshot 2024-04-22 at 5 32 04 AM" src="https://github.com/internetarchive/openlibrary/assets/978325/54575542-e9a8-4452-a12a-5ed262897196">
#### Evidence / Screenshot
#### Relevant URL(s)
https://openlibrary.org/books/OL24938286M/Pacific_Vortex!/edit
### Reproducing the bug
1. Go to a work edit page
2. Try to add a language
* Expected behavior: Fast
* Actual behavior: Slow
### Context
Other keywords: dropdown, pulldown
### Notes from this Issue's Lead
#### Proposal & constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
#### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
#### Stakeholders
<!-- @ tag stakeholders of this bug -->
</issue>
<code>
[start of openlibrary/plugins/worksearch/autocomplete.py]
1 import itertools
2 import web
3 import json
4
5
6 from infogami.utils import delegate
7 from infogami.utils.view import safeint
8 from openlibrary.core.models import Thing
9 from openlibrary.plugins.upstream import utils
10 from openlibrary.plugins.worksearch.search import get_solr
11 from openlibrary.utils import (
12 find_olid_in_string,
13 olid_to_key,
14 )
15
16
17 def to_json(d):
18 web.header('Content-Type', 'application/json')
19 return delegate.RawText(json.dumps(d))
20
21
22 class autocomplete(delegate.page):
23 path = "/_autocomplete"
24 fq = ['-type:edition']
25 fl = 'key,type,name,title,score'
26 olid_suffix: str | None = None
27 sort: str | None = None
28 query = 'title:"{q}"^2 OR title:({q}*) OR name:"{q}"^2 OR name:({q}*)'
29
30 def db_fetch(self, key: str) -> Thing | None:
31 if thing := web.ctx.site.get(key):
32 return thing.as_fake_solr_record()
33 else:
34 return None
35
36 def doc_wrap(self, doc: dict):
37 """Modify the returned solr document in place."""
38 if 'name' not in doc:
39 doc['name'] = doc.get('title')
40
41 def doc_filter(self, doc: dict) -> bool:
42 """Exclude certain documents"""
43 return True
44
45 def GET(self):
46 return self.direct_get()
47
48 def direct_get(self, fq: list[str] | None = None):
49 i = web.input(q="", limit=5)
50 i.limit = safeint(i.limit, 5)
51
52 solr = get_solr()
53
54 # look for ID in query string here
55 q = solr.escape(i.q).strip()
56 embedded_olid = None
57 if self.olid_suffix:
58 embedded_olid = find_olid_in_string(q, self.olid_suffix)
59
60 if embedded_olid:
61 solr_q = f'key:"{olid_to_key(embedded_olid)}"'
62 else:
63 solr_q = self.query.format(q=q)
64
65 fq = fq or self.fq
66 params = {
67 'q_op': 'AND',
68 'rows': i.limit,
69 **({'fq': fq} if fq else {}),
70 # limit the fields returned for better performance
71 'fl': self.fl,
72 **({'sort': self.sort} if self.sort else {}),
73 }
74
75 data = solr.select(solr_q, **params)
76 docs = data['docs']
77
78 if embedded_olid and not docs:
79 # Grumble! Work not in solr yet. Create a dummy.
80 fake_doc = self.db_fetch(olid_to_key(embedded_olid))
81 if fake_doc:
82 docs = [fake_doc]
83
84 result_docs = []
85 for d in docs:
86 if self.doc_filter(d):
87 self.doc_wrap(d)
88 result_docs.append(d)
89
90 return to_json(result_docs)
91
92
93 class languages_autocomplete(delegate.page):
94 path = "/languages/_autocomplete"
95
96 def GET(self):
97 i = web.input(q="", limit=5)
98 i.limit = safeint(i.limit, 5)
99 return to_json(
100 list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))
101 )
102
103
104 class works_autocomplete(autocomplete):
105 path = "/works/_autocomplete"
106 fq = ['type:work']
107 fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'
108 olid_suffix = 'W'
109 query = 'title:"{q}"^2 OR title:({q}*)'
110
111 def doc_filter(self, doc: dict) -> bool:
112 # Exclude orphaned editions from autocomplete results
113 # Note: Do this here instead of with an `fq=key:*W` for performance
114 # reasons.
115 return doc['key'][-1] == 'W'
116
117 def doc_wrap(self, doc: dict):
118 doc['full_title'] = doc['title']
119 if 'subtitle' in doc:
120 doc['full_title'] += ": " + doc['subtitle']
121 doc['name'] = doc.get('title')
122
123
124 class authors_autocomplete(autocomplete):
125 path = "/authors/_autocomplete"
126 fq = ['type:author']
127 fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'
128 olid_suffix = 'A'
129 query = 'name:({q}*) OR alternate_names:({q}*) OR name:"{q}"^2 OR alternate_names:"{q}"^2'
130
131 def doc_wrap(self, doc: dict):
132 if 'top_work' in doc:
133 doc['works'] = [doc.pop('top_work')]
134 else:
135 doc['works'] = []
136 doc['subjects'] = doc.pop('top_subjects', [])
137
138
139 class subjects_autocomplete(autocomplete):
140 # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+
141 path = "/subjects_autocomplete"
142 fq = ['type:subject']
143 fl = 'key,name,work_count'
144 query = 'name:({q}*)'
145 sort = 'work_count desc'
146
147 def GET(self):
148 i = web.input(type="")
149 fq = self.fq
150 if i.type:
151 fq = fq + [f'subject_type:{i.type}']
152
153 return super().direct_get(fq=fq)
154
155
156 def setup():
157 """Do required setup."""
158 pass
159
[end of openlibrary/plugins/worksearch/autocomplete.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py
--- a/openlibrary/plugins/worksearch/autocomplete.py
+++ b/openlibrary/plugins/worksearch/autocomplete.py
@@ -96,6 +96,7 @@
     def GET(self):
         i = web.input(q="", limit=5)
         i.limit = safeint(i.limit, 5)
+        web.header("Cache-Control", "max-age=%d" % (24 * 3600))
         return to_json(
             list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))
         )
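
The one-line fix works by letting browsers and intermediate caches reuse the language-autocomplete response for a day, so the backend is no longer hit on every keystroke. Below is a minimal, self-contained sketch of the same idea in web.py (the framework Open Library uses); the sample language list, the stand-in `autocomplete_languages` helper, and the handler name are illustrative, not part of the repository:

```python
import itertools
import json

import web  # web.py, the framework Open Library is built on

CACHE_SECONDS = 24 * 3600  # assumption: a day of staleness is acceptable for language names

LANGUAGES = ["English", "Esperanto", "Estonian", "French", "German"]  # illustrative sample


def autocomplete_languages(prefix):
    # Simplified stand-in for openlibrary.plugins.upstream.utils.autocomplete_languages
    return (name for name in LANGUAGES if name.lower().startswith(prefix.lower()))


class languages_autocomplete_sketch:
    def GET(self):
        i = web.input(q="", limit=5)
        try:
            limit = int(i.limit)
        except ValueError:
            limit = 5
        # The actual fix: language names change rarely, so let clients and
        # intermediate caches reuse the response instead of re-querying the
        # backend on every keystroke.
        web.header("Cache-Control", "max-age=%d" % CACHE_SECONDS)
        web.header("Content-Type", "application/json")
        return json.dumps(list(itertools.islice(autocomplete_languages(i.q), limit)))
```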
|
{"golden_diff": "diff --git a/openlibrary/plugins/worksearch/autocomplete.py b/openlibrary/plugins/worksearch/autocomplete.py\n--- a/openlibrary/plugins/worksearch/autocomplete.py\n+++ b/openlibrary/plugins/worksearch/autocomplete.py\n@@ -96,6 +96,7 @@\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n+ web.header(\"Cache-Control\", \"max-age=%d\" % (24 * 3600))\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n", "issue": "Edition Editing: language autocomplete is slow\n### Problem\r\n\r\nAccording to [this](https://internetarchive.slack.com/archives/C0119PRDV46/p1713446825373169?thread_ts=1713436300.354359&cid=C0119PRDV46) thread, it is very slow:\r\n\r\n<img width=\"455\" alt=\"Screenshot 2024-04-22 at 5 32 04 AM\" src=\"https://github.com/internetarchive/openlibrary/assets/978325/54575542-e9a8-4452-a12a-5ed262897196\">\r\n\r\n#### Evidence / Screenshot\r\n\r\n#### Relevant URL(s)\r\nhttps://openlibrary.org/books/OL24938286M/Pacific_Vortex!/edit\r\n\r\n### Reproducing the bug\r\n\r\n1. Go to a work edit page\r\n2. Try to add a language\r\n\r\n* Expected behavior: Fast\r\n* Actual behavior: Slow\r\n\r\n\r\n### Context\r\n\r\nOther keywords: dropdown, pulldown\r\n\r\n### Notes from this Issue's Lead\r\n\r\n#### Proposal & constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n#### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->\r\n\r\n#### Stakeholders\r\n<!-- @ tag stakeholders of this bug -->\r\n\n", "before_files": [{"content": "import itertools\nimport web\nimport json\n\n\nfrom infogami.utils import delegate\nfrom infogami.utils.view import safeint\nfrom openlibrary.core.models import Thing\nfrom openlibrary.plugins.upstream import utils\nfrom openlibrary.plugins.worksearch.search import get_solr\nfrom openlibrary.utils import (\n find_olid_in_string,\n olid_to_key,\n)\n\n\ndef to_json(d):\n web.header('Content-Type', 'application/json')\n return delegate.RawText(json.dumps(d))\n\n\nclass autocomplete(delegate.page):\n path = \"/_autocomplete\"\n fq = ['-type:edition']\n fl = 'key,type,name,title,score'\n olid_suffix: str | None = None\n sort: str | None = None\n query = 'title:\"{q}\"^2 OR title:({q}*) OR name:\"{q}\"^2 OR name:({q}*)'\n\n def db_fetch(self, key: str) -> Thing | None:\n if thing := web.ctx.site.get(key):\n return thing.as_fake_solr_record()\n else:\n return None\n\n def doc_wrap(self, doc: dict):\n \"\"\"Modify the returned solr document in place.\"\"\"\n if 'name' not in doc:\n doc['name'] = doc.get('title')\n\n def doc_filter(self, doc: dict) -> bool:\n \"\"\"Exclude certain documents\"\"\"\n return True\n\n def GET(self):\n return self.direct_get()\n\n def direct_get(self, fq: list[str] | None = None):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n\n solr = get_solr()\n\n # look for ID in query string here\n q = solr.escape(i.q).strip()\n embedded_olid = None\n if self.olid_suffix:\n embedded_olid = find_olid_in_string(q, self.olid_suffix)\n\n if embedded_olid:\n solr_q = f'key:\"{olid_to_key(embedded_olid)}\"'\n else:\n solr_q = self.query.format(q=q)\n\n fq = fq or self.fq\n params = {\n 'q_op': 'AND',\n 'rows': i.limit,\n **({'fq': fq} if fq else {}),\n # limit the fields returned for better performance\n 'fl': self.fl,\n **({'sort': self.sort} if 
self.sort else {}),\n }\n\n data = solr.select(solr_q, **params)\n docs = data['docs']\n\n if embedded_olid and not docs:\n # Grumble! Work not in solr yet. Create a dummy.\n fake_doc = self.db_fetch(olid_to_key(embedded_olid))\n if fake_doc:\n docs = [fake_doc]\n\n result_docs = []\n for d in docs:\n if self.doc_filter(d):\n self.doc_wrap(d)\n result_docs.append(d)\n\n return to_json(result_docs)\n\n\nclass languages_autocomplete(delegate.page):\n path = \"/languages/_autocomplete\"\n\n def GET(self):\n i = web.input(q=\"\", limit=5)\n i.limit = safeint(i.limit, 5)\n return to_json(\n list(itertools.islice(utils.autocomplete_languages(i.q), i.limit))\n )\n\n\nclass works_autocomplete(autocomplete):\n path = \"/works/_autocomplete\"\n fq = ['type:work']\n fl = 'key,title,subtitle,cover_i,first_publish_year,author_name,edition_count'\n olid_suffix = 'W'\n query = 'title:\"{q}\"^2 OR title:({q}*)'\n\n def doc_filter(self, doc: dict) -> bool:\n # Exclude orphaned editions from autocomplete results\n # Note: Do this here instead of with an `fq=key:*W` for performance\n # reasons.\n return doc['key'][-1] == 'W'\n\n def doc_wrap(self, doc: dict):\n doc['full_title'] = doc['title']\n if 'subtitle' in doc:\n doc['full_title'] += \": \" + doc['subtitle']\n doc['name'] = doc.get('title')\n\n\nclass authors_autocomplete(autocomplete):\n path = \"/authors/_autocomplete\"\n fq = ['type:author']\n fl = 'key,name,alternate_names,birth_date,death_date,work_count,top_work,top_subjects'\n olid_suffix = 'A'\n query = 'name:({q}*) OR alternate_names:({q}*) OR name:\"{q}\"^2 OR alternate_names:\"{q}\"^2'\n\n def doc_wrap(self, doc: dict):\n if 'top_work' in doc:\n doc['works'] = [doc.pop('top_work')]\n else:\n doc['works'] = []\n doc['subjects'] = doc.pop('top_subjects', [])\n\n\nclass subjects_autocomplete(autocomplete):\n # can't use /subjects/_autocomplete because the subjects endpoint = /subjects/[^/]+\n path = \"/subjects_autocomplete\"\n fq = ['type:subject']\n fl = 'key,name,work_count'\n query = 'name:({q}*)'\n sort = 'work_count desc'\n\n def GET(self):\n i = web.input(type=\"\")\n fq = self.fq\n if i.type:\n fq = fq + [f'subject_type:{i.type}']\n\n return super().direct_get(fq=fq)\n\n\ndef setup():\n \"\"\"Do required setup.\"\"\"\n pass\n", "path": "openlibrary/plugins/worksearch/autocomplete.py"}]}
| 2,449 | 132 |
gh_patches_debug_28248
|
rasdani/github-patches
|
git_diff
|
open-mmlab__mmpretrain-548
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Focal loss for single label classification?
### Checklist
- I have searched related issues but cannot get the expected help.
- I have read related documents and don't know what to do.
### Describe the question you meet
I'm trying to train a resnet18 with a LinearClsHead using the focal loss for a single-label binary classification problem, but the current focal loss implementation raises an assertion error because the predictions have shape [batch_size, 2] while the target has shape [batch_size]. According to the documentation, the implemented focal loss only works for multilabel tasks. Is there any way to use it for single-label tasks as well?
### Post related information
1. The output of `pip list | grep "mmcv\|mmcls\|^torch"`
mmcls 0.17.0
mmcv-full 1.3.9
torch 1.9.0
torch-model-archiver 0.4.1
torchvision 0.10.0
2. This is the model config I'm using:
```python
# type: ignore
model = dict(
type="ImageClassifier",
backbone=dict(
type="ResNet", depth=18, num_stages=4, out_indices=(3,), style="pytorch"
),
neck=dict(type="GlobalAveragePooling"),
head=dict(
type="LinearClsHead",
num_classes=2,
in_channels=512,
loss=dict(type="FocalLoss", loss_weight=1.0),
topk=(1),
),
)
load_from = "/media/VA/pretrained_weights/mmcls/resnet18_batch256_20200708-34ab8f90.pth"
```
3. I am getting the following error during training:
```
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/apis/train.py", line 164, in train_model
runner.run(data_loaders, cfg.workflow)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
epoch_runner(data_loaders[i], **kwargs)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
self.run_iter(data_batch, train_mode=True, **kwargs)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py", line 29, in run_iter
outputs = self.model.train_step(data_batch, self.optimizer,
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 67, in train_step
return self.module.train_step(*inputs[0], **kwargs[0])
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/classifiers/base.py", line 146, in train_step
losses = self(**data)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
return old_func(*args, **kwargs)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/classifiers/base.py", line 90, in forward
return self.forward_train(img, **kwargs)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/classifiers/image.py", line 110, in forward_train
loss = self.head.forward_train(x, gt_label)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/heads/linear_head.py", line 53, in forward_train
losses = self.loss(cls_score, gt_label)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/heads/cls_head.py", line 46, in loss
loss = self.compute_loss(cls_score, gt_label, avg_factor=num_samples)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/losses/focal_loss.py", line 106, in forward
loss_cls = self.loss_weight * sigmoid_focal_loss(
File "/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/losses/focal_loss.py", line 36, in sigmoid_focal_loss
assert pred.shape == \
AssertionError: pred and target should be in the same shape.
```
Thank you!
</issue>
<code>
[start of mmcls/models/losses/focal_loss.py]
1 # Copyright (c) OpenMMLab. All rights reserved.
2 import torch.nn as nn
3 import torch.nn.functional as F
4
5 from ..builder import LOSSES
6 from .utils import weight_reduce_loss
7
8
9 def sigmoid_focal_loss(pred,
10 target,
11 weight=None,
12 gamma=2.0,
13 alpha=0.25,
14 reduction='mean',
15 avg_factor=None):
16 r"""Sigmoid focal loss.
17
18 Args:
19 pred (torch.Tensor): The prediction with shape (N, \*).
20 target (torch.Tensor): The ground truth label of the prediction with
21 shape (N, \*).
22 weight (torch.Tensor, optional): Sample-wise loss weight with shape
23 (N, ). Defaults to None.
24 gamma (float): The gamma for calculating the modulating factor.
25 Defaults to 2.0.
26 alpha (float): A balanced form for Focal Loss. Defaults to 0.25.
27 reduction (str): The method used to reduce the loss.
28 Options are "none", "mean" and "sum". If reduction is 'none' ,
29 loss is same shape as pred and label. Defaults to 'mean'.
30 avg_factor (int, optional): Average factor that is used to average
31 the loss. Defaults to None.
32
33 Returns:
34 torch.Tensor: Loss.
35 """
36 assert pred.shape == \
37 target.shape, 'pred and target should be in the same shape.'
38 pred_sigmoid = pred.sigmoid()
39 target = target.type_as(pred)
40 pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)
41 focal_weight = (alpha * target + (1 - alpha) *
42 (1 - target)) * pt.pow(gamma)
43 loss = F.binary_cross_entropy_with_logits(
44 pred, target, reduction='none') * focal_weight
45 if weight is not None:
46 assert weight.dim() == 1
47 weight = weight.float()
48 if pred.dim() > 1:
49 weight = weight.reshape(-1, 1)
50 loss = weight_reduce_loss(loss, weight, reduction, avg_factor)
51 return loss
52
53
54 @LOSSES.register_module()
55 class FocalLoss(nn.Module):
56 """Focal loss.
57
58 Args:
59 gamma (float): Focusing parameter in focal loss.
60 Defaults to 2.0.
61 alpha (float): The parameter in balanced form of focal
62 loss. Defaults to 0.25.
63 reduction (str): The method used to reduce the loss into
64 a scalar. Options are "none" and "mean". Defaults to 'mean'.
65 loss_weight (float): Weight of loss. Defaults to 1.0.
66 """
67
68 def __init__(self,
69 gamma=2.0,
70 alpha=0.25,
71 reduction='mean',
72 loss_weight=1.0):
73
74 super(FocalLoss, self).__init__()
75 self.gamma = gamma
76 self.alpha = alpha
77 self.reduction = reduction
78 self.loss_weight = loss_weight
79
80 def forward(self,
81 pred,
82 target,
83 weight=None,
84 avg_factor=None,
85 reduction_override=None):
86 r"""Sigmoid focal loss.
87
88 Args:
89 pred (torch.Tensor): The prediction with shape (N, \*).
90 target (torch.Tensor): The ground truth label of the prediction
91 with shape (N, \*).
92 weight (torch.Tensor, optional): Sample-wise loss weight with shape
93 (N, \*). Defaults to None.
94 avg_factor (int, optional): Average factor that is used to average
95 the loss. Defaults to None.
96 reduction_override (str, optional): The method used to reduce the
97 loss into a scalar. Options are "none", "mean" and "sum".
98 Defaults to None.
99
100 Returns:
101 torch.Tensor: Loss.
102 """
103 assert reduction_override in (None, 'none', 'mean', 'sum')
104 reduction = (
105 reduction_override if reduction_override else self.reduction)
106 loss_cls = self.loss_weight * sigmoid_focal_loss(
107 pred,
108 target,
109 weight,
110 gamma=self.gamma,
111 alpha=self.alpha,
112 reduction=reduction,
113 avg_factor=avg_factor)
114 return loss_cls
115
[end of mmcls/models/losses/focal_loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mmcls/models/losses/focal_loss.py b/mmcls/models/losses/focal_loss.py
--- a/mmcls/models/losses/focal_loss.py
+++ b/mmcls/models/losses/focal_loss.py
@@ -3,7 +3,7 @@
 import torch.nn.functional as F
 
 from ..builder import LOSSES
-from .utils import weight_reduce_loss
+from .utils import convert_to_one_hot, weight_reduce_loss
 
 
 def sigmoid_focal_loss(pred,
@@ -88,7 +88,7 @@
         Args:
             pred (torch.Tensor): The prediction with shape (N, \*).
             target (torch.Tensor): The ground truth label of the prediction
-                with shape (N, \*).
+                with shape (N, \*), N or (N,1).
             weight (torch.Tensor, optional): Sample-wise loss weight with shape
                 (N, \*). Defaults to None.
             avg_factor (int, optional): Average factor that is used to average
@@ -103,6 +103,8 @@
         assert reduction_override in (None, 'none', 'mean', 'sum')
         reduction = (
             reduction_override if reduction_override else self.reduction)
+        if target.dim() == 1 or (target.dim() == 2 and target.shape[1] == 1):
+            target = convert_to_one_hot(target.view(-1, 1), pred.shape[-1])
         loss_cls = self.loss_weight * sigmoid_focal_loss(
             pred,
             target,
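
The two added lines convert hard integer labels to a one-hot matrix before the element-wise sigmoid focal loss, so `pred` and `target` end up with the same `(N, num_classes)` shape and the reported assertion no longer fires. A small PyTorch sketch of that conversion; `convert_to_one_hot` below is a simplified stand-in for the mmcls utility of the same name, and the tensors are illustrative:

```python
import torch


def convert_to_one_hot(targets: torch.Tensor, num_classes: int) -> torch.Tensor:
    # Stand-in for mmcls.models.losses.utils.convert_to_one_hot:
    # (N, 1) integer labels -> (N, num_classes) one-hot matrix.
    one_hot = torch.zeros((targets.shape[0], num_classes), dtype=torch.long)
    return one_hot.scatter_(1, targets.long(), 1)


pred = torch.randn(4, 2)              # [batch_size, num_classes] logits
target = torch.tensor([0, 1, 1, 0])   # [batch_size] hard labels

if target.dim() == 1 or (target.dim() == 2 and target.shape[1] == 1):
    target = convert_to_one_hot(target.view(-1, 1), pred.shape[-1])

assert pred.shape == target.shape  # the assertion from sigmoid_focal_loss now passes
```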
|
{"golden_diff": "diff --git a/mmcls/models/losses/focal_loss.py b/mmcls/models/losses/focal_loss.py\n--- a/mmcls/models/losses/focal_loss.py\n+++ b/mmcls/models/losses/focal_loss.py\n@@ -3,7 +3,7 @@\n import torch.nn.functional as F\n \n from ..builder import LOSSES\n-from .utils import weight_reduce_loss\n+from .utils import convert_to_one_hot, weight_reduce_loss\n \n \n def sigmoid_focal_loss(pred,\n@@ -88,7 +88,7 @@\n Args:\n pred (torch.Tensor): The prediction with shape (N, \\*).\n target (torch.Tensor): The ground truth label of the prediction\n- with shape (N, \\*).\n+ with shape (N, \\*), N or (N,1).\n weight (torch.Tensor, optional): Sample-wise loss weight with shape\n (N, \\*). Defaults to None.\n avg_factor (int, optional): Average factor that is used to average\n@@ -103,6 +103,8 @@\n assert reduction_override in (None, 'none', 'mean', 'sum')\n reduction = (\n reduction_override if reduction_override else self.reduction)\n+ if target.dim() == 1 or (target.dim() == 2 and target.shape[1] == 1):\n+ target = convert_to_one_hot(target.view(-1, 1), pred.shape[-1])\n loss_cls = self.loss_weight * sigmoid_focal_loss(\n pred,\n target,\n", "issue": "Focal loss for single label classification?\n### Checklist\r\n- I have searched related issues but cannot get the expected help.\r\n- I have read related documents and don't know what to do.\r\n\r\n### Describe the question you meet\r\n\r\nI'm trying to train a resnet18 with a LinearClsHead using the focal loss for a single label binary classification problem, but the current focal loss implementation raises an assertion error as the predictions have shape [batch_size, 2] and target has shape [batch_size]. According to the documentation the implemented focal loss only works for multilabel tasks. Is there any way of using it also for single label tasks? \r\n\r\n### Post related information\r\n1. The output of `pip list | grep \"mmcv\\|mmcls\\|^torch\"`\r\nmmcls 0.17.0\r\nmmcv-full 1.3.9\r\ntorch 1.9.0\r\ntorch-model-archiver 0.4.1\r\ntorchvision 0.10.0\r\n\r\n2. This is the model config I'm using:\r\n```python\r\n# type: ignore\r\nmodel = dict(\r\n type=\"ImageClassifier\",\r\n backbone=dict(\r\n type=\"ResNet\", depth=18, num_stages=4, out_indices=(3,), style=\"pytorch\"\r\n ),\r\n neck=dict(type=\"GlobalAveragePooling\"),\r\n head=dict(\r\n type=\"LinearClsHead\",\r\n num_classes=2,\r\n in_channels=512,\r\n loss=dict(type=\"FocalLoss\", loss_weight=1.0),\r\n topk=(1),\r\n ),\r\n)\r\n\r\nload_from = \"/media/VA/pretrained_weights/mmcls/resnet18_batch256_20200708-34ab8f90.pth\"\r\n```\r\n3. 
I am getting the following error during training:\r\n```\r\nFile \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/apis/train.py\", line 164, in train_model\r\n runner.run(data_loaders, cfg.workflow)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py\", line 127, in run\r\n epoch_runner(data_loaders[i], **kwargs)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py\", line 50, in train\r\n self.run_iter(data_batch, train_mode=True, **kwargs)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/epoch_based_runner.py\", line 29, in run_iter\r\n outputs = self.model.train_step(data_batch, self.optimizer,\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py\", line 67, in train_step\r\n return self.module.train_step(*inputs[0], **kwargs[0])\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/classifiers/base.py\", line 146, in train_step\r\n losses = self(**data)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py\", line 98, in new_func\r\n return old_func(*args, **kwargs)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/classifiers/base.py\", line 90, in forward\r\n return self.forward_train(img, **kwargs)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/classifiers/image.py\", line 110, in forward_train\r\n loss = self.head.forward_train(x, gt_label)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/heads/linear_head.py\", line 53, in forward_train\r\n losses = self.loss(cls_score, gt_label)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/heads/cls_head.py\", line 46, in loss\r\n loss = self.compute_loss(cls_score, gt_label, avg_factor=num_samples)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 1051, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/losses/focal_loss.py\", line 106, in forward\r\n loss_cls = self.loss_weight * sigmoid_focal_loss(\r\n File \"/media/data/miniconda3/envs/ai-project-screen_classification/lib/python3.8/site-packages/mmcls/models/losses/focal_loss.py\", line 36, in sigmoid_focal_loss\r\n assert pred.shape == \\\r\nAssertionError: pred and target should be in the same shape.\r\n```\r\nThank you!\n", "before_files": [{"content": "# Copyright (c) OpenMMLab. 
All rights reserved.\nimport torch.nn as nn\nimport torch.nn.functional as F\n\nfrom ..builder import LOSSES\nfrom .utils import weight_reduce_loss\n\n\ndef sigmoid_focal_loss(pred,\n target,\n weight=None,\n gamma=2.0,\n alpha=0.25,\n reduction='mean',\n avg_factor=None):\n r\"\"\"Sigmoid focal loss.\n\n Args:\n pred (torch.Tensor): The prediction with shape (N, \\*).\n target (torch.Tensor): The ground truth label of the prediction with\n shape (N, \\*).\n weight (torch.Tensor, optional): Sample-wise loss weight with shape\n (N, ). Defaults to None.\n gamma (float): The gamma for calculating the modulating factor.\n Defaults to 2.0.\n alpha (float): A balanced form for Focal Loss. Defaults to 0.25.\n reduction (str): The method used to reduce the loss.\n Options are \"none\", \"mean\" and \"sum\". If reduction is 'none' ,\n loss is same shape as pred and label. Defaults to 'mean'.\n avg_factor (int, optional): Average factor that is used to average\n the loss. Defaults to None.\n\n Returns:\n torch.Tensor: Loss.\n \"\"\"\n assert pred.shape == \\\n target.shape, 'pred and target should be in the same shape.'\n pred_sigmoid = pred.sigmoid()\n target = target.type_as(pred)\n pt = (1 - pred_sigmoid) * target + pred_sigmoid * (1 - target)\n focal_weight = (alpha * target + (1 - alpha) *\n (1 - target)) * pt.pow(gamma)\n loss = F.binary_cross_entropy_with_logits(\n pred, target, reduction='none') * focal_weight\n if weight is not None:\n assert weight.dim() == 1\n weight = weight.float()\n if pred.dim() > 1:\n weight = weight.reshape(-1, 1)\n loss = weight_reduce_loss(loss, weight, reduction, avg_factor)\n return loss\n\n\[email protected]_module()\nclass FocalLoss(nn.Module):\n \"\"\"Focal loss.\n\n Args:\n gamma (float): Focusing parameter in focal loss.\n Defaults to 2.0.\n alpha (float): The parameter in balanced form of focal\n loss. Defaults to 0.25.\n reduction (str): The method used to reduce the loss into\n a scalar. Options are \"none\" and \"mean\". Defaults to 'mean'.\n loss_weight (float): Weight of loss. Defaults to 1.0.\n \"\"\"\n\n def __init__(self,\n gamma=2.0,\n alpha=0.25,\n reduction='mean',\n loss_weight=1.0):\n\n super(FocalLoss, self).__init__()\n self.gamma = gamma\n self.alpha = alpha\n self.reduction = reduction\n self.loss_weight = loss_weight\n\n def forward(self,\n pred,\n target,\n weight=None,\n avg_factor=None,\n reduction_override=None):\n r\"\"\"Sigmoid focal loss.\n\n Args:\n pred (torch.Tensor): The prediction with shape (N, \\*).\n target (torch.Tensor): The ground truth label of the prediction\n with shape (N, \\*).\n weight (torch.Tensor, optional): Sample-wise loss weight with shape\n (N, \\*). Defaults to None.\n avg_factor (int, optional): Average factor that is used to average\n the loss. Defaults to None.\n reduction_override (str, optional): The method used to reduce the\n loss into a scalar. Options are \"none\", \"mean\" and \"sum\".\n Defaults to None.\n\n Returns:\n torch.Tensor: Loss.\n \"\"\"\n assert reduction_override in (None, 'none', 'mean', 'sum')\n reduction = (\n reduction_override if reduction_override else self.reduction)\n loss_cls = self.loss_weight * sigmoid_focal_loss(\n pred,\n target,\n weight,\n gamma=self.gamma,\n alpha=self.alpha,\n reduction=reduction,\n avg_factor=avg_factor)\n return loss_cls\n", "path": "mmcls/models/losses/focal_loss.py"}]}
| 2,896 | 333 |
gh_patches_debug_3209
|
rasdani/github-patches
|
git_diff
|
twisted__twisted-452
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
flattenEvent() fails with "'ascii' codec can't encode character" in Python 2.7
|[<img alt="znerol's avatar" src="https://avatars.githubusercontent.com/u/23288?s=50" width="50" height="50">](https://github.com/znerol)| @znerol reported|
|-|-|
|Trac ID|trac#8699|
|Type|defect|
|Created|2016-07-28 05:59:35Z|
The backtrace ends in `_flatten.py`:
```
twisted/logger/_flatten.py(119)flattenEvent()
-> flattenedValue = conversionFunction(fieldValue)
(Pdb) conversionFunction
<type 'str'>
```
I guess conversionFunction should be unicode in Python 2.7 and str in Python 3.
<details><summary>Searchable metadata</summary>
```
trac-id__8699 8699
type__defect defect
reporter__znerol znerol
priority__normal normal
milestone__None None
branch__
branch_author__
status__closed closed
resolution__fixed fixed
component__core core
keywords__None None
time__1469685575533160 1469685575533160
changetime__1470085969851774 1470085969851774
version__None None
owner__Craig_Rodrigues__rodrigc_____ Craig Rodrigues <rodrigc@...>
```
</details>
</issue>
<code>
[start of twisted/logger/_flatten.py]
1 # -*- test-case-name: twisted.logger.test.test_flatten -*-
2 # Copyright (c) Twisted Matrix Laboratories.
3 # See LICENSE for details.
4
5 """
6 Code related to "flattening" events; that is, extracting a description of all
7 relevant fields from the format string and persisting them for later
8 examination.
9 """
10
11 from string import Formatter
12 from collections import defaultdict
13
14 from twisted.python.compat import unicode
15
16 aFormatter = Formatter()
17
18
19
20 class KeyFlattener(object):
21 """
22 A L{KeyFlattener} computes keys for the things within curly braces in
23 PEP-3101-style format strings as parsed by L{string.Formatter.parse}.
24 """
25
26 def __init__(self):
27 """
28 Initialize a L{KeyFlattener}.
29 """
30 self.keys = defaultdict(lambda: 0)
31
32
33 def flatKey(self, fieldName, formatSpec, conversion):
34 """
35 Compute a string key for a given field/format/conversion.
36
37 @param fieldName: A format field name.
38 @type fieldName: L{str}
39
40 @param formatSpec: A format spec.
41 @type formatSpec: L{str}
42
43 @param conversion: A format field conversion type.
44 @type conversion: L{str}
45
46 @return: A key specific to the given field, format and conversion, as
47 well as the occurrence of that combination within this
48 L{KeyFlattener}'s lifetime.
49 @rtype: L{str}
50 """
51 result = (
52 "{fieldName}!{conversion}:{formatSpec}"
53 .format(
54 fieldName=fieldName,
55 formatSpec=(formatSpec or ""),
56 conversion=(conversion or ""),
57 )
58 )
59 self.keys[result] += 1
60 n = self.keys[result]
61 if n != 1:
62 result += "/" + str(self.keys[result])
63 return result
64
65
66
67 def flattenEvent(event):
68 """
69 Flatten the given event by pre-associating format fields with specific
70 objects and callable results in a L{dict} put into the C{"log_flattened"}
71 key in the event.
72
73 @param event: A logging event.
74 @type event: L{dict}
75 """
76 if "log_format" not in event:
77 return
78
79 if "log_flattened" in event:
80 fields = event["log_flattened"]
81 else:
82 fields = {}
83
84 keyFlattener = KeyFlattener()
85
86 for (literalText, fieldName, formatSpec, conversion) in (
87 aFormatter.parse(event["log_format"])
88 ):
89 if fieldName is None:
90 continue
91
92 if conversion != "r":
93 conversion = "s"
94
95 flattenedKey = keyFlattener.flatKey(fieldName, formatSpec, conversion)
96 structuredKey = keyFlattener.flatKey(fieldName, formatSpec, "")
97
98 if flattenedKey in fields:
99 # We've already seen and handled this key
100 continue
101
102 if fieldName.endswith(u"()"):
103 fieldName = fieldName[:-2]
104 callit = True
105 else:
106 callit = False
107
108 field = aFormatter.get_field(fieldName, (), event)
109 fieldValue = field[0]
110
111 if conversion == "r":
112 conversionFunction = repr
113 else: # Above: if conversion is not "r", it's "s"
114 conversionFunction = str
115
116 if callit:
117 fieldValue = fieldValue()
118
119 flattenedValue = conversionFunction(fieldValue)
120 fields[flattenedKey] = flattenedValue
121 fields[structuredKey] = fieldValue
122
123 if fields:
124 event["log_flattened"] = fields
125
126
127
128 def extractField(field, event):
129 """
130 Extract a given format field from the given event.
131
132 @param field: A string describing a format field or log key. This is the
133 text that would normally fall between a pair of curly braces in a
134 format string: for example, C{"key[2].attribute"}. If a conversion is
135 specified (the thing after the C{"!"} character in a format field) then
136 the result will always be L{unicode}.
137 @type field: L{str} (native string)
138
139 @param event: A log event.
140 @type event: L{dict}
141
142 @return: A value extracted from the field.
143 @rtype: L{object}
144
145 @raise KeyError: if the field is not found in the given event.
146 """
147 keyFlattener = KeyFlattener()
148 [[literalText, fieldName, formatSpec, conversion]] = aFormatter.parse(
149 "{" + field + "}"
150 )
151 key = keyFlattener.flatKey(fieldName, formatSpec, conversion)
152 if "log_flattened" not in event:
153 flattenEvent(event)
154 return event["log_flattened"][key]
155
156
157
158 def flatFormat(event):
159 """
160 Format an event which has been flattened with L{flattenEvent}.
161
162 @param event: A logging event.
163 @type event: L{dict}
164
165 @return: A formatted string.
166 @rtype: L{unicode}
167 """
168 fieldValues = event["log_flattened"]
169 s = []
170 keyFlattener = KeyFlattener()
171 formatFields = aFormatter.parse(event["log_format"])
172 for literalText, fieldName, formatSpec, conversion in formatFields:
173 s.append(literalText)
174 if fieldName is not None:
175 key = keyFlattener.flatKey(
176 fieldName, formatSpec, conversion or "s")
177 s.append(unicode(fieldValues[key]))
178 return u"".join(s)
179
[end of twisted/logger/_flatten.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/twisted/logger/_flatten.py b/twisted/logger/_flatten.py
--- a/twisted/logger/_flatten.py
+++ b/twisted/logger/_flatten.py
@@ -111,7 +111,7 @@
         if conversion == "r":
             conversionFunction = repr
         else: # Above: if conversion is not "r", it's "s"
-            conversionFunction = str
+            conversionFunction = unicode
 
         if callit:
             fieldValue = fieldValue()
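
The change matters only on Python 2: there, `str(fieldValue)` implicitly encodes a unicode value to ASCII and raises the reported `'ascii' codec can't encode character` error, while `unicode(fieldValue)` formats without encoding. (`unicode` in the patched module comes from `twisted.python.compat`, which aliases it to `str` on Python 3.) A short Python 2.7 illustration with a made-up field value:

```python
# -*- coding: utf-8 -*-
# Python 2.7 sketch of the failure mode fixed above.
field_value = u"r\xe9sum\xe9"   # hypothetical non-ASCII value in a log event

try:
    flattened = str(field_value)        # implicit .encode('ascii') under Python 2
except UnicodeEncodeError as exc:
    print("str() failed: %r" % exc)

flattened = unicode(field_value)        # no encoding step; stays unicode
print(repr(flattened))                  # u'r\xe9sum\xe9'
```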
|
{"golden_diff": "diff --git a/twisted/logger/_flatten.py b/twisted/logger/_flatten.py\n--- a/twisted/logger/_flatten.py\n+++ b/twisted/logger/_flatten.py\n@@ -111,7 +111,7 @@\n if conversion == \"r\":\n conversionFunction = repr\n else: # Above: if conversion is not \"r\", it's \"s\"\n- conversionFunction = str\n+ conversionFunction = unicode\n \n if callit:\n fieldValue = fieldValue()\n", "issue": "flattenEvent() fails with \"'ascii' codec can't encode character\" in Python 2.7\n|[<img alt=\"znerol's avatar\" src=\"https://avatars.githubusercontent.com/u/23288?s=50\" width=\"50\" height=\"50\">](https://github.com/znerol)| @znerol reported|\n|-|-|\n|Trac ID|trac#8699|\n|Type|defect|\n|Created|2016-07-28 05:59:35Z|\n\nbacktrace ends in _flatten.py\n\n```\ntwisted/logger/_flatten.py(119)flattenEvent()\n-> flattenedValue = conversionFunction(fieldValue)\n(Pdb) conversionFunction\n<type 'str'>\n```\n\nI guess conversionFunction should be unicode in Python 2.7 and str in Python 3.\n\n<details><summary>Searchable metadata</summary>\n\n```\ntrac-id__8699 8699\ntype__defect defect\nreporter__znerol znerol\npriority__normal normal\nmilestone__None None\nbranch__ \nbranch_author__ \nstatus__closed closed\nresolution__fixed fixed\ncomponent__core core\nkeywords__None None\ntime__1469685575533160 1469685575533160\nchangetime__1470085969851774 1470085969851774\nversion__None None\nowner__Craig_Rodrigues__rodrigc_____ Craig Rodrigues <rodrigc@...>\n\n```\n</details>\n\n", "before_files": [{"content": "# -*- test-case-name: twisted.logger.test.test_flatten -*-\n# Copyright (c) Twisted Matrix Laboratories.\n# See LICENSE for details.\n\n\"\"\"\nCode related to \"flattening\" events; that is, extracting a description of all\nrelevant fields from the format string and persisting them for later\nexamination.\n\"\"\"\n\nfrom string import Formatter\nfrom collections import defaultdict\n\nfrom twisted.python.compat import unicode\n\naFormatter = Formatter()\n\n\n\nclass KeyFlattener(object):\n \"\"\"\n A L{KeyFlattener} computes keys for the things within curly braces in\n PEP-3101-style format strings as parsed by L{string.Formatter.parse}.\n \"\"\"\n\n def __init__(self):\n \"\"\"\n Initialize a L{KeyFlattener}.\n \"\"\"\n self.keys = defaultdict(lambda: 0)\n\n\n def flatKey(self, fieldName, formatSpec, conversion):\n \"\"\"\n Compute a string key for a given field/format/conversion.\n\n @param fieldName: A format field name.\n @type fieldName: L{str}\n\n @param formatSpec: A format spec.\n @type formatSpec: L{str}\n\n @param conversion: A format field conversion type.\n @type conversion: L{str}\n\n @return: A key specific to the given field, format and conversion, as\n well as the occurrence of that combination within this\n L{KeyFlattener}'s lifetime.\n @rtype: L{str}\n \"\"\"\n result = (\n \"{fieldName}!{conversion}:{formatSpec}\"\n .format(\n fieldName=fieldName,\n formatSpec=(formatSpec or \"\"),\n conversion=(conversion or \"\"),\n )\n )\n self.keys[result] += 1\n n = self.keys[result]\n if n != 1:\n result += \"/\" + str(self.keys[result])\n return result\n\n\n\ndef flattenEvent(event):\n \"\"\"\n Flatten the given event by pre-associating format fields with specific\n objects and callable results in a L{dict} put into the C{\"log_flattened\"}\n key in the event.\n\n @param event: A logging event.\n @type event: L{dict}\n \"\"\"\n if \"log_format\" not in event:\n return\n\n if \"log_flattened\" in event:\n fields = event[\"log_flattened\"]\n else:\n fields = {}\n\n keyFlattener = KeyFlattener()\n\n 
for (literalText, fieldName, formatSpec, conversion) in (\n aFormatter.parse(event[\"log_format\"])\n ):\n if fieldName is None:\n continue\n\n if conversion != \"r\":\n conversion = \"s\"\n\n flattenedKey = keyFlattener.flatKey(fieldName, formatSpec, conversion)\n structuredKey = keyFlattener.flatKey(fieldName, formatSpec, \"\")\n\n if flattenedKey in fields:\n # We've already seen and handled this key\n continue\n\n if fieldName.endswith(u\"()\"):\n fieldName = fieldName[:-2]\n callit = True\n else:\n callit = False\n\n field = aFormatter.get_field(fieldName, (), event)\n fieldValue = field[0]\n\n if conversion == \"r\":\n conversionFunction = repr\n else: # Above: if conversion is not \"r\", it's \"s\"\n conversionFunction = str\n\n if callit:\n fieldValue = fieldValue()\n\n flattenedValue = conversionFunction(fieldValue)\n fields[flattenedKey] = flattenedValue\n fields[structuredKey] = fieldValue\n\n if fields:\n event[\"log_flattened\"] = fields\n\n\n\ndef extractField(field, event):\n \"\"\"\n Extract a given format field from the given event.\n\n @param field: A string describing a format field or log key. This is the\n text that would normally fall between a pair of curly braces in a\n format string: for example, C{\"key[2].attribute\"}. If a conversion is\n specified (the thing after the C{\"!\"} character in a format field) then\n the result will always be L{unicode}.\n @type field: L{str} (native string)\n\n @param event: A log event.\n @type event: L{dict}\n\n @return: A value extracted from the field.\n @rtype: L{object}\n\n @raise KeyError: if the field is not found in the given event.\n \"\"\"\n keyFlattener = KeyFlattener()\n [[literalText, fieldName, formatSpec, conversion]] = aFormatter.parse(\n \"{\" + field + \"}\"\n )\n key = keyFlattener.flatKey(fieldName, formatSpec, conversion)\n if \"log_flattened\" not in event:\n flattenEvent(event)\n return event[\"log_flattened\"][key]\n\n\n\ndef flatFormat(event):\n \"\"\"\n Format an event which has been flattened with L{flattenEvent}.\n\n @param event: A logging event.\n @type event: L{dict}\n\n @return: A formatted string.\n @rtype: L{unicode}\n \"\"\"\n fieldValues = event[\"log_flattened\"]\n s = []\n keyFlattener = KeyFlattener()\n formatFields = aFormatter.parse(event[\"log_format\"])\n for literalText, fieldName, formatSpec, conversion in formatFields:\n s.append(literalText)\n if fieldName is not None:\n key = keyFlattener.flatKey(\n fieldName, formatSpec, conversion or \"s\")\n s.append(unicode(fieldValues[key]))\n return u\"\".join(s)\n", "path": "twisted/logger/_flatten.py"}]}
| 2,548 | 110 |
gh_patches_debug_57077
|
rasdani/github-patches
|
git_diff
|
canonical__cloud-init-5343
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cloud-init generates a traceback if a default route already exists during ephemeral network setup
This bug was originally filed in Launchpad as [LP: #1860164](https://bugs.launchpad.net/cloud-init/+bug/1860164)
<details>
<summary>Launchpad details</summary>
<pre>
affected_projects = []
assignee = None
assignee_name = None
date_closed = None
date_created = 2020-01-17T18:37:30.886100+00:00
date_fix_committed = None
date_fix_released = None
id = 1860164
importance = medium
is_complete = False
lp_url = https://bugs.launchpad.net/cloud-init/+bug/1860164
milestone = None
owner = rjschwei
owner_name = Robert Schweikert
private = False
status = triaged
submitter = rjschwei
submitter_name = Robert Schweikert
tags = []
duplicates = []
</pre>
</details>
_Launchpad user **Robert Schweikert(rjschwei)** wrote on 2020-01-17T18:37:30.886100+00:00_
If a route already exists when the ephemeral network is brought up, cloud-init will generate the following traceback:
2020-01-16 21:14:22,584 - util.py[DEBUG]: Getting data from <class 'cloudinit.sources.DataSourceOracle.DataSourceOracle'> failed
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 760, in find_source
if s.update_metadata([EventType.BOOT_NEW_INSTANCE]):
File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 649, in update_metadata
result = self.get_data()
File "/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py", line 273, in get_data
return_value = self._get_data()
File "/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceOracle.py", line 195, in _get_data
with dhcp.EphemeralDHCPv4(net.find_fallback_nic()):
File "/usr/lib/python2.7/site-packages/cloudinit/net/dhcp.py", line 57, in __enter__
return self.obtain_lease()
File "/usr/lib/python2.7/site-packages/cloudinit/net/dhcp.py", line 109, in obtain_lease
ephipv4.__enter__()
File "/usr/lib/python2.7/site-packages/cloudinit/net/__init__.py", line 920, in __enter__
self._bringup_static_routes()
File "/usr/lib/python2.7/site-packages/cloudinit/net/__init__.py", line 974, in _bringup_static_routes
['dev', self.interface], capture=True)
File "/usr/lib/python2.7/site-packages/cloudinit/util.py", line 2083, in subp
cmd=args)
ProcessExecutionError: Unexpected error while running command.
This is a regression from 19.1 on SUSE, where existing routes were simply skipped.
</issue>
<code>
[start of cloudinit/net/netops/iproute2.py]
1 from typing import Optional
2
3 from cloudinit import subp
4 from cloudinit.net.netops import NetOps
5
6
7 class Iproute2(NetOps):
8 @staticmethod
9 def link_up(
10 interface: str, family: Optional[str] = None
11 ) -> subp.SubpResult:
12 family_args = []
13 if family:
14 family_args = ["-family", family]
15 return subp.subp(
16 ["ip", *family_args, "link", "set", "dev", interface, "up"]
17 )
18
19 @staticmethod
20 def link_down(
21 interface: str, family: Optional[str] = None
22 ) -> subp.SubpResult:
23 family_args = []
24 if family:
25 family_args = ["-family", family]
26 return subp.subp(
27 ["ip", *family_args, "link", "set", "dev", interface, "down"]
28 )
29
30 @staticmethod
31 def link_rename(current_name: str, new_name: str):
32 subp.subp(["ip", "link", "set", current_name, "name", new_name])
33
34 @staticmethod
35 def add_route(
36 interface: str,
37 route: str,
38 *,
39 gateway: Optional[str] = None,
40 source_address: Optional[str] = None,
41 ):
42 gateway_args = []
43 source_args = []
44 if gateway and gateway != "0.0.0.0":
45 gateway_args = ["via", gateway]
46 if source_address:
47 source_args = ["src", source_address]
48 subp.subp(
49 [
50 "ip",
51 "-4",
52 "route",
53 "add",
54 route,
55 *gateway_args,
56 "dev",
57 interface,
58 *source_args,
59 ]
60 )
61
62 @staticmethod
63 def append_route(interface: str, address: str, gateway: str):
64 gateway_args = []
65 if gateway and gateway != "0.0.0.0":
66 gateway_args = ["via", gateway]
67 subp.subp(
68 [
69 "ip",
70 "-4",
71 "route",
72 "append",
73 address,
74 *gateway_args,
75 "dev",
76 interface,
77 ]
78 )
79
80 @staticmethod
81 def del_route(
82 interface: str,
83 address: str,
84 *,
85 gateway: Optional[str] = None,
86 source_address: Optional[str] = None,
87 ):
88 gateway_args = []
89 source_args = []
90 if gateway and gateway != "0.0.0.0":
91 gateway_args = ["via", gateway]
92 if source_address:
93 source_args = ["src", source_address]
94 subp.subp(
95 [
96 "ip",
97 "-4",
98 "route",
99 "del",
100 address,
101 *gateway_args,
102 "dev",
103 interface,
104 *source_args,
105 ]
106 )
107
108 @staticmethod
109 def get_default_route() -> str:
110 return subp.subp(
111 ["ip", "route", "show", "0.0.0.0/0"],
112 ).stdout
113
114 @staticmethod
115 def add_addr(
116 interface: str, address: str, broadcast: Optional[str] = None
117 ):
118 broadcast_args = []
119 if broadcast:
120 broadcast_args = ["broadcast", broadcast]
121 subp.subp(
122 [
123 "ip",
124 "-family",
125 "inet",
126 "addr",
127 "add",
128 address,
129 *broadcast_args,
130 "dev",
131 interface,
132 ],
133 update_env={"LANG": "C"},
134 )
135
136 @staticmethod
137 def del_addr(interface: str, address: str):
138 subp.subp(
139 ["ip", "-family", "inet", "addr", "del", address, "dev", interface]
140 )
141
142 @staticmethod
143 def flush_addr(interface: str):
144 subp.subp(["ip", "flush", "dev", interface])
145
[end of cloudinit/net/netops/iproute2.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cloudinit/net/netops/iproute2.py b/cloudinit/net/netops/iproute2.py
--- a/cloudinit/net/netops/iproute2.py
+++ b/cloudinit/net/netops/iproute2.py
@@ -50,7 +50,7 @@
                 "ip",
                 "-4",
                 "route",
-                "add",
+                "replace",
                 route,
                 *gateway_args,
                 "dev",
|
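The patch swaps `ip route add` for `ip route replace`: `add` fails when an equivalent route (such as an existing default route) is already present, which is exactly the `ProcessExecutionError` in the traceback, whereas `replace` creates or overwrites the route and is idempotent. A rough sketch of the difference using cloud-init's `subp` wrapper; the interface and addresses are illustrative:

```python
from cloudinit import subp

interface = "eth0"                # illustrative
route = "169.254.169.254/32"      # illustrative, e.g. a metadata-service route
gateway = "10.0.0.1"              # illustrative

# 'ip route add' fails if an equivalent route is already present
# ("RTNETLINK answers: File exists"), which surfaced as the
# ProcessExecutionError in the traceback above.
try:
    subp.subp(["ip", "-4", "route", "add", route, "via", gateway, "dev", interface])
except subp.ProcessExecutionError:
    pass  # roughly the "skip existing routes" behaviour the reporter expected

# 'ip route replace' creates the route or overwrites a matching one,
# so it is idempotent and needs no special error handling for this case.
subp.subp(["ip", "-4", "route", "replace", route, "via", gateway, "dev", interface])
```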
{"golden_diff": "diff --git a/cloudinit/net/netops/iproute2.py b/cloudinit/net/netops/iproute2.py\n--- a/cloudinit/net/netops/iproute2.py\n+++ b/cloudinit/net/netops/iproute2.py\n@@ -50,7 +50,7 @@\n \"ip\",\n \"-4\",\n \"route\",\n- \"add\",\n+ \"replace\",\n route,\n *gateway_args,\n \"dev\",\n", "issue": "cloud-init generates a traceback if a default route already exists during ephemeral network setup\nThis bug was originally filed in Launchpad as [LP: #1860164](https://bugs.launchpad.net/cloud-init/+bug/1860164)\n<details>\n<summary>Launchpad details</summary>\n<pre>\naffected_projects = []\nassignee = None\nassignee_name = None\ndate_closed = None\ndate_created = 2020-01-17T18:37:30.886100+00:00\ndate_fix_committed = None\ndate_fix_released = None\nid = 1860164\nimportance = medium\nis_complete = False\nlp_url = https://bugs.launchpad.net/cloud-init/+bug/1860164\nmilestone = None\nowner = rjschwei\nowner_name = Robert Schweikert\nprivate = False\nstatus = triaged\nsubmitter = rjschwei\nsubmitter_name = Robert Schweikert\ntags = []\nduplicates = []\n</pre>\n</details>\n\n_Launchpad user **Robert Schweikert(rjschwei)** wrote on 2020-01-17T18:37:30.886100+00:00_\n\nIf a route already exists when the ephemeral network exists cloud-init will generate the following traceback:\n\n2020-01-16 21:14:22,584 - util.py[DEBUG]: Getting data from <class 'cloudinit.sources.DataSourceOracle.DataSourceOracle'> failed\nTraceback (most recent call last):\n File \"/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py\", line 760, in find_source\n if s.update_metadata([EventType.BOOT_NEW_INSTANCE]):\n File \"/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py\", line 649, in update_metadata\n result = self.get_data()\n File \"/usr/lib/python2.7/site-packages/cloudinit/sources/__init__.py\", line 273, in get_data\n return_value = self._get_data()\n File \"/usr/lib/python2.7/site-packages/cloudinit/sources/DataSourceOracle.py\", line 195, in _get_data\n with dhcp.EphemeralDHCPv4(net.find_fallback_nic()):\n File \"/usr/lib/python2.7/site-packages/cloudinit/net/dhcp.py\", line 57, in __enter__\n return self.obtain_lease()\n File \"/usr/lib/python2.7/site-packages/cloudinit/net/dhcp.py\", line 109, in obtain_lease\n ephipv4.__enter__()\n File \"/usr/lib/python2.7/site-packages/cloudinit/net/__init__.py\", line 920, in __enter__\n self._bringup_static_routes()\n File \"/usr/lib/python2.7/site-packages/cloudinit/net/__init__.py\", line 974, in _bringup_static_routes\n ['dev', self.interface], capture=True)\n File \"/usr/lib/python2.7/site-packages/cloudinit/util.py\", line 2083, in subp\n cmd=args)\nProcessExecutionError: Unexpected error while running command.\n\nThis is a regression from 19.1 on SUSE where exiting routes were simply skipped.\n", "before_files": [{"content": "from typing import Optional\n\nfrom cloudinit import subp\nfrom cloudinit.net.netops import NetOps\n\n\nclass Iproute2(NetOps):\n @staticmethod\n def link_up(\n interface: str, family: Optional[str] = None\n ) -> subp.SubpResult:\n family_args = []\n if family:\n family_args = [\"-family\", family]\n return subp.subp(\n [\"ip\", *family_args, \"link\", \"set\", \"dev\", interface, \"up\"]\n )\n\n @staticmethod\n def link_down(\n interface: str, family: Optional[str] = None\n ) -> subp.SubpResult:\n family_args = []\n if family:\n family_args = [\"-family\", family]\n return subp.subp(\n [\"ip\", *family_args, \"link\", \"set\", \"dev\", interface, \"down\"]\n )\n\n @staticmethod\n def link_rename(current_name: str, 
new_name: str):\n subp.subp([\"ip\", \"link\", \"set\", current_name, \"name\", new_name])\n\n @staticmethod\n def add_route(\n interface: str,\n route: str,\n *,\n gateway: Optional[str] = None,\n source_address: Optional[str] = None,\n ):\n gateway_args = []\n source_args = []\n if gateway and gateway != \"0.0.0.0\":\n gateway_args = [\"via\", gateway]\n if source_address:\n source_args = [\"src\", source_address]\n subp.subp(\n [\n \"ip\",\n \"-4\",\n \"route\",\n \"add\",\n route,\n *gateway_args,\n \"dev\",\n interface,\n *source_args,\n ]\n )\n\n @staticmethod\n def append_route(interface: str, address: str, gateway: str):\n gateway_args = []\n if gateway and gateway != \"0.0.0.0\":\n gateway_args = [\"via\", gateway]\n subp.subp(\n [\n \"ip\",\n \"-4\",\n \"route\",\n \"append\",\n address,\n *gateway_args,\n \"dev\",\n interface,\n ]\n )\n\n @staticmethod\n def del_route(\n interface: str,\n address: str,\n *,\n gateway: Optional[str] = None,\n source_address: Optional[str] = None,\n ):\n gateway_args = []\n source_args = []\n if gateway and gateway != \"0.0.0.0\":\n gateway_args = [\"via\", gateway]\n if source_address:\n source_args = [\"src\", source_address]\n subp.subp(\n [\n \"ip\",\n \"-4\",\n \"route\",\n \"del\",\n address,\n *gateway_args,\n \"dev\",\n interface,\n *source_args,\n ]\n )\n\n @staticmethod\n def get_default_route() -> str:\n return subp.subp(\n [\"ip\", \"route\", \"show\", \"0.0.0.0/0\"],\n ).stdout\n\n @staticmethod\n def add_addr(\n interface: str, address: str, broadcast: Optional[str] = None\n ):\n broadcast_args = []\n if broadcast:\n broadcast_args = [\"broadcast\", broadcast]\n subp.subp(\n [\n \"ip\",\n \"-family\",\n \"inet\",\n \"addr\",\n \"add\",\n address,\n *broadcast_args,\n \"dev\",\n interface,\n ],\n update_env={\"LANG\": \"C\"},\n )\n\n @staticmethod\n def del_addr(interface: str, address: str):\n subp.subp(\n [\"ip\", \"-family\", \"inet\", \"addr\", \"del\", address, \"dev\", interface]\n )\n\n @staticmethod\n def flush_addr(interface: str):\n subp.subp([\"ip\", \"flush\", \"dev\", interface])\n", "path": "cloudinit/net/netops/iproute2.py"}]}
| 2,447 | 94 |
gh_patches_debug_9045
|
rasdani/github-patches
|
git_diff
|
alltheplaces__alltheplaces-2967
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider att is broken
During the global build at 2021-06-02-14-42-40, spider **att** failed with **0 features** and **5433 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/att.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/att.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/att.geojson))
</issue>
<code>
[start of locations/spiders/att.py]
1 import scrapy
2 import json
3 import re
4 from locations.items import GeojsonPointItem
5 from locations.hours import OpeningHours
6
7
8 DAY_MAPPING = {
9 "MONDAY": "Mo",
10 "TUESDAY": "Tu",
11 "WEDNESDAY": "We",
12 "THURSDAY": "Th",
13 "FRIDAY": "Fr",
14 "SATURDAY": "Sa",
15 "SUNDAY": "Su"
16 }
17
18
19 class ATTScraper(scrapy.Spider):
20 name = "att"
21 item_attributes = { 'brand': "AT&T", 'brand_wikidata': "Q35476" }
22 allowed_domains = ['www.att.com']
23 start_urls = (
24 'https://www.att.com/stores/us',
25 )
26 download_delay = 0.2
27
28 def parse_hours(self, store_hours):
29 opening_hours = OpeningHours()
30 store_data = json.loads(store_hours)
31
32 for store_day in store_data:
33 if len(store_day["intervals"]) < 1:
34 continue
35 day = DAY_MAPPING[store_day["day"]]
36 open_time = str(store_day["intervals"][0]["start"])
37 if open_time == '0':
38 open_time = '0000'
39 close_time = str(store_day["intervals"][0]["end"])
40 if close_time == '0':
41 close_time = '2359'
42 opening_hours.add_range(day=day,
43 open_time=open_time,
44 close_time=close_time,
45 time_format='%H%M'
46 )
47
48 return opening_hours.as_opening_hours()
49
50 def parse(self, response):
51 urls = response.xpath('//a[@class="Directory-listLink"]/@href').extract()
52 is_store_list = response.xpath('//a[@class="Teaser-titleLink"]/@href').extract()
53
54 if not urls and is_store_list:
55 urls = response.xpath('//a[@class="Teaser-titleLink"]/@href').extract()
56 for url in urls:
57 if url.count('/') >= 2:
58 yield scrapy.Request(response.urljoin(url), callback=self.parse_store)
59 else:
60 yield scrapy.Request(response.urljoin(url))
61
62 def parse_store(self, response):
63 ref = re.search(r'.+/(.+?)/?(?:\.html|$)', response.url).group(1)
64
65 properties = {
66 'ref': ref,
67 'name': response.xpath('normalize-space(//span[@class="LocationName-brand"]/text())').extract_first(),
68 'addr_full': response.xpath('normalize-space(//meta[@itemprop="streetAddress"]/@content)').extract_first(),
69 'city': response.xpath('normalize-space(//meta[@itemprop="addressLocality"]/@content)').extract_first(),
70 'state': response.xpath('normalize-space(//abbr[@itemprop="addressRegion"]/text())').extract_first(),
71 'postcode': response.xpath('normalize-space(//span[@itemprop="postalCode"]/text())').extract_first(),
72 'country': response.xpath('normalize-space(//abbr[@itemprop="addressCountry"]/text())').extract_first(),
73 'phone': response.xpath('normalize-space(//span[@itemprop="telephone"]//text())').extract_first(),
74 'website': response.url,
75 'lat': response.xpath('normalize-space(//meta[@itemprop="latitude"]/@content)').extract_first(),
76 'lon': response.xpath('normalize-space(//meta[@itemprop="longitude"]/@content)').extract_first(),
77 }
78
79 hours = response.xpath('//span[@class="c-location-hours-today js-location-hours"]/@data-days').extract_first()
80 properties['opening_hours'] = self.parse_hours(hours)
81
82 yield GeojsonPointItem(**properties)
83
[end of locations/spiders/att.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/att.py b/locations/spiders/att.py
--- a/locations/spiders/att.py
+++ b/locations/spiders/att.py
@@ -76,7 +76,7 @@
'lon': response.xpath('normalize-space(//meta[@itemprop="longitude"]/@content)').extract_first(),
}
- hours = response.xpath('//span[@class="c-location-hours-today js-location-hours"]/@data-days').extract_first()
+ hours = response.xpath('//span[@class="c-hours-today js-hours-today"]/@data-days').extract_first()
properties['opening_hours'] = self.parse_hours(hours)
yield GeojsonPointItem(**properties)
|
{"golden_diff": "diff --git a/locations/spiders/att.py b/locations/spiders/att.py\n--- a/locations/spiders/att.py\n+++ b/locations/spiders/att.py\n@@ -76,7 +76,7 @@\n 'lon': response.xpath('normalize-space(//meta[@itemprop=\"longitude\"]/@content)').extract_first(),\n }\n \n- hours = response.xpath('//span[@class=\"c-location-hours-today js-location-hours\"]/@data-days').extract_first()\n+ hours = response.xpath('//span[@class=\"c-hours-today js-hours-today\"]/@data-days').extract_first()\n properties['opening_hours'] = self.parse_hours(hours)\n \n yield GeojsonPointItem(**properties)\n", "issue": "Spider att is broken\nDuring the global build at 2021-06-02-14-42-40, spider **att** failed with **0 features** and **5433 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/logs/att.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/att.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-06-02-14-42-40/output/att.geojson))\n", "before_files": [{"content": "import scrapy\nimport json\nimport re\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nDAY_MAPPING = {\n \"MONDAY\": \"Mo\",\n \"TUESDAY\": \"Tu\",\n \"WEDNESDAY\": \"We\",\n \"THURSDAY\": \"Th\",\n \"FRIDAY\": \"Fr\",\n \"SATURDAY\": \"Sa\",\n \"SUNDAY\": \"Su\"\n}\n\n\nclass ATTScraper(scrapy.Spider):\n name = \"att\"\n item_attributes = { 'brand': \"AT&T\", 'brand_wikidata': \"Q35476\" }\n allowed_domains = ['www.att.com']\n start_urls = (\n 'https://www.att.com/stores/us',\n )\n download_delay = 0.2\n\n def parse_hours(self, store_hours):\n opening_hours = OpeningHours()\n store_data = json.loads(store_hours)\n\n for store_day in store_data:\n if len(store_day[\"intervals\"]) < 1:\n continue\n day = DAY_MAPPING[store_day[\"day\"]]\n open_time = str(store_day[\"intervals\"][0][\"start\"])\n if open_time == '0':\n open_time = '0000'\n close_time = str(store_day[\"intervals\"][0][\"end\"])\n if close_time == '0':\n close_time = '2359'\n opening_hours.add_range(day=day,\n open_time=open_time,\n close_time=close_time,\n time_format='%H%M'\n )\n\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n urls = response.xpath('//a[@class=\"Directory-listLink\"]/@href').extract()\n is_store_list = response.xpath('//a[@class=\"Teaser-titleLink\"]/@href').extract()\n\n if not urls and is_store_list:\n urls = response.xpath('//a[@class=\"Teaser-titleLink\"]/@href').extract()\n for url in urls:\n if url.count('/') >= 2:\n yield scrapy.Request(response.urljoin(url), callback=self.parse_store)\n else:\n yield scrapy.Request(response.urljoin(url))\n\n def parse_store(self, response):\n ref = re.search(r'.+/(.+?)/?(?:\\.html|$)', response.url).group(1)\n\n properties = {\n 'ref': ref,\n 'name': response.xpath('normalize-space(//span[@class=\"LocationName-brand\"]/text())').extract_first(),\n 'addr_full': response.xpath('normalize-space(//meta[@itemprop=\"streetAddress\"]/@content)').extract_first(),\n 'city': response.xpath('normalize-space(//meta[@itemprop=\"addressLocality\"]/@content)').extract_first(),\n 'state': response.xpath('normalize-space(//abbr[@itemprop=\"addressRegion\"]/text())').extract_first(),\n 'postcode': response.xpath('normalize-space(//span[@itemprop=\"postalCode\"]/text())').extract_first(),\n 'country': response.xpath('normalize-space(//abbr[@itemprop=\"addressCountry\"]/text())').extract_first(),\n 'phone': 
response.xpath('normalize-space(//span[@itemprop=\"telephone\"]//text())').extract_first(),\n 'website': response.url,\n 'lat': response.xpath('normalize-space(//meta[@itemprop=\"latitude\"]/@content)').extract_first(),\n 'lon': response.xpath('normalize-space(//meta[@itemprop=\"longitude\"]/@content)').extract_first(),\n }\n\n hours = response.xpath('//span[@class=\"c-location-hours-today js-location-hours\"]/@data-days').extract_first()\n properties['opening_hours'] = self.parse_hours(hours)\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/att.py"}]}
| 1,668 | 155 |
gh_patches_debug_41853
|
rasdani/github-patches
|
git_diff
|
ansible-collections__community.general-4935
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
community.general.lxd connection not working with molecule
### Summary
When I try to run `molecule create` with the [lxd driver](https://github.com/ansible-community/molecule-lxd), it creates the lxc container correctly, but then gives a warning and then fails to run a command on the container.
```
[WARNING]: The "ansible_collections.community.general.plugins.connection.lxd" connection plugin has an improperly configured remote target value, forcing "inventory_hostname" templated value instead of the string
```
After some debugging, I found that the `remote_addr` value was being set to the literal string 'inventory_hostname' instead of the value of the current host's `inventory_hostname`. I found another connection plugin that had [fixed a similar issue](https://github.com/ansible/ansible/pull/77894).
Applying this patch to the `plugins/connection/lxd.py` file fixes the problem.
[fix_lxd_inventory_hostname.patch.txt](https://github.com/ansible-collections/community.general/files/8960273/fix_lxd_inventory_hostname.patch.txt)
### Issue Type
Bug Report
### Component Name
plugins/connection/lxd.py
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.13.1]
config file = /home/anton/ansible-collection-oit-ne-servers/roles/common/ansible.cfg
configured module search path = ['/home/anton/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/anton/ansible-collection-oit-ne-servers/.venv/lib/python3.10/site-packages/ansible
ansible collection location = /home/anton/ansible-collection-oit-ne-servers/roles/common/.collections
executable location = /home/anton/ansible-collection-oit-ne-servers/.venv/bin/ansible
python version = 3.10.5 (main, Jun 11 2022, 16:53:24) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = False
```
### Community.general Version
```console (paste below)
$ ansible-galaxy collection list community.general
Collection Version
----------------- -------
community.general 5.2.0
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```console
$ mkdir tmp
$ cd tmp
$ python3 -m venv .venv
$ . .venv/bin/activate
$ python3 -m pip install --upgrade pip setuptools wheel
$ python3 -m pip install ansible molecule molecule-lxd
$ molecule init role tmp.common --driver-name lxd
$ cd common
```
Modify `molecule/default/molecule.yml`:
```yaml (paste below)
dependency:
name: galaxy
driver:
name: lxd
platforms:
- name: centos-stream-8
source:
type: image
mode: pull
server: https://images.linuxcontainers.org
protocol: simplestreams
alias: centos/8-Stream/amd64
profiles: ["default"]
provisioner:
name: ansible
verifier:
name: ansible
```
```console
$ molecule create
```
### Expected Results
I expected that the lxd container would be properly created and prepared.
### Actual Results
```console (paste below)
PLAY [Prepare] *****************************************************************
TASK [Install basic packages to bare containers] *******************************
[WARNING]: The "lxd" connection plugin has an improperly configured remote
target value, forcing "inventory_hostname" templated value instead of the
string
fatal: [centos-stream-8]: FAILED! => {"changed": true, "msg": "non-zero return code", "rc": 1, "stderr": "Error: Instance not found\n", "stderr_lines": ["Error: Instance not found"], "stdout": "", "stdout_lines": []}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/connection/lxd.py]
1 # -*- coding: utf-8 -*-
2 # (c) 2016 Matt Clay <[email protected]>
3 # (c) 2017 Ansible Project
4 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
5
6 from __future__ import (absolute_import, division, print_function)
7 __metaclass__ = type
8
9 DOCUMENTATION = '''
10 author: Matt Clay (@mattclay) <[email protected]>
11 name: lxd
12 short_description: Run tasks in lxc containers via lxc CLI
13 description:
14 - Run commands or put/fetch files to an existing lxc container using lxc CLI
15 options:
16 remote_addr:
17 description:
18 - Container identifier.
19 default: inventory_hostname
20 vars:
21 - name: ansible_host
22 - name: ansible_lxd_host
23 executable:
24 description:
25 - shell to use for execution inside container
26 default: /bin/sh
27 vars:
28 - name: ansible_executable
29 - name: ansible_lxd_executable
30 remote:
31 description:
32 - Name of the LXD remote to use.
33 default: local
34 vars:
35 - name: ansible_lxd_remote
36 version_added: 2.0.0
37 project:
38 description:
39 - Name of the LXD project to use.
40 vars:
41 - name: ansible_lxd_project
42 version_added: 2.0.0
43 '''
44
45 import os
46 from subprocess import Popen, PIPE
47
48 from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
49 from ansible.module_utils.common.process import get_bin_path
50 from ansible.module_utils.common.text.converters import to_bytes, to_text
51 from ansible.plugins.connection import ConnectionBase
52
53
54 class Connection(ConnectionBase):
55 """ lxd based connections """
56
57 transport = 'community.general.lxd'
58 has_pipelining = True
59 default_user = 'root'
60
61 def __init__(self, play_context, new_stdin, *args, **kwargs):
62 super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
63
64 self._host = self._play_context.remote_addr
65 try:
66 self._lxc_cmd = get_bin_path("lxc")
67 except ValueError:
68 raise AnsibleError("lxc command not found in PATH")
69
70 if self._play_context.remote_user is not None and self._play_context.remote_user != 'root':
71 self._display.warning('lxd does not support remote_user, using container default: root')
72
73 def _connect(self):
74 """connect to lxd (nothing to do here) """
75 super(Connection, self)._connect()
76
77 if not self._connected:
78 self._display.vvv(u"ESTABLISH LXD CONNECTION FOR USER: root", host=self._host)
79 self._connected = True
80
81 def exec_command(self, cmd, in_data=None, sudoable=True):
82 """ execute a command on the lxd host """
83 super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
84
85 self._display.vvv(u"EXEC {0}".format(cmd), host=self._host)
86
87 local_cmd = [self._lxc_cmd]
88 if self.get_option("project"):
89 local_cmd.extend(["--project", self.get_option("project")])
90 local_cmd.extend([
91 "exec",
92 "%s:%s" % (self.get_option("remote"), self.get_option("remote_addr")),
93 "--",
94 self.get_option("executable"), "-c", cmd
95 ])
96
97 local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
98 in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')
99
100 process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
101 stdout, stderr = process.communicate(in_data)
102
103 stdout = to_text(stdout)
104 stderr = to_text(stderr)
105
106 if stderr == "error: Container is not running.\n":
107 raise AnsibleConnectionFailure("container not running: %s" % self._host)
108
109 if stderr == "error: not found\n":
110 raise AnsibleConnectionFailure("container not found: %s" % self._host)
111
112 return process.returncode, stdout, stderr
113
114 def put_file(self, in_path, out_path):
115 """ put a file from local to lxd """
116 super(Connection, self).put_file(in_path, out_path)
117
118 self._display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self._host)
119
120 if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
121 raise AnsibleFileNotFound("input path is not a file: %s" % in_path)
122
123 local_cmd = [self._lxc_cmd]
124 if self.get_option("project"):
125 local_cmd.extend(["--project", self.get_option("project")])
126 local_cmd.extend([
127 "file", "push",
128 in_path,
129 "%s:%s/%s" % (self.get_option("remote"), self.get_option("remote_addr"), out_path)
130 ])
131
132 local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
133
134 process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
135 process.communicate()
136
137 def fetch_file(self, in_path, out_path):
138 """ fetch a file from lxd to local """
139 super(Connection, self).fetch_file(in_path, out_path)
140
141 self._display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._host)
142
143 local_cmd = [self._lxc_cmd]
144 if self.get_option("project"):
145 local_cmd.extend(["--project", self.get_option("project")])
146 local_cmd.extend([
147 "file", "pull",
148 "%s:%s/%s" % (self.get_option("remote"), self.get_option("remote_addr"), in_path),
149 out_path
150 ])
151
152 local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
153
154 process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
155 process.communicate()
156
157 def close(self):
158 """ close the connection (nothing to do here) """
159 super(Connection, self).close()
160
161 self._connected = False
162
[end of plugins/connection/lxd.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/connection/lxd.py b/plugins/connection/lxd.py
--- a/plugins/connection/lxd.py
+++ b/plugins/connection/lxd.py
@@ -18,6 +18,7 @@
- Container identifier.
default: inventory_hostname
vars:
+ - name: inventory_hostname
- name: ansible_host
- name: ansible_lxd_host
executable:
@@ -61,7 +62,6 @@
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
- self._host = self._play_context.remote_addr
try:
self._lxc_cmd = get_bin_path("lxc")
except ValueError:
@@ -75,14 +75,14 @@
super(Connection, self)._connect()
if not self._connected:
- self._display.vvv(u"ESTABLISH LXD CONNECTION FOR USER: root", host=self._host)
+ self._display.vvv(u"ESTABLISH LXD CONNECTION FOR USER: root", host=self.get_option('remote_addr'))
self._connected = True
def exec_command(self, cmd, in_data=None, sudoable=True):
""" execute a command on the lxd host """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
- self._display.vvv(u"EXEC {0}".format(cmd), host=self._host)
+ self._display.vvv(u"EXEC {0}".format(cmd), host=self.get_option('remote_addr'))
local_cmd = [self._lxc_cmd]
if self.get_option("project"):
@@ -104,10 +104,10 @@
stderr = to_text(stderr)
if stderr == "error: Container is not running.\n":
- raise AnsibleConnectionFailure("container not running: %s" % self._host)
+ raise AnsibleConnectionFailure("container not running: %s" % self.get_option('remote_addr'))
if stderr == "error: not found\n":
- raise AnsibleConnectionFailure("container not found: %s" % self._host)
+ raise AnsibleConnectionFailure("container not found: %s" % self.get_option('remote_addr'))
return process.returncode, stdout, stderr
@@ -115,7 +115,7 @@
""" put a file from local to lxd """
super(Connection, self).put_file(in_path, out_path)
- self._display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self._host)
+ self._display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.get_option('remote_addr'))
if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("input path is not a file: %s" % in_path)
@@ -138,7 +138,7 @@
""" fetch a file from lxd to local """
super(Connection, self).fetch_file(in_path, out_path)
- self._display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._host)
+ self._display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.get_option('remote_addr'))
local_cmd = [self._lxc_cmd]
if self.get_option("project"):
|
{"golden_diff": "diff --git a/plugins/connection/lxd.py b/plugins/connection/lxd.py\n--- a/plugins/connection/lxd.py\n+++ b/plugins/connection/lxd.py\n@@ -18,6 +18,7 @@\n - Container identifier.\n default: inventory_hostname\n vars:\n+ - name: inventory_hostname\n - name: ansible_host\n - name: ansible_lxd_host\n executable:\n@@ -61,7 +62,6 @@\n def __init__(self, play_context, new_stdin, *args, **kwargs):\n super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)\n \n- self._host = self._play_context.remote_addr\n try:\n self._lxc_cmd = get_bin_path(\"lxc\")\n except ValueError:\n@@ -75,14 +75,14 @@\n super(Connection, self)._connect()\n \n if not self._connected:\n- self._display.vvv(u\"ESTABLISH LXD CONNECTION FOR USER: root\", host=self._host)\n+ self._display.vvv(u\"ESTABLISH LXD CONNECTION FOR USER: root\", host=self.get_option('remote_addr'))\n self._connected = True\n \n def exec_command(self, cmd, in_data=None, sudoable=True):\n \"\"\" execute a command on the lxd host \"\"\"\n super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)\n \n- self._display.vvv(u\"EXEC {0}\".format(cmd), host=self._host)\n+ self._display.vvv(u\"EXEC {0}\".format(cmd), host=self.get_option('remote_addr'))\n \n local_cmd = [self._lxc_cmd]\n if self.get_option(\"project\"):\n@@ -104,10 +104,10 @@\n stderr = to_text(stderr)\n \n if stderr == \"error: Container is not running.\\n\":\n- raise AnsibleConnectionFailure(\"container not running: %s\" % self._host)\n+ raise AnsibleConnectionFailure(\"container not running: %s\" % self.get_option('remote_addr'))\n \n if stderr == \"error: not found\\n\":\n- raise AnsibleConnectionFailure(\"container not found: %s\" % self._host)\n+ raise AnsibleConnectionFailure(\"container not found: %s\" % self.get_option('remote_addr'))\n \n return process.returncode, stdout, stderr\n \n@@ -115,7 +115,7 @@\n \"\"\" put a file from local to lxd \"\"\"\n super(Connection, self).put_file(in_path, out_path)\n \n- self._display.vvv(u\"PUT {0} TO {1}\".format(in_path, out_path), host=self._host)\n+ self._display.vvv(u\"PUT {0} TO {1}\".format(in_path, out_path), host=self.get_option('remote_addr'))\n \n if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):\n raise AnsibleFileNotFound(\"input path is not a file: %s\" % in_path)\n@@ -138,7 +138,7 @@\n \"\"\" fetch a file from lxd to local \"\"\"\n super(Connection, self).fetch_file(in_path, out_path)\n \n- self._display.vvv(u\"FETCH {0} TO {1}\".format(in_path, out_path), host=self._host)\n+ self._display.vvv(u\"FETCH {0} TO {1}\".format(in_path, out_path), host=self.get_option('remote_addr'))\n \n local_cmd = [self._lxc_cmd]\n if self.get_option(\"project\"):\n", "issue": "community.general.lxd connection not working with molecule\n### Summary\r\n\r\nWhen I try to run `molecule create` with the [lxd driver](https://github.com/ansible-community/molecule-lxd), it creates the lxc container correctly, but then gives a warning and then fails to run a command on the container.\r\n```\r\n[WARNING]: The \"ansible_collections.community.general.plugins.connection.lxd\" connection plugin has an improperly configured remote target value, forcing \"inventory_hostname\" templated value instead of the string\r\n```\r\nAfter some debugging, I found that the `remote_addr` value was being set to the literal string 'inventory_hostname' instead of the value of the current host's `inventory_hostname`. 
I found another connection plugin that had [fixed a similar issue](https://github.com/ansible/ansible/pull/77894).\r\n\r\nApplying this patch to the `plugins/connection/lxd.py` file fixes the problem.\r\n\r\n[fix_lxd_inventory_hostname.patch.txt](https://github.com/ansible-collections/community.general/files/8960273/fix_lxd_inventory_hostname.patch.txt)\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\nplugins/connection/lxd.py\r\n\r\n### Ansible Version\r\n\r\n```console (paste below)\r\n$ ansible --version\r\nansible [core 2.13.1]\r\n config file = /home/anton/ansible-collection-oit-ne-servers/roles/common/ansible.cfg\r\n configured module search path = ['/home/anton/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/anton/ansible-collection-oit-ne-servers/.venv/lib/python3.10/site-packages/ansible\r\n ansible collection location = /home/anton/ansible-collection-oit-ne-servers/roles/common/.collections\r\n executable location = /home/anton/ansible-collection-oit-ne-servers/.venv/bin/ansible\r\n python version = 3.10.5 (main, Jun 11 2022, 16:53:24) [GCC 9.4.0]\r\n jinja version = 3.1.2\r\n libyaml = False\r\n\r\n```\r\n\r\n\r\n### Community.general Version\r\n\r\n```console (paste below)\r\n$ ansible-galaxy collection list community.general\r\nCollection Version\r\n----------------- -------\r\ncommunity.general 5.2.0 \r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\ncat /etc/os-release \r\nNAME=\"Ubuntu\"\r\nVERSION=\"20.04.4 LTS (Focal Fossa)\"\r\nID=ubuntu\r\nID_LIKE=debian\r\nPRETTY_NAME=\"Ubuntu 20.04.4 LTS\"\r\nVERSION_ID=\"20.04\"\r\nHOME_URL=\"https://www.ubuntu.com/\"\r\nSUPPORT_URL=\"https://help.ubuntu.com/\"\r\nBUG_REPORT_URL=\"https://bugs.launchpad.net/ubuntu/\"\r\nPRIVACY_POLICY_URL=\"https://www.ubuntu.com/legal/terms-and-policies/privacy-policy\"\r\nVERSION_CODENAME=focal\r\nUBUNTU_CODENAME=focal\r\n\r\n### Steps to Reproduce\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```console\r\n$ mkdir tmp\r\n$ cd tmp\r\n$ python3 -m venv .venv\r\n$ . .venv/bin/activate\r\n$ python3 -m pip install --upgrade pip setuptools wheel\r\n$ python3 -m pip install ansible molecule molecule-lxd\r\n$ molecule init role tmp.common --driver-name lxd\r\n$ cd common\r\n```\r\n\r\nModify `molecule/default/molecule.yml`:\r\n```yaml (paste below)\r\ndependency:\r\n name: galaxy\r\ndriver:\r\n name: lxd\r\nplatforms:\r\n - name: centos-stream-8\r\n source:\r\n type: image\r\n mode: pull\r\n server: https://images.linuxcontainers.org\r\n protocol: simplestreams\r\n alias: centos/8-Stream/amd64\r\n profiles: [\"default\"]\r\nprovisioner:\r\n name: ansible\r\nverifier:\r\n name: ansible\r\n```\r\n\r\n```console\r\n$ molecule create\r\n```\r\n\r\n### Expected Results\r\n\r\nI expected that the lxd container would be properly created and prepared.\r\n\r\n### Actual Results\r\n\r\n```console (paste below)\r\nPLAY [Prepare] *****************************************************************\r\n\r\nTASK [Install basic packages to bare containers] *******************************\r\n[WARNING]: The \"lxd\" connection plugin has an improperly configured remote\r\ntarget value, forcing \"inventory_hostname\" templated value instead of the\r\nstring\r\nfatal: [centos-stream-8]: FAILED! 
=> {\"changed\": true, \"msg\": \"non-zero return code\", \"rc\": 1, \"stderr\": \"Error: Instance not found\\n\", \"stderr_lines\": [\"Error: Instance not found\"], \"stdout\": \"\", \"stdout_lines\": []}\r\n\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# (c) 2016 Matt Clay <[email protected]>\n# (c) 2017 Ansible Project\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import (absolute_import, division, print_function)\n__metaclass__ = type\n\nDOCUMENTATION = '''\n author: Matt Clay (@mattclay) <[email protected]>\n name: lxd\n short_description: Run tasks in lxc containers via lxc CLI\n description:\n - Run commands or put/fetch files to an existing lxc container using lxc CLI\n options:\n remote_addr:\n description:\n - Container identifier.\n default: inventory_hostname\n vars:\n - name: ansible_host\n - name: ansible_lxd_host\n executable:\n description:\n - shell to use for execution inside container\n default: /bin/sh\n vars:\n - name: ansible_executable\n - name: ansible_lxd_executable\n remote:\n description:\n - Name of the LXD remote to use.\n default: local\n vars:\n - name: ansible_lxd_remote\n version_added: 2.0.0\n project:\n description:\n - Name of the LXD project to use.\n vars:\n - name: ansible_lxd_project\n version_added: 2.0.0\n'''\n\nimport os\nfrom subprocess import Popen, PIPE\n\nfrom ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound\nfrom ansible.module_utils.common.process import get_bin_path\nfrom ansible.module_utils.common.text.converters import to_bytes, to_text\nfrom ansible.plugins.connection import ConnectionBase\n\n\nclass Connection(ConnectionBase):\n \"\"\" lxd based connections \"\"\"\n\n transport = 'community.general.lxd'\n has_pipelining = True\n default_user = 'root'\n\n def __init__(self, play_context, new_stdin, *args, **kwargs):\n super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)\n\n self._host = self._play_context.remote_addr\n try:\n self._lxc_cmd = get_bin_path(\"lxc\")\n except ValueError:\n raise AnsibleError(\"lxc command not found in PATH\")\n\n if self._play_context.remote_user is not None and self._play_context.remote_user != 'root':\n self._display.warning('lxd does not support remote_user, using container default: root')\n\n def _connect(self):\n \"\"\"connect to lxd (nothing to do here) \"\"\"\n super(Connection, self)._connect()\n\n if not self._connected:\n self._display.vvv(u\"ESTABLISH LXD CONNECTION FOR USER: root\", host=self._host)\n self._connected = True\n\n def exec_command(self, cmd, in_data=None, sudoable=True):\n \"\"\" execute a command on the lxd host \"\"\"\n super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)\n\n self._display.vvv(u\"EXEC {0}\".format(cmd), host=self._host)\n\n local_cmd = [self._lxc_cmd]\n if self.get_option(\"project\"):\n local_cmd.extend([\"--project\", self.get_option(\"project\")])\n local_cmd.extend([\n \"exec\",\n \"%s:%s\" % (self.get_option(\"remote\"), self.get_option(\"remote_addr\")),\n \"--\",\n self.get_option(\"executable\"), \"-c\", cmd\n ])\n\n local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]\n in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')\n\n process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n stdout, stderr = process.communicate(in_data)\n\n stdout = 
to_text(stdout)\n stderr = to_text(stderr)\n\n if stderr == \"error: Container is not running.\\n\":\n raise AnsibleConnectionFailure(\"container not running: %s\" % self._host)\n\n if stderr == \"error: not found\\n\":\n raise AnsibleConnectionFailure(\"container not found: %s\" % self._host)\n\n return process.returncode, stdout, stderr\n\n def put_file(self, in_path, out_path):\n \"\"\" put a file from local to lxd \"\"\"\n super(Connection, self).put_file(in_path, out_path)\n\n self._display.vvv(u\"PUT {0} TO {1}\".format(in_path, out_path), host=self._host)\n\n if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):\n raise AnsibleFileNotFound(\"input path is not a file: %s\" % in_path)\n\n local_cmd = [self._lxc_cmd]\n if self.get_option(\"project\"):\n local_cmd.extend([\"--project\", self.get_option(\"project\")])\n local_cmd.extend([\n \"file\", \"push\",\n in_path,\n \"%s:%s/%s\" % (self.get_option(\"remote\"), self.get_option(\"remote_addr\"), out_path)\n ])\n\n local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]\n\n process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n process.communicate()\n\n def fetch_file(self, in_path, out_path):\n \"\"\" fetch a file from lxd to local \"\"\"\n super(Connection, self).fetch_file(in_path, out_path)\n\n self._display.vvv(u\"FETCH {0} TO {1}\".format(in_path, out_path), host=self._host)\n\n local_cmd = [self._lxc_cmd]\n if self.get_option(\"project\"):\n local_cmd.extend([\"--project\", self.get_option(\"project\")])\n local_cmd.extend([\n \"file\", \"pull\",\n \"%s:%s/%s\" % (self.get_option(\"remote\"), self.get_option(\"remote_addr\"), in_path),\n out_path\n ])\n\n local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]\n\n process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)\n process.communicate()\n\n def close(self):\n \"\"\" close the connection (nothing to do here) \"\"\"\n super(Connection, self).close()\n\n self._connected = False\n", "path": "plugins/connection/lxd.py"}]}
| 3,371 | 787 |
gh_patches_debug_8441
|
rasdani/github-patches
|
git_diff
|
pyqtgraph__pyqtgraph-953
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HistogramLUTWidget kargs bug?
# lut_widget = HistogramLUTWidget(background='w')
File "/usr/local/lib/python3.4/dist-packages/pyqtgraph-0.9.8-py3.4.egg/pyqtgraph/widgets/HistogramLUTWidget.py", line 18, in **init**
self.item = HistogramLUTItem(_args, *_kargs)
# TypeError: **init**() got an unexpected keyword argument 'background'
I can fix it by:
```
class HistogramLUTWidget(pg.GraphicsView):

    def __init__(self, parent=None, *args, **kargs):
        # background = kargs.get('background', 'default')
        background = kargs.pop('background', 'default')
```
...
</issue>
<code>
[start of pyqtgraph/widgets/HistogramLUTWidget.py]
1 """
2 Widget displaying an image histogram along with gradient editor. Can be used to adjust the appearance of images.
3 This is a wrapper around HistogramLUTItem
4 """
5
6 from ..Qt import QtGui, QtCore
7 from .GraphicsView import GraphicsView
8 from ..graphicsItems.HistogramLUTItem import HistogramLUTItem
9
10 __all__ = ['HistogramLUTWidget']
11
12
13 class HistogramLUTWidget(GraphicsView):
14
15 def __init__(self, parent=None, *args, **kargs):
16 background = kargs.get('background', 'default')
17 GraphicsView.__init__(self, parent, useOpenGL=False, background=background)
18 self.item = HistogramLUTItem(*args, **kargs)
19 self.setCentralItem(self.item)
20 self.setSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Expanding)
21 self.setMinimumWidth(95)
22
23
24 def sizeHint(self):
25 return QtCore.QSize(115, 200)
26
27
28
29 def __getattr__(self, attr):
30 return getattr(self.item, attr)
31
32
33
34
[end of pyqtgraph/widgets/HistogramLUTWidget.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyqtgraph/widgets/HistogramLUTWidget.py b/pyqtgraph/widgets/HistogramLUTWidget.py
--- a/pyqtgraph/widgets/HistogramLUTWidget.py
+++ b/pyqtgraph/widgets/HistogramLUTWidget.py
@@ -13,7 +13,7 @@
class HistogramLUTWidget(GraphicsView):
def __init__(self, parent=None, *args, **kargs):
- background = kargs.get('background', 'default')
+ background = kargs.pop('background', 'default')
GraphicsView.__init__(self, parent, useOpenGL=False, background=background)
self.item = HistogramLUTItem(*args, **kargs)
self.setCentralItem(self.item)
|
{"golden_diff": "diff --git a/pyqtgraph/widgets/HistogramLUTWidget.py b/pyqtgraph/widgets/HistogramLUTWidget.py\n--- a/pyqtgraph/widgets/HistogramLUTWidget.py\n+++ b/pyqtgraph/widgets/HistogramLUTWidget.py\n@@ -13,7 +13,7 @@\n class HistogramLUTWidget(GraphicsView):\n \n def __init__(self, parent=None, *args, **kargs):\n- background = kargs.get('background', 'default')\n+ background = kargs.pop('background', 'default')\n GraphicsView.__init__(self, parent, useOpenGL=False, background=background)\n self.item = HistogramLUTItem(*args, **kargs)\n self.setCentralItem(self.item)\n", "issue": "HistogramLUTWidget kargs bug?\n# lut_widget = HistogramLUTWidget(background='w')\n\n File \"/usr/local/lib/python3.4/dist-packages/pyqtgraph-0.9.8-py3.4.egg/pyqtgraph/widgets/HistogramLUTWidget.py\", line 18, in **init**\n self.item = HistogramLUTItem(_args, *_kargs)\n# TypeError: **init**() got an unexpected keyword argument 'background'\n\nI can fix it by:\n\nclass HistogramLUTWidget(pg.GraphicsView):\n\n```\ndef __init__(self, parent=None, *args, **kargs):\n # background = kargs.get('background', 'default')\n background = kargs.pop('background', 'default')\n```\n\n...\n\n", "before_files": [{"content": "\"\"\"\nWidget displaying an image histogram along with gradient editor. Can be used to adjust the appearance of images.\nThis is a wrapper around HistogramLUTItem\n\"\"\"\n\nfrom ..Qt import QtGui, QtCore\nfrom .GraphicsView import GraphicsView\nfrom ..graphicsItems.HistogramLUTItem import HistogramLUTItem\n\n__all__ = ['HistogramLUTWidget']\n\n\nclass HistogramLUTWidget(GraphicsView):\n \n def __init__(self, parent=None, *args, **kargs):\n background = kargs.get('background', 'default')\n GraphicsView.__init__(self, parent, useOpenGL=False, background=background)\n self.item = HistogramLUTItem(*args, **kargs)\n self.setCentralItem(self.item)\n self.setSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Expanding)\n self.setMinimumWidth(95)\n \n\n def sizeHint(self):\n return QtCore.QSize(115, 200)\n \n \n\n def __getattr__(self, attr):\n return getattr(self.item, attr)\n\n\n\n", "path": "pyqtgraph/widgets/HistogramLUTWidget.py"}]}
| 990 | 158 |
gh_patches_debug_6219
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-1502
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
AR scraper failing since at least 2017-03-11
State: AR - scraper has been failing since 2017-03-11
Based on automated runs it appears that AR has not run successfully in 2 days (2017-03-11).
```/usr/local/bin/billy-update ar``` | **failed during bills**
```
Traceback (most recent call last):
File "/opt/openstates/billy/billy/bin/update.py", line 368, in main
run_record += _run_scraper(stype, args, metadata)
File "/opt/openstates/billy/billy/bin/update.py", line 102, in _run_scraper
scraper.scrape(chamber, time)
File "/srv/openstates-web/openstates/ar/bills.py", line 40, in scrape
self.save_bill(bill)
File "/opt/openstates/billy/billy/scrape/__init__.py", line 199, in save_object
self.validate_json(obj)
File "/opt/openstates/billy/billy/scrape/__init__.py", line 130, in validate_json
raise ve
FieldValidationError: Value u'' for field '<obj>.sponsors[0].name' cannot be blank'
```
Visit http://bobsled.openstates.org/ for more info.
</issue>
<code>
[start of openstates/ar/bills.py]
1 import re
2 import csv
3 import StringIO
4 import datetime
5
6 from billy.scrape.bills import BillScraper, Bill
7 from billy.scrape.votes import Vote
8
9 import lxml.html
10
11 import scrapelib
12
13
14 def unicode_csv_reader(unicode_csv_data, dialect=csv.excel, **kwargs):
15 # csv.py doesn't do Unicode; encode temporarily as UTF-8:
16 csv_reader = csv.reader(utf_8_encoder(unicode_csv_data),
17 dialect=dialect, **kwargs)
18 for row in csv_reader:
19 # decode UTF-8 back to Unicode, cell by cell:
20 yield [unicode(cell, 'utf-8') for cell in row]
21
22
23 def utf_8_encoder(unicode_csv_data):
24 for line in unicode_csv_data:
25 yield line.encode('utf-8')
26
27
28 class ARBillScraper(BillScraper):
29 jurisdiction = 'ar'
30
31 def scrape(self, chamber, session):
32 self.bills = {}
33
34 self.slug = self.metadata['session_details'][session]['slug']
35
36 self.scrape_bill(chamber, session)
37 self.scrape_actions()
38
39 for bill in self.bills.itervalues():
40 self.save_bill(bill)
41
42 def scrape_bill(self, chamber, session):
43 url = "ftp://www.arkleg.state.ar.us/dfadooas/LegislativeMeasures.txt"
44 page = self.get(url).text
45 page = unicode_csv_reader(StringIO.StringIO(page), delimiter='|')
46
47 for row in page:
48 bill_chamber = {'H': 'lower', 'S': 'upper'}[row[0]]
49 if bill_chamber != chamber:
50 continue
51
52 bill_id = "%s%s %s" % (row[0], row[1], row[2])
53
54 type_spec = re.match(r'(H|S)([A-Z]+)\s', bill_id).group(2)
55 bill_type = {
56 'B': 'bill',
57 'R': 'resolution',
58 'JR': 'joint resolution',
59 'CR': 'concurrent resolution',
60 'MR': 'memorial resolution',
61 'CMR': 'concurrent memorial resolution'}[type_spec]
62
63 if row[-1] != self.slug:
64 continue
65
66 bill = Bill(session, chamber, bill_id, row[3], type=bill_type)
67 bill.add_source(url)
68
69 primary = row[11]
70 if not primary:
71 primary = row[12]
72 bill.add_sponsor('primary', primary)
73
74 # ftp://www.arkleg.state.ar.us/Bills/
75 # TODO: Keep on eye on this post 2017 to see if they apply R going forward.
76 session_code = '2017R' if session == '2017' else session
77
78 version_url = ("ftp://www.arkleg.state.ar.us/Bills/"
79 "%s/Public/%s.pdf" % (
80 session_code, bill_id.replace(' ', '')))
81 bill.add_version(bill_id, version_url, mimetype='application/pdf')
82
83 self.scrape_bill_page(bill)
84
85 self.bills[bill_id] = bill
86
87 def scrape_actions(self):
88 url = "ftp://www.arkleg.state.ar.us/dfadooas/ChamberActions.txt"
89 page = self.get(url).text
90 page = csv.reader(StringIO.StringIO(page))
91
92 for row in page:
93 bill_id = "%s%s %s" % (row[1], row[2], row[3])
94
95 if bill_id not in self.bills:
96 continue
97 # different term
98 if row[-2] != self.slug:
99 continue
100
101 # Commas aren't escaped, but only one field (the action) can
102 # contain them so we can work around it by using both positive
103 # and negative offsets
104 bill_id = "%s%s %s" % (row[1], row[2], row[3])
105 actor = {'HU': 'lower', 'SU': 'upper'}[row[-5].upper()]
106 # manual fix for crazy time value
107 row[6] = row[6].replace('.520000000', '')
108 date = datetime.datetime.strptime(row[6], "%Y-%m-%d %H:%M:%S")
109 action = ','.join(row[7:-5])
110
111 action_type = []
112 if action.startswith('Filed'):
113 action_type.append('bill:introduced')
114 elif (action.startswith('Read first time') or
115 action.startswith('Read the first time')):
116 action_type.append('bill:reading:1')
117 if re.match('Read the first time, .*, read the second time', action):
118 action_type.append('bill:reading:2')
119 elif action.startswith('Read the third time and passed'):
120 action_type.append('bill:passed')
121 action_type.append('bill:reading:3')
122 elif action.startswith('Read the third time'):
123 action_type.append('bill:reading:3')
124 elif action.startswith('DELIVERED TO GOVERNOR'):
125 action_type.append('governor:received')
126 elif action.startswith('Notification'):
127 action_type.append('governor:signed')
128
129 if 'referred to' in action:
130 action_type.append('committee:referred')
131
132 if 'Returned by the Committee' in action:
133 if 'recommendation that it Do Pass' in action:
134 action_type.append('committee:passed:favorable')
135 else:
136 action_type.append('committee:passed')
137
138 if re.match(r'Amendment No\. \d+ read and adopted', action):
139 action_type.append('amendment:introduced')
140 action_type.append('amendment:passed')
141
142 if not action:
143 action = '[No text provided]'
144 self.bills[bill_id].add_action(actor, action, date,
145 type=action_type or ['other'])
146
147 def scrape_bill_page(self, bill):
148 # We need to scrape each bill page in order to grab associated votes.
149 # It's still more efficient to get the rest of the data we're
150 # interested in from the CSVs, though, because their site splits
151 # other info (e.g. actions) across many pages
152 for t in self.metadata['terms']:
153 if bill['session'] in t['sessions']:
154 term_year = t['start_year']
155 break
156 measureno = bill['bill_id'].replace(' ', '')
157 url = ("http://www.arkleg.state.ar.us/assembly/%s/%s/"
158 "Pages/BillInformation.aspx?measureno=%s" % (
159 term_year, self.slug, measureno))
160 bill.add_source(url)
161
162 page = lxml.html.fromstring(self.get(url).text)
163 page.make_links_absolute(url)
164
165 for link in page.xpath("//a[contains(@href, 'Amendments')]"):
166 num = link.xpath("string(../../td[2])")
167 name = "Amendment %s" % num
168 bill.add_document(name, link.attrib['href'])
169
170 try:
171 cosponsor_link = page.xpath(
172 "//a[contains(@href, 'CoSponsors')]")[0]
173 self.scrape_cosponsors(bill, cosponsor_link.attrib['href'])
174 except IndexError:
175 # No cosponsor link is OK
176 pass
177
178 # hist_link = page.xpath("//a[contains(@href, 'BillStatusHistory')]")[0]
179 # self.scrape_votes(bill, hist_link.attrib['href'])
180
181 # def scrape_votes(self, bill, url):
182 # page = lxml.html.fromstring(self.get(url).text)
183 # page.make_links_absolute(url)
184
185 for link in page.xpath("//a[contains(@href, 'votes.aspx')]"):
186 date = link.xpath("string(../../td[2])")
187 date = datetime.datetime.strptime(date, "%m/%d/%Y %I:%M:%S %p")
188
189 motion = link.xpath("string(../../td[3])")
190
191 self.scrape_vote(bill, date, motion, link.attrib['href'])
192
193 def scrape_vote(self, bill, date, motion, url):
194 try:
195 page = self.get(url).text
196 except scrapelib.HTTPError:
197 #sometiems the link is there but is dead
198 return
199
200 if 'not yet official' in page:
201 # Sometimes they link to vote pages before they go live
202 return
203
204 page = lxml.html.fromstring(page)
205
206 if url.endswith('Senate'):
207 actor = 'upper'
208 else:
209 actor = 'lower'
210
211 count_path = "string(//td[@align = 'center' and contains(., '%s: ')])"
212 yes_count = int(page.xpath(count_path % "Yeas").split()[-1])
213 no_count = int(page.xpath(count_path % "Nays").split()[-1])
214 other_count = int(page.xpath(count_path % "Non Voting").split()[-1])
215 other_count += int(page.xpath(count_path % "Present").split()[-1])
216
217 passed = yes_count > no_count + other_count
218 vote = Vote(actor, date, motion, passed, yes_count,
219 no_count, other_count)
220 vote.add_source(url)
221
222 xpath = (
223 '//*[contains(@class, "ms-standardheader")]/'
224 'following-sibling::table')
225 divs = page.xpath(xpath)
226 votevals = 'yes no other other'.split()
227 for (voteval, div) in zip(votevals, divs):
228 for a in div.xpath('.//a'):
229 name = a.text_content().strip()
230 if not name:
231 continue
232 getattr(vote, voteval)(name)
233 bill.add_vote(vote)
234
235 def scrape_cosponsors(self, bill, url):
236 page = self.get(url).text
237 page = lxml.html.fromstring(page)
238
[end of openstates/ar/bills.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openstates/ar/bills.py b/openstates/ar/bills.py
--- a/openstates/ar/bills.py
+++ b/openstates/ar/bills.py
@@ -69,7 +69,8 @@
primary = row[11]
if not primary:
primary = row[12]
- bill.add_sponsor('primary', primary)
+ if primary:
+ bill.add_sponsor('primary', primary)
# ftp://www.arkleg.state.ar.us/Bills/
# TODO: Keep on eye on this post 2017 to see if they apply R going forward.
|
{"golden_diff": "diff --git a/openstates/ar/bills.py b/openstates/ar/bills.py\n--- a/openstates/ar/bills.py\n+++ b/openstates/ar/bills.py\n@@ -69,7 +69,8 @@\n primary = row[11]\n if not primary:\n primary = row[12]\n- bill.add_sponsor('primary', primary)\n+ if primary:\n+ bill.add_sponsor('primary', primary)\n \n # ftp://www.arkleg.state.ar.us/Bills/\n # TODO: Keep on eye on this post 2017 to see if they apply R going forward.\n", "issue": "AR scraper failing since at least 2017-03-11\nState: AR - scraper has been failing since 2017-03-11\n\nBased on automated runs it appears that AR has not run successfully in 2 days (2017-03-11).\n\n```/usr/local/bin/billy-update ar``` | **failed during bills**\n\n```\n Traceback (most recent call last):\n File \"/opt/openstates/billy/billy/bin/update.py\", line 368, in main\n run_record += _run_scraper(stype, args, metadata)\n File \"/opt/openstates/billy/billy/bin/update.py\", line 102, in _run_scraper\n scraper.scrape(chamber, time)\n File \"/srv/openstates-web/openstates/ar/bills.py\", line 40, in scrape\n self.save_bill(bill)\n File \"/opt/openstates/billy/billy/scrape/__init__.py\", line 199, in save_object\n self.validate_json(obj)\n File \"/opt/openstates/billy/billy/scrape/__init__.py\", line 130, in validate_json\n raise ve\nFieldValidationError: Value u'' for field '<obj>.sponsors[0].name' cannot be blank'\n\n```\n\nVisit http://bobsled.openstates.org/ for more info.\n\n", "before_files": [{"content": "import re\nimport csv\nimport StringIO\nimport datetime\n\nfrom billy.scrape.bills import BillScraper, Bill\nfrom billy.scrape.votes import Vote\n\nimport lxml.html\n\nimport scrapelib\n\n\ndef unicode_csv_reader(unicode_csv_data, dialect=csv.excel, **kwargs):\n # csv.py doesn't do Unicode; encode temporarily as UTF-8:\n csv_reader = csv.reader(utf_8_encoder(unicode_csv_data),\n dialect=dialect, **kwargs)\n for row in csv_reader:\n # decode UTF-8 back to Unicode, cell by cell:\n yield [unicode(cell, 'utf-8') for cell in row]\n\n\ndef utf_8_encoder(unicode_csv_data):\n for line in unicode_csv_data:\n yield line.encode('utf-8')\n\n\nclass ARBillScraper(BillScraper):\n jurisdiction = 'ar'\n\n def scrape(self, chamber, session):\n self.bills = {}\n\n self.slug = self.metadata['session_details'][session]['slug']\n\n self.scrape_bill(chamber, session)\n self.scrape_actions()\n\n for bill in self.bills.itervalues():\n self.save_bill(bill)\n\n def scrape_bill(self, chamber, session):\n url = \"ftp://www.arkleg.state.ar.us/dfadooas/LegislativeMeasures.txt\"\n page = self.get(url).text\n page = unicode_csv_reader(StringIO.StringIO(page), delimiter='|')\n\n for row in page:\n bill_chamber = {'H': 'lower', 'S': 'upper'}[row[0]]\n if bill_chamber != chamber:\n continue\n\n bill_id = \"%s%s %s\" % (row[0], row[1], row[2])\n\n type_spec = re.match(r'(H|S)([A-Z]+)\\s', bill_id).group(2)\n bill_type = {\n 'B': 'bill',\n 'R': 'resolution',\n 'JR': 'joint resolution',\n 'CR': 'concurrent resolution',\n 'MR': 'memorial resolution',\n 'CMR': 'concurrent memorial resolution'}[type_spec]\n\n if row[-1] != self.slug:\n continue\n\n bill = Bill(session, chamber, bill_id, row[3], type=bill_type)\n bill.add_source(url)\n\n primary = row[11]\n if not primary:\n primary = row[12]\n bill.add_sponsor('primary', primary)\n\n # ftp://www.arkleg.state.ar.us/Bills/\n # TODO: Keep on eye on this post 2017 to see if they apply R going forward.\n session_code = '2017R' if session == '2017' else session\n\n version_url = (\"ftp://www.arkleg.state.ar.us/Bills/\"\n \"%s/Public/%s.pdf\" 
% (\n session_code, bill_id.replace(' ', '')))\n bill.add_version(bill_id, version_url, mimetype='application/pdf')\n\n self.scrape_bill_page(bill)\n\n self.bills[bill_id] = bill\n\n def scrape_actions(self):\n url = \"ftp://www.arkleg.state.ar.us/dfadooas/ChamberActions.txt\"\n page = self.get(url).text\n page = csv.reader(StringIO.StringIO(page))\n\n for row in page:\n bill_id = \"%s%s %s\" % (row[1], row[2], row[3])\n\n if bill_id not in self.bills:\n continue\n # different term\n if row[-2] != self.slug:\n continue\n\n # Commas aren't escaped, but only one field (the action) can\n # contain them so we can work around it by using both positive\n # and negative offsets\n bill_id = \"%s%s %s\" % (row[1], row[2], row[3])\n actor = {'HU': 'lower', 'SU': 'upper'}[row[-5].upper()]\n # manual fix for crazy time value\n row[6] = row[6].replace('.520000000', '')\n date = datetime.datetime.strptime(row[6], \"%Y-%m-%d %H:%M:%S\")\n action = ','.join(row[7:-5])\n\n action_type = []\n if action.startswith('Filed'):\n action_type.append('bill:introduced')\n elif (action.startswith('Read first time') or\n action.startswith('Read the first time')):\n action_type.append('bill:reading:1')\n if re.match('Read the first time, .*, read the second time', action):\n action_type.append('bill:reading:2')\n elif action.startswith('Read the third time and passed'):\n action_type.append('bill:passed')\n action_type.append('bill:reading:3')\n elif action.startswith('Read the third time'):\n action_type.append('bill:reading:3')\n elif action.startswith('DELIVERED TO GOVERNOR'):\n action_type.append('governor:received')\n elif action.startswith('Notification'):\n action_type.append('governor:signed')\n\n if 'referred to' in action:\n action_type.append('committee:referred')\n\n if 'Returned by the Committee' in action:\n if 'recommendation that it Do Pass' in action:\n action_type.append('committee:passed:favorable')\n else:\n action_type.append('committee:passed')\n\n if re.match(r'Amendment No\\. \\d+ read and adopted', action):\n action_type.append('amendment:introduced')\n action_type.append('amendment:passed')\n\n if not action:\n action = '[No text provided]'\n self.bills[bill_id].add_action(actor, action, date,\n type=action_type or ['other'])\n\n def scrape_bill_page(self, bill):\n # We need to scrape each bill page in order to grab associated votes.\n # It's still more efficient to get the rest of the data we're\n # interested in from the CSVs, though, because their site splits\n # other info (e.g. 
actions) across many pages\n for t in self.metadata['terms']:\n if bill['session'] in t['sessions']:\n term_year = t['start_year']\n break\n measureno = bill['bill_id'].replace(' ', '')\n url = (\"http://www.arkleg.state.ar.us/assembly/%s/%s/\"\n \"Pages/BillInformation.aspx?measureno=%s\" % (\n term_year, self.slug, measureno))\n bill.add_source(url)\n\n page = lxml.html.fromstring(self.get(url).text)\n page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, 'Amendments')]\"):\n num = link.xpath(\"string(../../td[2])\")\n name = \"Amendment %s\" % num\n bill.add_document(name, link.attrib['href'])\n\n try:\n cosponsor_link = page.xpath(\n \"//a[contains(@href, 'CoSponsors')]\")[0]\n self.scrape_cosponsors(bill, cosponsor_link.attrib['href'])\n except IndexError:\n # No cosponsor link is OK\n pass\n\n # hist_link = page.xpath(\"//a[contains(@href, 'BillStatusHistory')]\")[0]\n # self.scrape_votes(bill, hist_link.attrib['href'])\n\n # def scrape_votes(self, bill, url):\n # page = lxml.html.fromstring(self.get(url).text)\n # page.make_links_absolute(url)\n\n for link in page.xpath(\"//a[contains(@href, 'votes.aspx')]\"):\n date = link.xpath(\"string(../../td[2])\")\n date = datetime.datetime.strptime(date, \"%m/%d/%Y %I:%M:%S %p\")\n\n motion = link.xpath(\"string(../../td[3])\")\n\n self.scrape_vote(bill, date, motion, link.attrib['href'])\n\n def scrape_vote(self, bill, date, motion, url):\n try:\n page = self.get(url).text\n except scrapelib.HTTPError:\n #sometiems the link is there but is dead\n return\n\n if 'not yet official' in page:\n # Sometimes they link to vote pages before they go live\n return\n\n page = lxml.html.fromstring(page)\n\n if url.endswith('Senate'):\n actor = 'upper'\n else:\n actor = 'lower'\n\n count_path = \"string(//td[@align = 'center' and contains(., '%s: ')])\"\n yes_count = int(page.xpath(count_path % \"Yeas\").split()[-1])\n no_count = int(page.xpath(count_path % \"Nays\").split()[-1])\n other_count = int(page.xpath(count_path % \"Non Voting\").split()[-1])\n other_count += int(page.xpath(count_path % \"Present\").split()[-1])\n\n passed = yes_count > no_count + other_count\n vote = Vote(actor, date, motion, passed, yes_count,\n no_count, other_count)\n vote.add_source(url)\n\n xpath = (\n '//*[contains(@class, \"ms-standardheader\")]/'\n 'following-sibling::table')\n divs = page.xpath(xpath)\n votevals = 'yes no other other'.split()\n for (voteval, div) in zip(votevals, divs):\n for a in div.xpath('.//a'):\n name = a.text_content().strip()\n if not name:\n continue\n getattr(vote, voteval)(name)\n bill.add_vote(vote)\n\n def scrape_cosponsors(self, bill, url):\n page = self.get(url).text\n page = lxml.html.fromstring(page)\n", "path": "openstates/ar/bills.py"}]}
| 3,575 | 135 |
gh_patches_debug_16517
|
rasdani/github-patches
|
git_diff
|
ultrabug__py3status-113
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Typo in the keyboard_layout module
`xbklayout` function should be `xkblayout`, i.e. "kb" instead of "bk". This typo appears 3 times in total.
The rest of the code uses "kb" so I assumed what I found was a typo and decided to report it since it already caught my eye.
</issue>
<code>
[start of py3status/modules/keyboard_layout.py]
1 # -*- coding: utf-8 -*-
2 """
3 Display the current keyboard layout.
4
5 Configuration parameters:
6 - cache_timeout: check for keyboard layout change every seconds
7
8 Requires:
9 - xkblayout-state
10 or
11 - setxkbmap
12
13 @author shadowprince
14 @license Eclipse Public License
15 """
16
17 from subprocess import check_output
18 from time import time
19 import shlex
20 import re
21
22 # colors of layouts, check your command's output to match keys
23 LANG_COLORS = {
24 'fr': '#268BD2', # solarized blue
25 'ru': '#F75252', # red
26 'ua': '#FCE94F', # yellow
27 'us': '#729FCF', # light blue
28 }
29
30 LAYOUT_RE = re.compile(r".*layout:\s*(\w+).*", flags=re.DOTALL)
31
32
33 def xbklayout():
34 """
35 check using xkblayout-state (preferred method)
36 """
37 return check_output(
38 ["xkblayout-state", "print", "%s"]
39 ).decode('utf-8')
40
41
42 def setxkbmap():
43 """
44 check using setxkbmap >= 1.3.0
45
46 Please read issue 33 for more information :
47 https://github.com/ultrabug/py3status/pull/33
48 """
49 out = check_output(shlex.split("setxkbmap -query")).decode("utf-8")
50
51 return re.match(LAYOUT_RE, out).group(1)
52
53
54 class Py3status:
55 """
56 """
57 # available configuration parameters
58 cache_timeout = 10
59 color = ''
60
61 def __init__(self):
62 """
63 find the best implementation to get the keyboard's layout
64 """
65 try:
66 xbklayout()
67 except:
68 self.command = setxkbmap
69 else:
70 self.command = xbklayout
71
72 def keyboard_layout(self, i3s_output_list, i3s_config):
73 response = {
74 'cached_until': time() + self.cache_timeout,
75 'full_text': ''
76 }
77
78 lang = self.command().strip()
79 lang_color = self.color if self.color else LANG_COLORS.get(lang)
80
81 response['full_text'] = lang or '??'
82 if lang_color:
83 response['color'] = lang_color
84
85 return response
86
87 if __name__ == "__main__":
88 """
89 Test this module by calling it directly.
90 """
91 from time import sleep
92 x = Py3status()
93 config = {
94 'color_good': '#00FF00',
95 'color_bad': '#FF0000',
96 }
97 while True:
98 print(x.keyboard_layout([], config))
99 sleep(1)
100
[end of py3status/modules/keyboard_layout.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/py3status/modules/keyboard_layout.py b/py3status/modules/keyboard_layout.py
--- a/py3status/modules/keyboard_layout.py
+++ b/py3status/modules/keyboard_layout.py
@@ -30,7 +30,7 @@
LAYOUT_RE = re.compile(r".*layout:\s*(\w+).*", flags=re.DOTALL)
-def xbklayout():
+def xkblayout():
"""
check using xkblayout-state (preferred method)
"""
@@ -63,11 +63,11 @@
find the best implementation to get the keyboard's layout
"""
try:
- xbklayout()
+ xkblayout()
except:
self.command = setxkbmap
else:
- self.command = xbklayout
+ self.command = xkblayout
def keyboard_layout(self, i3s_output_list, i3s_config):
response = {
|
{"golden_diff": "diff --git a/py3status/modules/keyboard_layout.py b/py3status/modules/keyboard_layout.py\n--- a/py3status/modules/keyboard_layout.py\n+++ b/py3status/modules/keyboard_layout.py\n@@ -30,7 +30,7 @@\n LAYOUT_RE = re.compile(r\".*layout:\\s*(\\w+).*\", flags=re.DOTALL)\n \n \n-def xbklayout():\n+def xkblayout():\n \"\"\"\n check using xkblayout-state (preferred method)\n \"\"\"\n@@ -63,11 +63,11 @@\n find the best implementation to get the keyboard's layout\n \"\"\"\n try:\n- xbklayout()\n+ xkblayout()\n except:\n self.command = setxkbmap\n else:\n- self.command = xbklayout\n+ self.command = xkblayout\n \n def keyboard_layout(self, i3s_output_list, i3s_config):\n response = {\n", "issue": "Typo in the keyboard_layout module\n`xbklayout` function should be `xkblayout`, i.e. \"kb\" instead of \"bk\". This typo appears 3 times in total.\n\nThe rest of the code uses \"kb\" so I assumed what I found was a typo and decided to report it since it already caught my eye.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDisplay the current keyboard layout.\n\nConfiguration parameters:\n - cache_timeout: check for keyboard layout change every seconds\n\nRequires:\n - xkblayout-state\n or\n - setxkbmap\n\n@author shadowprince\n@license Eclipse Public License\n\"\"\"\n\nfrom subprocess import check_output\nfrom time import time\nimport shlex\nimport re\n\n# colors of layouts, check your command's output to match keys\nLANG_COLORS = {\n 'fr': '#268BD2', # solarized blue\n 'ru': '#F75252', # red\n 'ua': '#FCE94F', # yellow\n 'us': '#729FCF', # light blue\n}\n\nLAYOUT_RE = re.compile(r\".*layout:\\s*(\\w+).*\", flags=re.DOTALL)\n\n\ndef xbklayout():\n \"\"\"\n check using xkblayout-state (preferred method)\n \"\"\"\n return check_output(\n [\"xkblayout-state\", \"print\", \"%s\"]\n ).decode('utf-8')\n\n\ndef setxkbmap():\n \"\"\"\n check using setxkbmap >= 1.3.0\n\n Please read issue 33 for more information :\n https://github.com/ultrabug/py3status/pull/33\n \"\"\"\n out = check_output(shlex.split(\"setxkbmap -query\")).decode(\"utf-8\")\n\n return re.match(LAYOUT_RE, out).group(1)\n\n\nclass Py3status:\n \"\"\"\n \"\"\"\n # available configuration parameters\n cache_timeout = 10\n color = ''\n\n def __init__(self):\n \"\"\"\n find the best implementation to get the keyboard's layout\n \"\"\"\n try:\n xbklayout()\n except:\n self.command = setxkbmap\n else:\n self.command = xbklayout\n\n def keyboard_layout(self, i3s_output_list, i3s_config):\n response = {\n 'cached_until': time() + self.cache_timeout,\n 'full_text': ''\n }\n\n lang = self.command().strip()\n lang_color = self.color if self.color else LANG_COLORS.get(lang)\n\n response['full_text'] = lang or '??'\n if lang_color:\n response['color'] = lang_color\n\n return response\n\nif __name__ == \"__main__\":\n \"\"\"\n Test this module by calling it directly.\n \"\"\"\n from time import sleep\n x = Py3status()\n config = {\n 'color_good': '#00FF00',\n 'color_bad': '#FF0000',\n }\n while True:\n print(x.keyboard_layout([], config))\n sleep(1)\n", "path": "py3status/modules/keyboard_layout.py"}]}
| 1,409 | 208 |
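The rename above leaves the module's detection pattern intact: try `xkblayout-state` first, fall back to parsing `setxkbmap -query`. A minimal standalone sketch of that same pattern — `detect_layout` is a hypothetical helper for illustration, not part of py3status, and it assumes at least one of the two tools is on PATH:

```python
import re
import shlex
from subprocess import CalledProcessError, check_output

LAYOUT_RE = re.compile(r".*layout:\s*(\w+).*", flags=re.DOTALL)


def xkblayout():
    # Preferred method: xkblayout-state prints the active layout directly.
    return check_output(["xkblayout-state", "print", "%s"]).decode("utf-8")


def setxkbmap():
    # Fallback: pull "layout: xx" out of `setxkbmap -query` (>= 1.3.0).
    out = check_output(shlex.split("setxkbmap -query")).decode("utf-8")
    return re.match(LAYOUT_RE, out).group(1)


def detect_layout():
    # Same priority order as the module: preferred tool, then fallback.
    try:
        return xkblayout().strip()
    except (OSError, CalledProcessError):
        return setxkbmap().strip()


if __name__ == "__main__":
    print(detect_layout())
```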
gh_patches_debug_62154
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-258
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`globals` should be an optional config field
Running over a config without `globals`, I see:
```
def make_rundir(config=None, path=None):
"""When a path has not been specified, make the run directory.
Creates a rundir with the following hierarchy:
./runinfo <- Home of all run directories
|----000
|----001 <- Directories for each run
| ....
|----NNN
Kwargs:
- path (str): String path to a specific run dir
Default : None.
"""
try:
if not path:
path = None
> elif config["globals"].get('runDir', None):
E KeyError: 'globals'
../dataflow/rundirs.py:25: KeyError
```
</issue>
<code>
[start of parsl/dataflow/rundirs.py]
1 import os
2 from glob import glob
3 import logging
4
5 logger = logging.getLogger(__name__)
6
7
8 def make_rundir(config=None, path=None):
9 """When a path has not been specified, make the run directory.
10
11 Creates a rundir with the following hierarchy:
12 ./runinfo <- Home of all run directories
13 |----000
14 |----001 <- Directories for each run
15 | ....
16 |----NNN
17
18 Kwargs:
19 - path (str): String path to a specific run dir
20 Default : None.
21 """
22 try:
23 if not path:
24 path = None
25 elif config["globals"].get('runDir', None):
26 path = config["globals"]['runDir']
27
28 if not path:
29 path = "./runinfo"
30
31 if not os.path.exists(path):
32 os.makedirs(path)
33
34 prev_rundirs = glob(os.path.join(path, "[0-9]*"))
35
36 current_rundir = os.path.join(path, '000')
37
38 if prev_rundirs:
39 # Since we globbed on files named as 0-9
40 x = sorted([int(os.path.basename(x)) for x in prev_rundirs])[-1]
41 current_rundir = os.path.join(path, '{0:03}'.format(x + 1))
42
43 os.makedirs(current_rundir)
44 logger.debug("Parsl run initializing in rundir:{0}".format(current_rundir))
45 return os.path.abspath(current_rundir)
46
47 except Exception as e:
48 logger.error("Failed to create a run directory")
49 logger.error("Error: {0}".format(e))
50 exit(-1)
51
[end of parsl/dataflow/rundirs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/dataflow/rundirs.py b/parsl/dataflow/rundirs.py
--- a/parsl/dataflow/rundirs.py
+++ b/parsl/dataflow/rundirs.py
@@ -22,7 +22,7 @@
try:
if not path:
path = None
- elif config["globals"].get('runDir', None):
+ elif config.get("globals", {}).get('runDir'):
path = config["globals"]['runDir']
if not path:
|
{"golden_diff": "diff --git a/parsl/dataflow/rundirs.py b/parsl/dataflow/rundirs.py\n--- a/parsl/dataflow/rundirs.py\n+++ b/parsl/dataflow/rundirs.py\n@@ -22,7 +22,7 @@\n try:\n if not path:\n path = None\n- elif config[\"globals\"].get('runDir', None):\n+ elif config.get(\"globals\", {}).get('runDir'):\n path = config[\"globals\"]['runDir']\n \n if not path:\n", "issue": "`globals` should be an optional config field\nRunning over a config without `globals`, I see:\r\n```\r\n def make_rundir(config=None, path=None):\r\n \"\"\"When a path has not been specified, make the run directory.\r\n\r\n Creates a rundir with the following hierarchy:\r\n ./runinfo <- Home of all run directories\r\n |----000\r\n |----001 <- Directories for each run\r\n | ....\r\n |----NNN\r\n\r\n Kwargs:\r\n - path (str): String path to a specific run dir\r\n Default : None.\r\n \"\"\"\r\n try:\r\n if not path:\r\n path = None\r\n> elif config[\"globals\"].get('runDir', None):\r\nE KeyError: 'globals'\r\n\r\n../dataflow/rundirs.py:25: KeyError\r\n```\n", "before_files": [{"content": "import os\nfrom glob import glob\nimport logging\n\nlogger = logging.getLogger(__name__)\n\n\ndef make_rundir(config=None, path=None):\n \"\"\"When a path has not been specified, make the run directory.\n\n Creates a rundir with the following hierarchy:\n ./runinfo <- Home of all run directories\n |----000\n |----001 <- Directories for each run\n | ....\n |----NNN\n\n Kwargs:\n - path (str): String path to a specific run dir\n Default : None.\n \"\"\"\n try:\n if not path:\n path = None\n elif config[\"globals\"].get('runDir', None):\n path = config[\"globals\"]['runDir']\n\n if not path:\n path = \"./runinfo\"\n\n if not os.path.exists(path):\n os.makedirs(path)\n\n prev_rundirs = glob(os.path.join(path, \"[0-9]*\"))\n\n current_rundir = os.path.join(path, '000')\n\n if prev_rundirs:\n # Since we globbed on files named as 0-9\n x = sorted([int(os.path.basename(x)) for x in prev_rundirs])[-1]\n current_rundir = os.path.join(path, '{0:03}'.format(x + 1))\n\n os.makedirs(current_rundir)\n logger.debug(\"Parsl run initializing in rundir:{0}\".format(current_rundir))\n return os.path.abspath(current_rundir)\n\n except Exception as e:\n logger.error(\"Failed to create a run directory\")\n logger.error(\"Error: {0}\".format(e))\n exit(-1)\n", "path": "parsl/dataflow/rundirs.py"}]}
| 1,177 | 118 |
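The patch above amounts to guarding an optional section with chained `.get()` calls so a missing `globals` key no longer raises `KeyError`. A small sketch of just that lookup pattern, using hypothetical config dicts rather than the real parsl objects:

```python
def run_dir_from(config):
    # .get("globals", {}) guards the optional section; .get("runDir") guards
    # the optional key inside it; fall back to the default run directory.
    return config.get("globals", {}).get("runDir") or "./runinfo"


assert run_dir_from({}) == "./runinfo"                       # no "globals" section
assert run_dir_from({"globals": {}}) == "./runinfo"          # section without "runDir"
assert run_dir_from({"globals": {"runDir": "/tmp/runs"}}) == "/tmp/runs"
```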
gh_patches_debug_23081
|
rasdani/github-patches
|
git_diff
|
ManimCommunity__manim-521
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Issue Saving Log files
https://github.com/ManimCommunity/manim/blob/e011f640cf085879b67cce7bc0dc08450ba92d3b/manim/config/config.py#L158-L165
Here it defines that the scene name will be the log file name. But a problem occurs when I tried with no scene name and entered it at runtime: the log file is saved in `media/logs/.log`, which is weird and should not happen.
</issue>
<code>
[start of manim/config/config.py]
1 """
2 config.py
3 ---------
4 Process the manim.cfg file and the command line arguments into a single
5 config object.
6 """
7
8
9 __all__ = ["file_writer_config", "config", "camera_config", "tempconfig"]
10
11
12 import os
13 import sys
14 from contextlib import contextmanager
15
16 import colour
17
18 from .. import constants
19 from .config_utils import (
20 _determine_quality,
21 _run_config,
22 _init_dirs,
23 _from_command_line,
24 )
25
26 from .logger import set_rich_logger, set_file_logger, logger
27 from ..utils.tex import TexTemplate, TexTemplateFromFile
28
29 __all__ = ["file_writer_config", "config", "camera_config", "tempconfig"]
30
31
32 config = None
33
34
35 @contextmanager
36 def tempconfig(temp):
37 """Context manager that temporarily modifies the global config dict.
38
39 The code block inside the ``with`` statement will use the modified config.
40 After the code block, the config will be restored to its original value.
41
42 Parameters
43 ----------
44
45 temp : :class:`dict`
46 A dictionary whose keys will be used to temporarily update the global
47 config.
48
49 Examples
50 --------
51 Use ``with tempconfig({...})`` to temporarily change the default values of
52 certain objects.
53
54 .. code_block:: python
55
56 c = Camera()
57 c.frame_width == config['frame_width'] # -> True
58 with tempconfig({'frame_width': 100}):
59 c = Camera()
60 c.frame_width == config['frame_width'] # -> False
61 c.frame_width == 100 # -> True
62
63 """
64 global config
65 original = config.copy()
66
67 temp = {k: v for k, v in temp.items() if k in original}
68
69 # In order to change the config that every module has acces to, use
70 # update(), DO NOT use assignment. Assigning config = some_dict will just
71 # make the local variable named config point to a new dictionary, it will
72 # NOT change the dictionary that every module has a reference to.
73 config.update(temp)
74 try:
75 yield
76 finally:
77 config.update(original) # update, not assignment!
78
79
80 def _parse_config(config_parser, args):
81 """Parse config files and CLI arguments into a single dictionary."""
82 # By default, use the CLI section of the digested .cfg files
83 default = config_parser["CLI"]
84
85 # Handle the *_quality flags. These determine the section to read
86 # and are stored in 'camera_config'. Note the highest resolution
87 # passed as argument will be used.
88 quality = _determine_quality(args)
89 section = config_parser[quality if quality != "production" else "CLI"]
90
91 # Loop over low quality for the keys, could be any quality really
92 config = {opt: section.getint(opt) for opt in config_parser["low_quality"]}
93
94 config["default_pixel_height"] = default.getint("pixel_height")
95 config["default_pixel_width"] = default.getint("pixel_width")
96 # The -r, --resolution flag overrides the *_quality flags
97 if args.resolution is not None:
98 if "," in args.resolution:
99 height_str, width_str = args.resolution.split(",")
100 height, width = int(height_str), int(width_str)
101 else:
102 height = int(args.resolution)
103 width = int(16 * height / 9)
104 config.update({"pixel_height": height, "pixel_width": width})
105
106 # Handle the -c (--background_color) flag
107 if args.background_color is not None:
108 try:
109 background_color = colour.Color(args.background_color)
110 except AttributeError as err:
111 logger.warning("Please use a valid color.")
112 logger.error(err)
113 sys.exit(2)
114 else:
115 background_color = colour.Color(default["background_color"])
116 config["background_color"] = background_color
117
118 config["use_js_renderer"] = args.use_js_renderer or default.getboolean(
119 "use_js_renderer"
120 )
121
122 config["js_renderer_path"] = args.js_renderer_path or default.get(
123 "js_renderer_path"
124 )
125
126 # Set the rest of the frame properties
127 config["frame_height"] = 8.0
128 config["frame_width"] = (
129 config["frame_height"] * config["pixel_width"] / config["pixel_height"]
130 )
131 config["frame_y_radius"] = config["frame_height"] / 2
132 config["frame_x_radius"] = config["frame_width"] / 2
133 config["top"] = config["frame_y_radius"] * constants.UP
134 config["bottom"] = config["frame_y_radius"] * constants.DOWN
135 config["left_side"] = config["frame_x_radius"] * constants.LEFT
136 config["right_side"] = config["frame_x_radius"] * constants.RIGHT
137
138 # Handle the --tex_template flag. Note we accept None if the flag is absent
139 tex_fn = os.path.expanduser(args.tex_template) if args.tex_template else None
140
141 if tex_fn is not None and not os.access(tex_fn, os.R_OK):
142 # custom template not available, fallback to default
143 logger.warning(
144 f"Custom TeX template {tex_fn} not found or not readable. "
145 "Falling back to the default template."
146 )
147 tex_fn = None
148 config["tex_template_file"] = tex_fn
149 config["tex_template"] = (
150 TexTemplateFromFile(filename=tex_fn) if tex_fn is not None else TexTemplate()
151 )
152
153 return config
154
155
156 args, config_parser, file_writer_config, successfully_read_files = _run_config()
157 logger.setLevel(file_writer_config["verbosity"])
158 set_rich_logger(config_parser["logger"], file_writer_config["verbosity"])
159
160 if _from_command_line():
161 logger.debug(
162 f"Read configuration files: {[os.path.abspath(cfgfile) for cfgfile in successfully_read_files]}"
163 )
164 if not (hasattr(args, "subcommands")):
165 _init_dirs(file_writer_config)
166 config = _parse_config(config_parser, args)
167 if config["use_js_renderer"]:
168 file_writer_config["disable_caching"] = True
169 camera_config = config
170
171 if file_writer_config["log_to_file"]:
172 # IMPORTANT note about file name : The log file name will be the scene_name get from the args (contained in file_writer_config). So it can differ from the real name of the scene.
173 log_file_path = os.path.join(
174 file_writer_config["log_dir"],
175 "".join(file_writer_config["scene_names"]) + ".log",
176 )
177 set_file_logger(log_file_path)
178 logger.info("Log file wil be saved in %(logpath)s", {"logpath": log_file_path})
179
[end of manim/config/config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/manim/config/config.py b/manim/config/config.py
--- a/manim/config/config.py
+++ b/manim/config/config.py
@@ -156,10 +156,19 @@
# Set the different loggers
set_rich_logger(config_parser["logger"], file_writer_config["verbosity"])
if file_writer_config["log_to_file"]:
- # IMPORTANT note about file name : The log file name will be the scene_name get from the args (contained in file_writer_config). So it can differ from the real name of the scene.
+ # Note about log_file_name : The log file name will be the <name_of_animation_file>_<name_of_scene>.log
+ # get from the args (contained in file_writer_config). So it can differ from the real name of the scene.
+ # <name_of_scene> would only appear if scene name was provided on manim call
+ scene_name_suffix = "".join(file_writer_config["scene_names"])
+ scene_file_name = os.path.basename(args.file).split(".")[0]
+ log_file_name = (
+ f"{scene_file_name}_{scene_name_suffix}.log"
+ if scene_name_suffix
+ else f"{scene_file_name}.log"
+ )
log_file_path = os.path.join(
file_writer_config["log_dir"],
- "".join(file_writer_config["scene_names"]) + ".log",
+ log_file_name,
)
set_file_logger(log_file_path)
- logger.info("Log file wil be saved in %(logpath)s", {"logpath": log_file_path})
+ logger.info("Log file will be saved in %(logpath)s", {"logpath": log_file_path})
|
{"golden_diff": "diff --git a/manim/config/config.py b/manim/config/config.py\n--- a/manim/config/config.py\n+++ b/manim/config/config.py\n@@ -156,10 +156,19 @@\n # Set the different loggers\n set_rich_logger(config_parser[\"logger\"], file_writer_config[\"verbosity\"])\n if file_writer_config[\"log_to_file\"]:\n- # IMPORTANT note about file name : The log file name will be the scene_name get from the args (contained in file_writer_config). So it can differ from the real name of the scene.\n+ # Note about log_file_name : The log file name will be the <name_of_animation_file>_<name_of_scene>.log\n+ # get from the args (contained in file_writer_config). So it can differ from the real name of the scene.\n+ # <name_of_scene> would only appear if scene name was provided on manim call\n+ scene_name_suffix = \"\".join(file_writer_config[\"scene_names\"])\n+ scene_file_name = os.path.basename(args.file).split(\".\")[0]\n+ log_file_name = (\n+ f\"{scene_file_name}_{scene_name_suffix}.log\"\n+ if scene_name_suffix\n+ else f\"{scene_file_name}.log\"\n+ )\n log_file_path = os.path.join(\n file_writer_config[\"log_dir\"],\n- \"\".join(file_writer_config[\"scene_names\"]) + \".log\",\n+ log_file_name,\n )\n set_file_logger(log_file_path)\n- logger.info(\"Log file wil be saved in %(logpath)s\", {\"logpath\": log_file_path})\n+ logger.info(\"Log file will be saved in %(logpath)s\", {\"logpath\": log_file_path})\n", "issue": "Issue Saving Log files\nhttps://github.com/ManimCommunity/manim/blob/e011f640cf085879b67cce7bc0dc08450ba92d3b/manim/config/config.py#L158-L165\r\nHere it defines that scene name will be log file name. But a problem is when I tried with no scene name and entered it at runtime. The log file a saved in `media/logs/.log` which is weird and should not happen.\n", "before_files": [{"content": "\"\"\"\nconfig.py\n---------\nProcess the manim.cfg file and the command line arguments into a single\nconfig object.\n\"\"\"\n\n\n__all__ = [\"file_writer_config\", \"config\", \"camera_config\", \"tempconfig\"]\n\n\nimport os\nimport sys\nfrom contextlib import contextmanager\n\nimport colour\n\nfrom .. import constants\nfrom .config_utils import (\n _determine_quality,\n _run_config,\n _init_dirs,\n _from_command_line,\n)\n\nfrom .logger import set_rich_logger, set_file_logger, logger\nfrom ..utils.tex import TexTemplate, TexTemplateFromFile\n\n__all__ = [\"file_writer_config\", \"config\", \"camera_config\", \"tempconfig\"]\n\n\nconfig = None\n\n\n@contextmanager\ndef tempconfig(temp):\n \"\"\"Context manager that temporarily modifies the global config dict.\n\n The code block inside the ``with`` statement will use the modified config.\n After the code block, the config will be restored to its original value.\n\n Parameters\n ----------\n\n temp : :class:`dict`\n A dictionary whose keys will be used to temporarily update the global\n config.\n\n Examples\n --------\n Use ``with tempconfig({...})`` to temporarily change the default values of\n certain objects.\n\n .. code_block:: python\n\n c = Camera()\n c.frame_width == config['frame_width'] # -> True\n with tempconfig({'frame_width': 100}):\n c = Camera()\n c.frame_width == config['frame_width'] # -> False\n c.frame_width == 100 # -> True\n\n \"\"\"\n global config\n original = config.copy()\n\n temp = {k: v for k, v in temp.items() if k in original}\n\n # In order to change the config that every module has acces to, use\n # update(), DO NOT use assignment. 
Assigning config = some_dict will just\n # make the local variable named config point to a new dictionary, it will\n # NOT change the dictionary that every module has a reference to.\n config.update(temp)\n try:\n yield\n finally:\n config.update(original) # update, not assignment!\n\n\ndef _parse_config(config_parser, args):\n \"\"\"Parse config files and CLI arguments into a single dictionary.\"\"\"\n # By default, use the CLI section of the digested .cfg files\n default = config_parser[\"CLI\"]\n\n # Handle the *_quality flags. These determine the section to read\n # and are stored in 'camera_config'. Note the highest resolution\n # passed as argument will be used.\n quality = _determine_quality(args)\n section = config_parser[quality if quality != \"production\" else \"CLI\"]\n\n # Loop over low quality for the keys, could be any quality really\n config = {opt: section.getint(opt) for opt in config_parser[\"low_quality\"]}\n\n config[\"default_pixel_height\"] = default.getint(\"pixel_height\")\n config[\"default_pixel_width\"] = default.getint(\"pixel_width\")\n # The -r, --resolution flag overrides the *_quality flags\n if args.resolution is not None:\n if \",\" in args.resolution:\n height_str, width_str = args.resolution.split(\",\")\n height, width = int(height_str), int(width_str)\n else:\n height = int(args.resolution)\n width = int(16 * height / 9)\n config.update({\"pixel_height\": height, \"pixel_width\": width})\n\n # Handle the -c (--background_color) flag\n if args.background_color is not None:\n try:\n background_color = colour.Color(args.background_color)\n except AttributeError as err:\n logger.warning(\"Please use a valid color.\")\n logger.error(err)\n sys.exit(2)\n else:\n background_color = colour.Color(default[\"background_color\"])\n config[\"background_color\"] = background_color\n\n config[\"use_js_renderer\"] = args.use_js_renderer or default.getboolean(\n \"use_js_renderer\"\n )\n\n config[\"js_renderer_path\"] = args.js_renderer_path or default.get(\n \"js_renderer_path\"\n )\n\n # Set the rest of the frame properties\n config[\"frame_height\"] = 8.0\n config[\"frame_width\"] = (\n config[\"frame_height\"] * config[\"pixel_width\"] / config[\"pixel_height\"]\n )\n config[\"frame_y_radius\"] = config[\"frame_height\"] / 2\n config[\"frame_x_radius\"] = config[\"frame_width\"] / 2\n config[\"top\"] = config[\"frame_y_radius\"] * constants.UP\n config[\"bottom\"] = config[\"frame_y_radius\"] * constants.DOWN\n config[\"left_side\"] = config[\"frame_x_radius\"] * constants.LEFT\n config[\"right_side\"] = config[\"frame_x_radius\"] * constants.RIGHT\n\n # Handle the --tex_template flag. Note we accept None if the flag is absent\n tex_fn = os.path.expanduser(args.tex_template) if args.tex_template else None\n\n if tex_fn is not None and not os.access(tex_fn, os.R_OK):\n # custom template not available, fallback to default\n logger.warning(\n f\"Custom TeX template {tex_fn} not found or not readable. 
\"\n \"Falling back to the default template.\"\n )\n tex_fn = None\n config[\"tex_template_file\"] = tex_fn\n config[\"tex_template\"] = (\n TexTemplateFromFile(filename=tex_fn) if tex_fn is not None else TexTemplate()\n )\n\n return config\n\n\nargs, config_parser, file_writer_config, successfully_read_files = _run_config()\nlogger.setLevel(file_writer_config[\"verbosity\"])\nset_rich_logger(config_parser[\"logger\"], file_writer_config[\"verbosity\"])\n\nif _from_command_line():\n logger.debug(\n f\"Read configuration files: {[os.path.abspath(cfgfile) for cfgfile in successfully_read_files]}\"\n )\n if not (hasattr(args, \"subcommands\")):\n _init_dirs(file_writer_config)\nconfig = _parse_config(config_parser, args)\nif config[\"use_js_renderer\"]:\n file_writer_config[\"disable_caching\"] = True\ncamera_config = config\n\nif file_writer_config[\"log_to_file\"]:\n # IMPORTANT note about file name : The log file name will be the scene_name get from the args (contained in file_writer_config). So it can differ from the real name of the scene.\n log_file_path = os.path.join(\n file_writer_config[\"log_dir\"],\n \"\".join(file_writer_config[\"scene_names\"]) + \".log\",\n )\n set_file_logger(log_file_path)\n logger.info(\"Log file wil be saved in %(logpath)s\", {\"logpath\": log_file_path})\n", "path": "manim/config/config.py"}]}
| 2,521 | 366 |
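The golden diff above builds the log file name from the animation file name plus an optional scene suffix, so an empty scene list no longer produces a bare `.log`. A short sketch of that naming rule in isolation — `build_log_file_name` is an illustrative helper, not the actual manim function:

```python
import os


def build_log_file_name(animation_file, scene_names):
    # <name_of_animation_file>_<name_of_scene>.log, or just
    # <name_of_animation_file>.log when no scene name was passed on the CLI.
    scene_name_suffix = "".join(scene_names)
    scene_file_name = os.path.basename(animation_file).split(".")[0]
    return (
        f"{scene_file_name}_{scene_name_suffix}.log"
        if scene_name_suffix
        else f"{scene_file_name}.log"
    )


assert build_log_file_name("project/scenes.py", ["SquareToCircle"]) == "scenes_SquareToCircle.log"
assert build_log_file_name("project/scenes.py", []) == "scenes.log"  # no more bare ".log"
```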
gh_patches_debug_9158
|
rasdani/github-patches
|
git_diff
|
boto__boto-2029
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LaunchConfiguration does not retrieve AssociatePublicIpAddress properly
Pull request #1799 added support for `AssociatePublicIpAddress`, but it only added support for sending that parameter. Retrieval is not fully supported yet.
A simple fix would be grabbing what pull request #1832 implemented: https://github.com/boto/boto/pull/1832/files#diff-8c9af36969b22e4d4bb34924adc35399R105
My launch configuration object:
```
>>> launch_config.AssociatePublicIpAddress
u'true'
>>> launch_config.__class__
<class 'boto.ec2.autoscale.launchconfig.LaunchConfiguration'>
>>> pprint(dir(launch_config))
['AssociatePublicIpAddress',
'__class__',
'__delattr__',
'__dict__',
'__doc__',
'__format__',
'__getattribute__',
'__hash__',
'__init__',
'__module__',
'__new__',
'__reduce__',
'__reduce_ex__',
'__repr__',
'__setattr__',
'__sizeof__',
'__str__',
'__subclasshook__',
'__weakref__',
'associate_public_ip_address',
'block_device_mappings',
'connection',
'created_time',
'delete',
'delete_on_termination',
'ebs_optimized',
'endElement',
'image_id',
'instance_monitoring',
'instance_profile_name',
'instance_type',
'iops',
'kernel_id',
'key_name',
'launch_configuration_arn',
'member',
'name',
'ramdisk_id',
'security_groups',
'spot_price',
'startElement',
'user_data',
'volume_type']
```
I am using boto version 2.23.0.
</issue>
<code>
[start of boto/ec2/autoscale/launchconfig.py]
1 # Copyright (c) 2009 Reza Lotun http://reza.lotun.name/
2 # Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved
3 #
4 # Permission is hereby granted, free of charge, to any person obtaining a
5 # copy of this software and associated documentation files (the
6 # "Software"), to deal in the Software without restriction, including
7 # without limitation the rights to use, copy, modify, merge, publish, dis-
8 # tribute, sublicense, and/or sell copies of the Software, and to permit
9 # persons to whom the Software is furnished to do so, subject to the fol-
10 # lowing conditions:
11 #
12 # The above copyright notice and this permission notice shall be included
13 # in all copies or substantial portions of the Software.
14 #
15 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
16 # OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-
17 # ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT
18 # SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
19 # WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
21 # IN THE SOFTWARE.
22
23 from datetime import datetime
24 from boto.resultset import ResultSet
25 from boto.ec2.elb.listelement import ListElement
26 import boto.utils
27 import base64
28
29 # this should use the corresponding object from boto.ec2
30
31
32 class Ebs(object):
33 def __init__(self, connection=None, snapshot_id=None, volume_size=None):
34 self.connection = connection
35 self.snapshot_id = snapshot_id
36 self.volume_size = volume_size
37
38 def __repr__(self):
39 return 'Ebs(%s, %s)' % (self.snapshot_id, self.volume_size)
40
41 def startElement(self, name, attrs, connection):
42 pass
43
44 def endElement(self, name, value, connection):
45 if name == 'SnapshotId':
46 self.snapshot_id = value
47 elif name == 'VolumeSize':
48 self.volume_size = value
49
50
51 class InstanceMonitoring(object):
52 def __init__(self, connection=None, enabled='false'):
53 self.connection = connection
54 self.enabled = enabled
55
56 def __repr__(self):
57 return 'InstanceMonitoring(%s)' % self.enabled
58
59 def startElement(self, name, attrs, connection):
60 pass
61
62 def endElement(self, name, value, connection):
63 if name == 'Enabled':
64 self.enabled = value
65
66
67 # this should use the BlockDeviceMapping from boto.ec2.blockdevicemapping
68 class BlockDeviceMapping(object):
69 def __init__(self, connection=None, device_name=None, virtual_name=None,
70 ebs=None, no_device=None):
71 self.connection = connection
72 self.device_name = device_name
73 self.virtual_name = virtual_name
74 self.ebs = ebs
75 self.no_device = no_device
76
77 def __repr__(self):
78 return 'BlockDeviceMapping(%s, %s)' % (self.device_name,
79 self.virtual_name)
80
81 def startElement(self, name, attrs, connection):
82 if name == 'Ebs':
83 self.ebs = Ebs(self)
84 return self.ebs
85
86 def endElement(self, name, value, connection):
87 if name == 'DeviceName':
88 self.device_name = value
89 elif name == 'VirtualName':
90 self.virtual_name = value
91 elif name == 'NoDevice':
92 self.no_device = bool(value)
93
94
95 class LaunchConfiguration(object):
96 def __init__(self, connection=None, name=None, image_id=None,
97 key_name=None, security_groups=None, user_data=None,
98 instance_type='m1.small', kernel_id=None,
99 ramdisk_id=None, block_device_mappings=None,
100 instance_monitoring=False, spot_price=None,
101 instance_profile_name=None, ebs_optimized=False,
102 associate_public_ip_address=None, volume_type=None,
103 delete_on_termination=True, iops=None):
104 """
105 A launch configuration.
106
107 :type name: str
108 :param name: Name of the launch configuration to create.
109
110 :type image_id: str
111 :param image_id: Unique ID of the Amazon Machine Image (AMI) which was
112 assigned during registration.
113
114 :type key_name: str
115 :param key_name: The name of the EC2 key pair.
116
117 :type security_groups: list
118 :param security_groups: Names or security group id's of the security
119 groups with which to associate the EC2 instances or VPC instances,
120 respectively.
121
122 :type user_data: str
123 :param user_data: The user data available to launched EC2 instances.
124
125 :type instance_type: str
126 :param instance_type: The instance type
127
128 :type kern_id: str
129 :param kern_id: Kernel id for instance
130
131 :type ramdisk_id: str
132 :param ramdisk_id: RAM disk id for instance
133
134 :type block_device_mappings: list
135 :param block_device_mappings: Specifies how block devices are exposed
136 for instances
137
138 :type instance_monitoring: bool
139 :param instance_monitoring: Whether instances in group are launched
140 with detailed monitoring.
141
142 :type spot_price: float
143 :param spot_price: The spot price you are bidding. Only applies
144 if you are building an autoscaling group with spot instances.
145
146 :type instance_profile_name: string
147 :param instance_profile_name: The name or the Amazon Resource
148 Name (ARN) of the instance profile associated with the IAM
149 role for the instance.
150
151 :type ebs_optimized: bool
152 :param ebs_optimized: Specifies whether the instance is optimized
153 for EBS I/O (true) or not (false).
154
155 :type associate_public_ip_address: bool
156 :param associate_public_ip_address: Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud.
157 Specifies whether to assign a public IP address to each instance launched in a Amazon VPC.
158 """
159 self.connection = connection
160 self.name = name
161 self.instance_type = instance_type
162 self.block_device_mappings = block_device_mappings
163 self.key_name = key_name
164 sec_groups = security_groups or []
165 self.security_groups = ListElement(sec_groups)
166 self.image_id = image_id
167 self.ramdisk_id = ramdisk_id
168 self.created_time = None
169 self.kernel_id = kernel_id
170 self.user_data = user_data
171 self.created_time = None
172 self.instance_monitoring = instance_monitoring
173 self.spot_price = spot_price
174 self.instance_profile_name = instance_profile_name
175 self.launch_configuration_arn = None
176 self.ebs_optimized = ebs_optimized
177 self.associate_public_ip_address = associate_public_ip_address
178 self.volume_type = volume_type
179 self.delete_on_termination = delete_on_termination
180 self.iops = iops
181
182 def __repr__(self):
183 return 'LaunchConfiguration:%s' % self.name
184
185 def startElement(self, name, attrs, connection):
186 if name == 'SecurityGroups':
187 return self.security_groups
188 elif name == 'BlockDeviceMappings':
189 self.block_device_mappings = ResultSet([('member',
190 BlockDeviceMapping)])
191 return self.block_device_mappings
192 elif name == 'InstanceMonitoring':
193 self.instance_monitoring = InstanceMonitoring(self)
194 return self.instance_monitoring
195
196 def endElement(self, name, value, connection):
197 if name == 'InstanceType':
198 self.instance_type = value
199 elif name == 'LaunchConfigurationName':
200 self.name = value
201 elif name == 'KeyName':
202 self.key_name = value
203 elif name == 'ImageId':
204 self.image_id = value
205 elif name == 'CreatedTime':
206 self.created_time = boto.utils.parse_ts(value)
207 elif name == 'KernelId':
208 self.kernel_id = value
209 elif name == 'RamdiskId':
210 self.ramdisk_id = value
211 elif name == 'UserData':
212 try:
213 self.user_data = base64.b64decode(value)
214 except TypeError:
215 self.user_data = value
216 elif name == 'LaunchConfigurationARN':
217 self.launch_configuration_arn = value
218 elif name == 'InstanceMonitoring':
219 self.instance_monitoring = value
220 elif name == 'SpotPrice':
221 self.spot_price = float(value)
222 elif name == 'IamInstanceProfile':
223 self.instance_profile_name = value
224 elif name == 'EbsOptimized':
225 self.ebs_optimized = True if value.lower() == 'true' else False
226 elif name == 'VolumeType':
227 self.volume_type = value
228 elif name == 'DeleteOnTermination':
229 if value.lower() == 'true':
230 self.delete_on_termination = True
231 else:
232 self.delete_on_termination = False
233 elif name == 'Iops':
234 self.iops = int(value)
235 else:
236 setattr(self, name, value)
237
238 def delete(self):
239 """ Delete this launch configuration. """
240 return self.connection.delete_launch_configuration(self.name)
241
[end of boto/ec2/autoscale/launchconfig.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py
--- a/boto/ec2/autoscale/launchconfig.py
+++ b/boto/ec2/autoscale/launchconfig.py
@@ -223,6 +223,8 @@
self.instance_profile_name = value
elif name == 'EbsOptimized':
self.ebs_optimized = True if value.lower() == 'true' else False
+ elif name == 'AssociatePublicIpAddress':
+ self.associate_public_ip_address = True if value.lower() == 'true' else False
elif name == 'VolumeType':
self.volume_type = value
elif name == 'DeleteOnTermination':
|
{"golden_diff": "diff --git a/boto/ec2/autoscale/launchconfig.py b/boto/ec2/autoscale/launchconfig.py\n--- a/boto/ec2/autoscale/launchconfig.py\n+++ b/boto/ec2/autoscale/launchconfig.py\n@@ -223,6 +223,8 @@\n self.instance_profile_name = value\n elif name == 'EbsOptimized':\n self.ebs_optimized = True if value.lower() == 'true' else False\n+ elif name == 'AssociatePublicIpAddress':\n+ self.associate_public_ip_address = True if value.lower() == 'true' else False\n elif name == 'VolumeType':\n self.volume_type = value\n elif name == 'DeleteOnTermination':\n", "issue": "LaunchConfiguration does not retrieve AssociatePublicIpAddress properly\nPull request #1799 added support to `AssociatePublicIpAddress`, but it added only support for sending that parameter. Retrieval is not fully supported yet.\n\nA simple fix would be grabbing what pull request #1832 implemented: https://github.com/boto/boto/pull/1832/files#diff-8c9af36969b22e4d4bb34924adc35399R105\n\nMy launch configuration object:\n\n```\n>>> launch_config.AssociatePublicIpAddress\nu'true'\n\n>>> launch_config.__class__\n<class 'boto.ec2.autoscale.launchconfig.LaunchConfiguration'>\n>>> pprint(dir(launch_config))\n['AssociatePublicIpAddress',\n '__class__',\n '__delattr__',\n '__dict__',\n '__doc__',\n '__format__',\n '__getattribute__',\n '__hash__',\n '__init__',\n '__module__',\n '__new__',\n '__reduce__',\n '__reduce_ex__',\n '__repr__',\n '__setattr__',\n '__sizeof__',\n '__str__',\n '__subclasshook__',\n '__weakref__',\n 'associate_public_ip_address',\n 'block_device_mappings',\n 'connection',\n 'created_time',\n 'delete',\n 'delete_on_termination',\n 'ebs_optimized',\n 'endElement',\n 'image_id',\n 'instance_monitoring',\n 'instance_profile_name',\n 'instance_type',\n 'iops',\n 'kernel_id',\n 'key_name',\n 'launch_configuration_arn',\n 'member',\n 'name',\n 'ramdisk_id',\n 'security_groups',\n 'spot_price',\n 'startElement',\n 'user_data',\n 'volume_type']\n```\n\nI am using boto version 2.23.0.\n\n", "before_files": [{"content": "# Copyright (c) 2009 Reza Lotun http://reza.lotun.name/\n# Copyright (c) 2012 Amazon.com, Inc. or its affiliates. All Rights Reserved\n#\n# Permission is hereby granted, free of charge, to any person obtaining a\n# copy of this software and associated documentation files (the\n# \"Software\"), to deal in the Software without restriction, including\n# without limitation the rights to use, copy, modify, merge, publish, dis-\n# tribute, sublicense, and/or sell copies of the Software, and to permit\n# persons to whom the Software is furnished to do so, subject to the fol-\n# lowing conditions:\n#\n# The above copyright notice and this permission notice shall be included\n# in all copies or substantial portions of the Software.\n#\n# THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS\n# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABIL-\n# ITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT\n# SHALL THE AUTHOR BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,\n# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n# IN THE SOFTWARE.\n\nfrom datetime import datetime\nfrom boto.resultset import ResultSet\nfrom boto.ec2.elb.listelement import ListElement\nimport boto.utils\nimport base64\n\n# this should use the corresponding object from boto.ec2\n\n\nclass Ebs(object):\n def __init__(self, connection=None, snapshot_id=None, volume_size=None):\n self.connection = connection\n self.snapshot_id = snapshot_id\n self.volume_size = volume_size\n\n def __repr__(self):\n return 'Ebs(%s, %s)' % (self.snapshot_id, self.volume_size)\n\n def startElement(self, name, attrs, connection):\n pass\n\n def endElement(self, name, value, connection):\n if name == 'SnapshotId':\n self.snapshot_id = value\n elif name == 'VolumeSize':\n self.volume_size = value\n\n\nclass InstanceMonitoring(object):\n def __init__(self, connection=None, enabled='false'):\n self.connection = connection\n self.enabled = enabled\n\n def __repr__(self):\n return 'InstanceMonitoring(%s)' % self.enabled\n\n def startElement(self, name, attrs, connection):\n pass\n\n def endElement(self, name, value, connection):\n if name == 'Enabled':\n self.enabled = value\n\n\n# this should use the BlockDeviceMapping from boto.ec2.blockdevicemapping\nclass BlockDeviceMapping(object):\n def __init__(self, connection=None, device_name=None, virtual_name=None,\n ebs=None, no_device=None):\n self.connection = connection\n self.device_name = device_name\n self.virtual_name = virtual_name\n self.ebs = ebs\n self.no_device = no_device\n\n def __repr__(self):\n return 'BlockDeviceMapping(%s, %s)' % (self.device_name,\n self.virtual_name)\n\n def startElement(self, name, attrs, connection):\n if name == 'Ebs':\n self.ebs = Ebs(self)\n return self.ebs\n\n def endElement(self, name, value, connection):\n if name == 'DeviceName':\n self.device_name = value\n elif name == 'VirtualName':\n self.virtual_name = value\n elif name == 'NoDevice':\n self.no_device = bool(value)\n\n\nclass LaunchConfiguration(object):\n def __init__(self, connection=None, name=None, image_id=None,\n key_name=None, security_groups=None, user_data=None,\n instance_type='m1.small', kernel_id=None,\n ramdisk_id=None, block_device_mappings=None,\n instance_monitoring=False, spot_price=None,\n instance_profile_name=None, ebs_optimized=False,\n associate_public_ip_address=None, volume_type=None,\n delete_on_termination=True, iops=None):\n \"\"\"\n A launch configuration.\n\n :type name: str\n :param name: Name of the launch configuration to create.\n\n :type image_id: str\n :param image_id: Unique ID of the Amazon Machine Image (AMI) which was\n assigned during registration.\n\n :type key_name: str\n :param key_name: The name of the EC2 key pair.\n\n :type security_groups: list\n :param security_groups: Names or security group id's of the security\n groups with which to associate the EC2 instances or VPC instances,\n respectively.\n\n :type user_data: str\n :param user_data: The user data available to launched EC2 instances.\n\n :type instance_type: str\n :param instance_type: The instance type\n\n :type kern_id: str\n :param kern_id: Kernel id for instance\n\n :type ramdisk_id: str\n :param ramdisk_id: RAM disk id for instance\n\n :type block_device_mappings: list\n :param block_device_mappings: Specifies how block devices are exposed\n for instances\n\n :type 
instance_monitoring: bool\n :param instance_monitoring: Whether instances in group are launched\n with detailed monitoring.\n\n :type spot_price: float\n :param spot_price: The spot price you are bidding. Only applies\n if you are building an autoscaling group with spot instances.\n\n :type instance_profile_name: string\n :param instance_profile_name: The name or the Amazon Resource\n Name (ARN) of the instance profile associated with the IAM\n role for the instance.\n\n :type ebs_optimized: bool\n :param ebs_optimized: Specifies whether the instance is optimized\n for EBS I/O (true) or not (false).\n\n :type associate_public_ip_address: bool\n :param associate_public_ip_address: Used for Auto Scaling groups that launch instances into an Amazon Virtual Private Cloud.\n Specifies whether to assign a public IP address to each instance launched in a Amazon VPC.\n \"\"\"\n self.connection = connection\n self.name = name\n self.instance_type = instance_type\n self.block_device_mappings = block_device_mappings\n self.key_name = key_name\n sec_groups = security_groups or []\n self.security_groups = ListElement(sec_groups)\n self.image_id = image_id\n self.ramdisk_id = ramdisk_id\n self.created_time = None\n self.kernel_id = kernel_id\n self.user_data = user_data\n self.created_time = None\n self.instance_monitoring = instance_monitoring\n self.spot_price = spot_price\n self.instance_profile_name = instance_profile_name\n self.launch_configuration_arn = None\n self.ebs_optimized = ebs_optimized\n self.associate_public_ip_address = associate_public_ip_address\n self.volume_type = volume_type\n self.delete_on_termination = delete_on_termination\n self.iops = iops\n\n def __repr__(self):\n return 'LaunchConfiguration:%s' % self.name\n\n def startElement(self, name, attrs, connection):\n if name == 'SecurityGroups':\n return self.security_groups\n elif name == 'BlockDeviceMappings':\n self.block_device_mappings = ResultSet([('member',\n BlockDeviceMapping)])\n return self.block_device_mappings\n elif name == 'InstanceMonitoring':\n self.instance_monitoring = InstanceMonitoring(self)\n return self.instance_monitoring\n\n def endElement(self, name, value, connection):\n if name == 'InstanceType':\n self.instance_type = value\n elif name == 'LaunchConfigurationName':\n self.name = value\n elif name == 'KeyName':\n self.key_name = value\n elif name == 'ImageId':\n self.image_id = value\n elif name == 'CreatedTime':\n self.created_time = boto.utils.parse_ts(value)\n elif name == 'KernelId':\n self.kernel_id = value\n elif name == 'RamdiskId':\n self.ramdisk_id = value\n elif name == 'UserData':\n try:\n self.user_data = base64.b64decode(value)\n except TypeError:\n self.user_data = value\n elif name == 'LaunchConfigurationARN':\n self.launch_configuration_arn = value\n elif name == 'InstanceMonitoring':\n self.instance_monitoring = value\n elif name == 'SpotPrice':\n self.spot_price = float(value)\n elif name == 'IamInstanceProfile':\n self.instance_profile_name = value\n elif name == 'EbsOptimized':\n self.ebs_optimized = True if value.lower() == 'true' else False\n elif name == 'VolumeType':\n self.volume_type = value\n elif name == 'DeleteOnTermination':\n if value.lower() == 'true':\n self.delete_on_termination = True\n else:\n self.delete_on_termination = False\n elif name == 'Iops':\n self.iops = int(value)\n else:\n setattr(self, name, value)\n\n def delete(self):\n \"\"\" Delete this launch configuration. 
\"\"\"\n return self.connection.delete_launch_configuration(self.name)\n", "path": "boto/ec2/autoscale/launchconfig.py"}]}
| 3,504 | 163 |
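The fix above coerces the API's `"true"`/`"false"` strings into real booleans inside the SAX `endElement` handler instead of letting the raw unicode value land on the object. A toy model of that coercion — `FakeLaunchConfiguration` and `parse_bool` are illustrative stand-ins, not boto classes:

```python
def parse_bool(value):
    # The AutoScaling API returns "true"/"false" strings; without coercion
    # callers end up comparing against u'true' instead of a boolean.
    return True if value.lower() == "true" else False


class FakeLaunchConfiguration(object):
    def __init__(self):
        self.associate_public_ip_address = None

    def endElement(self, name, value):
        if name == "AssociatePublicIpAddress":
            self.associate_public_ip_address = parse_bool(value)
        else:
            # Fallback mirrors the generic setattr branch of the real handler.
            setattr(self, name, value)


lc = FakeLaunchConfiguration()
lc.endElement("AssociatePublicIpAddress", u"true")
assert lc.associate_public_ip_address is True
```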
gh_patches_debug_15618
|
rasdani/github-patches
|
git_diff
|
opsdroid__opsdroid-930
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add time to crontab log message
When the cron parser is triggered it emits a debug log saying `Running crontab skills`.
It would be more useful if it included the time that opsdroid thinks it is. This would help when trying to debug issues where skills are triggered at the wrong time due to opsdroid having the wrong timezone.
The line which needs updating is [here](https://github.com/opsdroid/opsdroid/blob/master/opsdroid/parsers/crontab.py#L17).
</issue>
<code>
[start of opsdroid/parsers/crontab.py]
1 """A helper function for parsing and executing crontab skills."""
2
3 import asyncio
4 import logging
5
6 import arrow
7 import pycron
8
9
10 _LOGGER = logging.getLogger(__name__)
11
12
13 async def parse_crontab(opsdroid):
14 """Parse all crontab skills against the current time."""
15 while opsdroid.eventloop.is_running():
16 await asyncio.sleep(60 - arrow.now().time().second)
17 _LOGGER.debug(_("Running crontab skills"))
18 for skill in opsdroid.skills:
19 for matcher in skill.matchers:
20 if "crontab" in matcher:
21 if matcher["timezone"] is not None:
22 timezone = matcher["timezone"]
23 else:
24 timezone = opsdroid.config.get("timezone", "UTC")
25 if pycron.is_now(matcher["crontab"],
26 arrow.now(tz=timezone)):
27 await opsdroid.run_skill(skill,
28 skill.config,
29 None)
30
[end of opsdroid/parsers/crontab.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/opsdroid/parsers/crontab.py b/opsdroid/parsers/crontab.py
--- a/opsdroid/parsers/crontab.py
+++ b/opsdroid/parsers/crontab.py
@@ -1,5 +1,5 @@
"""A helper function for parsing and executing crontab skills."""
-
+import time
import asyncio
import logging
@@ -14,7 +14,7 @@
"""Parse all crontab skills against the current time."""
while opsdroid.eventloop.is_running():
await asyncio.sleep(60 - arrow.now().time().second)
- _LOGGER.debug(_("Running crontab skills"))
+ _LOGGER.debug(_("Running crontab skills at %s "), time.asctime())
for skill in opsdroid.skills:
for matcher in skill.matchers:
if "crontab" in matcher:
|
{"golden_diff": "diff --git a/opsdroid/parsers/crontab.py b/opsdroid/parsers/crontab.py\n--- a/opsdroid/parsers/crontab.py\n+++ b/opsdroid/parsers/crontab.py\n@@ -1,5 +1,5 @@\n \"\"\"A helper function for parsing and executing crontab skills.\"\"\"\n-\n+import time\n import asyncio\n import logging\n \n@@ -14,7 +14,7 @@\n \"\"\"Parse all crontab skills against the current time.\"\"\"\n while opsdroid.eventloop.is_running():\n await asyncio.sleep(60 - arrow.now().time().second)\n- _LOGGER.debug(_(\"Running crontab skills\"))\n+ _LOGGER.debug(_(\"Running crontab skills at %s \"), time.asctime())\n for skill in opsdroid.skills:\n for matcher in skill.matchers:\n if \"crontab\" in matcher:\n", "issue": "Add time to crontab log message\nWhen the cron parser is triggered it emits a debug log saying `Running crontab skills`.\r\n\r\nIt would be more useful if it included the time that opsdroid thinks it is. This would help when trying to debug issues where skills are triggered at the wrong time due to opsdroid having the wrong timezone.\r\n\r\nThe line which needs updating is [here](https://github.com/opsdroid/opsdroid/blob/master/opsdroid/parsers/crontab.py#L17). \n", "before_files": [{"content": "\"\"\"A helper function for parsing and executing crontab skills.\"\"\"\n\nimport asyncio\nimport logging\n\nimport arrow\nimport pycron\n\n\n_LOGGER = logging.getLogger(__name__)\n\n\nasync def parse_crontab(opsdroid):\n \"\"\"Parse all crontab skills against the current time.\"\"\"\n while opsdroid.eventloop.is_running():\n await asyncio.sleep(60 - arrow.now().time().second)\n _LOGGER.debug(_(\"Running crontab skills\"))\n for skill in opsdroid.skills:\n for matcher in skill.matchers:\n if \"crontab\" in matcher:\n if matcher[\"timezone\"] is not None:\n timezone = matcher[\"timezone\"]\n else:\n timezone = opsdroid.config.get(\"timezone\", \"UTC\")\n if pycron.is_now(matcher[\"crontab\"],\n arrow.now(tz=timezone)):\n await opsdroid.run_skill(skill,\n skill.config,\n None)\n", "path": "opsdroid/parsers/crontab.py"}]}
| 906 | 194 |
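The one-line change above just enriches a debug message with the wall-clock time the process believes it is, which is what makes timezone misconfiguration visible in the logs. A minimal stdlib-only sketch of the same logging idea (plain `logging` setup for illustration, not the opsdroid logger wiring):

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)
_LOGGER = logging.getLogger(__name__)

# Including time.asctime() shows the local time the interpreter thinks it is,
# so a wrong timezone shows up directly next to "Running crontab skills".
_LOGGER.debug("Running crontab skills at %s", time.asctime())
```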
gh_patches_debug_8828
|
rasdani/github-patches
|
git_diff
|
mozmeao__snippets-service-1238
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Just published jobs with global limits get prematurely completed due to missing metrics.
</issue>
<code>
[start of snippets/base/management/commands/update_jobs.py]
1 from datetime import datetime, timedelta
2
3 from django.contrib.auth import get_user_model
4 from django.core.management.base import BaseCommand
5 from django.db import transaction
6 from django.db.models import F, Q
7
8 from snippets.base.models import Job
9
10
11 class Command(BaseCommand):
12 args = "(no args)"
13 help = "Update Jobs"
14
15 @transaction.atomic
16 def handle(self, *args, **options):
17 now = datetime.utcnow()
18 user = get_user_model().objects.get_or_create(username='snippets_bot')[0]
19 count_total_completed = 0
20
21 # Publish Scheduled Jobs with `publish_start` before now or without
22 # publish_start.
23 jobs = Job.objects.filter(status=Job.SCHEDULED).filter(
24 Q(publish_start__lte=now) | Q(publish_start=None)
25 )
26 count_published = jobs.count()
27 for job in jobs:
28 job.change_status(
29 status=Job.PUBLISHED,
30 user=user,
31 reason='Published start date reached.',
32 )
33
34 # Disable Published Jobs with `publish_end` before now.
35 jobs = Job.objects.filter(status=Job.PUBLISHED, publish_end__lte=now)
36 count_publication_end = jobs.count()
37 count_total_completed += count_publication_end
38
39 for job in jobs:
40 job.change_status(
41 status=Job.COMPLETED,
42 user=user,
43 reason='Publication end date reached.',
44 )
45
46 # Disable Jobs that reached Impression, Click or Block limits.
47 count_limit = {}
48 for limit in ['impressions', 'clicks', 'blocks']:
49 jobs = (Job.objects
50 .filter(status=Job.PUBLISHED)
51 .exclude(**{f'limit_{limit}': 0})
52 .filter(**{f'limit_{limit}__lte': F(f'metric_{limit}')}))
53 for job in jobs:
54 job.change_status(
55 status=Job.COMPLETED,
56 user=user,
57 reason=f'Limit reached: {limit}.',
58 )
59
60 count_limit[limit] = jobs.count()
61 count_total_completed += count_limit[limit]
62
63 # Disable Jobs that have Impression, Click or Block limits but don't
64 # have metrics data for at least 24h. This is to handle cases where the
65 # Metrics Pipeline is broken.
66 yesterday = datetime.utcnow() - timedelta(days=1)
67 jobs = (Job.objects
68 .filter(status=Job.PUBLISHED)
69 .exclude(limit_impressions=0, limit_clicks=0, limit_blocks=0)
70 .filter(metric_last_update__lt=yesterday))
71 for job in jobs:
72 job.change_status(
73 status=Job.COMPLETED,
74 user=user,
75 reason=f'Premature termination due to missing metrics.',
76 )
77 count_premature_termination = jobs.count()
78 count_total_completed += count_premature_termination
79
80 count_running = Job.objects.filter(status=Job.PUBLISHED).count()
81
82 self.stdout.write(
83 f'Jobs Published: {count_published}\n'
84 f'Jobs Completed: {count_total_completed}\n'
85 f' - Reached Publication End Date: {count_publication_end}\n'
86 f' - Reached Impressions Limit: {count_limit["impressions"]}\n'
87 f' - Reached Clicks Limit: {count_limit["clicks"]}\n'
88 f' - Reached Blocks Limit: {count_limit["blocks"]}\n'
89 f' - Premature Termination due to missing metrics: {count_premature_termination}\n'
90 f'Total Jobs Running: {count_running}\n'
91 )
92
[end of snippets/base/management/commands/update_jobs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/snippets/base/management/commands/update_jobs.py b/snippets/base/management/commands/update_jobs.py
--- a/snippets/base/management/commands/update_jobs.py
+++ b/snippets/base/management/commands/update_jobs.py
@@ -67,6 +67,8 @@
jobs = (Job.objects
.filter(status=Job.PUBLISHED)
.exclude(limit_impressions=0, limit_clicks=0, limit_blocks=0)
+ # Exclude Jobs with limits which haven't been updated once yet.
+ .exclude(metric_last_update='1970-01-01')
.filter(metric_last_update__lt=yesterday))
for job in jobs:
job.change_status(
|
{"golden_diff": "diff --git a/snippets/base/management/commands/update_jobs.py b/snippets/base/management/commands/update_jobs.py\n--- a/snippets/base/management/commands/update_jobs.py\n+++ b/snippets/base/management/commands/update_jobs.py\n@@ -67,6 +67,8 @@\n jobs = (Job.objects\n .filter(status=Job.PUBLISHED)\n .exclude(limit_impressions=0, limit_clicks=0, limit_blocks=0)\n+ # Exclude Jobs with limits which haven't been updated once yet.\n+ .exclude(metric_last_update='1970-01-01')\n .filter(metric_last_update__lt=yesterday))\n for job in jobs:\n job.change_status(\n", "issue": "Just published jobs with global limits get prematurely completed due to missing metrics.\n\n", "before_files": [{"content": "from datetime import datetime, timedelta\n\nfrom django.contrib.auth import get_user_model\nfrom django.core.management.base import BaseCommand\nfrom django.db import transaction\nfrom django.db.models import F, Q\n\nfrom snippets.base.models import Job\n\n\nclass Command(BaseCommand):\n args = \"(no args)\"\n help = \"Update Jobs\"\n\n @transaction.atomic\n def handle(self, *args, **options):\n now = datetime.utcnow()\n user = get_user_model().objects.get_or_create(username='snippets_bot')[0]\n count_total_completed = 0\n\n # Publish Scheduled Jobs with `publish_start` before now or without\n # publish_start.\n jobs = Job.objects.filter(status=Job.SCHEDULED).filter(\n Q(publish_start__lte=now) | Q(publish_start=None)\n )\n count_published = jobs.count()\n for job in jobs:\n job.change_status(\n status=Job.PUBLISHED,\n user=user,\n reason='Published start date reached.',\n )\n\n # Disable Published Jobs with `publish_end` before now.\n jobs = Job.objects.filter(status=Job.PUBLISHED, publish_end__lte=now)\n count_publication_end = jobs.count()\n count_total_completed += count_publication_end\n\n for job in jobs:\n job.change_status(\n status=Job.COMPLETED,\n user=user,\n reason='Publication end date reached.',\n )\n\n # Disable Jobs that reached Impression, Click or Block limits.\n count_limit = {}\n for limit in ['impressions', 'clicks', 'blocks']:\n jobs = (Job.objects\n .filter(status=Job.PUBLISHED)\n .exclude(**{f'limit_{limit}': 0})\n .filter(**{f'limit_{limit}__lte': F(f'metric_{limit}')}))\n for job in jobs:\n job.change_status(\n status=Job.COMPLETED,\n user=user,\n reason=f'Limit reached: {limit}.',\n )\n\n count_limit[limit] = jobs.count()\n count_total_completed += count_limit[limit]\n\n # Disable Jobs that have Impression, Click or Block limits but don't\n # have metrics data for at least 24h. 
This is to handle cases where the\n # Metrics Pipeline is broken.\n yesterday = datetime.utcnow() - timedelta(days=1)\n jobs = (Job.objects\n .filter(status=Job.PUBLISHED)\n .exclude(limit_impressions=0, limit_clicks=0, limit_blocks=0)\n .filter(metric_last_update__lt=yesterday))\n for job in jobs:\n job.change_status(\n status=Job.COMPLETED,\n user=user,\n reason=f'Premature termination due to missing metrics.',\n )\n count_premature_termination = jobs.count()\n count_total_completed += count_premature_termination\n\n count_running = Job.objects.filter(status=Job.PUBLISHED).count()\n\n self.stdout.write(\n f'Jobs Published: {count_published}\\n'\n f'Jobs Completed: {count_total_completed}\\n'\n f' - Reached Publication End Date: {count_publication_end}\\n'\n f' - Reached Impressions Limit: {count_limit[\"impressions\"]}\\n'\n f' - Reached Clicks Limit: {count_limit[\"clicks\"]}\\n'\n f' - Reached Blocks Limit: {count_limit[\"blocks\"]}\\n'\n f' - Premature Termination due to missing metrics: {count_premature_termination}\\n'\n f'Total Jobs Running: {count_running}\\n'\n )\n", "path": "snippets/base/management/commands/update_jobs.py"}]}
| 1,507 | 157 |
gh_patches_debug_6786
|
rasdani/github-patches
|
git_diff
|
google__jax-6232
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
linear_transpose involving fft seemingly incorrect
I was planning to implement functionality that requires me to be able to take the transpose of a complicated linear function, which includes Fourier transforms. I noticed that the fft module seems to implement the rules for transposition; the last few lines of https://jax.readthedocs.io/en/latest/_modules/jax/_src/lax/fft.html seem to pertain to it.
I am not familiar enough with the jax API at present to point out any bugs; but below is a simple test that demonstrates what I believe to be a bug. The build_matrix helper function explicitly constructs the coefficients of the linear function by feeding it all delta functions in sequence. If I linear_transpose my function, the result should be identical to the matrix transpose of the built matrix. Yet it doesn't. It seems as if I'm getting my result in reverse order (plus another off-by-one index bug, I think).
This looks to me like a bug in the implementation of the transpose rules for FFTs, but again I am not qualified to spot it myself.
While on the topic, slightly related question: when viewed as linear operators, convolution and correlation are transposed operators. Should I trust jax to figure out efficient transformations along these lines (assuming the underlying fft rules are bug free); or is it likely optimal for me to figure out how to override the linear transpose of a convolution with my own handcrafted correlation functions (and vice versa)?
Code to reproduce:
```python
import numpy as np
import jax
from jax import numpy as jnp
import matplotlib.pyplot as plt
np.random.seed(0)
signal = np.cumsum(np.random.randn(2**8))
signal_jax = jnp.array(signal)
x = np.linspace(-1, 1, len(signal))
psf = np.clip(0.2 - np.abs(x), 0, 1) * (x > 0)
psf /= psf.sum()
psf_jax = jnp.array(psf)
jrfft = jax.jit(jnp.fft.rfft)
jirfft = jax.jit(jnp.fft.irfft)
@jax.jit
def convolve(a, b):
fa = jrfft(a)
fb = jrfft(b)
return jirfft(fa * fb)
@jax.jit
def correlate(a, b):
"""NOTE: can this be implemented as a transposition rule according to:
https://jax.readthedocs.io/en/latest/_modules/jax/_src/lax/fft.html
"""
fa = jrfft(a).conj()
fb = jrfft(b)
return jirfft(fa * fb)
def psf_convolve(psf):
"""statically bind psf arg"""
psf = jax.numpy.fft.ifftshift(psf)
return lambda a: convolve(psf, a)
def psf_correlate(psf):
"""statically bind psf arg. psf assumed to be centered"""
psf = jax.numpy.fft.ifftshift(psf)
return lambda a: correlate(psf, a)
import types
def build_matrix(func, shape):
"""explicitly evaluate coeeficient matrix of linear operator func by calling it repeatedly with delta functions"""
i, j = shape
Z = []
I = jnp.eye(i, j)
for r in range(i):
z = func(I[r])
Z.append(z)
return jnp.array(Z)
func = psf_convolve(psf_jax)
arr = types.SimpleNamespace(shape=signal_jax.shape, dtype=np.float32)
func_trans = lambda a: jax.linear_transpose(func, arr)(a)[0]
N = len(signal)
plt.figure()
M = build_matrix(func, (N, N)).T
plt.imshow(M)
plt.figure()
M = build_matrix(func_trans, (N, N))
plt.imshow(M)
plt.show()
```
- [ ] If applicable, include full error messages/tracebacks.
</issue>
<code>
[start of jax/_src/lax/fft.py]
1 # Copyright 2019 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 from functools import partial
17
18 import numpy as np
19
20 from jax.api import jit, linear_transpose, ShapeDtypeStruct
21 from jax.core import Primitive
22 from jax.interpreters import xla
23 from jax._src.util import prod
24 from jax import dtypes, lax
25 from jax.lib import xla_client
26 from jax.interpreters import ad
27 from jax.interpreters import batching
28 from jax.lib import pocketfft
29
30 xops = xla_client.ops
31
32 __all__ = [
33 "fft",
34 "fft_p",
35 ]
36
37 def _promote_to_complex(arg):
38 dtype = dtypes.result_type(arg, np.complex64)
39 return lax.convert_element_type(arg, dtype)
40
41 def _promote_to_real(arg):
42 dtype = dtypes.result_type(arg, np.float32)
43 return lax.convert_element_type(arg, dtype)
44
45 def fft(x, fft_type, fft_lengths):
46 if fft_type == xla_client.FftType.RFFT:
47 if np.iscomplexobj(x):
48 raise ValueError("only real valued inputs supported for rfft")
49 x = _promote_to_real(x)
50 else:
51 x = _promote_to_complex(x)
52 if len(fft_lengths) == 0:
53 # XLA FFT doesn't support 0-rank.
54 return x
55 fft_lengths = tuple(fft_lengths)
56 return fft_p.bind(x, fft_type=fft_type, fft_lengths=fft_lengths)
57
58 def fft_impl(x, fft_type, fft_lengths):
59 return xla.apply_primitive(fft_p, x, fft_type=fft_type, fft_lengths=fft_lengths)
60
61 _complex_dtype = lambda dtype: (np.zeros((), dtype) + np.zeros((), np.complex64)).dtype
62 _real_dtype = lambda dtype: np.zeros((), dtype).real.dtype
63 _is_even = lambda x: x % 2 == 0
64
65 def fft_abstract_eval(x, fft_type, fft_lengths):
66 if fft_type == xla_client.FftType.RFFT:
67 shape = (x.shape[:-len(fft_lengths)] + fft_lengths[:-1]
68 + (fft_lengths[-1] // 2 + 1,))
69 dtype = _complex_dtype(x.dtype)
70 elif fft_type == xla_client.FftType.IRFFT:
71 shape = x.shape[:-len(fft_lengths)] + fft_lengths
72 dtype = _real_dtype(x.dtype)
73 else:
74 shape = x.shape
75 dtype = x.dtype
76 return x.update(shape=shape, dtype=dtype)
77
78 def fft_translation_rule(c, x, fft_type, fft_lengths):
79 return xops.Fft(x, fft_type, fft_lengths)
80
81 def _naive_rfft(x, fft_lengths):
82 y = fft(x, xla_client.FftType.FFT, fft_lengths)
83 n = fft_lengths[-1]
84 return y[..., : n//2 + 1]
85
86 @partial(jit, static_argnums=1)
87 def _rfft_transpose(t, fft_lengths):
88 # The transpose of RFFT can't be expressed only in terms of irfft. Instead of
89 # manually building up larger twiddle matrices (which would increase the
90 # asymptotic complexity and is also rather complicated), we rely JAX to
91 # transpose a naive RFFT implementation.
92 dummy_shape = t.shape[:-len(fft_lengths)] + fft_lengths
93 dummy_primal = ShapeDtypeStruct(dummy_shape, _real_dtype(t.dtype))
94 transpose = linear_transpose(
95 partial(_naive_rfft, fft_lengths=fft_lengths), dummy_primal)
96 result, = transpose(t)
97 assert result.dtype == _real_dtype(t.dtype), (result.dtype, t.dtype)
98 return result
99
100 def _irfft_transpose(t, fft_lengths):
101 # The transpose of IRFFT is the RFFT of the cotangent times a scaling
102 # factor and a mask. The mask scales the cotangent for the Hermitian
103 # symmetric components of the RFFT by a factor of two, since these components
104 # are de-duplicated in the RFFT.
105 x = fft(t, xla_client.FftType.RFFT, fft_lengths)
106 n = x.shape[-1]
107 is_odd = fft_lengths[-1] % 2
108 full = partial(lax.full_like, t, dtype=t.dtype)
109 mask = lax.concatenate(
110 [full(1.0, shape=(1,)),
111 full(2.0, shape=(n - 2 + is_odd,)),
112 full(1.0, shape=(1 - is_odd,))],
113 dimension=0)
114 scale = 1 / prod(fft_lengths)
115 out = scale * mask * x
116 assert out.dtype == _complex_dtype(t.dtype), (out.dtype, t.dtype)
117 return out
118
119 def fft_transpose_rule(t, operand, fft_type, fft_lengths):
120 if fft_type == xla_client.FftType.RFFT:
121 result = _rfft_transpose(t, fft_lengths)
122 elif fft_type == xla_client.FftType.IRFFT:
123 result = _irfft_transpose(t, fft_lengths)
124 else:
125 result = fft(t, fft_type, fft_lengths)
126 return result,
127
128 def fft_batching_rule(batched_args, batch_dims, fft_type, fft_lengths):
129 x, = batched_args
130 bd, = batch_dims
131 x = batching.moveaxis(x, bd, 0)
132 return fft(x, fft_type, fft_lengths), 0
133
134 fft_p = Primitive('fft')
135 fft_p.def_impl(fft_impl)
136 fft_p.def_abstract_eval(fft_abstract_eval)
137 xla.translations[fft_p] = fft_translation_rule
138 ad.deflinear2(fft_p, fft_transpose_rule)
139 batching.primitive_batchers[fft_p] = fft_batching_rule
140 if pocketfft:
141 xla.backend_specific_translations['cpu'][fft_p] = pocketfft.pocketfft
142
[end of jax/_src/lax/fft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/jax/_src/lax/fft.py b/jax/_src/lax/fft.py
--- a/jax/_src/lax/fft.py
+++ b/jax/_src/lax/fft.py
@@ -114,7 +114,9 @@
scale = 1 / prod(fft_lengths)
out = scale * mask * x
assert out.dtype == _complex_dtype(t.dtype), (out.dtype, t.dtype)
- return out
+ # Use JAX's convention for complex gradients
+ # https://github.com/google/jax/issues/6223#issuecomment-807740707
+ return lax.conj(out)
def fft_transpose_rule(t, operand, fft_type, fft_lengths):
if fft_type == xla_client.FftType.RFFT:
|
{"golden_diff": "diff --git a/jax/_src/lax/fft.py b/jax/_src/lax/fft.py\n--- a/jax/_src/lax/fft.py\n+++ b/jax/_src/lax/fft.py\n@@ -114,7 +114,9 @@\n scale = 1 / prod(fft_lengths)\n out = scale * mask * x\n assert out.dtype == _complex_dtype(t.dtype), (out.dtype, t.dtype)\n- return out\n+ # Use JAX's convention for complex gradients\n+ # https://github.com/google/jax/issues/6223#issuecomment-807740707\n+ return lax.conj(out)\n \n def fft_transpose_rule(t, operand, fft_type, fft_lengths):\n if fft_type == xla_client.FftType.RFFT:\n", "issue": "linear_transpose involving fft seemingly incorrect\nI was planning to implement functionality that requires me to be able to take the transpose of a complicated linear function, which includes fourier transforms. I noticed that the fft module seems to implement the rules for transposition; the last few lines of https://jax.readthedocs.io/en/latest/_modules/jax/_src/lax/fft.html seem to pertain to it.\r\n\r\nI am not familiar enough with the jax API at present to point out any bugs; but below is a simple test that demonstrates what I believe to be a bug. The build_matrix helper function explicitly constructs the coefficients of the linear function by feeding it all delta functions in sequence. If I linear_transpose my function, that should be identical to the matrix transpose of the built matrix. Yet it doesnt. It seems as if im getting my result in reverse order (plus another off-by-one index bug I think).\r\n\r\nSeems to me like a bug in the implementation of the transpose rules for ffts; but again not qualified myself to spot it.\r\n\r\nWhile on the topic, slightly related question: when viewed as linear operators, convolution and correlation are transposed operators. Should I trust jax to figure out efficient transformations along these lines (assuming the underlying fft rules are bug free); or is it likely optimal for me to figure out how to override the linear transpose of a convolution with my own handcrafted correlation functions (and vice versa)?\r\n\r\n\r\nCode to reproduce:\r\n\r\n```python\r\nimport numpy as np\r\n\r\nimport jax\r\nfrom jax import numpy as jnp\r\nimport matplotlib.pyplot as plt\r\n\r\n\r\nnp.random.seed(0)\r\n\r\nsignal = np.cumsum(np.random.randn(2**8))\r\nsignal_jax = jnp.array(signal)\r\n\r\nx = np.linspace(-1, 1, len(signal))\r\npsf = np.clip(0.2 - np.abs(x), 0, 1) * (x > 0)\r\npsf /= psf.sum()\r\npsf_jax = jnp.array(psf)\r\n\r\njrfft = jax.jit(jnp.fft.rfft)\r\njirfft = jax.jit(jnp.fft.irfft)\r\n\r\n\r\[email protected]\r\ndef convolve(a, b):\r\n\tfa = jrfft(a)\r\n\tfb = jrfft(b)\r\n\treturn jirfft(fa * fb)\r\n\r\n\r\[email protected]\r\ndef correlate(a, b):\r\n\t\"\"\"NOTE: can this be implemented as a transposition rule according to:\r\n\thttps://jax.readthedocs.io/en/latest/_modules/jax/_src/lax/fft.html\r\n\t\"\"\"\r\n\tfa = jrfft(a).conj()\r\n\tfb = jrfft(b)\r\n\treturn jirfft(fa * fb)\r\n\r\n\r\ndef psf_convolve(psf):\r\n\t\"\"\"statically bind psf arg\"\"\"\r\n\tpsf = jax.numpy.fft.ifftshift(psf)\r\n\treturn lambda a: convolve(psf, a)\r\n\r\n\r\ndef psf_correlate(psf):\r\n\t\"\"\"statically bind psf arg. 
psf assumed to be centered\"\"\"\r\n\tpsf = jax.numpy.fft.ifftshift(psf)\r\n\treturn lambda a: correlate(psf, a)\r\n\r\n\r\nimport types\r\n\r\ndef build_matrix(func, shape):\r\n\t\"\"\"explicitly evaluate coeeficient matrix of linear operator func by calling it repeatedly with delta functions\"\"\"\r\n\ti, j = shape\r\n\tZ = []\r\n\tI = jnp.eye(i, j)\r\n\tfor r in range(i):\r\n\t\tz = func(I[r])\r\n\t\tZ.append(z)\r\n\treturn jnp.array(Z)\r\n\r\n\r\nfunc = psf_convolve(psf_jax)\r\n\r\narr = types.SimpleNamespace(shape=signal_jax.shape, dtype=np.float32)\r\nfunc_trans = lambda a: jax.linear_transpose(func, arr)(a)[0]\r\n\r\nN = len(signal)\r\n\r\nplt.figure()\r\nM = build_matrix(func, (N, N)).T\r\nplt.imshow(M)\r\n\r\nplt.figure()\r\nM = build_matrix(func_trans, (N, N))\r\nplt.imshow(M)\r\nplt.show()\r\n```\r\n\r\n- [ ] If applicable, include full error messages/tracebacks.\r\n\n", "before_files": [{"content": "# Copyright 2019 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\nfrom functools import partial\n\nimport numpy as np\n\nfrom jax.api import jit, linear_transpose, ShapeDtypeStruct\nfrom jax.core import Primitive\nfrom jax.interpreters import xla\nfrom jax._src.util import prod\nfrom jax import dtypes, lax\nfrom jax.lib import xla_client\nfrom jax.interpreters import ad\nfrom jax.interpreters import batching\nfrom jax.lib import pocketfft\n\nxops = xla_client.ops\n\n__all__ = [\n \"fft\",\n \"fft_p\",\n]\n\ndef _promote_to_complex(arg):\n dtype = dtypes.result_type(arg, np.complex64)\n return lax.convert_element_type(arg, dtype)\n\ndef _promote_to_real(arg):\n dtype = dtypes.result_type(arg, np.float32)\n return lax.convert_element_type(arg, dtype)\n\ndef fft(x, fft_type, fft_lengths):\n if fft_type == xla_client.FftType.RFFT:\n if np.iscomplexobj(x):\n raise ValueError(\"only real valued inputs supported for rfft\")\n x = _promote_to_real(x)\n else:\n x = _promote_to_complex(x)\n if len(fft_lengths) == 0:\n # XLA FFT doesn't support 0-rank.\n return x\n fft_lengths = tuple(fft_lengths)\n return fft_p.bind(x, fft_type=fft_type, fft_lengths=fft_lengths)\n\ndef fft_impl(x, fft_type, fft_lengths):\n return xla.apply_primitive(fft_p, x, fft_type=fft_type, fft_lengths=fft_lengths)\n\n_complex_dtype = lambda dtype: (np.zeros((), dtype) + np.zeros((), np.complex64)).dtype\n_real_dtype = lambda dtype: np.zeros((), dtype).real.dtype\n_is_even = lambda x: x % 2 == 0\n\ndef fft_abstract_eval(x, fft_type, fft_lengths):\n if fft_type == xla_client.FftType.RFFT:\n shape = (x.shape[:-len(fft_lengths)] + fft_lengths[:-1]\n + (fft_lengths[-1] // 2 + 1,))\n dtype = _complex_dtype(x.dtype)\n elif fft_type == xla_client.FftType.IRFFT:\n shape = x.shape[:-len(fft_lengths)] + fft_lengths\n dtype = _real_dtype(x.dtype)\n else:\n shape = x.shape\n dtype = x.dtype\n return x.update(shape=shape, dtype=dtype)\n\ndef fft_translation_rule(c, x, fft_type, fft_lengths):\n return xops.Fft(x, fft_type, fft_lengths)\n\ndef _naive_rfft(x, fft_lengths):\n y = fft(x, xla_client.FftType.FFT, 
fft_lengths)\n n = fft_lengths[-1]\n return y[..., : n//2 + 1]\n\n@partial(jit, static_argnums=1)\ndef _rfft_transpose(t, fft_lengths):\n # The transpose of RFFT can't be expressed only in terms of irfft. Instead of\n # manually building up larger twiddle matrices (which would increase the\n # asymptotic complexity and is also rather complicated), we rely JAX to\n # transpose a naive RFFT implementation.\n dummy_shape = t.shape[:-len(fft_lengths)] + fft_lengths\n dummy_primal = ShapeDtypeStruct(dummy_shape, _real_dtype(t.dtype))\n transpose = linear_transpose(\n partial(_naive_rfft, fft_lengths=fft_lengths), dummy_primal)\n result, = transpose(t)\n assert result.dtype == _real_dtype(t.dtype), (result.dtype, t.dtype)\n return result\n\ndef _irfft_transpose(t, fft_lengths):\n # The transpose of IRFFT is the RFFT of the cotangent times a scaling\n # factor and a mask. The mask scales the cotangent for the Hermitian\n # symmetric components of the RFFT by a factor of two, since these components\n # are de-duplicated in the RFFT.\n x = fft(t, xla_client.FftType.RFFT, fft_lengths)\n n = x.shape[-1]\n is_odd = fft_lengths[-1] % 2\n full = partial(lax.full_like, t, dtype=t.dtype)\n mask = lax.concatenate(\n [full(1.0, shape=(1,)),\n full(2.0, shape=(n - 2 + is_odd,)),\n full(1.0, shape=(1 - is_odd,))],\n dimension=0)\n scale = 1 / prod(fft_lengths)\n out = scale * mask * x\n assert out.dtype == _complex_dtype(t.dtype), (out.dtype, t.dtype)\n return out\n\ndef fft_transpose_rule(t, operand, fft_type, fft_lengths):\n if fft_type == xla_client.FftType.RFFT:\n result = _rfft_transpose(t, fft_lengths)\n elif fft_type == xla_client.FftType.IRFFT:\n result = _irfft_transpose(t, fft_lengths)\n else:\n result = fft(t, fft_type, fft_lengths)\n return result,\n\ndef fft_batching_rule(batched_args, batch_dims, fft_type, fft_lengths):\n x, = batched_args\n bd, = batch_dims\n x = batching.moveaxis(x, bd, 0)\n return fft(x, fft_type, fft_lengths), 0\n\nfft_p = Primitive('fft')\nfft_p.def_impl(fft_impl)\nfft_p.def_abstract_eval(fft_abstract_eval)\nxla.translations[fft_p] = fft_translation_rule\nad.deflinear2(fft_p, fft_transpose_rule)\nbatching.primitive_batchers[fft_p] = fft_batching_rule\nif pocketfft:\n xla.backend_specific_translations['cpu'][fft_p] = pocketfft.pocketfft\n", "path": "jax/_src/lax/fft.py"}]}
| 3,088 | 185 |
gh_patches_debug_15031
|
rasdani/github-patches
|
git_diff
|
cloud-custodian__cloud-custodian-6989
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mugc ignored Global Services policy with region and did not remove policy from other regions
**Describe the bug**
A clear and concise description of what the bug is.
Following the documentation at `https://cloudcustodian.io/docs/aws/examples/accountservicelimit.html`, we added a region to one of our policies. After redeploying, the pipeline's c7n-org step updated the policy in the specified us-east-1 region, but the mugc step did not remove it from us-east-2.
**To Reproduce**
Steps to reproduce the behavior:
deploy policy for resource s3 in 2 regions: us-east-1 and us-east-2
add region: us-east-1 to the policy
deploy using c7n-org
run mugc
**Expected behavior**
A clear and concise description of what you expected to happen.
I would expect mugc to remove the policy from all other regions
**Background (please complete the following information):**
- Python Version: [e.g. python 3.8.1] virtual environment CPython3.8.3.final.0-64
- Custodian Version: [e.g. 0.8.46.1] c7n 0.9.6, c7n-mailer 0.6.5, c7n-org 0.6.5
- Tool Version: [if applicable] codebuild pipeline
- Cloud Provider: [e.g. gcp, aws, azure] aws
- Policy: [please exclude any account/sensitive information]
```yaml
policies:
- name: list-buckets
resource: s3
region: us-east-1
```
**Additional context**
Add any other context about the problem here.
</issue>
<code>
[start of tools/ops/mugc.py]
1 # Copyright The Cloud Custodian Authors.
2 # SPDX-License-Identifier: Apache-2.0
3 import argparse
4 import itertools
5 import json
6 import os
7 import re
8 import logging
9 import sys
10
11 from c7n.credentials import SessionFactory
12 from c7n.config import Config
13 from c7n.policy import load as policy_load, PolicyCollection
14 from c7n import mu
15
16 # TODO: mugc has alot of aws assumptions
17
18 from c7n.resources.aws import AWS
19 from botocore.exceptions import ClientError
20
21
22 log = logging.getLogger('mugc')
23
24
25 def load_policies(options, config):
26 policies = PolicyCollection([], config)
27 for f in options.config_files:
28 policies += policy_load(config, f).filter(options.policy_filter)
29 return policies
30
31
32 def region_gc(options, region, policy_config, policies):
33
34 log.debug("Region:%s Starting garbage collection", region)
35 session_factory = SessionFactory(
36 region=region,
37 assume_role=policy_config.assume_role,
38 profile=policy_config.profile,
39 external_id=policy_config.external_id)
40
41 manager = mu.LambdaManager(session_factory)
42 funcs = list(manager.list_functions(options.prefix))
43 client = session_factory().client('lambda')
44
45 remove = []
46 current_policies = [p.name for p in policies]
47 pattern = re.compile(options.policy_regex)
48 for f in funcs:
49 if not pattern.match(f['FunctionName']):
50 continue
51 match = False
52 for pn in current_policies:
53 if f['FunctionName'].endswith(pn):
54 match = True
55 if options.present:
56 if match:
57 remove.append(f)
58 elif not match:
59 remove.append(f)
60
61 for n in remove:
62 events = []
63 try:
64 result = client.get_policy(FunctionName=n['FunctionName'])
65 except ClientError as e:
66 if e.response['Error']['Code'] == 'ResourceNotFoundException':
67 log.warning(
68 "Region:%s Lambda Function or Access Policy Statement missing: %s",
69 region, n['FunctionName'])
70 else:
71 log.warning(
72 "Region:%s Unexpected error: %s for function %s",
73 region, e, n['FunctionName'])
74
75 # Continue on with next function instead of raising an exception
76 continue
77
78 if 'Policy' not in result:
79 pass
80 else:
81 p = json.loads(result['Policy'])
82 for s in p['Statement']:
83 principal = s.get('Principal')
84 if not isinstance(principal, dict):
85 log.info("Skipping function %s" % n['FunctionName'])
86 continue
87 if principal == {'Service': 'events.amazonaws.com'}:
88 events.append(
89 mu.CloudWatchEventSource({}, session_factory))
90 elif principal == {'Service': 'config.amazonaws.com'}:
91 events.append(
92 mu.ConfigRule({}, session_factory))
93
94 f = mu.LambdaFunction({
95 'name': n['FunctionName'],
96 'role': n['Role'],
97 'handler': n['Handler'],
98 'timeout': n['Timeout'],
99 'memory_size': n['MemorySize'],
100 'description': n['Description'],
101 'runtime': n['Runtime'],
102 'events': events}, None)
103
104 log.info("Region:%s Removing %s", region, n['FunctionName'])
105 if options.dryrun:
106 log.info("Dryrun skipping removal")
107 continue
108 manager.remove(f)
109 log.info("Region:%s Removed %s", region, n['FunctionName'])
110
111
112 def resources_gc_prefix(options, policy_config, policy_collection):
113 """Garbage collect old custodian policies based on prefix.
114
115 We attempt to introspect to find the event sources for a policy
116 but without the old configuration this is implicit.
117 """
118
119 # Classify policies by region
120 policy_regions = {}
121 for p in policy_collection:
122 if p.execution_mode == 'poll':
123 continue
124 policy_regions.setdefault(p.options.region, []).append(p)
125
126 regions = get_gc_regions(options.regions, policy_config)
127 for r in regions:
128 region_gc(options, r, policy_config, policy_regions.get(r, []))
129
130
131 def get_gc_regions(regions, policy_config):
132 if 'all' in regions:
133 session_factory = SessionFactory(
134 region='us-east-1',
135 assume_role=policy_config.assume_role,
136 profile=policy_config.profile,
137 external_id=policy_config.external_id)
138
139 client = session_factory().client('ec2')
140 return [region['RegionName'] for region in client.describe_regions()['Regions']]
141 return regions
142
143
144 def setup_parser():
145 parser = argparse.ArgumentParser()
146 parser.add_argument("configs", nargs='*', help="Policy configuration file(s)")
147 parser.add_argument(
148 '-c', '--config', dest="config_files", nargs="*", action='append',
149 help="Policy configuration files(s)", default=[])
150 parser.add_argument(
151 "--present", action="store_true", default=False,
152 help='Target policies present in config files for removal instead of skipping them.')
153 parser.add_argument(
154 '-r', '--region', action='append', dest='regions', metavar='REGION',
155 help="AWS Region to target. Can be used multiple times, also supports `all`")
156 parser.add_argument('--dryrun', action="store_true", default=False)
157 parser.add_argument(
158 "--profile", default=os.environ.get('AWS_PROFILE'),
159 help="AWS Account Config File Profile to utilize")
160 parser.add_argument(
161 "--prefix", default="custodian-",
162 help="The Lambda name prefix to use for clean-up")
163 parser.add_argument(
164 "--policy-regex",
165 help="The policy must match the regex")
166 parser.add_argument("-p", "--policies", default=None, dest='policy_filter',
167 help="Only use named/matched policies")
168 parser.add_argument(
169 "--assume", default=None, dest="assume_role",
170 help="Role to assume")
171 parser.add_argument(
172 "-v", dest="verbose", action="store_true", default=False,
173 help='toggle verbose logging')
174 return parser
175
176
177 def main():
178 parser = setup_parser()
179 options = parser.parse_args()
180
181 log_level = logging.INFO
182 if options.verbose:
183 log_level = logging.DEBUG
184 logging.basicConfig(
185 level=log_level,
186 format="%(asctime)s: %(name)s:%(levelname)s %(message)s")
187 logging.getLogger('botocore').setLevel(logging.ERROR)
188 logging.getLogger('urllib3').setLevel(logging.ERROR)
189 logging.getLogger('c7n.cache').setLevel(logging.WARNING)
190
191 if not options.policy_regex:
192 options.policy_regex = f"^{options.prefix}.*"
193
194 if not options.regions:
195 options.regions = [os.environ.get('AWS_DEFAULT_REGION', 'us-east-1')]
196
197 files = []
198 files.extend(itertools.chain(*options.config_files))
199 files.extend(options.configs)
200 options.config_files = files
201
202 if not files:
203 parser.print_help()
204 sys.exit(1)
205
206 policy_config = Config.empty(
207 regions=options.regions,
208 profile=options.profile,
209 assume_role=options.assume_role)
210
211 # use cloud provider to initialize policies to get region expansion
212 policies = AWS().initialize_policies(
213 PolicyCollection([
214 p for p in load_policies(
215 options, policy_config)
216 if p.provider_name == 'aws'],
217 policy_config),
218 policy_config)
219
220 resources_gc_prefix(options, policy_config, policies)
221
222
223 if __name__ == '__main__':
224 main()
225
[end of tools/ops/mugc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tools/ops/mugc.py b/tools/ops/mugc.py
--- a/tools/ops/mugc.py
+++ b/tools/ops/mugc.py
@@ -43,15 +43,15 @@
client = session_factory().client('lambda')
remove = []
- current_policies = [p.name for p in policies]
pattern = re.compile(options.policy_regex)
for f in funcs:
if not pattern.match(f['FunctionName']):
continue
match = False
- for pn in current_policies:
- if f['FunctionName'].endswith(pn):
- match = True
+ for p in policies:
+ if f['FunctionName'].endswith(p.name):
+ if 'region' not in p.data or p.data['region'] == region:
+ match = True
if options.present:
if match:
remove.append(f)
|
{"golden_diff": "diff --git a/tools/ops/mugc.py b/tools/ops/mugc.py\n--- a/tools/ops/mugc.py\n+++ b/tools/ops/mugc.py\n@@ -43,15 +43,15 @@\n client = session_factory().client('lambda')\n \n remove = []\n- current_policies = [p.name for p in policies]\n pattern = re.compile(options.policy_regex)\n for f in funcs:\n if not pattern.match(f['FunctionName']):\n continue\n match = False\n- for pn in current_policies:\n- if f['FunctionName'].endswith(pn):\n- match = True\n+ for p in policies:\n+ if f['FunctionName'].endswith(p.name):\n+ if 'region' not in p.data or p.data['region'] == region:\n+ match = True\n if options.present:\n if match:\n remove.append(f)\n", "issue": "Mugc ignored Global Services policy with region and did not remove policy from other regions\n**Describe the bug**\r\nA clear and concise description of what the bug is.\r\nAccording to documentation `https://cloudcustodian.io/docs/aws/examples/accountservicelimit.html` added region to one of our policies. Did redeploy and pipeline c7n-org step did update policy in specified us-east-1 region, but mugc step did not remove from us-east-2\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\ndeploy policy for resource s3 in 2 regions: us-east-1 and us-east-2\r\nadd region: us-east-1 to the policy\r\ndeploy using c7n-org\r\nrun mugc\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\nI would expect mugc to remove policy from all other regions\r\n\r\n**Background (please complete the following information):**\r\n - Python Version: [e.g. python 3.8.1] virtual environment CPython3.8.3.final.0-64\r\n - Custodian Version: [e.g. 0.8.46.1] c7n 0.9.6, c7n-mailer 0.6.5, c7n-org 0.6.5\r\n - Tool Version: [if applicable] codebuild pipeline\r\n - Cloud Provider: [e.g. 
gcp, aws, azure] aws\r\n - Policy: [please exclude any account/sensitive information]\r\n```yaml\r\npolicies: \r\n - name: list-buckets\r\n resource: s3\r\n region: us-east-1\r\n```\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n\n", "before_files": [{"content": "# Copyright The Cloud Custodian Authors.\n# SPDX-License-Identifier: Apache-2.0\nimport argparse\nimport itertools\nimport json\nimport os\nimport re\nimport logging\nimport sys\n\nfrom c7n.credentials import SessionFactory\nfrom c7n.config import Config\nfrom c7n.policy import load as policy_load, PolicyCollection\nfrom c7n import mu\n\n# TODO: mugc has alot of aws assumptions\n\nfrom c7n.resources.aws import AWS\nfrom botocore.exceptions import ClientError\n\n\nlog = logging.getLogger('mugc')\n\n\ndef load_policies(options, config):\n policies = PolicyCollection([], config)\n for f in options.config_files:\n policies += policy_load(config, f).filter(options.policy_filter)\n return policies\n\n\ndef region_gc(options, region, policy_config, policies):\n\n log.debug(\"Region:%s Starting garbage collection\", region)\n session_factory = SessionFactory(\n region=region,\n assume_role=policy_config.assume_role,\n profile=policy_config.profile,\n external_id=policy_config.external_id)\n\n manager = mu.LambdaManager(session_factory)\n funcs = list(manager.list_functions(options.prefix))\n client = session_factory().client('lambda')\n\n remove = []\n current_policies = [p.name for p in policies]\n pattern = re.compile(options.policy_regex)\n for f in funcs:\n if not pattern.match(f['FunctionName']):\n continue\n match = False\n for pn in current_policies:\n if f['FunctionName'].endswith(pn):\n match = True\n if options.present:\n if match:\n remove.append(f)\n elif not match:\n remove.append(f)\n\n for n in remove:\n events = []\n try:\n result = client.get_policy(FunctionName=n['FunctionName'])\n except ClientError as e:\n if e.response['Error']['Code'] == 'ResourceNotFoundException':\n log.warning(\n \"Region:%s Lambda Function or Access Policy Statement missing: %s\",\n region, n['FunctionName'])\n else:\n log.warning(\n \"Region:%s Unexpected error: %s for function %s\",\n region, e, n['FunctionName'])\n\n # Continue on with next function instead of raising an exception\n continue\n\n if 'Policy' not in result:\n pass\n else:\n p = json.loads(result['Policy'])\n for s in p['Statement']:\n principal = s.get('Principal')\n if not isinstance(principal, dict):\n log.info(\"Skipping function %s\" % n['FunctionName'])\n continue\n if principal == {'Service': 'events.amazonaws.com'}:\n events.append(\n mu.CloudWatchEventSource({}, session_factory))\n elif principal == {'Service': 'config.amazonaws.com'}:\n events.append(\n mu.ConfigRule({}, session_factory))\n\n f = mu.LambdaFunction({\n 'name': n['FunctionName'],\n 'role': n['Role'],\n 'handler': n['Handler'],\n 'timeout': n['Timeout'],\n 'memory_size': n['MemorySize'],\n 'description': n['Description'],\n 'runtime': n['Runtime'],\n 'events': events}, None)\n\n log.info(\"Region:%s Removing %s\", region, n['FunctionName'])\n if options.dryrun:\n log.info(\"Dryrun skipping removal\")\n continue\n manager.remove(f)\n log.info(\"Region:%s Removed %s\", region, n['FunctionName'])\n\n\ndef resources_gc_prefix(options, policy_config, policy_collection):\n \"\"\"Garbage collect old custodian policies based on prefix.\n\n We attempt to introspect to find the event sources for a policy\n but without the old configuration this is implicit.\n \"\"\"\n\n # Classify 
policies by region\n policy_regions = {}\n for p in policy_collection:\n if p.execution_mode == 'poll':\n continue\n policy_regions.setdefault(p.options.region, []).append(p)\n\n regions = get_gc_regions(options.regions, policy_config)\n for r in regions:\n region_gc(options, r, policy_config, policy_regions.get(r, []))\n\n\ndef get_gc_regions(regions, policy_config):\n if 'all' in regions:\n session_factory = SessionFactory(\n region='us-east-1',\n assume_role=policy_config.assume_role,\n profile=policy_config.profile,\n external_id=policy_config.external_id)\n\n client = session_factory().client('ec2')\n return [region['RegionName'] for region in client.describe_regions()['Regions']]\n return regions\n\n\ndef setup_parser():\n parser = argparse.ArgumentParser()\n parser.add_argument(\"configs\", nargs='*', help=\"Policy configuration file(s)\")\n parser.add_argument(\n '-c', '--config', dest=\"config_files\", nargs=\"*\", action='append',\n help=\"Policy configuration files(s)\", default=[])\n parser.add_argument(\n \"--present\", action=\"store_true\", default=False,\n help='Target policies present in config files for removal instead of skipping them.')\n parser.add_argument(\n '-r', '--region', action='append', dest='regions', metavar='REGION',\n help=\"AWS Region to target. Can be used multiple times, also supports `all`\")\n parser.add_argument('--dryrun', action=\"store_true\", default=False)\n parser.add_argument(\n \"--profile\", default=os.environ.get('AWS_PROFILE'),\n help=\"AWS Account Config File Profile to utilize\")\n parser.add_argument(\n \"--prefix\", default=\"custodian-\",\n help=\"The Lambda name prefix to use for clean-up\")\n parser.add_argument(\n \"--policy-regex\",\n help=\"The policy must match the regex\")\n parser.add_argument(\"-p\", \"--policies\", default=None, dest='policy_filter',\n help=\"Only use named/matched policies\")\n parser.add_argument(\n \"--assume\", default=None, dest=\"assume_role\",\n help=\"Role to assume\")\n parser.add_argument(\n \"-v\", dest=\"verbose\", action=\"store_true\", default=False,\n help='toggle verbose logging')\n return parser\n\n\ndef main():\n parser = setup_parser()\n options = parser.parse_args()\n\n log_level = logging.INFO\n if options.verbose:\n log_level = logging.DEBUG\n logging.basicConfig(\n level=log_level,\n format=\"%(asctime)s: %(name)s:%(levelname)s %(message)s\")\n logging.getLogger('botocore').setLevel(logging.ERROR)\n logging.getLogger('urllib3').setLevel(logging.ERROR)\n logging.getLogger('c7n.cache').setLevel(logging.WARNING)\n\n if not options.policy_regex:\n options.policy_regex = f\"^{options.prefix}.*\"\n\n if not options.regions:\n options.regions = [os.environ.get('AWS_DEFAULT_REGION', 'us-east-1')]\n\n files = []\n files.extend(itertools.chain(*options.config_files))\n files.extend(options.configs)\n options.config_files = files\n\n if not files:\n parser.print_help()\n sys.exit(1)\n\n policy_config = Config.empty(\n regions=options.regions,\n profile=options.profile,\n assume_role=options.assume_role)\n\n # use cloud provider to initialize policies to get region expansion\n policies = AWS().initialize_policies(\n PolicyCollection([\n p for p in load_policies(\n options, policy_config)\n if p.provider_name == 'aws'],\n policy_config),\n policy_config)\n\n resources_gc_prefix(options, policy_config, policies)\n\n\nif __name__ == '__main__':\n main()\n", "path": "tools/ops/mugc.py"}]}
| 3,072 | 203 |
gh_patches_debug_25448
|
rasdani/github-patches
|
git_diff
|
yt-project__yt-2922
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fresh yt installation not importing
<!--To help us understand and resolve your issue, please fill out the form to
the best of your ability.-->
<!--You can feel free to delete the sections that do not apply.-->
### Bug report
**Bug summary**
yt fails to import on a fresh development installation. The error is `ModuleNotFoundError: No module named 'yt.utilities.lib.misc_utilities'`, similar to [Issue 2685](https://github.com/yt-project/yt/issues/2685).
<!--A short 1-2 sentences that succinctly describes the bug-->
**Code for reproduction**
Installed using `pip install git+git://github.com/yt-project/yt.git`
<!--A minimum code snippet required to reproduce the bug, also minimizing the
number of dependencies required.-->
<!-- If you need to use a data file to trigger the issue you're having, consider
using one of the datasets from the yt data hub (http://yt-project.org/data). If
your issue cannot be triggered using a public dataset, you can use the yt
curldrop (https://docs.hub.yt/services.html#curldrop) to share data
files. Please include a link to the dataset in the issue if you use the
curldrop.-->
```python
import yt
```
**Actual outcome**
<!--The output produced by the above code, which may be a screenshot, console
output, etc.-->
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-1-2d2292a375dc> in <module>
----> 1 import yt
~/.conda/envs/ytgit/lib/python3.8/site-packages/yt/__init__.py in <module>
63 )
64
---> 65 from yt.fields.api import (
66 field_plugins,
67 DerivedField,
~/.conda/envs/ytgit/lib/python3.8/site-packages/yt/fields/api.py in <module>
1 # from . import species_fields
----> 2 from . import (
3 angular_momentum,
4 astro_fields,
5 cosmology_fields,
~/.conda/envs/ytgit/lib/python3.8/site-packages/yt/fields/angular_momentum.py in <module>
1 import numpy as np
2
----> 3 from yt.utilities.lib.misc_utilities import (
4 obtain_position_vector,
5 obtain_relative_velocity_vector,
ModuleNotFoundError: No module named 'yt.utilities.lib.misc_utilities'
```
**Expected outcome**
<!--A description of the expected outcome from the code snippet-->
<!--If this used to work in an earlier version of yt, please note the
version it used to work on-->
**Version Information**
<!--Please specify your platform and versions of the relevant libraries you are
using:-->
* Operating System: CentOS Linux 7 (Core)x86_64
* Python Version: 3.8.5
* yt version: 4.0.dev0
* Other Libraries (if applicable): N/A
<!--Please tell us how you installed yt and python e.g., from source,
pip, conda. If you installed from conda, please specify which channel you used
if not the default-->
Installed using `pip install git+git://github.com/yt-project/yt.git`
Thanks!
</issue>
<code>
[start of setup.py]
1 import glob
2 import os
3 import sys
4 from distutils.ccompiler import get_default_compiler
5 from distutils.version import LooseVersion
6
7 import pkg_resources
8 from setuptools import find_packages, setup
9
10 from setupext import (
11 check_for_openmp,
12 check_for_pyembree,
13 create_build_ext,
14 install_ccompiler,
15 )
16
17 install_ccompiler()
18
19 try:
20 distribute_ver = LooseVersion(pkg_resources.get_distribution("distribute").version)
21 if distribute_ver < LooseVersion("0.7.3"):
22 print("Distribute is a legacy package obsoleted by setuptools.")
23 print("We strongly recommend that you just uninstall it.")
24 print("If for some reason you cannot do it, you'll need to upgrade it")
25 print("to latest version before proceeding:")
26 print(" pip install -U distribute")
27 sys.exit(1)
28 except pkg_resources.DistributionNotFound:
29 pass # yay!
30
31 VERSION = "4.0.dev0"
32
33 if os.path.exists("MANIFEST"):
34 os.remove("MANIFEST")
35
36 with open("README.md") as file:
37 long_description = file.read()
38
39 if check_for_openmp():
40 omp_args = ["-fopenmp"]
41 else:
42 omp_args = []
43
44 if os.name == "nt":
45 std_libs = []
46 else:
47 std_libs = ["m"]
48
49 if get_default_compiler() == "msvc":
50 CPP14_FLAG = ["/std:c++14"]
51 else:
52 CPP14_FLAG = ["--std=c++14"]
53
54 cythonize_aliases = {
55 "LIB_DIR": "yt/utilities/lib/",
56 "LIB_DIR_EWAH": ["yt/utilities/lib/", "yt/utilities/lib/ewahboolarray/"],
57 "LIB_DIR_GEOM": ["yt/utilities/lib/", "yt/geometry/"],
58 "LIB_DIR_GEOM_ARTIO": [
59 "yt/utilities/lib/",
60 "yt/geometry/",
61 "yt/frontends/artio/artio_headers/",
62 ],
63 "STD_LIBS": std_libs,
64 "OMP_ARGS": omp_args,
65 "FIXED_INTERP": "yt/utilities/lib/fixed_interpolator.cpp",
66 "ARTIO_SOURCE": glob.glob("yt/frontends/artio/artio_headers/*.c"),
67 "CPP14_FLAG": CPP14_FLAG,
68 }
69
70 lib_exts = [
71 "yt/geometry/*.pyx",
72 "yt/utilities/cython_fortran_utils.pyx",
73 "yt/frontends/ramses/io_utils.pyx",
74 "yt/utilities/lib/cykdtree/kdtree.pyx",
75 "yt/utilities/lib/cykdtree/utils.pyx",
76 "yt/frontends/artio/_artio_caller.pyx",
77 "yt/utilities/lib/*.pyx",
78 ]
79
80 embree_libs, embree_aliases = check_for_pyembree(std_libs)
81 cythonize_aliases.update(embree_aliases)
82 lib_exts += embree_libs
83
84 # This overrides using lib_exts, so it has to happen after lib_exts is fully defined
85 build_ext, sdist = create_build_ext(lib_exts, cythonize_aliases)
86
87 if __name__ == "__main__":
88 setup(
89 name="yt",
90 version=VERSION,
91 description="An analysis and visualization toolkit for volumetric data",
92 long_description=long_description,
93 long_description_content_type="text/markdown",
94 classifiers=[
95 "Development Status :: 5 - Production/Stable",
96 "Environment :: Console",
97 "Intended Audience :: Science/Research",
98 "License :: OSI Approved :: BSD License",
99 "Operating System :: MacOS :: MacOS X",
100 "Operating System :: POSIX :: AIX",
101 "Operating System :: POSIX :: Linux",
102 "Programming Language :: C",
103 "Programming Language :: Python :: 3",
104 "Programming Language :: Python :: 3.5",
105 "Programming Language :: Python :: 3.6",
106 "Programming Language :: Python :: 3.7",
107 "Programming Language :: Python :: 3.8",
108 "Topic :: Scientific/Engineering :: Astronomy",
109 "Topic :: Scientific/Engineering :: Physics",
110 "Topic :: Scientific/Engineering :: Visualization",
111 "Framework :: Matplotlib",
112 ],
113 keywords="astronomy astrophysics visualization " + "amr adaptivemeshrefinement",
114 entry_points={
115 "console_scripts": ["yt = yt.utilities.command_line:run_main",],
116 "nose.plugins.0.10": [
117 "answer-testing = yt.utilities.answer_testing.framework:AnswerTesting"
118 ],
119 },
120 packages=find_packages(),
121 include_package_data=True,
122 install_requires=[
123 "matplotlib>=1.5.3",
124 "setuptools>=19.6",
125 "sympy>=1.2",
126 "numpy>=1.10.4",
127 "IPython>=1.0",
128 "unyt>=2.7.2",
129 ],
130 extras_require={"hub": ["girder_client"], "mapserver": ["bottle"]},
131 cmdclass={"sdist": sdist, "build_ext": build_ext},
132 author="The yt project",
133 author_email="[email protected]",
134 url="https://github.com/yt-project/yt",
135 project_urls={
136 "Homepage": "https://yt-project.org/",
137 "Documentation": "https://yt-project.org/doc/",
138 "Source": "https://github.com/yt-project/yt/",
139 "Tracker": "https://github.com/yt-project/yt/issues",
140 },
141 license="BSD 3-Clause",
142 zip_safe=False,
143 scripts=["scripts/iyt"],
144 ext_modules=[], # !!! We override this inside build_ext above
145 python_requires=">=3.6",
146 )
147
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,7 @@
from distutils.version import LooseVersion
import pkg_resources
-from setuptools import find_packages, setup
+from setuptools import Distribution, find_packages, setup
from setupext import (
check_for_openmp,
@@ -84,6 +84,16 @@
# This overrides using lib_exts, so it has to happen after lib_exts is fully defined
build_ext, sdist = create_build_ext(lib_exts, cythonize_aliases)
+# Force setuptools to consider that there are ext modules, even if empty.
+# See https://github.com/yt-project/yt/issues/2922 and
+# https://stackoverflow.com/a/62668026/2601223 for the fix.
+class BinaryDistribution(Distribution):
+ """Distribution which always forces a binary package with platform name."""
+
+ def has_ext_modules(self):
+ return True
+
+
if __name__ == "__main__":
setup(
name="yt",
@@ -141,6 +151,7 @@
license="BSD 3-Clause",
zip_safe=False,
scripts=["scripts/iyt"],
+ distclass=BinaryDistribution,
ext_modules=[], # !!! We override this inside build_ext above
python_requires=">=3.6",
)
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,7 +5,7 @@\n from distutils.version import LooseVersion\n \n import pkg_resources\n-from setuptools import find_packages, setup\n+from setuptools import Distribution, find_packages, setup\n \n from setupext import (\n check_for_openmp,\n@@ -84,6 +84,16 @@\n # This overrides using lib_exts, so it has to happen after lib_exts is fully defined\n build_ext, sdist = create_build_ext(lib_exts, cythonize_aliases)\n \n+# Force setuptools to consider that there are ext modules, even if empty.\n+# See https://github.com/yt-project/yt/issues/2922 and\n+# https://stackoverflow.com/a/62668026/2601223 for the fix.\n+class BinaryDistribution(Distribution):\n+ \"\"\"Distribution which always forces a binary package with platform name.\"\"\"\n+\n+ def has_ext_modules(self):\n+ return True\n+\n+\n if __name__ == \"__main__\":\n setup(\n name=\"yt\",\n@@ -141,6 +151,7 @@\n license=\"BSD 3-Clause\",\n zip_safe=False,\n scripts=[\"scripts/iyt\"],\n+ distclass=BinaryDistribution,\n ext_modules=[], # !!! We override this inside build_ext above\n python_requires=\">=3.6\",\n )\n", "issue": "Fresh yt installation not importing\n<!--To help us understand and resolve your issue, please fill out the form to\r\nthe best of your ability.-->\r\n<!--You can feel free to delete the sections that do not apply.-->\r\n\r\n### Bug report\r\n\r\n**Bug summary**\r\n\r\nyt fails to import on a fresh development installation. The error is that `ModuleNotFoundError: No module named 'yt.utilities.lib.misc_utilities'` Similar to [Issue 2685](https://github.com/yt-project/yt/issues/2685)\r\n<!--A short 1-2 sentences that succinctly describes the bug-->\r\n\r\n**Code for reproduction**\r\nInstalled using `pip install git+git://github.com/yt-project/yt.git`\r\n<!--A minimum code snippet required to reproduce the bug, also minimizing the\r\nnumber of dependencies required.-->\r\n\r\n<!-- If you need to use a data file to trigger the issue you're having, consider\r\nusing one of the datasets from the yt data hub (http://yt-project.org/data). If\r\nyour issue cannot be triggered using a public dataset, you can use the yt\r\ncurldrop (https://docs.hub.yt/services.html#curldrop) to share data\r\nfiles. Please include a link to the dataset in the issue if you use the\r\ncurldrop.-->\r\n\r\n```python\r\nimport yt\r\n```\r\n\r\n**Actual outcome**\r\n\r\n<!--The output produced by the above code, which may be a screenshot, console\r\noutput, etc.-->\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-1-2d2292a375dc> in <module>\r\n----> 1 import yt\r\n\r\n~/.conda/envs/ytgit/lib/python3.8/site-packages/yt/__init__.py in <module>\r\n 63 )\r\n 64 \r\n---> 65 from yt.fields.api import (\r\n 66 field_plugins,\r\n 67 DerivedField,\r\n\r\n~/.conda/envs/ytgit/lib/python3.8/site-packages/yt/fields/api.py in <module>\r\n 1 # from . import species_fields\r\n----> 2 from . 
import (\r\n 3 angular_momentum,\r\n 4 astro_fields,\r\n 5 cosmology_fields,\r\n\r\n~/.conda/envs/ytgit/lib/python3.8/site-packages/yt/fields/angular_momentum.py in <module>\r\n 1 import numpy as np\r\n 2 \r\n----> 3 from yt.utilities.lib.misc_utilities import (\r\n 4 obtain_position_vector,\r\n 5 obtain_relative_velocity_vector,\r\n\r\nModuleNotFoundError: No module named 'yt.utilities.lib.misc_utilities'\r\n```\r\n\r\n**Expected outcome**\r\n\r\n<!--A description of the expected outcome from the code snippet-->\r\n<!--If this used to work in an earlier version of yt, please note the\r\nversion it used to work on-->\r\n\r\n**Version Information**\r\n<!--Please specify your platform and versions of the relevant libraries you are\r\nusing:-->\r\n * Operating System: CentOS Linux 7 (Core)x86_64\r\n * Python Version: 3.8.5\r\n * yt version: 4.0.dev0\r\n * Other Libraries (if applicable): N/A\r\n\r\n<!--Please tell us how you installed yt and python e.g., from source,\r\npip, conda. If you installed from conda, please specify which channel you used\r\nif not the default-->\r\n\r\nInstalled using `pip install git+git://github.com/yt-project/yt.git`\r\n\r\n\r\nThanks!\n", "before_files": [{"content": "import glob\nimport os\nimport sys\nfrom distutils.ccompiler import get_default_compiler\nfrom distutils.version import LooseVersion\n\nimport pkg_resources\nfrom setuptools import find_packages, setup\n\nfrom setupext import (\n check_for_openmp,\n check_for_pyembree,\n create_build_ext,\n install_ccompiler,\n)\n\ninstall_ccompiler()\n\ntry:\n distribute_ver = LooseVersion(pkg_resources.get_distribution(\"distribute\").version)\n if distribute_ver < LooseVersion(\"0.7.3\"):\n print(\"Distribute is a legacy package obsoleted by setuptools.\")\n print(\"We strongly recommend that you just uninstall it.\")\n print(\"If for some reason you cannot do it, you'll need to upgrade it\")\n print(\"to latest version before proceeding:\")\n print(\" pip install -U distribute\")\n sys.exit(1)\nexcept pkg_resources.DistributionNotFound:\n pass # yay!\n\nVERSION = \"4.0.dev0\"\n\nif os.path.exists(\"MANIFEST\"):\n os.remove(\"MANIFEST\")\n\nwith open(\"README.md\") as file:\n long_description = file.read()\n\nif check_for_openmp():\n omp_args = [\"-fopenmp\"]\nelse:\n omp_args = []\n\nif os.name == \"nt\":\n std_libs = []\nelse:\n std_libs = [\"m\"]\n\nif get_default_compiler() == \"msvc\":\n CPP14_FLAG = [\"/std:c++14\"]\nelse:\n CPP14_FLAG = [\"--std=c++14\"]\n\ncythonize_aliases = {\n \"LIB_DIR\": \"yt/utilities/lib/\",\n \"LIB_DIR_EWAH\": [\"yt/utilities/lib/\", \"yt/utilities/lib/ewahboolarray/\"],\n \"LIB_DIR_GEOM\": [\"yt/utilities/lib/\", \"yt/geometry/\"],\n \"LIB_DIR_GEOM_ARTIO\": [\n \"yt/utilities/lib/\",\n \"yt/geometry/\",\n \"yt/frontends/artio/artio_headers/\",\n ],\n \"STD_LIBS\": std_libs,\n \"OMP_ARGS\": omp_args,\n \"FIXED_INTERP\": \"yt/utilities/lib/fixed_interpolator.cpp\",\n \"ARTIO_SOURCE\": glob.glob(\"yt/frontends/artio/artio_headers/*.c\"),\n \"CPP14_FLAG\": CPP14_FLAG,\n}\n\nlib_exts = [\n \"yt/geometry/*.pyx\",\n \"yt/utilities/cython_fortran_utils.pyx\",\n \"yt/frontends/ramses/io_utils.pyx\",\n \"yt/utilities/lib/cykdtree/kdtree.pyx\",\n \"yt/utilities/lib/cykdtree/utils.pyx\",\n \"yt/frontends/artio/_artio_caller.pyx\",\n \"yt/utilities/lib/*.pyx\",\n]\n\nembree_libs, embree_aliases = check_for_pyembree(std_libs)\ncythonize_aliases.update(embree_aliases)\nlib_exts += embree_libs\n\n# This overrides using lib_exts, so it has to happen after lib_exts is fully defined\nbuild_ext, 
sdist = create_build_ext(lib_exts, cythonize_aliases)\n\nif __name__ == \"__main__\":\n setup(\n name=\"yt\",\n version=VERSION,\n description=\"An analysis and visualization toolkit for volumetric data\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: BSD License\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX :: AIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: C\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering :: Astronomy\",\n \"Topic :: Scientific/Engineering :: Physics\",\n \"Topic :: Scientific/Engineering :: Visualization\",\n \"Framework :: Matplotlib\",\n ],\n keywords=\"astronomy astrophysics visualization \" + \"amr adaptivemeshrefinement\",\n entry_points={\n \"console_scripts\": [\"yt = yt.utilities.command_line:run_main\",],\n \"nose.plugins.0.10\": [\n \"answer-testing = yt.utilities.answer_testing.framework:AnswerTesting\"\n ],\n },\n packages=find_packages(),\n include_package_data=True,\n install_requires=[\n \"matplotlib>=1.5.3\",\n \"setuptools>=19.6\",\n \"sympy>=1.2\",\n \"numpy>=1.10.4\",\n \"IPython>=1.0\",\n \"unyt>=2.7.2\",\n ],\n extras_require={\"hub\": [\"girder_client\"], \"mapserver\": [\"bottle\"]},\n cmdclass={\"sdist\": sdist, \"build_ext\": build_ext},\n author=\"The yt project\",\n author_email=\"[email protected]\",\n url=\"https://github.com/yt-project/yt\",\n project_urls={\n \"Homepage\": \"https://yt-project.org/\",\n \"Documentation\": \"https://yt-project.org/doc/\",\n \"Source\": \"https://github.com/yt-project/yt/\",\n \"Tracker\": \"https://github.com/yt-project/yt/issues\",\n },\n license=\"BSD 3-Clause\",\n zip_safe=False,\n scripts=[\"scripts/iyt\"],\n ext_modules=[], # !!! We override this inside build_ext above\n python_requires=\">=3.6\",\n )\n", "path": "setup.py"}]}
| 2,816 | 308 |
gh_patches_debug_16891
|
rasdani/github-patches
|
git_diff
|
medtagger__MedTagger-447
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bump Python to 3.7.0
## Expected Behavior
MedTagger should always use the latest version of Python technologies and follow the rabbit instead of leaving technical dept :)
## Actual Behavior
We've got Python 3.6.x right now.
## Additional comment
Remember about Makefiles, Dockerfiles, TravisCI and more(?).
**WATCH OUT!** It's relatively new. Some of our dependencies may not work properly! Find out if `numpy` and other libs support it!
**BLOCKED BY:**
- [SimpleITK](https://github.com/SimpleITK/SimpleITK/releases) - next release (>1.1.0) will be fine to use with Python3.7.
</issue>
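A quick way to see whether the critical scientific dependencies are usable on the new interpreter is a small standalone check. The sketch below is hypothetical (it is not part of the MedTagger code base); the module names are simply the ones called out in the issue.

```python
# Hypothetical pre-flight check, not part of MedTagger: verify the interpreter
# version and that the libraries named in the issue import on it.
import sys

REQUIRED = (3, 7)

def check_environment():
    if sys.version_info[:2] < REQUIRED:
        raise RuntimeError("Python %d.%d+ required, found %s"
                           % (REQUIRED[0], REQUIRED[1], sys.version.split()[0]))
    for name in ("numpy", "SimpleITK"):
        try:
            module = __import__(name)
            print(name, getattr(module, "__version__", "unknown"), "imports OK")
        except ImportError as exc:
            print(name, "is not usable on this interpreter:", exc)

if __name__ == "__main__":
    check_environment()
```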
<code>
[start of backend/scripts/import_data.py]
1 """Script that will fill MedTagger with data.
2
3 How to use it?
4 --------------
5 At first, download some scans from anywhere on the Internet. You can use example dataset from Data Science Bowl 2017:
6 https://www.kaggle.com/c/data-science-bowl-2017/data (look for 'sample_images.7z' file).
7
8 Then, place these data (unzipped) anywhere on your computer and run this script by:
9
10 (venv) $ python3.6 scripts/import_data.py --source=./dir_with_scans/
11
12 Please keep all scans with given structure:
13
14 |
15 `-- dir_with_scans
16 |-- 0a0c32c9e08cc2ea76a71649de56be6d
17 | |-- 0a67f9edb4915467ac16a565955898d3.dcm
18 | |-- 0eb4e3cae3de93e50431cf12bdc6c93d.dcm
19 | `-- ...
20 |-- 0a38e7597ca26f9374f8ea2770ba870d
21 | |-- 0bad9c3a3890617f78a905b78bc60f99.dcm
22 | |-- 1cffdd431884c2792ae0cbecec1c9e14.dcm
23 | `-- ...
24 `-- ...
25
26 """
27 import os
28 import argparse
29 import glob
30 import logging
31 import logging.config
32
33 from medtagger.repositories import scans as ScansRepository, datasets as DatasetsRepository
34 from medtagger.workers.storage import parse_dicom_and_update_slice
35
36
37 logging.config.fileConfig('logging.conf')
38 logger = logging.getLogger(__name__)
39
40 parser = argparse.ArgumentParser(description='Import data to the MedTagger.')
41 parser.add_argument('--source', type=str, required=True, help='Source directory')
42 parser.add_argument('--dataset', type=str, required=True, help='Dataset key for these scans')
43 args = parser.parse_args()
44
45
46 if __name__ == '__main__':
47 logger.info('Checking Dataset...')
48 dataset = DatasetsRepository.get_dataset_by_key(args.dataset)
49
50 source = args.source.rstrip('/')
51 for scan_directory in glob.iglob(source + '/*'):
52 if not os.path.isdir(scan_directory):
53 logger.warning('"%s" is not a directory. Skipping...', scan_directory)
54 continue
55
56 logger.info('Adding new Scan from "%s".', scan_directory)
57 slice_names = glob.glob(scan_directory + '/*.dcm')
58 number_of_slices = len(slice_names)
59 scan = ScansRepository.add_new_scan(dataset, number_of_slices, None)
60
61 for slice_name in slice_names:
62 logger.info('Adding new Slice to Scan "%s" based on "%s".', scan.id, slice_name)
63 with open(slice_name, 'rb') as slice_dicom_file:
64 _slice = scan.add_slice()
65 image = slice_dicom_file.read()
66 parse_dicom_and_update_slice.delay(_slice.id, image)
67
[end of backend/scripts/import_data.py]
[start of backend/scripts/convert_dicoms_to_png.py]
1 """Script that will convert multiple dicoms to PNG format.
2
3 How to use it?
4 --------------
5 At first, download some scans from anywhere on the Internet. You can use example dataset from Data Science Bowl 2017:
6 https://www.kaggle.com/c/data-science-bowl-2017/data (look for 'sample_images.7z' file).
7
8 Then, place these data (unzipped) anywhere on your computer and run this script by:
9
10 (venv) $ python3.6 scripts/dicoms_to_png.py --input=./dir_with_scans/ --output=./dir_with_scans/converted/
11
12 Name of the converted Dicom file is a position of the scan on the z axis.
13 """
14 import os
15 import argparse
16
17 import SimpleITK as sitk
18 from PIL import Image
19
20 from medtagger.conversion import convert_slice_to_normalized_8bit_array
21 from medtagger.definitions import DicomTag
22 from medtagger.dicoms import read_list
23
24
25 parser = argparse.ArgumentParser(description='Convert dicoms to png format.')
26 parser.add_argument('--input', type=str, required=True, help='Full path to directory where dicoms are located')
27 parser.add_argument('--output', type=str, required=True,
28 help='Full path to directory where converted dicoms would be located')
29
30 args = parser.parse_args()
31 dicoms_folder_path = args.input
32 converted_dicoms_folder_path = args.output
33
34 dicoms = [sitk.ReadImage(dicoms_folder_path + d) for d in os.listdir(dicoms_folder_path) if
35 os.path.isfile(dicoms_folder_path + d)]
36 min_position = abs(min(float((read_list(dicom, DicomTag.IMAGE_POSITION_PATIENT) or [])[2]) for dicom in dicoms))
37
38 if not os.path.exists(converted_dicoms_folder_path):
39 os.mkdir(converted_dicoms_folder_path)
40
41 for single_dicom in dicoms:
42 image_bytes = convert_slice_to_normalized_8bit_array(single_dicom)
43 slice_position = float((read_list(single_dicom, DicomTag.IMAGE_POSITION_PATIENT) or [])[2])
44 converted_dicom_name = '{0:.2f}'.format(slice_position + min_position)
45 Image.fromarray(image_bytes, 'L').save(converted_dicoms_folder_path + converted_dicom_name + '.png')
46
[end of backend/scripts/convert_dicoms_to_png.py]
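The output filename above is the slice's z-position shifted so that the lowest slice maps to 0.00. A tiny worked example of that arithmetic (the positions are made up for illustration):

```python
# Worked example of the filename arithmetic used above (made-up z-positions).
positions = [-12.5, -10.0, -7.5]      # IMAGE_POSITION_PATIENT z-values
min_position = abs(min(positions))    # 12.5, as in the script

names = ['{0:.2f}.png'.format(p + min_position) for p in positions]
assert names == ['0.00.png', '2.50.png', '5.00.png']
```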
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/scripts/convert_dicoms_to_png.py b/backend/scripts/convert_dicoms_to_png.py
--- a/backend/scripts/convert_dicoms_to_png.py
+++ b/backend/scripts/convert_dicoms_to_png.py
@@ -7,7 +7,7 @@
Then, place these data (unzipped) anywhere on your computer and run this script by:
- (venv) $ python3.6 scripts/dicoms_to_png.py --input=./dir_with_scans/ --output=./dir_with_scans/converted/
+ (venv) $ python3.7 scripts/dicoms_to_png.py --input=./dir_with_scans/ --output=./dir_with_scans/converted/
Name of the converted Dicom file is a position of the scan on the z axis.
"""
diff --git a/backend/scripts/import_data.py b/backend/scripts/import_data.py
--- a/backend/scripts/import_data.py
+++ b/backend/scripts/import_data.py
@@ -7,7 +7,7 @@
Then, place these data (unzipped) anywhere on your computer and run this script by:
- (venv) $ python3.6 scripts/import_data.py --source=./dir_with_scans/
+ (venv) $ python3.7 scripts/import_data.py --source=./dir_with_scans/
Please keep all scans with given structure:
|
{"golden_diff": "diff --git a/backend/scripts/convert_dicoms_to_png.py b/backend/scripts/convert_dicoms_to_png.py\n--- a/backend/scripts/convert_dicoms_to_png.py\n+++ b/backend/scripts/convert_dicoms_to_png.py\n@@ -7,7 +7,7 @@\n \n Then, place these data (unzipped) anywhere on your computer and run this script by:\n \n- (venv) $ python3.6 scripts/dicoms_to_png.py --input=./dir_with_scans/ --output=./dir_with_scans/converted/\n+ (venv) $ python3.7 scripts/dicoms_to_png.py --input=./dir_with_scans/ --output=./dir_with_scans/converted/\n \n Name of the converted Dicom file is a position of the scan on the z axis.\n \"\"\"\ndiff --git a/backend/scripts/import_data.py b/backend/scripts/import_data.py\n--- a/backend/scripts/import_data.py\n+++ b/backend/scripts/import_data.py\n@@ -7,7 +7,7 @@\n \n Then, place these data (unzipped) anywhere on your computer and run this script by:\n \n- (venv) $ python3.6 scripts/import_data.py --source=./dir_with_scans/\n+ (venv) $ python3.7 scripts/import_data.py --source=./dir_with_scans/\n \n Please keep all scans with given structure:\n", "issue": "Bump Python to 3.7.0\n## Expected Behavior\r\n\r\nMedTagger should always use the latest version of Python technologies and follow the rabbit instead of leaving technical dept :)\r\n\r\n## Actual Behavior\r\n\r\nWe've got Python 3.6.x right now.\r\n\r\n## Additional comment\r\n\r\nRemember about Makefiles, Dockerfiles, TravisCI and more(?).\r\n\r\n**WATCH OUT!** It's relatively new. Some of our dependencies may not work properly! Find out if `numpy` and other libs supports it!\r\n\r\n**BLOCKED BY:**\r\n- [SimpleITK](https://github.com/SimpleITK/SimpleITK/releases) - next release (>1.1.0) will be fine to use with Python3.7.\n", "before_files": [{"content": "\"\"\"Script that will fill MedTagger with data.\n\nHow to use it?\n--------------\nAt first, download some scans from anywhere on the Internet. You can use example dataset from Data Science Bowl 2017:\nhttps://www.kaggle.com/c/data-science-bowl-2017/data (look for 'sample_images.7z' file).\n\nThen, place these data (unzipped) anywhere on your computer and run this script by:\n\n (venv) $ python3.6 scripts/import_data.py --source=./dir_with_scans/\n\nPlease keep all scans with given structure:\n\n |\n `-- dir_with_scans\n |-- 0a0c32c9e08cc2ea76a71649de56be6d\n | |-- 0a67f9edb4915467ac16a565955898d3.dcm\n | |-- 0eb4e3cae3de93e50431cf12bdc6c93d.dcm\n | `-- ...\n |-- 0a38e7597ca26f9374f8ea2770ba870d\n | |-- 0bad9c3a3890617f78a905b78bc60f99.dcm\n | |-- 1cffdd431884c2792ae0cbecec1c9e14.dcm\n | `-- ...\n `-- ...\n\n\"\"\"\nimport os\nimport argparse\nimport glob\nimport logging\nimport logging.config\n\nfrom medtagger.repositories import scans as ScansRepository, datasets as DatasetsRepository\nfrom medtagger.workers.storage import parse_dicom_and_update_slice\n\n\nlogging.config.fileConfig('logging.conf')\nlogger = logging.getLogger(__name__)\n\nparser = argparse.ArgumentParser(description='Import data to the MedTagger.')\nparser.add_argument('--source', type=str, required=True, help='Source directory')\nparser.add_argument('--dataset', type=str, required=True, help='Dataset key for these scans')\nargs = parser.parse_args()\n\n\nif __name__ == '__main__':\n logger.info('Checking Dataset...')\n dataset = DatasetsRepository.get_dataset_by_key(args.dataset)\n\n source = args.source.rstrip('/')\n for scan_directory in glob.iglob(source + '/*'):\n if not os.path.isdir(scan_directory):\n logger.warning('\"%s\" is not a directory. 
Skipping...', scan_directory)\n continue\n\n logger.info('Adding new Scan from \"%s\".', scan_directory)\n slice_names = glob.glob(scan_directory + '/*.dcm')\n number_of_slices = len(slice_names)\n scan = ScansRepository.add_new_scan(dataset, number_of_slices, None)\n\n for slice_name in slice_names:\n logger.info('Adding new Slice to Scan \"%s\" based on \"%s\".', scan.id, slice_name)\n with open(slice_name, 'rb') as slice_dicom_file:\n _slice = scan.add_slice()\n image = slice_dicom_file.read()\n parse_dicom_and_update_slice.delay(_slice.id, image)\n", "path": "backend/scripts/import_data.py"}, {"content": "\"\"\"Script that will convert multiple dicoms to PNG format.\n\nHow to use it?\n--------------\nAt first, download some scans from anywhere on the Internet. You can use example dataset from Data Science Bowl 2017:\nhttps://www.kaggle.com/c/data-science-bowl-2017/data (look for 'sample_images.7z' file).\n\nThen, place these data (unzipped) anywhere on your computer and run this script by:\n\n (venv) $ python3.6 scripts/dicoms_to_png.py --input=./dir_with_scans/ --output=./dir_with_scans/converted/\n\nName of the converted Dicom file is a position of the scan on the z axis.\n\"\"\"\nimport os\nimport argparse\n\nimport SimpleITK as sitk\nfrom PIL import Image\n\nfrom medtagger.conversion import convert_slice_to_normalized_8bit_array\nfrom medtagger.definitions import DicomTag\nfrom medtagger.dicoms import read_list\n\n\nparser = argparse.ArgumentParser(description='Convert dicoms to png format.')\nparser.add_argument('--input', type=str, required=True, help='Full path to directory where dicoms are located')\nparser.add_argument('--output', type=str, required=True,\n help='Full path to directory where converted dicoms would be located')\n\nargs = parser.parse_args()\ndicoms_folder_path = args.input\nconverted_dicoms_folder_path = args.output\n\ndicoms = [sitk.ReadImage(dicoms_folder_path + d) for d in os.listdir(dicoms_folder_path) if\n os.path.isfile(dicoms_folder_path + d)]\nmin_position = abs(min(float((read_list(dicom, DicomTag.IMAGE_POSITION_PATIENT) or [])[2]) for dicom in dicoms))\n\nif not os.path.exists(converted_dicoms_folder_path):\n os.mkdir(converted_dicoms_folder_path)\n\nfor single_dicom in dicoms:\n image_bytes = convert_slice_to_normalized_8bit_array(single_dicom)\n slice_position = float((read_list(single_dicom, DicomTag.IMAGE_POSITION_PATIENT) or [])[2])\n converted_dicom_name = '{0:.2f}'.format(slice_position + min_position)\n Image.fromarray(image_bytes, 'L').save(converted_dicoms_folder_path + converted_dicom_name + '.png')\n", "path": "backend/scripts/convert_dicoms_to_png.py"}]}
| 2,121 | 291 |
gh_patches_debug_8533
|
rasdani/github-patches
|
git_diff
|
pyinstaller__pyinstaller-2112
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cherrypy >= 6.1.0 fails tests
From the cherrypy [changelog](https://github.com/cherrypy/cherrypy/blob/master/CHANGES.txt):
```
6.1.0
-----
* Combined wsgiserver2 and wsgiserver3 modules into a
single module, ``cherrypy.wsgiserver``.
```
</issue>
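One conceivable way to keep a hook working across that reorganization is to branch on the installed cherrypy version; the sketch below is only an illustration of that idea, not necessarily how the project resolves the issue (the hook could equally well be removed).

```python
# Hypothetical version-aware variant of hook-cherrypy.py (illustration only).
from distutils.version import LooseVersion

import pkg_resources
from PyInstaller.utils.hooks import collect_submodules

cherrypy_version = LooseVersion(pkg_resources.get_distribution("cherrypy").version)

if cherrypy_version < LooseVersion("6.1.0"):
    # Old layout: cherrypy.wsgiserver is a package with wsgiserver2/wsgiserver3.
    hiddenimports = collect_submodules("cherrypy.wsgiserver")
else:
    # New layout: a single cherrypy.wsgiserver module; list it directly.
    hiddenimports = ["cherrypy.wsgiserver"]
```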
<code>
[start of PyInstaller/hooks/hook-cherrypy.py]
1 #-----------------------------------------------------------------------------
2 # Copyright (c) 2015-2016, PyInstaller Development Team.
3 #
4 # Distributed under the terms of the GNU General Public License with exception
5 # for distributing bootloader.
6 #
7 # The full license is in the file COPYING.txt, distributed with this software.
8 #-----------------------------------------------------------------------------
9 #
10 # CherryPy is a minimalist Python web framework.
11 #
12 # http://www.cherrypy.org/
13 #
14 # Tested with CherryPy 5.0.1
15
16
17 from PyInstaller.utils.hooks import collect_submodules
18
19
20 hiddenimports = collect_submodules('cherrypy.wsgiserver')
[end of PyInstaller/hooks/hook-cherrypy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/PyInstaller/hooks/hook-cherrypy.py b/PyInstaller/hooks/hook-cherrypy.py
deleted file mode 100644
--- a/PyInstaller/hooks/hook-cherrypy.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#-----------------------------------------------------------------------------
-# Copyright (c) 2015-2016, PyInstaller Development Team.
-#
-# Distributed under the terms of the GNU General Public License with exception
-# for distributing bootloader.
-#
-# The full license is in the file COPYING.txt, distributed with this software.
-#-----------------------------------------------------------------------------
-#
-# CherryPy is a minimalist Python web framework.
-#
-# http://www.cherrypy.org/
-#
-# Tested with CherryPy 5.0.1
-
-
-from PyInstaller.utils.hooks import collect_submodules
-
-
-hiddenimports = collect_submodules('cherrypy.wsgiserver')
\ No newline at end of file
|
{"golden_diff": "diff --git a/PyInstaller/hooks/hook-cherrypy.py b/PyInstaller/hooks/hook-cherrypy.py\ndeleted file mode 100644\n--- a/PyInstaller/hooks/hook-cherrypy.py\n+++ /dev/null\n@@ -1,20 +0,0 @@\n-#-----------------------------------------------------------------------------\n-# Copyright (c) 2015-2016, PyInstaller Development Team.\n-#\n-# Distributed under the terms of the GNU General Public License with exception\n-# for distributing bootloader.\n-#\n-# The full license is in the file COPYING.txt, distributed with this software.\n-#-----------------------------------------------------------------------------\n-#\n-# CherryPy is a minimalist Python web framework.\n-#\n-# http://www.cherrypy.org/\n-#\n-# Tested with CherryPy 5.0.1\n-\n-\n-from PyInstaller.utils.hooks import collect_submodules\n-\n-\n-hiddenimports = collect_submodules('cherrypy.wsgiserver')\n\\ No newline at end of file\n", "issue": "cherrypy >= 6.1.0 fails tests\nFrom the cherrypy [changelog](https://github.com/cherrypy/cherrypy/blob/master/CHANGES.txt):\n\n```\n6.1.0\n-----\n\n* Combined wsgiserver2 and wsgiserver3 modules into a\n single module, ``cherrypy.wsgiserver``.\n```\n\n", "before_files": [{"content": "#-----------------------------------------------------------------------------\n# Copyright (c) 2015-2016, PyInstaller Development Team.\n#\n# Distributed under the terms of the GNU General Public License with exception\n# for distributing bootloader.\n#\n# The full license is in the file COPYING.txt, distributed with this software.\n#-----------------------------------------------------------------------------\n#\n# CherryPy is a minimalist Python web framework.\n#\n# http://www.cherrypy.org/\n#\n# Tested with CherryPy 5.0.1\n\n\nfrom PyInstaller.utils.hooks import collect_submodules\n\n\nhiddenimports = collect_submodules('cherrypy.wsgiserver')", "path": "PyInstaller/hooks/hook-cherrypy.py"}]}
| 782 | 214 |
gh_patches_debug_38994
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-1123
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[EHN] Add `jointly` option for `min_max_scale`
<!-- Thank you for your PR!
BEFORE YOU CONTINUE! Please add the appropriate three-letter abbreviation to your title.
The abbreviations can be:
- [DOC]: Documentation fixes.
- [ENH]: Code contributions and new features.
- [TST]: Test-related contributions.
- [INF]: Infrastructure-related contributions.
Also, do not forget to tag the relevant issue here as well.
Finally, as commits come in, don't forget to regularly rebase!
-->
# PR Description
Please describe the changes proposed in the pull request:
- Add an option so that `min_max_scale` can transform either each column separately or all values jointly (a short plain-pandas sketch contrasting the two modes follows after this PR description)
- Default is to transform each column, matching the behavior of [sklearn.preprocessing.MinMaxScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html)
<!-- Doing so provides maintainers with context on what the PR is, and can help us more effectively review your PR. -->
<!-- Please also identify below which issue that has been raised that you are going to close. -->
**This PR resolves #1067.**
<!-- As you go down the PR template, please feel free to delete sections that are irrelevant. -->
# PR Checklist
<!-- This checklist exists for newcomers who are not yet familiar with our requirements. If you are experienced with
the project, please feel free to delete this section. -->
Please ensure that you have done the following:
1. [x] PR in from a fork off your branch. Do not PR from `<your_username>`:`dev`, but rather from `<your_username>`:`<feature-branch_name>`.
<!-- Doing this helps us keep the commit history much cleaner than it would otherwise be. -->
2. [x] If you're not on the contributors list, add yourself to `AUTHORS.md`.
<!-- We'd like to acknowledge your contributions! -->
3. [x] Add a line to `CHANGELOG.md` under the latest version header (i.e. the one that is "on deck") describing the contribution.
- Do use some discretion here; if there are multiple PRs that are related, keep them in a single line.
# Automatic checks
There will be automatic checks run on the PR. These include:
- Building a preview of the docs on Netlify
- Automatically linting the code
- Making sure the code is documented
- Making sure that all tests are passed
- Making sure that code coverage doesn't go down.
# Relevant Reviewers
<!-- Finally, please tag relevant maintainers to review. -->
Please tag maintainers to review.
- @ericmjl
</issue>
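The behavioral difference between the two modes can be shown with plain pandas, independently of pyjanitor; this is just a clarifying sketch using the same toy frame as the docstring examples below.

```python
# Column-wise vs. joint min-max scaling, illustrated with plain pandas.
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [0, 1]})

# Column-wise (sklearn-like default): each column uses its own min/max.
column_wise = (df - df.min()) / (df.max() - df.min())
# -> a: [0.0, 1.0], b: [0.0, 1.0]

# Jointly: one global min/max over every value in the frame.
global_min = df.min().min()
global_max = df.max().max()
joint = (df - global_min) / (global_max - global_min)
# -> a: [0.5, 1.0], b: [0.0, 0.5]
```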
<code>
[start of janitor/functions/min_max_scale.py]
1 from __future__ import annotations
2
3 import pandas_flavor as pf
4 import pandas as pd
5
6 from janitor.utils import deprecated_alias
7 from janitor.utils import deprecated_kwargs
8
9
10 @pf.register_dataframe_method
11 @deprecated_kwargs(
12 "old_min",
13 "old_max",
14 "new_min",
15 "new_max",
16 message=(
17 "The keyword argument {argument!r} of {func_name!r} is deprecated. "
18 "Please use 'feature_range' instead."
19 ),
20 )
21 @deprecated_alias(col_name="column_name")
22 def min_max_scale(
23 df: pd.DataFrame,
24 feature_range: tuple[int | float, int | float] = (0, 1),
25 column_name: str | int | list[str | int] | pd.Index = None,
26 jointly: bool = False,
27 ) -> pd.DataFrame:
28 """
29 Scales DataFrame to between a minimum and maximum value.
30
31 One can optionally set a new target **minimum** and **maximum** value
32 using the `feature_range` keyword argument.
33
34 If `column_name` is specified, then only that column(s) of data is scaled.
35 Otherwise, the entire dataframe is scaled.
36     If `jointly` is `True`, the selected `column_name` columns (or, by default,
37     the entire dataframe) are treated as a single block and scaled jointly.
38     Otherwise, each column of data will be scaled separately.
39
40 Example: Basic usage.
41
42 >>> import pandas as pd
43 >>> import janitor
44 >>> df = pd.DataFrame({'a':[1, 2], 'b':[0, 1]})
45 >>> df.min_max_scale()
46 a b
47 0 0.0 0.0
48 1 1.0 1.0
49 >>> df.min_max_scale(jointly=True)
50 a b
51 0 0.5 0.0
52 1 1.0 0.5
53
54 Example: Setting custom minimum and maximum.
55
56 >>> import pandas as pd
57 >>> import janitor
58 >>> df = pd.DataFrame({'a':[1, 2], 'b':[0, 1]})
59 >>> df.min_max_scale(feature_range=(0, 100))
60 a b
61 0 0.0 0.0
62 1 100.0 100.0
63 >>> df.min_max_scale(feature_range=(0, 100), jointly=True)
64 a b
65 0 50.0 0.0
66 1 100.0 50.0
67
68 Example: Apply min-max to the selected columns.
69
70 >>> import pandas as pd
71 >>> import janitor
72 >>> df = pd.DataFrame({'a':[1, 2], 'b':[0, 1], 'c': [1, 0]})
73 >>> df.min_max_scale(
74 ... feature_range=(0, 100),
75 ... column_name=["a", "c"],
76 ... )
77 a b c
78 0 0.0 0 100.0
79 1 100.0 1 0.0
80 >>> df.min_max_scale(
81 ... feature_range=(0, 100),
82 ... column_name=["a", "c"],
83 ... jointly=True,
84 ... )
85 a b c
86 0 50.0 0 50.0
87 1 100.0 1 0.0
88 >>> df.min_max_scale(feature_range=(0, 100), column_name='a')
89 a b c
90 0 0.0 0 1
91 1 100.0 1 0
92
93 The aforementioned example might be applied to something like scaling the
94 isoelectric points of amino acids. While technically they range from
95 approx 3-10, we can also think of them on the pH scale which ranges from
96 1 to 14. Hence, 3 gets scaled not to 0 but approx. 0.15 instead, while 10
97 gets scaled to approx. 0.69 instead.
98
99 :param df: A pandas DataFrame.
100 :param feature_range: (optional) Desired range of transformed data.
101 :param column_name: (optional) The column on which to perform scaling.
102     :param jointly: (bool) Scale the entire data jointly if True.
103 :returns: A pandas DataFrame with scaled data.
104 :raises ValueError: if `feature_range` isn't tuple type.
105 :raises ValueError: if the length of `feature_range` isn't equal to two.
106 :raises ValueError: if the element of `feature_range` isn't number type.
107 :raises ValueError: if `feature_range[1]` <= `feature_range[0]`.
108
109 Changed in version 0.24.0: Deleted "old_min", "old_max", "new_min", and
110 "new_max" options.
111 Changed in version 0.24.0: Added "feature_range", and "jointly" options.
112 """
113
114 if not (
115 isinstance(feature_range, (tuple, list))
116 and len(feature_range) == 2
117 and all((isinstance(i, (int, float))) for i in feature_range)
118 and feature_range[1] > feature_range[0]
119 ):
120 raise ValueError(
121             "`feature_range` should be a tuple or list of two numbers, "
122             "where the second element is greater than the first"
123 )
124
125 if column_name is not None:
126 df = df.copy() # Avoid to change the original DataFrame.
127
128 old_feature_range = df[column_name].pipe(min_max_value, jointly)
129 df[column_name] = df[column_name].pipe(
130 apply_min_max,
131 *old_feature_range,
132 *feature_range,
133 )
134 else:
135 old_feature_range = df.pipe(min_max_value, jointly)
136 df = df.pipe(
137 apply_min_max,
138 *old_feature_range,
139 *feature_range,
140 )
141
142 return df
143
144
145 def min_max_value(df: pd.DataFrame, jointly: bool) -> tuple:
146 """
147 Return the minimum and maximum of DataFrame.
148
149 Use the `jointly` flag to control returning entire data or each column.
150
151 .. # noqa: DAR101
152 .. # noqa: DAR201
153 """
154
155 if jointly:
156 mmin = df.min().min()
157 mmax = df.max().max()
158 else:
159 mmin = df.min()
160 mmax = df.max()
161
162 return mmin, mmax
163
164
165 def apply_min_max(
166 df: pd.DataFrame,
167 old_min: int | float | pd.Series,
168 old_max: int | float | pd.Series,
169 new_min: int | float | pd.Series,
170 new_max: int | float | pd.Series,
171 ) -> pd.DataFrame:
172 """
173 Apply minimax scaler to DataFrame.
174
175 Notes
176 -----
177 - Inputting minimum and maximum type
178 - int or float : It will apply minimax to the entire DataFrame.
179 - Series : It will apply minimax to each column.
180
181 .. # noqa: DAR101
182 .. # noqa: DAR201
183 """
184
185 old_range = old_max - old_min
186 new_range = new_max - new_min
187
188 return (df - old_min) * new_range / old_range + new_min
189
[end of janitor/functions/min_max_scale.py]
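The rescaling in `apply_min_max` is the usual affine map from the observed range onto `feature_range`; a quick scalar check of that formula (illustration only):

```python
# Scalar sanity check of the affine rescaling used in `apply_min_max`.
old_min, old_max = 0.0, 1.0      # observed range of a column
new_min, new_max = 0.0, 100.0    # requested feature_range

def rescale(x):
    old_range = old_max - old_min
    new_range = new_max - new_min
    return (x - old_min) * new_range / old_range + new_min

assert rescale(0.0) == 0.0
assert rescale(0.5) == 50.0
assert rescale(1.0) == 100.0
```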
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/janitor/functions/min_max_scale.py b/janitor/functions/min_max_scale.py
--- a/janitor/functions/min_max_scale.py
+++ b/janitor/functions/min_max_scale.py
@@ -96,6 +96,12 @@
1 to 14. Hence, 3 gets scaled not to 0 but approx. 0.15 instead, while 10
gets scaled to approx. 0.69 instead.
+ !!! summary "Version Changed"
+
+ - 0.24.0
+ - Deleted `old_min`, `old_max`, `new_min`, and `new_max` options.
+ - Added `feature_range`, and `jointly` options.
+
:param df: A pandas DataFrame.
:param feature_range: (optional) Desired range of transformed data.
:param column_name: (optional) The column on which to perform scaling.
@@ -105,10 +111,6 @@
:raises ValueError: if the length of `feature_range` isn't equal to two.
:raises ValueError: if the element of `feature_range` isn't number type.
:raises ValueError: if `feature_range[1]` <= `feature_range[0]`.
-
- Changed in version 0.24.0: Deleted "old_min", "old_max", "new_min", and
- "new_max" options.
- Changed in version 0.24.0: Added "feature_range", and "jointly" options.
"""
if not (
@@ -125,16 +127,16 @@
if column_name is not None:
df = df.copy() # Avoid to change the original DataFrame.
- old_feature_range = df[column_name].pipe(min_max_value, jointly)
+ old_feature_range = df[column_name].pipe(_min_max_value, jointly)
df[column_name] = df[column_name].pipe(
- apply_min_max,
+ _apply_min_max,
*old_feature_range,
*feature_range,
)
else:
- old_feature_range = df.pipe(min_max_value, jointly)
+ old_feature_range = df.pipe(_min_max_value, jointly)
df = df.pipe(
- apply_min_max,
+ _apply_min_max,
*old_feature_range,
*feature_range,
)
@@ -142,7 +144,7 @@
return df
-def min_max_value(df: pd.DataFrame, jointly: bool) -> tuple:
+def _min_max_value(df: pd.DataFrame, jointly: bool) -> tuple:
"""
Return the minimum and maximum of DataFrame.
@@ -162,7 +164,7 @@
return mmin, mmax
-def apply_min_max(
+def _apply_min_max(
df: pd.DataFrame,
old_min: int | float | pd.Series,
old_max: int | float | pd.Series,
|
{"golden_diff": "diff --git a/janitor/functions/min_max_scale.py b/janitor/functions/min_max_scale.py\n--- a/janitor/functions/min_max_scale.py\n+++ b/janitor/functions/min_max_scale.py\n@@ -96,6 +96,12 @@\n 1 to 14. Hence, 3 gets scaled not to 0 but approx. 0.15 instead, while 10\r\n gets scaled to approx. 0.69 instead.\r\n \r\n+ !!! summary \"Version Changed\"\r\n+\r\n+ - 0.24.0\r\n+ - Deleted `old_min`, `old_max`, `new_min`, and `new_max` options.\r\n+ - Added `feature_range`, and `jointly` options.\r\n+\r\n :param df: A pandas DataFrame.\r\n :param feature_range: (optional) Desired range of transformed data.\r\n :param column_name: (optional) The column on which to perform scaling.\r\n@@ -105,10 +111,6 @@\n :raises ValueError: if the length of `feature_range` isn't equal to two.\r\n :raises ValueError: if the element of `feature_range` isn't number type.\r\n :raises ValueError: if `feature_range[1]` <= `feature_range[0]`.\r\n-\r\n- Changed in version 0.24.0: Deleted \"old_min\", \"old_max\", \"new_min\", and\r\n- \"new_max\" options.\r\n- Changed in version 0.24.0: Added \"feature_range\", and \"jointly\" options.\r\n \"\"\"\r\n \r\n if not (\r\n@@ -125,16 +127,16 @@\n if column_name is not None:\r\n df = df.copy() # Avoid to change the original DataFrame.\r\n \r\n- old_feature_range = df[column_name].pipe(min_max_value, jointly)\r\n+ old_feature_range = df[column_name].pipe(_min_max_value, jointly)\r\n df[column_name] = df[column_name].pipe(\r\n- apply_min_max,\r\n+ _apply_min_max,\r\n *old_feature_range,\r\n *feature_range,\r\n )\r\n else:\r\n- old_feature_range = df.pipe(min_max_value, jointly)\r\n+ old_feature_range = df.pipe(_min_max_value, jointly)\r\n df = df.pipe(\r\n- apply_min_max,\r\n+ _apply_min_max,\r\n *old_feature_range,\r\n *feature_range,\r\n )\r\n@@ -142,7 +144,7 @@\n return df\r\n \r\n \r\n-def min_max_value(df: pd.DataFrame, jointly: bool) -> tuple:\r\n+def _min_max_value(df: pd.DataFrame, jointly: bool) -> tuple:\r\n \"\"\"\r\n Return the minimum and maximum of DataFrame.\r\n \r\n@@ -162,7 +164,7 @@\n return mmin, mmax\r\n \r\n \r\n-def apply_min_max(\r\n+def _apply_min_max(\r\n df: pd.DataFrame,\r\n old_min: int | float | pd.Series,\r\n old_max: int | float | pd.Series,\n", "issue": "[EHN] Add `jointly` option for `min_max_scale`\n<!-- Thank you for your PR!\r\n\r\nBEFORE YOU CONTINUE! Please add the appropriate three-letter abbreviation to your title.\r\n\r\nThe abbreviations can be:\r\n- [DOC]: Documentation fixes.\r\n- [ENH]: Code contributions and new features.\r\n- [TST]: Test-related contributions.\r\n- [INF]: Infrastructure-related contributions.\r\n\r\nAlso, do not forget to tag the relevant issue here as well.\r\n\r\nFinally, as commits come in, don't forget to regularly rebase!\r\n-->\r\n\r\n# PR Description\r\n\r\nPlease describe the changes proposed in the pull request:\r\n\r\n- Add an option for `min_max_scale` support to transform each column values or entire values\r\n- Default transform each column, similar behavior to [sklearn.preprocessing.MinMaxScaler](http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html)\r\n\r\n<!-- Doing so provides maintainers with context on what the PR is, and can help us more effectively review your PR. -->\r\n\r\n<!-- Please also identify below which issue that has been raised that you are going to close. -->\r\n\r\n**This PR resolves #1067.**\r\n\r\n<!-- As you go down the PR template, please feel free to delete sections that are irrelevant. 
-->\r\n\r\n# PR Checklist\r\n\r\n<!-- This checklist exists for newcomers who are not yet familiar with our requirements. If you are experienced with\r\nthe project, please feel free to delete this section. -->\r\n\r\nPlease ensure that you have done the following:\r\n\r\n1. [x] PR in from a fork off your branch. Do not PR from `<your_username>`:`dev`, but rather from `<your_username>`:`<feature-branch_name>`.\r\n<!-- Doing this helps us keep the commit history much cleaner than it would otherwise be. -->\r\n2. [x] If you're not on the contributors list, add yourself to `AUTHORS.md`.\r\n<!-- We'd like to acknowledge your contributions! -->\r\n3. [x] Add a line to `CHANGELOG.md` under the latest version header (i.e. the one that is \"on deck\") describing the contribution.\r\n - Do use some discretion here; if there are multiple PRs that are related, keep them in a single line.\r\n\r\n# Automatic checks\r\n\r\nThere will be automatic checks run on the PR. These include:\r\n\r\n- Building a preview of the docs on Netlify\r\n- Automatically linting the code\r\n- Making sure the code is documented\r\n- Making sure that all tests are passed\r\n- Making sure that code coverage doesn't go down.\r\n\r\n# Relevant Reviewers\r\n\r\n<!-- Finally, please tag relevant maintainers to review. -->\r\n\r\nPlease tag maintainers to review.\r\n\r\n- @ericmjl\r\n\n", "before_files": [{"content": "from __future__ import annotations\r\n\r\nimport pandas_flavor as pf\r\nimport pandas as pd\r\n\r\nfrom janitor.utils import deprecated_alias\r\nfrom janitor.utils import deprecated_kwargs\r\n\r\n\r\[email protected]_dataframe_method\r\n@deprecated_kwargs(\r\n \"old_min\",\r\n \"old_max\",\r\n \"new_min\",\r\n \"new_max\",\r\n message=(\r\n \"The keyword argument {argument!r} of {func_name!r} is deprecated. \"\r\n \"Please use 'feature_range' instead.\"\r\n ),\r\n)\r\n@deprecated_alias(col_name=\"column_name\")\r\ndef min_max_scale(\r\n df: pd.DataFrame,\r\n feature_range: tuple[int | float, int | float] = (0, 1),\r\n column_name: str | int | list[str | int] | pd.Index = None,\r\n jointly: bool = False,\r\n) -> pd.DataFrame:\r\n \"\"\"\r\n Scales DataFrame to between a minimum and maximum value.\r\n\r\n One can optionally set a new target **minimum** and **maximum** value\r\n using the `feature_range` keyword argument.\r\n\r\n If `column_name` is specified, then only that column(s) of data is scaled.\r\n Otherwise, the entire dataframe is scaled.\r\n If `jointly` is `True`, the `column_names` provided entire dataframe will\r\n be regnozied as the one to jointly scale. Otherwise, each column of data\r\n will be scaled separately.\r\n\r\n Example: Basic usage.\r\n\r\n >>> import pandas as pd\r\n >>> import janitor\r\n >>> df = pd.DataFrame({'a':[1, 2], 'b':[0, 1]})\r\n >>> df.min_max_scale()\r\n a b\r\n 0 0.0 0.0\r\n 1 1.0 1.0\r\n >>> df.min_max_scale(jointly=True)\r\n a b\r\n 0 0.5 0.0\r\n 1 1.0 0.5\r\n\r\n Example: Setting custom minimum and maximum.\r\n\r\n >>> import pandas as pd\r\n >>> import janitor\r\n >>> df = pd.DataFrame({'a':[1, 2], 'b':[0, 1]})\r\n >>> df.min_max_scale(feature_range=(0, 100))\r\n a b\r\n 0 0.0 0.0\r\n 1 100.0 100.0\r\n >>> df.min_max_scale(feature_range=(0, 100), jointly=True)\r\n a b\r\n 0 50.0 0.0\r\n 1 100.0 50.0\r\n\r\n Example: Apply min-max to the selected columns.\r\n\r\n >>> import pandas as pd\r\n >>> import janitor\r\n >>> df = pd.DataFrame({'a':[1, 2], 'b':[0, 1], 'c': [1, 0]})\r\n >>> df.min_max_scale(\r\n ... feature_range=(0, 100),\r\n ... column_name=[\"a\", \"c\"],\r\n ... 
)\r\n a b c\r\n 0 0.0 0 100.0\r\n 1 100.0 1 0.0\r\n >>> df.min_max_scale(\r\n ... feature_range=(0, 100),\r\n ... column_name=[\"a\", \"c\"],\r\n ... jointly=True,\r\n ... )\r\n a b c\r\n 0 50.0 0 50.0\r\n 1 100.0 1 0.0\r\n >>> df.min_max_scale(feature_range=(0, 100), column_name='a')\r\n a b c\r\n 0 0.0 0 1\r\n 1 100.0 1 0\r\n\r\n The aforementioned example might be applied to something like scaling the\r\n isoelectric points of amino acids. While technically they range from\r\n approx 3-10, we can also think of them on the pH scale which ranges from\r\n 1 to 14. Hence, 3 gets scaled not to 0 but approx. 0.15 instead, while 10\r\n gets scaled to approx. 0.69 instead.\r\n\r\n :param df: A pandas DataFrame.\r\n :param feature_range: (optional) Desired range of transformed data.\r\n :param column_name: (optional) The column on which to perform scaling.\r\n :param jointly: (bool) Scale the entire data if Ture.\r\n :returns: A pandas DataFrame with scaled data.\r\n :raises ValueError: if `feature_range` isn't tuple type.\r\n :raises ValueError: if the length of `feature_range` isn't equal to two.\r\n :raises ValueError: if the element of `feature_range` isn't number type.\r\n :raises ValueError: if `feature_range[1]` <= `feature_range[0]`.\r\n\r\n Changed in version 0.24.0: Deleted \"old_min\", \"old_max\", \"new_min\", and\r\n \"new_max\" options.\r\n Changed in version 0.24.0: Added \"feature_range\", and \"jointly\" options.\r\n \"\"\"\r\n\r\n if not (\r\n isinstance(feature_range, (tuple, list))\r\n and len(feature_range) == 2\r\n and all((isinstance(i, (int, float))) for i in feature_range)\r\n and feature_range[1] > feature_range[0]\r\n ):\r\n raise ValueError(\r\n \"`feature_range` should be a range type contains number element, \"\r\n \"the first element must be greater than the second one\"\r\n )\r\n\r\n if column_name is not None:\r\n df = df.copy() # Avoid to change the original DataFrame.\r\n\r\n old_feature_range = df[column_name].pipe(min_max_value, jointly)\r\n df[column_name] = df[column_name].pipe(\r\n apply_min_max,\r\n *old_feature_range,\r\n *feature_range,\r\n )\r\n else:\r\n old_feature_range = df.pipe(min_max_value, jointly)\r\n df = df.pipe(\r\n apply_min_max,\r\n *old_feature_range,\r\n *feature_range,\r\n )\r\n\r\n return df\r\n\r\n\r\ndef min_max_value(df: pd.DataFrame, jointly: bool) -> tuple:\r\n \"\"\"\r\n Return the minimum and maximum of DataFrame.\r\n\r\n Use the `jointly` flag to control returning entire data or each column.\r\n\r\n .. # noqa: DAR101\r\n .. # noqa: DAR201\r\n \"\"\"\r\n\r\n if jointly:\r\n mmin = df.min().min()\r\n mmax = df.max().max()\r\n else:\r\n mmin = df.min()\r\n mmax = df.max()\r\n\r\n return mmin, mmax\r\n\r\n\r\ndef apply_min_max(\r\n df: pd.DataFrame,\r\n old_min: int | float | pd.Series,\r\n old_max: int | float | pd.Series,\r\n new_min: int | float | pd.Series,\r\n new_max: int | float | pd.Series,\r\n) -> pd.DataFrame:\r\n \"\"\"\r\n Apply minimax scaler to DataFrame.\r\n\r\n Notes\r\n -----\r\n - Inputting minimum and maximum type\r\n - int or float : It will apply minimax to the entire DataFrame.\r\n - Series : It will apply minimax to each column.\r\n\r\n .. # noqa: DAR101\r\n .. # noqa: DAR201\r\n \"\"\"\r\n\r\n old_range = old_max - old_min\r\n new_range = new_max - new_min\r\n\r\n return (df - old_min) * new_range / old_range + new_min\r\n", "path": "janitor/functions/min_max_scale.py"}]}
| 3,220 | 654 |
gh_patches_debug_25371
|
rasdani/github-patches
|
git_diff
|
vispy__vispy-463
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug when running Vispy offline for the first time
There appears to be a bug when you run Vispy offline and you don't have the freetype dependency already downloaded. Not completely sure about the exact conditions responsible for the crash; it requires some testing...
</issue>
<code>
[start of vispy/util/fonts/_freetype.py]
1 # -*- coding: utf-8 -*-
2 # -----------------------------------------------------------------------------
3 # Copyright (c) 2014, Vispy Development Team. All Rights Reserved.
4 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
5 # -----------------------------------------------------------------------------
6
7 # Use freetype to get glyph bitmaps
8
9 import sys
10 import numpy as np
11
12 from ...ext.freetype import (FT_LOAD_RENDER, FT_LOAD_NO_HINTING,
13 FT_LOAD_NO_AUTOHINT, Face)
14
15
16 # Convert face to filename
17 from ._vispy_fonts import _vispy_fonts, _get_vispy_font_filename
18 if sys.platform.startswith('linux'):
19 from ...ext.fontconfig import find_font
20 elif sys.platform.startswith('win'):
21 from ._win32 import find_font # noqa, analysis:ignore
22 else:
23 raise NotImplementedError
24
25 _font_dict = {}
26
27
28 def _load_font(face, bold, italic):
29 key = '%s-%s-%s' % (face, bold, italic)
30 if key in _font_dict:
31 return _font_dict[key]
32 if face in _vispy_fonts:
33 fname = _get_vispy_font_filename(face, bold, italic)
34 else:
35 fname = find_font(face, bold, italic)
36 font = Face(fname)
37 _font_dict[key] = font
38 return font
39
40
41 def _load_glyph(f, char, glyphs_dict):
42 """Load glyph from font into dict"""
43 flags = FT_LOAD_RENDER | FT_LOAD_NO_HINTING | FT_LOAD_NO_AUTOHINT
44 face = _load_font(f['face'], f['bold'], f['italic'])
45 face.set_char_size(f['size'] * 64)
46 # get the character of interest
47 face.load_char(char, flags)
48 bitmap = face.glyph.bitmap
49 width = face.glyph.bitmap.width
50 height = face.glyph.bitmap.rows
51 bitmap = np.array(bitmap.buffer)
52 w0 = bitmap.size // height if bitmap.size > 0 else 0
53 bitmap.shape = (height, w0)
54 bitmap = bitmap[:, :width].astype(np.ubyte)
55
56 left = face.glyph.bitmap_left
57 top = face.glyph.bitmap_top
58 advance = face.glyph.advance.x / 64.
59 glyph = dict(char=char, offset=(left, top), bitmap=bitmap,
60 advance=advance, kerning={})
61 glyphs_dict[char] = glyph
62 # Generate kerning
63 for other_char, other_glyph in glyphs_dict.items():
64 kerning = face.get_kerning(other_char, char)
65 glyph['kerning'][other_char] = kerning.x / 64.
66 kerning = face.get_kerning(char, other_char)
67 other_glyph['kerning'][char] = kerning.x / 64.
68
[end of vispy/util/fonts/_freetype.py]
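Because the freetype import sits at module level, simply importing the fonts package already touches the bundled freetype machinery, which is presumably where an offline first run can fail. One common way to soften that is to defer the import until a font is actually requested; the sketch below shows the pattern only and is not necessarily the fix the project adopts.

```python
# Sketch of the lazy-import pattern (illustration only; the actual fix may differ).
_font_cache = {}

def load_font(filename):
    # Deferred import: nothing freetype-related happens at module import time.
    from vispy.ext.freetype import Face
    if filename not in _font_cache:
        _font_cache[filename] = Face(filename)
    return _font_cache[filename]
```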
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/vispy/util/fonts/_freetype.py b/vispy/util/fonts/_freetype.py
--- a/vispy/util/fonts/_freetype.py
+++ b/vispy/util/fonts/_freetype.py
@@ -9,9 +9,6 @@
import sys
import numpy as np
-from ...ext.freetype import (FT_LOAD_RENDER, FT_LOAD_NO_HINTING,
- FT_LOAD_NO_AUTOHINT, Face)
-
# Convert face to filename
from ._vispy_fonts import _vispy_fonts, _get_vispy_font_filename
@@ -25,7 +22,11 @@
_font_dict = {}
+# Nest freetype imports in case someone doesn't have freetype on their system
+# and isn't using fonts (Windows)
+
def _load_font(face, bold, italic):
+ from ...ext.freetype import Face
key = '%s-%s-%s' % (face, bold, italic)
if key in _font_dict:
return _font_dict[key]
@@ -40,6 +41,8 @@
def _load_glyph(f, char, glyphs_dict):
"""Load glyph from font into dict"""
+ from ...ext.freetype import (FT_LOAD_RENDER, FT_LOAD_NO_HINTING,
+ FT_LOAD_NO_AUTOHINT)
flags = FT_LOAD_RENDER | FT_LOAD_NO_HINTING | FT_LOAD_NO_AUTOHINT
face = _load_font(f['face'], f['bold'], f['italic'])
face.set_char_size(f['size'] * 64)
|
{"golden_diff": "diff --git a/vispy/util/fonts/_freetype.py b/vispy/util/fonts/_freetype.py\n--- a/vispy/util/fonts/_freetype.py\n+++ b/vispy/util/fonts/_freetype.py\n@@ -9,9 +9,6 @@\n import sys\n import numpy as np\n \n-from ...ext.freetype import (FT_LOAD_RENDER, FT_LOAD_NO_HINTING,\n- FT_LOAD_NO_AUTOHINT, Face)\n-\n \n # Convert face to filename\n from ._vispy_fonts import _vispy_fonts, _get_vispy_font_filename\n@@ -25,7 +22,11 @@\n _font_dict = {}\n \n \n+# Nest freetype imports in case someone doesn't have freetype on their system\n+# and isn't using fonts (Windows)\n+\n def _load_font(face, bold, italic):\n+ from ...ext.freetype import Face\n key = '%s-%s-%s' % (face, bold, italic)\n if key in _font_dict:\n return _font_dict[key]\n@@ -40,6 +41,8 @@\n \n def _load_glyph(f, char, glyphs_dict):\n \"\"\"Load glyph from font into dict\"\"\"\n+ from ...ext.freetype import (FT_LOAD_RENDER, FT_LOAD_NO_HINTING,\n+ FT_LOAD_NO_AUTOHINT)\n flags = FT_LOAD_RENDER | FT_LOAD_NO_HINTING | FT_LOAD_NO_AUTOHINT\n face = _load_font(f['face'], f['bold'], f['italic'])\n face.set_char_size(f['size'] * 64)\n", "issue": "Bug when running Vispy offline for the first time\nThere appears to be a bug when you run Vispy offline and you don't have the freetype thing already downloaded. Not completely sure about the exact conditions responsible for the crash, require some testing...\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# -----------------------------------------------------------------------------\n# Copyright (c) 2014, Vispy Development Team. All Rights Reserved.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n# -----------------------------------------------------------------------------\n\n# Use freetype to get glyph bitmaps\n\nimport sys\nimport numpy as np\n\nfrom ...ext.freetype import (FT_LOAD_RENDER, FT_LOAD_NO_HINTING,\n FT_LOAD_NO_AUTOHINT, Face)\n\n\n# Convert face to filename\nfrom ._vispy_fonts import _vispy_fonts, _get_vispy_font_filename\nif sys.platform.startswith('linux'):\n from ...ext.fontconfig import find_font\nelif sys.platform.startswith('win'):\n from ._win32 import find_font # noqa, analysis:ignore\nelse:\n raise NotImplementedError\n\n_font_dict = {}\n\n\ndef _load_font(face, bold, italic):\n key = '%s-%s-%s' % (face, bold, italic)\n if key in _font_dict:\n return _font_dict[key]\n if face in _vispy_fonts:\n fname = _get_vispy_font_filename(face, bold, italic)\n else:\n fname = find_font(face, bold, italic)\n font = Face(fname)\n _font_dict[key] = font\n return font\n\n\ndef _load_glyph(f, char, glyphs_dict):\n \"\"\"Load glyph from font into dict\"\"\"\n flags = FT_LOAD_RENDER | FT_LOAD_NO_HINTING | FT_LOAD_NO_AUTOHINT\n face = _load_font(f['face'], f['bold'], f['italic'])\n face.set_char_size(f['size'] * 64)\n # get the character of interest\n face.load_char(char, flags)\n bitmap = face.glyph.bitmap\n width = face.glyph.bitmap.width\n height = face.glyph.bitmap.rows\n bitmap = np.array(bitmap.buffer)\n w0 = bitmap.size // height if bitmap.size > 0 else 0\n bitmap.shape = (height, w0)\n bitmap = bitmap[:, :width].astype(np.ubyte)\n\n left = face.glyph.bitmap_left\n top = face.glyph.bitmap_top\n advance = face.glyph.advance.x / 64.\n glyph = dict(char=char, offset=(left, top), bitmap=bitmap,\n advance=advance, kerning={})\n glyphs_dict[char] = glyph\n # Generate kerning\n for other_char, other_glyph in glyphs_dict.items():\n kerning = face.get_kerning(other_char, char)\n glyph['kerning'][other_char] = kerning.x / 64.\n kerning = 
face.get_kerning(char, other_char)\n other_glyph['kerning'][char] = kerning.x / 64.\n", "path": "vispy/util/fonts/_freetype.py"}]}
| 1,316 | 339 |
gh_patches_debug_4608
|
rasdani/github-patches
|
git_diff
|
pyro-ppl__pyro-3101
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`poutine.block` does not work in Python 3.10
### Reproducible code
```python
from pyro import poutine
class A:
@poutine.block
def run(self):
return 1
a = A()
a.run() # error
```
This causes the issue https://github.com/pyro-ppl/pyro/issues/3018
[Support for Python 3.10] MCMC example in documentation does not work: `AttributeError: __enter__`
I am following the MCMC example in the documentation: <https://docs.pyro.ai/en/stable/mcmc.html#nuts>
```python
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS
import torch
true_coefs = torch.tensor([1., 2., 3.])
data = torch.randn(2000, 3)
dim = 3
labels = dist.Bernoulli(logits=(true_coefs * data).sum(-1)).sample()
def model(data):
coefs_mean = torch.zeros(dim)
coefs = pyro.sample('beta', dist.Normal(coefs_mean, torch.ones(3)))
y = pyro.sample('y', dist.Bernoulli(logits=(coefs * data).sum(-1)), obs=labels)
return y
nuts_kernel = NUTS(model, adapt_step_size=True)
mcmc = MCMC(nuts_kernel, num_samples=500, warmup_steps=300)
mcmc.run(data)
```
But it raises an error:
```
Traceback (most recent call last):
File "/home/ayaka/Projects/test/main.py", line 19, in <module>
mcmc.run(data)
File "/home/ayaka/venv/lib/python3.10/site-packages/pyro/poutine/messenger.py", line 11, in _context_wrap
with context:
AttributeError: __enter__
```
Pyro version: 1.8.0
</issue>
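The traceback lands in `_context_wrap`, which is reached through the `_bound_partial` wrapper that `Messenger.__call__` returns (see `pyro/poutine/messenger.py` below). Since `functools.partial` objects are not plain functions, binding them to an instance has to be done explicitly via `__get__`. The standalone sketch below only illustrates that descriptor mechanism; it is not Pyro's actual fix for the 3.10 regression.

```python
# Standalone illustration of binding a partial-based wrapper to an instance
# via __get__ (mechanism only; not Pyro's fix for the Python 3.10 issue).
import functools

def _print_and_call(fn, *args, **kwargs):
    print("calling", fn.__name__)
    return fn(*args, **kwargs)

class _BoundPartial(functools.partial):
    """partial subclass that prepends the instance when accessed on a class."""
    def __get__(self, instance, owner):
        if instance is None:
            return self
        return functools.partial(self, instance)

def trace_calls(fn):
    return _BoundPartial(functools.partial(_print_and_call, fn))

class A:
    @trace_calls
    def run(self):
        return 1

assert A().run() == 1
```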
<code>
[start of pyro/poutine/messenger.py]
1 # Copyright (c) 2017-2019 Uber Technologies, Inc.
2 # SPDX-License-Identifier: Apache-2.0
3
4 from contextlib import contextmanager
5 from functools import partial
6
7 from .runtime import _PYRO_STACK
8
9
10 def _context_wrap(context, fn, *args, **kwargs):
11 with context:
12 return fn(*args, **kwargs)
13
14
15 class _bound_partial(partial):
16 """
17 Converts a (possibly) bound method into a partial function to
18 support class methods as arguments to handlers.
19 """
20
21 def __get__(self, instance, owner):
22 if instance is None:
23 return self
24 return partial(self.func, instance)
25
26
27 def unwrap(fn):
28 """
29 Recursively unwraps poutines.
30 """
31 while True:
32 if isinstance(fn, _bound_partial):
33 fn = fn.func
34 continue
35 if isinstance(fn, partial) and len(fn.args) >= 2:
36 fn = fn.args[1] # extract from partial(handler, fn)
37 continue
38 return fn
39
40
41 class Messenger:
42 """
43 Context manager class that modifies behavior
44 and adds side effects to stochastic functions
45 i.e. callables containing Pyro primitive statements.
46
47 This is the base Messenger class.
48 It implements the default behavior for all Pyro primitives,
49 so that the joint distribution induced by a stochastic function fn
50 is identical to the joint distribution induced by ``Messenger()(fn)``.
51
52 Class of transformers for messages passed during inference.
53 Most inference operations are implemented in subclasses of this.
54 """
55
56 def __call__(self, fn):
57 if not callable(fn):
58 raise ValueError(
59 "{} is not callable, did you mean to pass it as a keyword arg?".format(
60 fn
61 )
62 )
63 wraps = _bound_partial(partial(_context_wrap, self, fn))
64 return wraps
65
66 def __enter__(self):
67 """
68 :returns: self
69 :rtype: pyro.poutine.Messenger
70
71 Installs this messenger at the bottom of the Pyro stack.
72
73 Can be overloaded to add any additional per-call setup functionality,
74 but the derived class must always push itself onto the stack, usually
75 by calling super().__enter__().
76
77 Derived versions cannot be overridden to take arguments
78 and must always return self.
79 """
80 if not (self in _PYRO_STACK):
81 # if this poutine is not already installed,
82 # put it on the bottom of the stack.
83 _PYRO_STACK.append(self)
84
85 # necessary to return self because the return value of __enter__
86 # is bound to VAR in with EXPR as VAR.
87 return self
88 else:
89 # note: currently we raise an error if trying to install a poutine twice.
90 # However, this isn't strictly necessary,
91 # and blocks recursive poutine execution patterns like
92 # like calling self.__call__ inside of self.__call__
93 # or with Handler(...) as p: with p: <BLOCK>
94 # It's hard to imagine use cases for this pattern,
95 # but it could in principle be enabled...
96 raise ValueError("cannot install a Messenger instance twice")
97
98 def __exit__(self, exc_type, exc_value, traceback):
99 """
100 :param exc_type: exception type, e.g. ValueError
101 :param exc_value: exception instance?
102 :param traceback: traceback for exception handling
103 :returns: None
104 :rtype: None
105
106 Removes this messenger from the bottom of the Pyro stack.
107 If an exception is raised, removes this messenger and everything below it.
108 Always called after every execution of self.fn via self.__call__.
109
110 Can be overloaded by derived classes to add any other per-call teardown functionality,
111 but the stack must always be popped by the derived class,
112 usually by calling super().__exit__(*args).
113
114 Derived versions cannot be overridden to take other arguments,
115 and must always return None or False.
116
117 The arguments are the mandatory arguments used by a with statement.
118 Users should never be specifying these.
119 They are all None unless the body of the with statement raised an exception.
120 """
121 if exc_type is None: # callee or enclosed block returned successfully
122 # if the callee or enclosed block returned successfully,
123 # this poutine should be on the bottom of the stack.
124 # If so, remove it from the stack.
125 # if not, raise a ValueError because something really weird happened.
126 if _PYRO_STACK[-1] == self:
127 _PYRO_STACK.pop()
128 else:
129 # should never get here, but just in case...
130 raise ValueError("This Messenger is not on the bottom of the stack")
131 else: # the wrapped function or block raised an exception
132 # poutine exception handling:
133 # when the callee or enclosed block raises an exception,
134 # find this poutine's position in the stack,
135 # then remove it and everything below it in the stack.
136 if self in _PYRO_STACK:
137 loc = _PYRO_STACK.index(self)
138 for i in range(loc, len(_PYRO_STACK)):
139 _PYRO_STACK.pop()
140
141 def _reset(self):
142 pass
143
144 def _process_message(self, msg):
145 """
146 :param msg: current message at a trace site
147 :returns: None
148
149 Process the message by calling appropriate method of itself based
150 on message type. The message is updated in place.
151 """
152 method = getattr(self, "_pyro_{}".format(msg["type"]), None)
153 if method is not None:
154 return method(msg)
155 return None
156
157 def _postprocess_message(self, msg):
158 method = getattr(self, "_pyro_post_{}".format(msg["type"]), None)
159 if method is not None:
160 return method(msg)
161 return None
162
163 @classmethod
164 def register(cls, fn=None, type=None, post=None):
165 """
166 :param fn: function implementing operation
167 :param str type: name of the operation
168 (also passed to :func:`~pyro.poutine.runtime.effectful`)
169 :param bool post: if `True`, use this operation as postprocess
170
171 Dynamically add operations to an effect.
172 Useful for generating wrappers for libraries.
173
174 Example::
175
176 @SomeMessengerClass.register
177 def some_function(msg)
178 ...do_something...
179 return msg
180
181 """
182 if fn is None:
183 return lambda x: cls.register(x, type=type, post=post)
184
185 if type is None:
186 raise ValueError("An operation type name must be provided")
187
188 setattr(cls, "_pyro_" + ("post_" if post else "") + type, staticmethod(fn))
189 return fn
190
191 @classmethod
192 def unregister(cls, fn=None, type=None):
193 """
194 :param fn: function implementing operation
195 :param str type: name of the operation
196 (also passed to :func:`~pyro.poutine.runtime.effectful`)
197
198 Dynamically remove operations from an effect.
199 Useful for removing wrappers from libraries.
200
201 Example::
202
203 SomeMessengerClass.unregister(some_function, "name")
204 """
205 if type is None:
206 raise ValueError("An operation type name must be provided")
207
208 try:
209 delattr(cls, "_pyro_post_" + type)
210 except AttributeError:
211 pass
212
213 try:
214 delattr(cls, "_pyro_" + type)
215 except AttributeError:
216 pass
217
218 return fn
219
220
221 @contextmanager
222 def block_messengers(predicate):
223 """
224 EXPERIMENTAL Context manager to temporarily remove matching messengers from
225 the _PYRO_STACK. Note this does not call the ``.__exit__()`` and
226 ``.__enter__()`` methods.
227
228 This is useful to selectively block enclosing handlers.
229
230 :param callable predicate: A predicate mapping messenger instance to boolean.
231 This mutes all messengers ``m`` for which ``bool(predicate(m)) is True``.
232 :yields: A list of matched messengers that are blocked.
233 """
234 blocked = {}
235 try:
236 for i, messenger in enumerate(_PYRO_STACK):
237 if predicate(messenger):
238 blocked[i] = messenger
239 _PYRO_STACK[i] = Messenger() # trivial messenger
240 yield list(blocked.values())
241 finally:
242 for i, messenger in blocked.items():
243 _PYRO_STACK[i] = messenger
244
[end of pyro/poutine/messenger.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pyro/poutine/messenger.py b/pyro/poutine/messenger.py
--- a/pyro/poutine/messenger.py
+++ b/pyro/poutine/messenger.py
@@ -18,6 +18,14 @@
support class methods as arguments to handlers.
"""
+ # Use '__slots__' for func to avoid the issue
+ # `_bound_partial(_bound_partial(f)).func is f`
+ # in Python 3.10.
+ __slots__ = "func"
+
+ def __init__(self, func):
+ self.func = func
+
def __get__(self, instance, owner):
if instance is None:
return self
|
{"golden_diff": "diff --git a/pyro/poutine/messenger.py b/pyro/poutine/messenger.py\n--- a/pyro/poutine/messenger.py\n+++ b/pyro/poutine/messenger.py\n@@ -18,6 +18,14 @@\n support class methods as arguments to handlers.\n \"\"\"\n \n+ # Use '__slots__' for func to avoid the issue\n+ # `_bound_partial(_bound_partial(f)).func is f`\n+ # in Python 3.10.\n+ __slots__ = \"func\"\n+\n+ def __init__(self, func):\n+ self.func = func\n+\n def __get__(self, instance, owner):\n if instance is None:\n return self\n", "issue": "`poutine.block` does not work in Python 3.10\n### Reproducible code\r\n\r\n```python\r\nfrom pyro import poutine\r\n\r\nclass A:\r\n @poutine.block\r\n def run(self):\r\n return 1\r\n\r\na = A()\r\na.run() # error\r\n```\r\n\r\nThis causes the issue https://github.com/pyro-ppl/pyro/issues/3018\n[Support for Python 3.10] MCMC example in documentation does not work: `AttributeError: __enter__`\nI am following the MCMC example in the documentation: <https://docs.pyro.ai/en/stable/mcmc.html#nuts>\r\n\r\n```python\r\nimport pyro\r\nimport pyro.distributions as dist\r\nfrom pyro.infer import MCMC, NUTS\r\nimport torch\r\n\r\ntrue_coefs = torch.tensor([1., 2., 3.])\r\ndata = torch.randn(2000, 3)\r\ndim = 3\r\nlabels = dist.Bernoulli(logits=(true_coefs * data).sum(-1)).sample()\r\n\r\ndef model(data):\r\n coefs_mean = torch.zeros(dim)\r\n coefs = pyro.sample('beta', dist.Normal(coefs_mean, torch.ones(3)))\r\n y = pyro.sample('y', dist.Bernoulli(logits=(coefs * data).sum(-1)), obs=labels)\r\n return y\r\n\r\nnuts_kernel = NUTS(model, adapt_step_size=True)\r\nmcmc = MCMC(nuts_kernel, num_samples=500, warmup_steps=300)\r\nmcmc.run(data)\r\n```\r\n\r\nBut it raises an error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/ayaka/Projects/test/main.py\", line 19, in <module>\r\n mcmc.run(data)\r\n File \"/home/ayaka/venv/lib/python3.10/site-packages/pyro/poutine/messenger.py\", line 11, in _context_wrap\r\n with context:\r\nAttributeError: __enter__\r\n```\r\n\r\nPyro version: 1.8.0\n", "before_files": [{"content": "# Copyright (c) 2017-2019 Uber Technologies, Inc.\n# SPDX-License-Identifier: Apache-2.0\n\nfrom contextlib import contextmanager\nfrom functools import partial\n\nfrom .runtime import _PYRO_STACK\n\n\ndef _context_wrap(context, fn, *args, **kwargs):\n with context:\n return fn(*args, **kwargs)\n\n\nclass _bound_partial(partial):\n \"\"\"\n Converts a (possibly) bound method into a partial function to\n support class methods as arguments to handlers.\n \"\"\"\n\n def __get__(self, instance, owner):\n if instance is None:\n return self\n return partial(self.func, instance)\n\n\ndef unwrap(fn):\n \"\"\"\n Recursively unwraps poutines.\n \"\"\"\n while True:\n if isinstance(fn, _bound_partial):\n fn = fn.func\n continue\n if isinstance(fn, partial) and len(fn.args) >= 2:\n fn = fn.args[1] # extract from partial(handler, fn)\n continue\n return fn\n\n\nclass Messenger:\n \"\"\"\n Context manager class that modifies behavior\n and adds side effects to stochastic functions\n i.e. 
callables containing Pyro primitive statements.\n\n This is the base Messenger class.\n It implements the default behavior for all Pyro primitives,\n so that the joint distribution induced by a stochastic function fn\n is identical to the joint distribution induced by ``Messenger()(fn)``.\n\n Class of transformers for messages passed during inference.\n Most inference operations are implemented in subclasses of this.\n \"\"\"\n\n def __call__(self, fn):\n if not callable(fn):\n raise ValueError(\n \"{} is not callable, did you mean to pass it as a keyword arg?\".format(\n fn\n )\n )\n wraps = _bound_partial(partial(_context_wrap, self, fn))\n return wraps\n\n def __enter__(self):\n \"\"\"\n :returns: self\n :rtype: pyro.poutine.Messenger\n\n Installs this messenger at the bottom of the Pyro stack.\n\n Can be overloaded to add any additional per-call setup functionality,\n but the derived class must always push itself onto the stack, usually\n by calling super().__enter__().\n\n Derived versions cannot be overridden to take arguments\n and must always return self.\n \"\"\"\n if not (self in _PYRO_STACK):\n # if this poutine is not already installed,\n # put it on the bottom of the stack.\n _PYRO_STACK.append(self)\n\n # necessary to return self because the return value of __enter__\n # is bound to VAR in with EXPR as VAR.\n return self\n else:\n # note: currently we raise an error if trying to install a poutine twice.\n # However, this isn't strictly necessary,\n # and blocks recursive poutine execution patterns like\n # like calling self.__call__ inside of self.__call__\n # or with Handler(...) as p: with p: <BLOCK>\n # It's hard to imagine use cases for this pattern,\n # but it could in principle be enabled...\n raise ValueError(\"cannot install a Messenger instance twice\")\n\n def __exit__(self, exc_type, exc_value, traceback):\n \"\"\"\n :param exc_type: exception type, e.g. 
ValueError\n :param exc_value: exception instance?\n :param traceback: traceback for exception handling\n :returns: None\n :rtype: None\n\n Removes this messenger from the bottom of the Pyro stack.\n If an exception is raised, removes this messenger and everything below it.\n Always called after every execution of self.fn via self.__call__.\n\n Can be overloaded by derived classes to add any other per-call teardown functionality,\n but the stack must always be popped by the derived class,\n usually by calling super().__exit__(*args).\n\n Derived versions cannot be overridden to take other arguments,\n and must always return None or False.\n\n The arguments are the mandatory arguments used by a with statement.\n Users should never be specifying these.\n They are all None unless the body of the with statement raised an exception.\n \"\"\"\n if exc_type is None: # callee or enclosed block returned successfully\n # if the callee or enclosed block returned successfully,\n # this poutine should be on the bottom of the stack.\n # If so, remove it from the stack.\n # if not, raise a ValueError because something really weird happened.\n if _PYRO_STACK[-1] == self:\n _PYRO_STACK.pop()\n else:\n # should never get here, but just in case...\n raise ValueError(\"This Messenger is not on the bottom of the stack\")\n else: # the wrapped function or block raised an exception\n # poutine exception handling:\n # when the callee or enclosed block raises an exception,\n # find this poutine's position in the stack,\n # then remove it and everything below it in the stack.\n if self in _PYRO_STACK:\n loc = _PYRO_STACK.index(self)\n for i in range(loc, len(_PYRO_STACK)):\n _PYRO_STACK.pop()\n\n def _reset(self):\n pass\n\n def _process_message(self, msg):\n \"\"\"\n :param msg: current message at a trace site\n :returns: None\n\n Process the message by calling appropriate method of itself based\n on message type. 
The message is updated in place.\n \"\"\"\n method = getattr(self, \"_pyro_{}\".format(msg[\"type\"]), None)\n if method is not None:\n return method(msg)\n return None\n\n def _postprocess_message(self, msg):\n method = getattr(self, \"_pyro_post_{}\".format(msg[\"type\"]), None)\n if method is not None:\n return method(msg)\n return None\n\n @classmethod\n def register(cls, fn=None, type=None, post=None):\n \"\"\"\n :param fn: function implementing operation\n :param str type: name of the operation\n (also passed to :func:`~pyro.poutine.runtime.effectful`)\n :param bool post: if `True`, use this operation as postprocess\n\n Dynamically add operations to an effect.\n Useful for generating wrappers for libraries.\n\n Example::\n\n @SomeMessengerClass.register\n def some_function(msg)\n ...do_something...\n return msg\n\n \"\"\"\n if fn is None:\n return lambda x: cls.register(x, type=type, post=post)\n\n if type is None:\n raise ValueError(\"An operation type name must be provided\")\n\n setattr(cls, \"_pyro_\" + (\"post_\" if post else \"\") + type, staticmethod(fn))\n return fn\n\n @classmethod\n def unregister(cls, fn=None, type=None):\n \"\"\"\n :param fn: function implementing operation\n :param str type: name of the operation\n (also passed to :func:`~pyro.poutine.runtime.effectful`)\n\n Dynamically remove operations from an effect.\n Useful for removing wrappers from libraries.\n\n Example::\n\n SomeMessengerClass.unregister(some_function, \"name\")\n \"\"\"\n if type is None:\n raise ValueError(\"An operation type name must be provided\")\n\n try:\n delattr(cls, \"_pyro_post_\" + type)\n except AttributeError:\n pass\n\n try:\n delattr(cls, \"_pyro_\" + type)\n except AttributeError:\n pass\n\n return fn\n\n\n@contextmanager\ndef block_messengers(predicate):\n \"\"\"\n EXPERIMENTAL Context manager to temporarily remove matching messengers from\n the _PYRO_STACK. Note this does not call the ``.__exit__()`` and\n ``.__enter__()`` methods.\n\n This is useful to selectively block enclosing handlers.\n\n :param callable predicate: A predicate mapping messenger instance to boolean.\n This mutes all messengers ``m`` for which ``bool(predicate(m)) is True``.\n :yields: A list of matched messengers that are blocked.\n \"\"\"\n blocked = {}\n try:\n for i, messenger in enumerate(_PYRO_STACK):\n if predicate(messenger):\n blocked[i] = messenger\n _PYRO_STACK[i] = Messenger() # trivial messenger\n yield list(blocked.values())\n finally:\n for i, messenger in blocked.items():\n _PYRO_STACK[i] = messenger\n", "path": "pyro/poutine/messenger.py"}]}
| 3,406 | 156 |
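The patch in the record above fixes the Python 3.10 failure by giving `_bound_partial` a writable `func` slot. Below is a compressed, standalone sketch of that descriptor-plus-`partial` pattern, adapted from the messenger module shown in the record; `run_in_context`, `BoundPartial`, and `Trace` are simplified stand-ins, not Pyro APIs.

```python
from functools import partial

def run_in_context(context, fn, *args, **kwargs):
    # Stand-in for Pyro's _context_wrap: run fn under a context manager.
    with context:
        return fn(*args, **kwargs)

class BoundPartial(partial):
    """partial subclass usable as a decorator on methods (cf. _bound_partial)."""

    # Shadow partial's read-only 'func' attribute with a writable slot so the
    # originally wrapped callable is preserved even when partials are nested.
    __slots__ = "func"

    def __init__(self, func):
        self.func = func

    def __get__(self, instance, owner):
        # Bind like a method: prepend the instance to the wrapped callable.
        if instance is None:
            return self
        return partial(self.func, instance)

class Trace:
    def __enter__(self):
        print("enter")
        return self
    def __exit__(self, *exc):
        print("exit")
        return False

class A:
    def run(self):
        return 1

# Hand-wired equivalent of decorating A.run with a context-applying handler:
A.run = BoundPartial(partial(run_in_context, Trace(), A.run))
print(A().run())  # prints "enter", "exit", then 1
```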
gh_patches_debug_8387
|
rasdani/github-patches
|
git_diff
|
elastic__apm-agent-python-1352
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
apm-agent-python and structlog - mapper_parsing_exception for `event.dataset`
**Describe the bug**: When using `structlog` with `elasticapm` and the log processor `elasticapm.structlog_processor`, we have recently seen Logstash refuse our log entries with the following error:
```
[2021-10-05T12:10:10,746][WARN ][logstash.outputs.elasticsearch][main][a2a92c7c0ddf765b1969e7e8d4a302b6deca976af4c80a2d9706ccdf2486267b] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2021.10.05", :routing=>nil}, {"stage_environment"=>"prod", "service.name"=>"flow", "company"=>"<PayerCompany: Hidden Company Name [Parent Company: Hidden Company Name [CRN: DKXXXXXXXX]] [CRN: DKXXXXXXXX]>", "@version"=>"1", "host"=>"167.71.1.240", "sentry"=>"skipped", "timestamp"=>"2021-10-05T12:10:00.483890Z", "logger"=>"account_service.models", "event.dataset"=>"flow", "event"=>"PayerCompany change state request", "level"=>"debug", "port"=>58652, "new"=>"APPROVED", "override"=>false, "@timestamp"=>2021-10-05T12:10:10.462Z, "old"=>"APPROVED", "modline"=>"account_service.models:159"}], :response=>{"index"=>{"_index"=>"logstash-2021.10.05", "_type"=>"_doc", "_id"=>"ST1cUHwBFM723LU2e_JV", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Could not dynamically add mapping for field [event.dataset]. Existing mapping for [event] must be of type object but found [text]."}}}}
```
We setup our structlog like this in the Django settings file:
```python
processors = [
structlog.stdlib.filter_by_level,
structlog.stdlib.add_log_level,
structlog.stdlib.add_logger_name,
log_processors.add_module_and_lineno,
log_processors.normalize_datatypes,
log_processors.attach_environment,
log_processors.timestamper,
structlog_processor, # this is the processor that seems to cause the error
SentryJsonProcessor(level=logging.ERROR, tag_keys="__all__"),
structlog.stdlib.PositionalArgumentsFormatter(),
structlog.processors.StackInfoRenderer(),
structlog.processors.format_exc_info,
structlog.processors.UnicodeDecoder(),
structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
]
# Structlog
structlog.configure(
processors=processors,
context_class=structlog.threadlocal.wrap_dict(dict),
logger_factory=structlog.stdlib.LoggerFactory(),
wrapper_class=structlog.stdlib.BoundLogger,
cache_logger_on_first_use=True,
)
```
If we remove `structlog_processor`, all log entries are received by Logstash with no problems.
If we write a small custom log processor that strips the `event.dataset` key (e.g. `del event_dict["event.dataset"]`), it also works fine again.
**To Reproduce**
1. Setup structlog
2. Add the elasticapm.structlog_processor to the list of processors for structlog
3. Send a log message (e.g. `logger.info("test")`).
**Environment (please complete the following information)**
- OS: Debian (slim)
- Python version: 3.9.7
- Framework and version [e.g. Django 2.1]: 3.2.8
- APM Server version: 7.15
- Agent version: N/A
**Additional context**
We use `python-logstash-async` for delivery of the logs to the logstash server.
requirements.txt (excerpts)
```pip
elastic-apm==6.5.0
elasticsearch==7.15.0
django-structlog==2.1.3
python-logstash-async==2.3.0
structlog==21.1.0
structlog-sentry==1.4.0
```
</issue>
<code>
[start of elasticapm/handlers/structlog.py]
1 # Copyright (c) 2019, Elasticsearch BV
2 # All rights reserved.
3 #
4 # Redistribution and use in source and binary forms, with or without
5 # modification, are permitted provided that the following conditions are met:
6 #
7 # * Redistributions of source code must retain the above copyright notice, this
8 # list of conditions and the following disclaimer.
9 #
10 # * Redistributions in binary form must reproduce the above copyright notice,
11 # this list of conditions and the following disclaimer in the documentation
12 # and/or other materials provided with the distribution.
13 #
14 # * Neither the name of the copyright holder nor the names of its
15 # contributors may be used to endorse or promote products derived from
16 # this software without specific prior written permission.
17 #
18 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
19 # AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
20 # IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
21 # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
22 # FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
23 # DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
24 # SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
25 # CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
26 # OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
27
28
29 from __future__ import absolute_import
30
31 from elasticapm import get_client
32 from elasticapm.traces import execution_context
33
34
35 def structlog_processor(logger, method_name, event_dict):
36 """
37 Add three new entries to the event_dict for any processed events:
38
39 * transaction.id
40 * trace.id
41 * span.id
42
43 Only adds non-None IDs.
44
45 :param logger:
46 Unused (logger instance in structlog)
47 :param method_name:
48 Unused (wrapped method_name)
49 :param event_dict:
50 Event dictionary for the event we're processing
51 :return:
52 `event_dict`, with three new entries.
53 """
54 transaction = execution_context.get_transaction()
55 if transaction:
56 event_dict["transaction.id"] = transaction.id
57 client = get_client()
58 if client:
59 event_dict["service.name"] = client.config.service_name
60 event_dict["event.dataset"] = f"{client.config.service_name}"
61 if transaction and transaction.trace_parent:
62 event_dict["trace.id"] = transaction.trace_parent.trace_id
63 span = execution_context.get_span()
64 if span and span.id:
65 event_dict["span.id"] = span.id
66 return event_dict
67
[end of elasticapm/handlers/structlog.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticapm/handlers/structlog.py b/elasticapm/handlers/structlog.py
--- a/elasticapm/handlers/structlog.py
+++ b/elasticapm/handlers/structlog.py
@@ -57,7 +57,7 @@
client = get_client()
if client:
event_dict["service.name"] = client.config.service_name
- event_dict["event.dataset"] = f"{client.config.service_name}"
+ event_dict["event"] = {"dataset": f"{client.config.service_name}"}
if transaction and transaction.trace_parent:
event_dict["trace.id"] = transaction.trace_parent.trace_id
span = execution_context.get_span()
|
{"golden_diff": "diff --git a/elasticapm/handlers/structlog.py b/elasticapm/handlers/structlog.py\n--- a/elasticapm/handlers/structlog.py\n+++ b/elasticapm/handlers/structlog.py\n@@ -57,7 +57,7 @@\n client = get_client()\n if client:\n event_dict[\"service.name\"] = client.config.service_name\n- event_dict[\"event.dataset\"] = f\"{client.config.service_name}\"\n+ event_dict[\"event\"] = {\"dataset\": f\"{client.config.service_name}\"}\n if transaction and transaction.trace_parent:\n event_dict[\"trace.id\"] = transaction.trace_parent.trace_id\n span = execution_context.get_span()\n", "issue": "apm-agent-python and structlog - mapper_parsing_exception for `event.dataset` \n**Describe the bug**: When using `structlog` and with `elasticapm` and the log processer `elasticapm.structlog_processor`, we have recently seen Logstash refuse our logger with the following error: \r\n\r\n```\r\n[2021-10-05T12:10:10,746][WARN ][logstash.outputs.elasticsearch][main][a2a92c7c0ddf765b1969e7e8d4a302b6deca976af4c80a2d9706ccdf2486267b] Could not index event to Elasticsearch. {:status=>400, :action=>[\"index\", {:_id=>nil, :_index=>\"logstash-2021.10.05\", :routing=>nil}, {\"stage_environment\"=>\"prod\", \"service.name\"=>\"flow\", \"company\"=>\"<PayerCompany: Hidden Company Name [Parent Company: Hidden Company Name [CRN: DKXXXXXXXX]] [CRN: DKXXXXXXXX]>\", \"@version\"=>\"1\", \"host\"=>\"167.71.1.240\", \"sentry\"=>\"skipped\", \"timestamp\"=>\"2021-10-05T12:10:00.483890Z\", \"logger\"=>\"account_service.models\", \"event.dataset\"=>\"flow\", \"event\"=>\"PayerCompany change state request\", \"level\"=>\"debug\", \"port\"=>58652, \"new\"=>\"APPROVED\", \"override\"=>false, \"@timestamp\"=>2021-10-05T12:10:10.462Z, \"old\"=>\"APPROVED\", \"modline\"=>\"account_service.models:159\"}], :response=>{\"index\"=>{\"_index\"=>\"logstash-2021.10.05\", \"_type\"=>\"_doc\", \"_id\"=>\"ST1cUHwBFM723LU2e_JV\", \"status\"=>400, \"error\"=>{\"type\"=>\"mapper_parsing_exception\", \"reason\"=>\"Could not dynamically add mapping for field [event.dataset]. Existing mapping for [event] must be of type object but found [text].\"}}}}\r\n```\r\n\r\nWe setup our structlog like this in the Django settings file: \r\n\r\n```python\r\nprocessors = [\r\n structlog.stdlib.filter_by_level,\r\n structlog.stdlib.add_log_level,\r\n structlog.stdlib.add_logger_name,\r\n log_processors.add_module_and_lineno,\r\n log_processors.normalize_datatypes,\r\n log_processors.attach_environment,\r\n log_processors.timestamper,\r\n structlog_processor, # this is the processor that seems to cause the error\r\n SentryJsonProcessor(level=logging.ERROR, tag_keys=\"__all__\"),\r\n structlog.stdlib.PositionalArgumentsFormatter(),\r\n structlog.processors.StackInfoRenderer(),\r\n structlog.processors.format_exc_info,\r\n structlog.processors.UnicodeDecoder(),\r\n structlog.stdlib.ProcessorFormatter.wrap_for_formatter,\r\n]\r\n\r\n\r\n# Structlog\r\nstructlog.configure(\r\n processors=processors,\r\n context_class=structlog.threadlocal.wrap_dict(dict),\r\n logger_factory=structlog.stdlib.LoggerFactory(),\r\n wrapper_class=structlog.stdlib.BoundLogger,\r\n cache_logger_on_first_use=True,\r\n)\r\n```\r\n\r\nIf we remove `structlog_processor` all loggers are received by Logstash with no problems. \r\n\r\nIf we write a small customer log processor that strips the `event.dataset` (e.g. `del event_dict[\"event.dataset\"]` it also works fine again. \r\n\r\n**To Reproduce**\r\n\r\n1. Setup structlog\r\n2. 
Add the elasticapm.structlog_processor to the list of processors for structlog\r\n3. Send a logger (e.g. `logger.info(\"test\")`). \r\n\r\n**Environment (please complete the following information)**\r\n- OS: Debian (slim)\r\n- Python version: 3.9.7\r\n- Framework and version [e.g. Django 2.1]: 3.2.8\r\n- APM Server version: 7.15\r\n- Agent version: N/A\r\n\r\n\r\n**Additional context**\r\n\r\nWe use `python-logstash-async` for delivery of the logs to the logstash server. \r\n\r\nrequirements.txt (excerpts)\r\n```pip\r\nelastic-apm==6.5.0\r\nelasticsearch==7.15.0\r\ndjango-structlog==2.1.3\r\npython-logstash-async==2.3.0\r\nstructlog==21.1.0\r\nstructlog-sentry==1.4.0\r\n``` \n", "before_files": [{"content": "# Copyright (c) 2019, Elasticsearch BV\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice, this\n# list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# * Neither the name of the copyright holder nor the names of its\n# contributors may be used to endorse or promote products derived from\n# this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE\n# FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR\n# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER\n# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,\n# OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n\n\nfrom __future__ import absolute_import\n\nfrom elasticapm import get_client\nfrom elasticapm.traces import execution_context\n\n\ndef structlog_processor(logger, method_name, event_dict):\n \"\"\"\n Add three new entries to the event_dict for any processed events:\n\n * transaction.id\n * trace.id\n * span.id\n\n Only adds non-None IDs.\n\n :param logger:\n Unused (logger instance in structlog)\n :param method_name:\n Unused (wrapped method_name)\n :param event_dict:\n Event dictionary for the event we're processing\n :return:\n `event_dict`, with three new entries.\n \"\"\"\n transaction = execution_context.get_transaction()\n if transaction:\n event_dict[\"transaction.id\"] = transaction.id\n client = get_client()\n if client:\n event_dict[\"service.name\"] = client.config.service_name\n event_dict[\"event.dataset\"] = f\"{client.config.service_name}\"\n if transaction and transaction.trace_parent:\n event_dict[\"trace.id\"] = transaction.trace_parent.trace_id\n span = execution_context.get_span()\n if span and span.id:\n event_dict[\"span.id\"] = span.id\n return event_dict\n", "path": "elasticapm/handlers/structlog.py"}]}
| 2,243 | 156 |
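As a quick illustration of the workaround the reporter describes, here is a minimal extra structlog processor that removes the dotted `event.dataset` key before the event reaches Logstash. It is a sketch only; the replacement key name is an arbitrary choice, and the processor is meant to sit after `elasticapm.structlog_processor` in the processors list.

```python
def fix_event_dataset(logger, method_name, event_dict):
    """Avoid the Elasticsearch 'event must be an object' mapping conflict."""
    dataset = event_dict.pop("event.dataset", None)
    if dataset is not None:
        # Either drop it entirely (the reporter's workaround) or keep it under
        # a non-conflicting key of your choosing:
        event_dict["event_dataset"] = dataset
    return event_dict

# processors = [
#     ...,
#     structlog_processor,   # elasticapm's processor
#     fix_event_dataset,     # strip/re-nest the conflicting key
#     ...,
# ]
```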
gh_patches_debug_11649
|
rasdani/github-patches
|
git_diff
|
pypa__cibuildwheel-129
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MacOS travis-ci build stalls
I noticed today that our MacOS travis-CI build using cibuildwheel has started stalling at the following point of the cibuildwheel setup:
```bash
+ pip install --upgrade setuptools
Collecting setuptools
Downloading https://files.pythonhosted.org/packages/37/06/754589caf971b0d2d48f151c2586f62902d93dc908e2fd9b9b9f6aa3c9dd/setuptools-40.6.3-py2.py3-none-any.whl (573kB)
Installing collected packages: setuptools
Found existing installation: setuptools 28.8.0
Uninstalling setuptools-28.8.0:
No output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.
Check the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received
The build has been terminated
```
This hasn't affected our Windows/Linux builds, and no changes to our devops pipeline have occurred (and even then, only superficial Python commits to our codebase were made).
This issue happens no matter how many times we restart the build, and seems odd - this step is usually instantaneous on previous Mac cibuildwheel builds.
Since this is a command that is called by cibuildwheel, has there been a recent change that breaks this step?
</issue>
<code>
[start of cibuildwheel/macos.py]
1 from __future__ import print_function
2 import os, subprocess, shlex, sys, shutil
3 from collections import namedtuple
4 from glob import glob
5 try:
6 from shlex import quote as shlex_quote
7 except ImportError:
8 from pipes import quote as shlex_quote
9
10 from .util import prepare_command, get_build_verbosity_extra_flags
11
12
13 def build(project_dir, output_dir, test_command, test_requires, before_build, build_verbosity, build_selector, environment):
14 PythonConfiguration = namedtuple('PythonConfiguration', ['version', 'identifier', 'url'])
15 python_configurations = [
16 PythonConfiguration(version='2.7', identifier='cp27-macosx_10_6_intel', url='https://www.python.org/ftp/python/2.7.15/python-2.7.15-macosx10.6.pkg'),
17 PythonConfiguration(version='3.4', identifier='cp34-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.4.4/python-3.4.4-macosx10.6.pkg'),
18 PythonConfiguration(version='3.5', identifier='cp35-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.5.4/python-3.5.4-macosx10.6.pkg'),
19 PythonConfiguration(version='3.6', identifier='cp36-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.6.8/python-3.6.8-macosx10.6.pkg'),
20 PythonConfiguration(version='3.7', identifier='cp37-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.7.2/python-3.7.2-macosx10.6.pkg'),
21 ]
22 get_pip_url = 'https://bootstrap.pypa.io/get-pip.py'
23 get_pip_script = '/tmp/get-pip.py'
24
25 pkgs_output = subprocess.check_output(['pkgutil', '--pkgs'])
26 if sys.version_info[0] >= 3:
27 pkgs_output = pkgs_output.decode('utf8')
28 installed_system_packages = pkgs_output.splitlines()
29
30 def call(args, env=None, cwd=None, shell=False):
31 # print the command executing for the logs
32 if shell:
33 print('+ %s' % args)
34 else:
35 print('+ ' + ' '.join(shlex_quote(a) for a in args))
36
37 return subprocess.check_call(args, env=env, cwd=cwd, shell=shell)
38
39 abs_project_dir = os.path.abspath(project_dir)
40
41 # get latest pip once and for all
42 call(['curl', '-L', '-o', get_pip_script, get_pip_url])
43
44 for config in python_configurations:
45 if not build_selector(config.identifier):
46 print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)
47 continue
48
49 # if this version of python isn't installed, get it from python.org and install
50 python_package_identifier = 'org.python.Python.PythonFramework-%s' % config.version
51 if python_package_identifier not in installed_system_packages:
52 # download the pkg
53 call(['curl', '-L', '-o', '/tmp/Python.pkg', config.url])
54 # install
55 call(['sudo', 'installer', '-pkg', '/tmp/Python.pkg', '-target', '/'])
56 # patch open ssl
57 if config.version in ('3.4', '3.5'):
58 call(['curl', '-fsSLo', '/tmp/python-patch.tar.gz', 'https://github.com/mayeut/patch-macos-python-openssl/releases/download/v1.0.2q/patch-macos-python-%s-openssl-v1.0.2q.tar.gz' % config.version])
59 call(['sudo', 'tar', '-C', '/Library/Frameworks/Python.framework/Versions/%s/' % config.version, '-xmf', '/tmp/python-patch.tar.gz'])
60
61 installation_bin_path = '/Library/Frameworks/Python.framework/Versions/{}/bin'.format(config.version)
62
63 # Python bin folders on Mac don't symlink python3 to python, so we do that
64 # so `python` and `pip` always point to the active configuration.
65 if os.path.exists('/tmp/cibw_bin'):
66 shutil.rmtree('/tmp/cibw_bin')
67 os.makedirs('/tmp/cibw_bin')
68
69 if config.version[0] == '3':
70 os.symlink(os.path.join(installation_bin_path, 'python3'), '/tmp/cibw_bin/python')
71 os.symlink(os.path.join(installation_bin_path, 'python3-config'), '/tmp/cibw_bin/python-config')
72 os.symlink(os.path.join(installation_bin_path, 'pip3'), '/tmp/cibw_bin/pip')
73
74 env = os.environ.copy()
75 env['PATH'] = os.pathsep.join([
76 '/tmp/cibw_bin',
77 installation_bin_path,
78 env['PATH'],
79 ])
80 env = environment.as_dictionary(prev_environment=env)
81
82 # check what version we're on
83 call(['which', 'python'], env=env)
84 call(['python', '--version'], env=env)
85
86 # install pip & wheel
87 call(['python', get_pip_script, '--no-setuptools', '--no-wheel'], env=env)
88 call(['pip', '--version'], env=env)
89 # sudo required, because the removal of the old version of setuptools might cause problems with newer pip versions (see issue #122)
90 call(['sudo', 'pip', 'install', '--upgrade', 'setuptools'], env=env)
91 call(['pip', 'install', 'wheel'], env=env)
92 call(['pip', 'install', 'delocate'], env=env)
93
94 # setup dirs
95 if os.path.exists('/tmp/built_wheel'):
96 shutil.rmtree('/tmp/built_wheel')
97 os.makedirs('/tmp/built_wheel')
98 if os.path.exists('/tmp/delocated_wheel'):
99 shutil.rmtree('/tmp/delocated_wheel')
100 os.makedirs('/tmp/delocated_wheel')
101
102 # run the before_build command
103 if before_build:
104 before_build_prepared = prepare_command(before_build, project=abs_project_dir)
105 call(before_build_prepared, env=env, shell=True)
106
107 # build the wheel
108 call(['pip', 'wheel', abs_project_dir, '-w', '/tmp/built_wheel', '--no-deps'] + get_build_verbosity_extra_flags(build_verbosity), env=env)
109 built_wheel = glob('/tmp/built_wheel/*.whl')[0]
110
111 if built_wheel.endswith('none-any.whl'):
112 # pure python wheel - just move
113 shutil.move(built_wheel, '/tmp/delocated_wheel')
114 else:
115 # list the dependencies
116 call(['delocate-listdeps', built_wheel], env=env)
117 # rebuild the wheel with shared libraries included and place in output dir
118 call(['delocate-wheel', '-w', '/tmp/delocated_wheel', built_wheel], env=env)
119 delocated_wheel = glob('/tmp/delocated_wheel/*.whl')[0]
120
121 # install the wheel
122 call(['pip', 'install', delocated_wheel], env=env)
123
124 # test the wheel
125 if test_requires:
126 call(['pip', 'install'] + test_requires, env=env)
127 if test_command:
128 # run the tests from $HOME, with an absolute path in the command
129 # (this ensures that Python runs the tests against the installed wheel
130 # and not the repo code)
131 test_command_prepared = prepare_command(test_command, project=abs_project_dir)
132 call(test_command_prepared, cwd=os.environ['HOME'], env=env, shell=True)
133
134 # we're all done here; move it to output (overwrite existing)
135 dst = os.path.join(output_dir, os.path.basename(delocated_wheel))
136 shutil.move(delocated_wheel, dst)
137
[end of cibuildwheel/macos.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cibuildwheel/macos.py b/cibuildwheel/macos.py
--- a/cibuildwheel/macos.py
+++ b/cibuildwheel/macos.py
@@ -86,8 +86,7 @@
# install pip & wheel
call(['python', get_pip_script, '--no-setuptools', '--no-wheel'], env=env)
call(['pip', '--version'], env=env)
- # sudo required, because the removal of the old version of setuptools might cause problems with newer pip versions (see issue #122)
- call(['sudo', 'pip', 'install', '--upgrade', 'setuptools'], env=env)
+ call(['pip', 'install', '--upgrade', 'setuptools'], env=env)
call(['pip', 'install', 'wheel'], env=env)
call(['pip', 'install', 'delocate'], env=env)
|
{"golden_diff": "diff --git a/cibuildwheel/macos.py b/cibuildwheel/macos.py\n--- a/cibuildwheel/macos.py\n+++ b/cibuildwheel/macos.py\n@@ -86,8 +86,7 @@\n # install pip & wheel\n call(['python', get_pip_script, '--no-setuptools', '--no-wheel'], env=env)\n call(['pip', '--version'], env=env)\n- # sudo required, because the removal of the old version of setuptools might cause problems with newer pip versions (see issue #122)\n- call(['sudo', 'pip', 'install', '--upgrade', 'setuptools'], env=env)\n+ call(['pip', 'install', '--upgrade', 'setuptools'], env=env)\n call(['pip', 'install', 'wheel'], env=env)\n call(['pip', 'install', 'delocate'], env=env)\n", "issue": "MacOS travis-ci build stalls\nI noticed today that our MacOS travis-CI build using cibuildwheel has started stalling at the following point of the cibuildwheel setup:\r\n\r\n```bash\r\n+ pip install --upgrade setuptools\r\nCollecting setuptools\r\n Downloading https://files.pythonhosted.org/packages/37/06/754589caf971b0d2d48f151c2586f62902d93dc908e2fd9b9b9f6aa3c9dd/setuptools-40.6.3-py2.py3-none-any.whl (573kB)\r\nInstalling collected packages: setuptools\r\n Found existing installation: setuptools 28.8.0\r\n Uninstalling setuptools-28.8.0:\r\n\r\nNo output has been received in the last 10m0s, this potentially indicates a stalled build or something wrong with the build itself.\r\nCheck the details on how to adjust your build configuration on: https://docs.travis-ci.com/user/common-build-problems/#Build-times-out-because-no-output-was-received\r\nThe build has been terminated\r\n```\r\n\r\nThis hasn't affected our Windows/Linux builds, and no changes to our devops pipeline has occurred (and even then, only superficial Python commits in our codebase were committed). \r\n\r\nThis issue happens no matter how many times we restart the build, and seems odd - this step is usually instantaneous on previous Mac cibuildwheel builds.\r\n\r\nSince this is a command that is called by cibuildwheel, has there been a recent change that breaks this step?\n", "before_files": [{"content": "from __future__ import print_function\nimport os, subprocess, shlex, sys, shutil\nfrom collections import namedtuple\nfrom glob import glob\ntry:\n from shlex import quote as shlex_quote\nexcept ImportError:\n from pipes import quote as shlex_quote\n\nfrom .util import prepare_command, get_build_verbosity_extra_flags\n\n\ndef build(project_dir, output_dir, test_command, test_requires, before_build, build_verbosity, build_selector, environment):\n PythonConfiguration = namedtuple('PythonConfiguration', ['version', 'identifier', 'url'])\n python_configurations = [\n PythonConfiguration(version='2.7', identifier='cp27-macosx_10_6_intel', url='https://www.python.org/ftp/python/2.7.15/python-2.7.15-macosx10.6.pkg'),\n PythonConfiguration(version='3.4', identifier='cp34-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.4.4/python-3.4.4-macosx10.6.pkg'),\n PythonConfiguration(version='3.5', identifier='cp35-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.5.4/python-3.5.4-macosx10.6.pkg'),\n PythonConfiguration(version='3.6', identifier='cp36-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.6.8/python-3.6.8-macosx10.6.pkg'),\n PythonConfiguration(version='3.7', identifier='cp37-macosx_10_6_intel', url='https://www.python.org/ftp/python/3.7.2/python-3.7.2-macosx10.6.pkg'),\n ]\n get_pip_url = 'https://bootstrap.pypa.io/get-pip.py'\n get_pip_script = '/tmp/get-pip.py'\n\n pkgs_output = subprocess.check_output(['pkgutil', '--pkgs'])\n if 
sys.version_info[0] >= 3:\n pkgs_output = pkgs_output.decode('utf8')\n installed_system_packages = pkgs_output.splitlines()\n\n def call(args, env=None, cwd=None, shell=False):\n # print the command executing for the logs\n if shell:\n print('+ %s' % args)\n else:\n print('+ ' + ' '.join(shlex_quote(a) for a in args))\n\n return subprocess.check_call(args, env=env, cwd=cwd, shell=shell)\n\n abs_project_dir = os.path.abspath(project_dir)\n\n # get latest pip once and for all\n call(['curl', '-L', '-o', get_pip_script, get_pip_url])\n\n for config in python_configurations:\n if not build_selector(config.identifier):\n print('cibuildwheel: Skipping build %s' % config.identifier, file=sys.stderr)\n continue\n\n # if this version of python isn't installed, get it from python.org and install\n python_package_identifier = 'org.python.Python.PythonFramework-%s' % config.version\n if python_package_identifier not in installed_system_packages:\n # download the pkg\n call(['curl', '-L', '-o', '/tmp/Python.pkg', config.url])\n # install\n call(['sudo', 'installer', '-pkg', '/tmp/Python.pkg', '-target', '/'])\n # patch open ssl\n if config.version in ('3.4', '3.5'):\n call(['curl', '-fsSLo', '/tmp/python-patch.tar.gz', 'https://github.com/mayeut/patch-macos-python-openssl/releases/download/v1.0.2q/patch-macos-python-%s-openssl-v1.0.2q.tar.gz' % config.version])\n call(['sudo', 'tar', '-C', '/Library/Frameworks/Python.framework/Versions/%s/' % config.version, '-xmf', '/tmp/python-patch.tar.gz'])\n\n installation_bin_path = '/Library/Frameworks/Python.framework/Versions/{}/bin'.format(config.version)\n\n # Python bin folders on Mac don't symlink python3 to python, so we do that\n # so `python` and `pip` always point to the active configuration.\n if os.path.exists('/tmp/cibw_bin'):\n shutil.rmtree('/tmp/cibw_bin')\n os.makedirs('/tmp/cibw_bin')\n\n if config.version[0] == '3':\n os.symlink(os.path.join(installation_bin_path, 'python3'), '/tmp/cibw_bin/python')\n os.symlink(os.path.join(installation_bin_path, 'python3-config'), '/tmp/cibw_bin/python-config')\n os.symlink(os.path.join(installation_bin_path, 'pip3'), '/tmp/cibw_bin/pip')\n\n env = os.environ.copy()\n env['PATH'] = os.pathsep.join([\n '/tmp/cibw_bin',\n installation_bin_path,\n env['PATH'],\n ])\n env = environment.as_dictionary(prev_environment=env)\n\n # check what version we're on\n call(['which', 'python'], env=env)\n call(['python', '--version'], env=env)\n\n # install pip & wheel\n call(['python', get_pip_script, '--no-setuptools', '--no-wheel'], env=env)\n call(['pip', '--version'], env=env)\n # sudo required, because the removal of the old version of setuptools might cause problems with newer pip versions (see issue #122)\n call(['sudo', 'pip', 'install', '--upgrade', 'setuptools'], env=env)\n call(['pip', 'install', 'wheel'], env=env)\n call(['pip', 'install', 'delocate'], env=env)\n\n # setup dirs\n if os.path.exists('/tmp/built_wheel'):\n shutil.rmtree('/tmp/built_wheel')\n os.makedirs('/tmp/built_wheel')\n if os.path.exists('/tmp/delocated_wheel'):\n shutil.rmtree('/tmp/delocated_wheel')\n os.makedirs('/tmp/delocated_wheel')\n\n # run the before_build command\n if before_build:\n before_build_prepared = prepare_command(before_build, project=abs_project_dir)\n call(before_build_prepared, env=env, shell=True)\n\n # build the wheel\n call(['pip', 'wheel', abs_project_dir, '-w', '/tmp/built_wheel', '--no-deps'] + get_build_verbosity_extra_flags(build_verbosity), env=env)\n built_wheel = glob('/tmp/built_wheel/*.whl')[0]\n\n if 
built_wheel.endswith('none-any.whl'):\n # pure python wheel - just move\n shutil.move(built_wheel, '/tmp/delocated_wheel')\n else:\n # list the dependencies\n call(['delocate-listdeps', built_wheel], env=env)\n # rebuild the wheel with shared libraries included and place in output dir\n call(['delocate-wheel', '-w', '/tmp/delocated_wheel', built_wheel], env=env)\n delocated_wheel = glob('/tmp/delocated_wheel/*.whl')[0]\n\n # install the wheel\n call(['pip', 'install', delocated_wheel], env=env)\n\n # test the wheel\n if test_requires:\n call(['pip', 'install'] + test_requires, env=env)\n if test_command:\n # run the tests from $HOME, with an absolute path in the command\n # (this ensures that Python runs the tests against the installed wheel\n # and not the repo code)\n test_command_prepared = prepare_command(test_command, project=abs_project_dir)\n call(test_command_prepared, cwd=os.environ['HOME'], env=env, shell=True)\n\n # we're all done here; move it to output (overwrite existing)\n dst = os.path.join(output_dir, os.path.basename(delocated_wheel))\n shutil.move(delocated_wheel, dst)\n", "path": "cibuildwheel/macos.py"}]}
| 2,949 | 198 |
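The fix in the record above simply drops `sudo` from the setuptools upgrade, reverting the workaround that had been added for issue #122. A hedged, standalone sketch of the post-patch behaviour (the real code uses the `call` helper defined in `macos.py` above):

```python
import subprocess

def pip_install(*args, env=None):
    # Run pip as the invoking user (no sudo), as in the patched macos.py.
    cmd = ['pip', 'install'] + list(args)
    print('+ ' + ' '.join(cmd))
    return subprocess.check_call(cmd, env=env)

# Example (commented out so the sketch has no side effects):
# pip_install('--upgrade', 'setuptools')
```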
gh_patches_debug_16503
|
rasdani/github-patches
|
git_diff
|
bridgecrewio__checkov-1061
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Dockerfile scan fails when in directory and used -f
**Describe the bug**
When running a directory scan, checkov reports the failed checks for the Dockerfile. When scanning the file directly, no errors are shown.
**To Reproduce**
Create Dockerfile in directory `test` with content:
```
FROM debian:buster
ENV CHECKOV_VERSION 1.0.775
RUN export DEBIAN_FRONTEND=noninteractive && \
apt-get -y update && \
apt-get -y --no-install-recommends install wget unzip ca-certificates git python3 python3-pip python3-setuptools python3-wheel && \
pip3 install -U checkov=="${CHECKOV_VERSION}"
```
`checkov -f test/Dockerfile` won't show errors
`checkov -d test` will show error
**Expected behavior**
Show error in both cases.
**Screenshots**
<img width="892" alt="Screenshot 2021-04-10 at 09 39 21" src="https://user-images.githubusercontent.com/672767/114262507-a54dde80-99e0-11eb-9e9e-3e3f5d2d2a7f.png">
**Desktop (please complete the following information):**
- OS: MacOS 11.2.3
- Python: 3.9.4
- Checkov Version 2.0.27
</issue>
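Before the code listing, a two-line illustration of the root cause may help: with `-f test/Dockerfile` the runner compares the full relative path against `DOCKER_FILE_MASK`, so the membership test fails and the file is never parsed. The standalone sketch below mirrors the check before and after the fix in the accompanying patch.

```python
import os

DOCKER_FILE_MASK = ["Dockerfile"]
path = "test/Dockerfile"

print(path in DOCKER_FILE_MASK)                    # False -> file silently skipped
print(os.path.basename(path) in DOCKER_FILE_MASK)  # True  -> file gets parsed
```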
<code>
[start of checkov/dockerfile/runner.py]
1 import logging
2 import os
3 from dockerfile_parse.constants import DOCKERFILE_FILENAME
4
5 from checkov.common.output.record import Record
6 from checkov.common.output.report import Report
7 from checkov.common.runners.base_runner import BaseRunner, filter_ignored_directories
8 from checkov.dockerfile.parser import parse, collect_skipped_checks
9 from checkov.dockerfile.registry import registry
10 from checkov.runner_filter import RunnerFilter
11
12 DOCKER_FILE_MASK = [DOCKERFILE_FILENAME]
13
14
15 class Runner(BaseRunner):
16 check_type = "dockerfile"
17
18 def run(self, root_folder, external_checks_dir=None, files=None, runner_filter=RunnerFilter(),
19 collect_skip_comments=True):
20 report = Report(self.check_type)
21 definitions = {}
22 definitions_raw = {}
23 parsing_errors = {}
24 files_list = []
25 if external_checks_dir:
26 for directory in external_checks_dir:
27 registry.load_external_checks(directory)
28
29 if files:
30 for file in files:
31 if file in DOCKER_FILE_MASK:
32 (definitions[file], definitions_raw[file]) = parse(file)
33
34 if root_folder:
35 for root, d_names, f_names in os.walk(root_folder):
36 filter_ignored_directories(d_names)
37 for file in f_names:
38 if file in DOCKER_FILE_MASK:
39 files_list.append(os.path.join(root, file))
40
41 for file in files_list:
42 relative_file_path = f'/{os.path.relpath(file, os.path.commonprefix((root_folder, file)))}'
43 try:
44 (definitions[relative_file_path], definitions_raw[relative_file_path]) = parse(file)
45 except TypeError:
46 logging.info(f'Dockerfile skipping {file} as it is not a valid dockerfile template')
47
48 for docker_file_path in definitions.keys():
49
50 # There are a few cases here. If -f was used, there could be a leading / because it's an absolute path,
51 # or there will be no leading slash; root_folder will always be none.
52 # If -d is used, root_folder will be the value given, and -f will start with a / (hardcoded above).
53 # The goal here is simply to get a valid path to the file (which docker_file_path does not always give).
54 if docker_file_path[0] == '/':
55 path_to_convert = (root_folder + docker_file_path) if root_folder else docker_file_path
56 else:
57 path_to_convert = (os.path.join(root_folder, docker_file_path)) if root_folder else docker_file_path
58
59 file_abs_path = os.path.abspath(path_to_convert)
60 skipped_checks = collect_skipped_checks(definitions[docker_file_path])
61 instructions = definitions[docker_file_path]
62
63 results = registry.scan(docker_file_path, instructions, skipped_checks,
64 runner_filter)
65 for check, check_result in results.items():
66 result_configuration = check_result['results_configuration']
67 startline = 0
68 endline = 0
69 result_instruction = ""
70 if result_configuration:
71 startline = result_configuration['startline']
72 endline = result_configuration['endline']
73 result_instruction = result_configuration["instruction"]
74
75 codeblock = []
76 self.calc_record_codeblock(codeblock, definitions_raw, docker_file_path, endline, startline)
77 record = Record(check_id=check.id, check_name=check.name, check_result=check_result,
78 code_block=codeblock,
79 file_path=docker_file_path,
80 file_line_range=[startline,
81 endline],
82 resource="{}.{}".format(docker_file_path,
83 result_instruction,
84 startline),
85 evaluations=None, check_class=check.__class__.__module__,
86 file_abs_path=file_abs_path, entity_tags=None)
87 report.add_record(record=record)
88
89 return report
90
91
92 def calc_record_codeblock(self, codeblock, definitions_raw, docker_file_path, endline, startline):
93 for line in range(startline, endline + 1):
94 codeblock.append((line, definitions_raw[docker_file_path][line]))
95
[end of checkov/dockerfile/runner.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/checkov/dockerfile/runner.py b/checkov/dockerfile/runner.py
--- a/checkov/dockerfile/runner.py
+++ b/checkov/dockerfile/runner.py
@@ -15,7 +15,7 @@
class Runner(BaseRunner):
check_type = "dockerfile"
- def run(self, root_folder, external_checks_dir=None, files=None, runner_filter=RunnerFilter(),
+ def run(self, root_folder=None, external_checks_dir=None, files=None, runner_filter=RunnerFilter(),
collect_skip_comments=True):
report = Report(self.check_type)
definitions = {}
@@ -28,7 +28,7 @@
if files:
for file in files:
- if file in DOCKER_FILE_MASK:
+ if os.path.basename(file) in DOCKER_FILE_MASK:
(definitions[file], definitions_raw[file]) = parse(file)
if root_folder:
|
{"golden_diff": "diff --git a/checkov/dockerfile/runner.py b/checkov/dockerfile/runner.py\n--- a/checkov/dockerfile/runner.py\n+++ b/checkov/dockerfile/runner.py\n@@ -15,7 +15,7 @@\n class Runner(BaseRunner):\n check_type = \"dockerfile\"\n \n- def run(self, root_folder, external_checks_dir=None, files=None, runner_filter=RunnerFilter(),\n+ def run(self, root_folder=None, external_checks_dir=None, files=None, runner_filter=RunnerFilter(),\n collect_skip_comments=True):\n report = Report(self.check_type)\n definitions = {}\n@@ -28,7 +28,7 @@\n \n if files:\n for file in files:\n- if file in DOCKER_FILE_MASK:\n+ if os.path.basename(file) in DOCKER_FILE_MASK:\n (definitions[file], definitions_raw[file]) = parse(file)\n \n if root_folder:\n", "issue": "Dockerfile scan fails when in directory and used -f\n**Describe the bug**\r\nWhen running directory scan checkov shows Dockerfile failed checks. When scanning file no errors are shown.\r\n\r\n**To Reproduce**\r\nCreate Dockerfile in directory `test` with content:\r\n```\r\nFROM debian:buster\r\n\r\nENV CHECKOV_VERSION 1.0.775\r\n\r\nRUN export DEBIAN_FRONTEND=noninteractive && \\\r\n apt-get -y update && \\\r\n apt-get -y --no-install-recommends install wget unzip ca-certificates git python3 python3-pip python3-setuptools python3-wheel && \\\r\n pip3 install -U checkov==\"${CHECKOV_VERSION}\"\r\n```\r\n\r\n`checkov -f test/Dockerfile` won't show errors\r\n`checkov -d test` will show error\r\n\r\n**Expected behavior**\r\nShow error in both cases.\r\n\r\n**Screenshots**\r\n<img width=\"892\" alt=\"Screenshot 2021-04-10 at 09 39 21\" src=\"https://user-images.githubusercontent.com/672767/114262507-a54dde80-99e0-11eb-9e9e-3e3f5d2d2a7f.png\">\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: MacOS 11.2.3\r\n - Python: 3.9.4\r\n - Checkov Version 2.0.27\r\n\n", "before_files": [{"content": "import logging\nimport os\nfrom dockerfile_parse.constants import DOCKERFILE_FILENAME\n\nfrom checkov.common.output.record import Record\nfrom checkov.common.output.report import Report\nfrom checkov.common.runners.base_runner import BaseRunner, filter_ignored_directories\nfrom checkov.dockerfile.parser import parse, collect_skipped_checks\nfrom checkov.dockerfile.registry import registry\nfrom checkov.runner_filter import RunnerFilter\n\nDOCKER_FILE_MASK = [DOCKERFILE_FILENAME]\n\n\nclass Runner(BaseRunner):\n check_type = \"dockerfile\"\n\n def run(self, root_folder, external_checks_dir=None, files=None, runner_filter=RunnerFilter(),\n collect_skip_comments=True):\n report = Report(self.check_type)\n definitions = {}\n definitions_raw = {}\n parsing_errors = {}\n files_list = []\n if external_checks_dir:\n for directory in external_checks_dir:\n registry.load_external_checks(directory)\n\n if files:\n for file in files:\n if file in DOCKER_FILE_MASK:\n (definitions[file], definitions_raw[file]) = parse(file)\n\n if root_folder:\n for root, d_names, f_names in os.walk(root_folder):\n filter_ignored_directories(d_names)\n for file in f_names:\n if file in DOCKER_FILE_MASK:\n files_list.append(os.path.join(root, file))\n\n for file in files_list:\n relative_file_path = f'/{os.path.relpath(file, os.path.commonprefix((root_folder, file)))}'\n try:\n (definitions[relative_file_path], definitions_raw[relative_file_path]) = parse(file)\n except TypeError:\n logging.info(f'Dockerfile skipping {file} as it is not a valid dockerfile template')\n\n for docker_file_path in definitions.keys():\n\n # There are a few cases here. 
If -f was used, there could be a leading / because it's an absolute path,\n # or there will be no leading slash; root_folder will always be none.\n # If -d is used, root_folder will be the value given, and -f will start with a / (hardcoded above).\n # The goal here is simply to get a valid path to the file (which docker_file_path does not always give).\n if docker_file_path[0] == '/':\n path_to_convert = (root_folder + docker_file_path) if root_folder else docker_file_path\n else:\n path_to_convert = (os.path.join(root_folder, docker_file_path)) if root_folder else docker_file_path\n\n file_abs_path = os.path.abspath(path_to_convert)\n skipped_checks = collect_skipped_checks(definitions[docker_file_path])\n instructions = definitions[docker_file_path]\n\n results = registry.scan(docker_file_path, instructions, skipped_checks,\n runner_filter)\n for check, check_result in results.items():\n result_configuration = check_result['results_configuration']\n startline = 0\n endline = 0\n result_instruction = \"\"\n if result_configuration:\n startline = result_configuration['startline']\n endline = result_configuration['endline']\n result_instruction = result_configuration[\"instruction\"]\n\n codeblock = []\n self.calc_record_codeblock(codeblock, definitions_raw, docker_file_path, endline, startline)\n record = Record(check_id=check.id, check_name=check.name, check_result=check_result,\n code_block=codeblock,\n file_path=docker_file_path,\n file_line_range=[startline,\n endline],\n resource=\"{}.{}\".format(docker_file_path,\n result_instruction,\n startline),\n evaluations=None, check_class=check.__class__.__module__,\n file_abs_path=file_abs_path, entity_tags=None)\n report.add_record(record=record)\n\n return report\n\n\n def calc_record_codeblock(self, codeblock, definitions_raw, docker_file_path, endline, startline):\n for line in range(startline, endline + 1):\n codeblock.append((line, definitions_raw[docker_file_path][line]))\n", "path": "checkov/dockerfile/runner.py"}]}
| 1,888 | 197 |
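
Editor's note, not part of the dataset row above: the golden diff for this record turns on comparing the file's basename, rather than its full path, against the Dockerfile name mask. The following standalone Python sketch illustrates that distinction; `DOCKER_FILE_MASK` and `is_dockerfile` are illustrative names chosen here, not checkov's API.

```python
import os

# Illustrative mask, mirroring the single-entry list used in the runner above.
DOCKER_FILE_MASK = ["Dockerfile"]


def is_dockerfile(path: str) -> bool:
    # Matching the full path ("test/Dockerfile") against the mask misses the file;
    # matching only the basename ("Dockerfile") accepts it in any directory.
    return os.path.basename(path) in DOCKER_FILE_MASK


print(is_dockerfile("Dockerfile"))        # True
print(is_dockerfile("test/Dockerfile"))   # True, the case the bug report exercises
print(is_dockerfile("test/compose.yml"))  # False
```

With the pre-fix comparison, only the bare name `Dockerfile` would have matched, which is why `checkov -f test/Dockerfile` reported nothing while the directory scan did.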
gh_patches_debug_22436
|
rasdani/github-patches
|
git_diff
|
common-workflow-language__cwltool-1346
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
drop Python3.5 support on 2020-09-13
Something to look forward to :-)
https://devguide.python.org/#status-of-python-branches
Branch | Schedule | Status | First release | End-of-life
-- | -- | -- | -- | --
3.5 | PEP 478 | security | 2015-09-13 | 2020-09-13
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python3
2 """Setup for the reference implementation of the CWL standards."""
3 import os
4 import sys
5
6 import setuptools.command.egg_info as egg_info_cmd
7 from setuptools import setup
8
9 SETUP_DIR = os.path.dirname(__file__)
10 README = os.path.join(SETUP_DIR, "README.rst")
11
12 try:
13 import gittaggers
14
15 Tagger = gittaggers.EggInfoFromGit
16 except ImportError:
17 Tagger = egg_info_cmd.egg_info
18
19 NEEDS_PYTEST = {"pytest", "test", "ptr"}.intersection(sys.argv)
20 PYTEST_RUNNER = ["pytest-runner", "pytest-cov"] if NEEDS_PYTEST else []
21 USE_MYPYC = False
22 # To compile with mypyc, a mypyc checkout must be present on the PYTHONPATH
23 if len(sys.argv) > 1 and sys.argv[1] == "--use-mypyc":
24 sys.argv.pop(1)
25 USE_MYPYC = True
26 if os.getenv("CWLTOOL_USE_MYPYC", None) == "1":
27 USE_MYPYC = True
28
29 if USE_MYPYC:
30 mypyc_targets = [
31 "cwltool/argparser.py",
32 "cwltool/builder.py",
33 "cwltool/checker.py",
34 "cwltool/command_line_tool.py",
35 # "cwltool/context.py", # monkeypatching
36 "cwltool/cwlrdf.py",
37 "cwltool/docker_id.py",
38 "cwltool/docker.py",
39 "cwltool/udocker.py",
40 "cwltool/errors.py",
41 "cwltool/executors.py",
42 "cwltool/expression.py",
43 "cwltool/factory.py",
44 "cwltool/flatten.py",
45 # "cwltool/__init__.py",
46 "cwltool/job.py",
47 "cwltool/load_tool.py",
48 # "cwltool/loghandler.py", # so we can monkeypatch the logger from tests
49 # "cwltool/__main__.py",
50 "cwltool/main.py",
51 "cwltool/mutation.py",
52 "cwltool/pack.py",
53 # "cwltool/pathmapper.py", # class PathMapper needs to be subclassable
54 "cwltool/process.py",
55 "cwltool/procgenerator.py",
56 # "cwltool/provenance.py", # WritableBag is having issues
57 "cwltool/resolver.py",
58 # "cwltool/sandboxjs.py", # probably not speed critical, tests need to mock components
59 "cwltool/secrets.py",
60 "cwltool/singularity.py",
61 "cwltool/software_requirements.py",
62 "cwltool/stdfsaccess.py",
63 "cwltool/subgraph.py",
64 "cwltool/update.py",
65 "cwltool/utils.py",
66 "cwltool/validate_js.py",
67 "cwltool/workflow.py",
68 ]
69
70 from mypyc.build import mypycify
71
72 opt_level = os.getenv("MYPYC_OPT_LEVEL", "3")
73 ext_modules = mypycify(mypyc_targets, opt_level=opt_level)
74 else:
75 ext_modules = []
76
77 setup(
78 name="cwltool",
79 version="3.0",
80 description="Common workflow language reference implementation",
81 long_description=open(README).read(),
82 long_description_content_type="text/x-rst",
83 author="Common workflow language working group",
84 author_email="[email protected]",
85 url="https://github.com/common-workflow-language/cwltool",
86 download_url="https://github.com/common-workflow-language/cwltool",
87 ext_modules=ext_modules,
88 # platforms='', # empty as is conveyed by the classifier below
89 # license='', # empty as is conveyed by the classifier below
90 packages=["cwltool", "cwltool.tests"],
91 package_dir={"cwltool.tests": "tests"},
92 include_package_data=True,
93 install_requires=[
94 "setuptools",
95 "requests >= 2.6.1", # >= 2.6.1 to workaround
96 # https://github.com/ionrock/cachecontrol/issues/137
97 "ruamel.yaml >= 0.12.4, <= 0.16.5",
98 "rdflib >= 4.2.2, < 4.3.0",
99 "shellescape >= 3.4.1, < 3.5",
100 "schema-salad >= 7, < 8",
101 "mypy-extensions",
102 "psutil",
103 "prov == 1.5.1",
104 "bagit >= 1.6.4",
105 "typing-extensions",
106 "coloredlogs",
107 'pydot >= 1.4.1',
108 ],
109 extras_require={
110 ':python_version<"3.6"': ["typing >= 3.5.3"],
111 "deps": ["galaxy-tool-util"],
112 "docs": ["sphinx >= 2.2", "sphinx-rtd-theme"],
113 },
114 python_requires=">=3.5, <4",
115 setup_requires=PYTEST_RUNNER,
116 test_suite="tests",
117 tests_require=[
118 "pytest < 7",
119 "mock >= 2.0.0",
120 "pytest-mock >= 1.10.0",
121 "arcp >= 0.2.0",
122 "rdflib-jsonld >= 0.4.0",
123 ],
124 entry_points={"console_scripts": ["cwltool=cwltool.main:run"]},
125 zip_safe=True,
126 cmdclass={"egg_info": Tagger},
127 classifiers=[
128 "Development Status :: 5 - Production/Stable",
129 "Environment :: Console",
130 "Intended Audience :: Developers",
131 "Intended Audience :: Science/Research",
132 "Intended Audience :: Healthcare Industry",
133 "License :: OSI Approved :: Apache Software License",
134 "Natural Language :: English",
135 "Operating System :: MacOS :: MacOS X",
136 "Operating System :: POSIX",
137 "Operating System :: POSIX :: Linux",
138 "Operating System :: OS Independent",
139 "Operating System :: Microsoft :: Windows",
140 "Operating System :: Microsoft :: Windows :: Windows 10",
141 "Operating System :: Microsoft :: Windows :: Windows 8.1",
142 # 'Operating System :: Microsoft :: Windows :: Windows 8', # not tested
143 # 'Operating System :: Microsoft :: Windows :: Windows 7', # not tested
144 "Programming Language :: Python :: 3",
145 "Programming Language :: Python :: 3.5",
146 "Programming Language :: Python :: 3.6",
147 "Programming Language :: Python :: 3.7",
148 "Programming Language :: Python :: 3.8",
149 "Topic :: Scientific/Engineering",
150 "Topic :: Scientific/Engineering :: Bio-Informatics",
151 "Topic :: Scientific/Engineering :: Astronomy",
152 "Topic :: Scientific/Engineering :: Atmospheric Science",
153 "Topic :: Scientific/Engineering :: Information Analysis",
154 "Topic :: Scientific/Engineering :: Medical Science Apps.",
155 "Topic :: System :: Distributed Computing",
156 "Topic :: Utilities",
157 ],
158 )
159
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -107,11 +107,10 @@
'pydot >= 1.4.1',
],
extras_require={
- ':python_version<"3.6"': ["typing >= 3.5.3"],
"deps": ["galaxy-tool-util"],
"docs": ["sphinx >= 2.2", "sphinx-rtd-theme"],
},
- python_requires=">=3.5, <4",
+ python_requires=">=3.6, <4",
setup_requires=PYTEST_RUNNER,
test_suite="tests",
tests_require=[
@@ -142,7 +141,6 @@
# 'Operating System :: Microsoft :: Windows :: Windows 8', # not tested
# 'Operating System :: Microsoft :: Windows :: Windows 7', # not tested
"Programming Language :: Python :: 3",
- "Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -107,11 +107,10 @@\n 'pydot >= 1.4.1',\n ],\n extras_require={\n- ':python_version<\"3.6\"': [\"typing >= 3.5.3\"],\n \"deps\": [\"galaxy-tool-util\"],\n \"docs\": [\"sphinx >= 2.2\", \"sphinx-rtd-theme\"],\n },\n- python_requires=\">=3.5, <4\",\n+ python_requires=\">=3.6, <4\",\n setup_requires=PYTEST_RUNNER,\n test_suite=\"tests\",\n tests_require=[\n@@ -142,7 +141,6 @@\n # 'Operating System :: Microsoft :: Windows :: Windows 8', # not tested\n # 'Operating System :: Microsoft :: Windows :: Windows 7', # not tested\n \"Programming Language :: Python :: 3\",\n- \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n", "issue": "drop Python3.5 support on 2020-09-13\nSomething to look forward to :-)\r\n\r\nhttps://devguide.python.org/#status-of-python-branches\r\n\r\n\r\nBranch | Schedule | Status | First release | End-of-life\r\n-- | -- | -- | -- | --\r\n3.5 | PEP 478 | security | 2015-09-13 | 2020-09-13\r\n\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "#!/usr/bin/env python3\n\"\"\"Setup for the reference implementation of the CWL standards.\"\"\"\nimport os\nimport sys\n\nimport setuptools.command.egg_info as egg_info_cmd\nfrom setuptools import setup\n\nSETUP_DIR = os.path.dirname(__file__)\nREADME = os.path.join(SETUP_DIR, \"README.rst\")\n\ntry:\n import gittaggers\n\n Tagger = gittaggers.EggInfoFromGit\nexcept ImportError:\n Tagger = egg_info_cmd.egg_info\n\nNEEDS_PYTEST = {\"pytest\", \"test\", \"ptr\"}.intersection(sys.argv)\nPYTEST_RUNNER = [\"pytest-runner\", \"pytest-cov\"] if NEEDS_PYTEST else []\nUSE_MYPYC = False\n# To compile with mypyc, a mypyc checkout must be present on the PYTHONPATH\nif len(sys.argv) > 1 and sys.argv[1] == \"--use-mypyc\":\n sys.argv.pop(1)\n USE_MYPYC = True\nif os.getenv(\"CWLTOOL_USE_MYPYC\", None) == \"1\":\n USE_MYPYC = True\n\nif USE_MYPYC:\n mypyc_targets = [\n \"cwltool/argparser.py\",\n \"cwltool/builder.py\",\n \"cwltool/checker.py\",\n \"cwltool/command_line_tool.py\",\n # \"cwltool/context.py\", # monkeypatching\n \"cwltool/cwlrdf.py\",\n \"cwltool/docker_id.py\",\n \"cwltool/docker.py\",\n \"cwltool/udocker.py\",\n \"cwltool/errors.py\",\n \"cwltool/executors.py\",\n \"cwltool/expression.py\",\n \"cwltool/factory.py\",\n \"cwltool/flatten.py\",\n # \"cwltool/__init__.py\",\n \"cwltool/job.py\",\n \"cwltool/load_tool.py\",\n # \"cwltool/loghandler.py\", # so we can monkeypatch the logger from tests\n # \"cwltool/__main__.py\",\n \"cwltool/main.py\",\n \"cwltool/mutation.py\",\n \"cwltool/pack.py\",\n # \"cwltool/pathmapper.py\", # class PathMapper needs to be subclassable\n \"cwltool/process.py\",\n \"cwltool/procgenerator.py\",\n # \"cwltool/provenance.py\", # WritableBag is having issues\n \"cwltool/resolver.py\",\n # \"cwltool/sandboxjs.py\", # probably not speed critical, tests need to mock components\n \"cwltool/secrets.py\",\n \"cwltool/singularity.py\",\n \"cwltool/software_requirements.py\",\n \"cwltool/stdfsaccess.py\",\n \"cwltool/subgraph.py\",\n \"cwltool/update.py\",\n \"cwltool/utils.py\",\n \"cwltool/validate_js.py\",\n \"cwltool/workflow.py\",\n ]\n\n from mypyc.build import mypycify\n\n opt_level = os.getenv(\"MYPYC_OPT_LEVEL\", \"3\")\n ext_modules = mypycify(mypyc_targets, opt_level=opt_level)\nelse:\n ext_modules = []\n\nsetup(\n name=\"cwltool\",\n version=\"3.0\",\n description=\"Common workflow 
language reference implementation\",\n long_description=open(README).read(),\n long_description_content_type=\"text/x-rst\",\n author=\"Common workflow language working group\",\n author_email=\"[email protected]\",\n url=\"https://github.com/common-workflow-language/cwltool\",\n download_url=\"https://github.com/common-workflow-language/cwltool\",\n ext_modules=ext_modules,\n # platforms='', # empty as is conveyed by the classifier below\n # license='', # empty as is conveyed by the classifier below\n packages=[\"cwltool\", \"cwltool.tests\"],\n package_dir={\"cwltool.tests\": \"tests\"},\n include_package_data=True,\n install_requires=[\n \"setuptools\",\n \"requests >= 2.6.1\", # >= 2.6.1 to workaround\n # https://github.com/ionrock/cachecontrol/issues/137\n \"ruamel.yaml >= 0.12.4, <= 0.16.5\",\n \"rdflib >= 4.2.2, < 4.3.0\",\n \"shellescape >= 3.4.1, < 3.5\",\n \"schema-salad >= 7, < 8\",\n \"mypy-extensions\",\n \"psutil\",\n \"prov == 1.5.1\",\n \"bagit >= 1.6.4\",\n \"typing-extensions\",\n \"coloredlogs\",\n 'pydot >= 1.4.1',\n ],\n extras_require={\n ':python_version<\"3.6\"': [\"typing >= 3.5.3\"],\n \"deps\": [\"galaxy-tool-util\"],\n \"docs\": [\"sphinx >= 2.2\", \"sphinx-rtd-theme\"],\n },\n python_requires=\">=3.5, <4\",\n setup_requires=PYTEST_RUNNER,\n test_suite=\"tests\",\n tests_require=[\n \"pytest < 7\",\n \"mock >= 2.0.0\",\n \"pytest-mock >= 1.10.0\",\n \"arcp >= 0.2.0\",\n \"rdflib-jsonld >= 0.4.0\",\n ],\n entry_points={\"console_scripts\": [\"cwltool=cwltool.main:run\"]},\n zip_safe=True,\n cmdclass={\"egg_info\": Tagger},\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Environment :: Console\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"Intended Audience :: Healthcare Industry\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Natural Language :: English\",\n \"Operating System :: MacOS :: MacOS X\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Operating System :: OS Independent\",\n \"Operating System :: Microsoft :: Windows\",\n \"Operating System :: Microsoft :: Windows :: Windows 10\",\n \"Operating System :: Microsoft :: Windows :: Windows 8.1\",\n # 'Operating System :: Microsoft :: Windows :: Windows 8', # not tested\n # 'Operating System :: Microsoft :: Windows :: Windows 7', # not tested\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming Language :: Python :: 3.8\",\n \"Topic :: Scientific/Engineering\",\n \"Topic :: Scientific/Engineering :: Bio-Informatics\",\n \"Topic :: Scientific/Engineering :: Astronomy\",\n \"Topic :: Scientific/Engineering :: Atmospheric Science\",\n \"Topic :: Scientific/Engineering :: Information Analysis\",\n \"Topic :: Scientific/Engineering :: Medical Science Apps.\",\n \"Topic :: System :: Distributed Computing\",\n \"Topic :: Utilities\",\n ],\n)\n", "path": "setup.py"}]}
| 2,560 | 263 |
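
Editor's note, separate from the row above: retiring an end-of-life interpreter in a setuptools project usually touches three places: the `python_requires` bound, any `python_version` environment markers, and the trove classifiers. That is exactly what this record's diff does. Below is a minimal sketch with placeholder names; it is not cwltool's real setup.py.

```python
from setuptools import setup

setup(
    name="example-tool",        # placeholder project name
    version="0.0.0",
    packages=["example_tool"],
    # Block installation on interpreters that are past end-of-life.
    python_requires=">=3.6, <4",
    install_requires=[
        # Unconditional dependencies only; the old ':python_version<"3.6"'
        # marker for the typing backport has no remaining purpose.
        "requests",
    ],
    classifiers=[
        "Programming Language :: Python :: 3",
        "Programming Language :: Python :: 3.6",
        "Programming Language :: Python :: 3.7",
        "Programming Language :: Python :: 3.8",
    ],
)
```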
gh_patches_debug_41075
|
rasdani/github-patches
|
git_diff
|
vyperlang__vyper-828
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Solidity Compatible ERC20 broken
The [Solidity compatible ERC20 token](https://github.com/ethereum/vyper/blob/master/examples/tokens/ERC20_solidity_compatible/ERC20.v.py) no longer compiles, since it was not updated after the removal of separate uint256 math functions. This is a super easy fix. I can do it later in the week if no one gets to it before then.
</issue>
<code>
[start of examples/tokens/ERC20_solidity_compatible/ERC20.v.py]
1 # Solidity-Compatible EIP20/ERC20 Token
2 # Implements https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20-token-standard.md
3 # Author: Phil Daian
4
5 # The use of the uint256 datatype as in this token is not
6 # recommended, as it can pose security risks.
7
8 # This token is intended as a proof of concept towards
9 # language interoperability and not for production use.
10
11 # Events issued by the contract
12 Transfer: event({_from: indexed(address), _to: indexed(address), _value: uint256})
13 Approval: event({_owner: indexed(address), _spender: indexed(address), _value: uint256})
14
15 balances: uint256[address]
16 allowances: (uint256[address])[address]
17 num_issued: uint256
18
19 @public
20 @payable
21 def deposit():
22 _value: uint256 = convert(msg.value, 'uint256')
23 _sender: address = msg.sender
24 self.balances[_sender] = uint256_add(self.balances[_sender], _value)
25 self.num_issued = uint256_add(self.num_issued, _value)
26 # Fire deposit event as transfer from 0x0
27 log.Transfer(0x0000000000000000000000000000000000000000, _sender, _value)
28
29 @public
30 def withdraw(_value : uint256) -> bool:
31 _sender: address = msg.sender
32 # Make sure sufficient funds are present, op will not underflow supply
33 # implicitly through overflow protection
34 self.balances[_sender] = uint256_sub(self.balances[_sender], _value)
35 self.num_issued = uint256_sub(self.num_issued, _value)
36 send(_sender, as_wei_value(convert(_value, 'int128'), 'wei'))
37 # Fire withdraw event as transfer to 0x0
38 log.Transfer(_sender, 0x0000000000000000000000000000000000000000, _value)
39 return true
40
41 @public
42 @constant
43 def totalSupply() -> uint256:
44 return self.num_issued
45
46 @public
47 @constant
48 def balanceOf(_owner : address) -> uint256:
49 return self.balances[_owner]
50
51 @public
52 def transfer(_to : address, _value : uint256) -> bool:
53 _sender: address = msg.sender
54 # Make sure sufficient funds are present implicitly through overflow protection
55 self.balances[_sender] = uint256_sub(self.balances[_sender], _value)
56 self.balances[_to] = uint256_add(self.balances[_to], _value)
57 # Fire transfer event
58 log.Transfer(_sender, _to, _value)
59 return true
60
61 @public
62 def transferFrom(_from : address, _to : address, _value : uint256) -> bool:
63 _sender: address = msg.sender
64 allowance: uint256 = self.allowances[_from][_sender]
65 # Make sure sufficient funds/allowance are present implicitly through overflow protection
66 self.balances[_from] = uint256_sub(self.balances[_from], _value)
67 self.balances[_to] = uint256_add(self.balances[_to], _value)
68 self.allowances[_from][_sender] = uint256_sub(allowance, _value)
69 # Fire transfer event
70 log.Transfer(_from, _to, _value)
71 return true
72
73 @public
74 def approve(_spender : address, _value : uint256) -> bool:
75 _sender: address = msg.sender
76 self.allowances[_sender][_spender] = _value
77 # Fire approval event
78 log.Approval(_sender, _spender, _value)
79 return true
80
81 @public
82 @constant
83 def allowance(_owner : address, _spender : address) -> uint256:
84 return self.allowances[_owner][_spender]
85
86
[end of examples/tokens/ERC20_solidity_compatible/ERC20.v.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/tokens/ERC20_solidity_compatible/ERC20.v.py b/examples/tokens/ERC20_solidity_compatible/ERC20.v.py
--- a/examples/tokens/ERC20_solidity_compatible/ERC20.v.py
+++ b/examples/tokens/ERC20_solidity_compatible/ERC20.v.py
@@ -21,8 +21,8 @@
def deposit():
_value: uint256 = convert(msg.value, 'uint256')
_sender: address = msg.sender
- self.balances[_sender] = uint256_add(self.balances[_sender], _value)
- self.num_issued = uint256_add(self.num_issued, _value)
+ self.balances[_sender] = self.balances[_sender] + _value
+ self.num_issued = self.num_issued + _value
# Fire deposit event as transfer from 0x0
log.Transfer(0x0000000000000000000000000000000000000000, _sender, _value)
@@ -31,12 +31,12 @@
_sender: address = msg.sender
# Make sure sufficient funds are present, op will not underflow supply
# implicitly through overflow protection
- self.balances[_sender] = uint256_sub(self.balances[_sender], _value)
- self.num_issued = uint256_sub(self.num_issued, _value)
+ self.balances[_sender] = self.balances[_sender] - _value
+ self.num_issued = self.num_issued - _value
send(_sender, as_wei_value(convert(_value, 'int128'), 'wei'))
# Fire withdraw event as transfer to 0x0
log.Transfer(_sender, 0x0000000000000000000000000000000000000000, _value)
- return true
+ return True
@public
@constant
@@ -52,23 +52,23 @@
def transfer(_to : address, _value : uint256) -> bool:
_sender: address = msg.sender
# Make sure sufficient funds are present implicitly through overflow protection
- self.balances[_sender] = uint256_sub(self.balances[_sender], _value)
- self.balances[_to] = uint256_add(self.balances[_to], _value)
+ self.balances[_sender] = self.balances[_sender] - _value
+ self.balances[_to] = self.balances[_to] + _value
# Fire transfer event
log.Transfer(_sender, _to, _value)
- return true
+ return True
@public
def transferFrom(_from : address, _to : address, _value : uint256) -> bool:
_sender: address = msg.sender
allowance: uint256 = self.allowances[_from][_sender]
# Make sure sufficient funds/allowance are present implicitly through overflow protection
- self.balances[_from] = uint256_sub(self.balances[_from], _value)
- self.balances[_to] = uint256_add(self.balances[_to], _value)
- self.allowances[_from][_sender] = uint256_sub(allowance, _value)
+ self.balances[_from] = self.balances[_from] - _value
+ self.balances[_to] = self.balances[_to] + _value
+ self.allowances[_from][_sender] = allowance - _value
# Fire transfer event
log.Transfer(_from, _to, _value)
- return true
+ return True
@public
def approve(_spender : address, _value : uint256) -> bool:
@@ -76,7 +76,7 @@
self.allowances[_sender][_spender] = _value
# Fire approval event
log.Approval(_sender, _spender, _value)
- return true
+ return True
@public
@constant
|
{"golden_diff": "diff --git a/examples/tokens/ERC20_solidity_compatible/ERC20.v.py b/examples/tokens/ERC20_solidity_compatible/ERC20.v.py\n--- a/examples/tokens/ERC20_solidity_compatible/ERC20.v.py\n+++ b/examples/tokens/ERC20_solidity_compatible/ERC20.v.py\n@@ -21,8 +21,8 @@\n def deposit():\n _value: uint256 = convert(msg.value, 'uint256')\n _sender: address = msg.sender\n- self.balances[_sender] = uint256_add(self.balances[_sender], _value)\n- self.num_issued = uint256_add(self.num_issued, _value)\n+ self.balances[_sender] = self.balances[_sender] + _value\n+ self.num_issued = self.num_issued + _value\n # Fire deposit event as transfer from 0x0\n log.Transfer(0x0000000000000000000000000000000000000000, _sender, _value)\n \n@@ -31,12 +31,12 @@\n _sender: address = msg.sender\n # Make sure sufficient funds are present, op will not underflow supply\n # implicitly through overflow protection\n- self.balances[_sender] = uint256_sub(self.balances[_sender], _value)\n- self.num_issued = uint256_sub(self.num_issued, _value)\n+ self.balances[_sender] = self.balances[_sender] - _value\n+ self.num_issued = self.num_issued - _value\n send(_sender, as_wei_value(convert(_value, 'int128'), 'wei'))\n # Fire withdraw event as transfer to 0x0\n log.Transfer(_sender, 0x0000000000000000000000000000000000000000, _value)\n- return true\n+ return True\n \n @public\n @constant\n@@ -52,23 +52,23 @@\n def transfer(_to : address, _value : uint256) -> bool:\n _sender: address = msg.sender\n # Make sure sufficient funds are present implicitly through overflow protection\n- self.balances[_sender] = uint256_sub(self.balances[_sender], _value)\n- self.balances[_to] = uint256_add(self.balances[_to], _value)\n+ self.balances[_sender] = self.balances[_sender] - _value\n+ self.balances[_to] = self.balances[_to] + _value\n # Fire transfer event\n log.Transfer(_sender, _to, _value)\n- return true\n+ return True\n \n @public\n def transferFrom(_from : address, _to : address, _value : uint256) -> bool:\n _sender: address = msg.sender\n allowance: uint256 = self.allowances[_from][_sender]\n # Make sure sufficient funds/allowance are present implicitly through overflow protection\n- self.balances[_from] = uint256_sub(self.balances[_from], _value)\n- self.balances[_to] = uint256_add(self.balances[_to], _value)\n- self.allowances[_from][_sender] = uint256_sub(allowance, _value)\n+ self.balances[_from] = self.balances[_from] - _value\n+ self.balances[_to] = self.balances[_to] + _value\n+ self.allowances[_from][_sender] = allowance - _value\n # Fire transfer event\n log.Transfer(_from, _to, _value)\n- return true\n+ return True\n \n @public\n def approve(_spender : address, _value : uint256) -> bool:\n@@ -76,7 +76,7 @@\n self.allowances[_sender][_spender] = _value\n # Fire approval event\n log.Approval(_sender, _spender, _value)\n- return true\n+ return True\n \n @public\n @constant\n", "issue": "Solidity Compatible ERC20 broken\nThe [Solidity compatible ERC20 token](https://github.com/ethereum/vyper/blob/master/examples/tokens/ERC20_solidity_compatible/ERC20.v.py) no longer compiles, since it was not updated after the removal of separate uint256 math functions. This is a super easy fix. I can do it later in the week if no one gets to it before then. 
\n", "before_files": [{"content": "# Solidity-Compatible EIP20/ERC20 Token\n# Implements https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20-token-standard.md\n# Author: Phil Daian\n\n# The use of the uint256 datatype as in this token is not\n# recommended, as it can pose security risks.\n\n# This token is intended as a proof of concept towards\n# language interoperability and not for production use.\n\n# Events issued by the contract\nTransfer: event({_from: indexed(address), _to: indexed(address), _value: uint256})\nApproval: event({_owner: indexed(address), _spender: indexed(address), _value: uint256})\n\nbalances: uint256[address]\nallowances: (uint256[address])[address]\nnum_issued: uint256\n\n@public\n@payable\ndef deposit():\n _value: uint256 = convert(msg.value, 'uint256')\n _sender: address = msg.sender\n self.balances[_sender] = uint256_add(self.balances[_sender], _value)\n self.num_issued = uint256_add(self.num_issued, _value)\n # Fire deposit event as transfer from 0x0\n log.Transfer(0x0000000000000000000000000000000000000000, _sender, _value)\n\n@public\ndef withdraw(_value : uint256) -> bool:\n _sender: address = msg.sender\n # Make sure sufficient funds are present, op will not underflow supply\n # implicitly through overflow protection\n self.balances[_sender] = uint256_sub(self.balances[_sender], _value)\n self.num_issued = uint256_sub(self.num_issued, _value)\n send(_sender, as_wei_value(convert(_value, 'int128'), 'wei'))\n # Fire withdraw event as transfer to 0x0\n log.Transfer(_sender, 0x0000000000000000000000000000000000000000, _value)\n return true\n\n@public\n@constant\ndef totalSupply() -> uint256:\n return self.num_issued\n\n@public\n@constant\ndef balanceOf(_owner : address) -> uint256:\n return self.balances[_owner]\n\n@public\ndef transfer(_to : address, _value : uint256) -> bool:\n _sender: address = msg.sender\n # Make sure sufficient funds are present implicitly through overflow protection\n self.balances[_sender] = uint256_sub(self.balances[_sender], _value)\n self.balances[_to] = uint256_add(self.balances[_to], _value)\n # Fire transfer event\n log.Transfer(_sender, _to, _value)\n return true\n\n@public\ndef transferFrom(_from : address, _to : address, _value : uint256) -> bool:\n _sender: address = msg.sender\n allowance: uint256 = self.allowances[_from][_sender]\n # Make sure sufficient funds/allowance are present implicitly through overflow protection\n self.balances[_from] = uint256_sub(self.balances[_from], _value)\n self.balances[_to] = uint256_add(self.balances[_to], _value)\n self.allowances[_from][_sender] = uint256_sub(allowance, _value)\n # Fire transfer event\n log.Transfer(_from, _to, _value)\n return true\n\n@public\ndef approve(_spender : address, _value : uint256) -> bool:\n _sender: address = msg.sender\n self.allowances[_sender][_spender] = _value\n # Fire approval event\n log.Approval(_sender, _spender, _value)\n return true\n\n@public\n@constant\ndef allowance(_owner : address, _spender : address) -> uint256:\n return self.allowances[_owner][_spender]\n\n", "path": "examples/tokens/ERC20_solidity_compatible/ERC20.v.py"}]}
| 1,764 | 989 |
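
Editor's note, separate from the row above: the diff here swaps the removed `uint256_add`/`uint256_sub` built-ins for plain `+` and `-`, which the contract's comments describe as overflow-protected, and capitalizes the boolean literals. The sketch below is a rough Python model of that checked-arithmetic behaviour, not Vyper code, and only approximates the language semantics.

```python
UINT256_MAX = 2 ** 256 - 1


def checked_add(a: int, b: int) -> int:
    # Stand-in for the overflow protection applied to uint256 "+".
    total = a + b
    if total > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return total


def checked_sub(a: int, b: int) -> int:
    # Stand-in for the underflow protection applied to uint256 "-".
    if b > a:
        raise OverflowError("uint256 subtraction underflow")
    return a - b


balance = checked_add(0, 100)
balance = checked_sub(balance, 40)
print(balance)  # 60
```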
gh_patches_debug_29690
|
rasdani/github-patches
|
git_diff
|
openstates__openstates-scrapers-1455
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
MA: committee scraper for 2017
State: MA
says that it is skipping every page, I believe the site was rewritten and so will need a complete rewrite
</issue>
<code>
[start of openstates/ma/committees.py]
1 from billy.scrape.committees import CommitteeScraper, Committee
2
3 import lxml.html
4
5
6 class MACommitteeScraper(CommitteeScraper):
7 jurisdiction = 'ma'
8
9 def scrape(self, term, chambers):
10 page_types = []
11 if 'upper' in chambers:
12 page_types += ['Senate', 'Joint']
13 if 'lower' in chambers:
14 page_types += ['House']
15 chamber_mapping = {'Senate': 'upper',
16 'House': 'lower',
17 'Joint': 'joint'}
18
19 foundComms = []
20
21 for page_type in page_types:
22 url = 'http://www.malegislature.gov/Committees/' + page_type
23
24 html = self.get(url, verify=False).text
25 doc = lxml.html.fromstring(html)
26 doc.make_links_absolute('http://www.malegislature.gov')
27
28 for com_url in doc.xpath('//ul[@class="committeeList"]/li/a/@href'):
29 chamber = chamber_mapping[page_type]
30 self.scrape_committee(chamber, com_url)
31
32 def scrape_committee(self, chamber, url):
33 html = self.get(url, verify=False).text
34 doc = lxml.html.fromstring(html)
35
36 name = doc.xpath('//span[@class="committeeShortName"]/text()')
37 if len(name) == 0:
38 self.warning("Had to skip this malformed page.")
39 return
40 # Because of http://www.malegislature.gov/Committees/Senate/S29 this
41 # XXX: hack had to be pushed in. Remove me ASAP. This just skips
42 # malformed pages.
43
44 name = name[0]
45 com = Committee(chamber, name)
46 com.add_source(url)
47
48 # get both titles and names, order is consistent
49 titles = doc.xpath('//p[@class="rankingMemberTitle"]/text()')
50 names = doc.xpath('//p[@class="rankingMemberName"]/a/text()')
51
52 for title, name in zip(titles, names):
53 com.add_member(name, title)
54
55 for member in doc.xpath('//div[@class="committeeRegularMembers"]//a/text()'):
56 com.add_member(member)
57
58 if com['members']:
59 self.save_committee(com)
60
[end of openstates/ma/committees.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openstates/ma/committees.py b/openstates/ma/committees.py
--- a/openstates/ma/committees.py
+++ b/openstates/ma/committees.py
@@ -16,8 +16,6 @@
'House': 'lower',
'Joint': 'joint'}
- foundComms = []
-
for page_type in page_types:
url = 'http://www.malegislature.gov/Committees/' + page_type
@@ -33,27 +31,15 @@
html = self.get(url, verify=False).text
doc = lxml.html.fromstring(html)
- name = doc.xpath('//span[@class="committeeShortName"]/text()')
- if len(name) == 0:
- self.warning("Had to skip this malformed page.")
- return
- # Because of http://www.malegislature.gov/Committees/Senate/S29 this
- # XXX: hack had to be pushed in. Remove me ASAP. This just skips
- # malformed pages.
-
- name = name[0]
+ name = doc.xpath('//title/text()')[0]
com = Committee(chamber, name)
com.add_source(url)
- # get both titles and names, order is consistent
- titles = doc.xpath('//p[@class="rankingMemberTitle"]/text()')
- names = doc.xpath('//p[@class="rankingMemberName"]/a/text()')
-
- for title, name in zip(titles, names):
- com.add_member(name, title)
-
- for member in doc.xpath('//div[@class="committeeRegularMembers"]//a/text()'):
- com.add_member(member)
+ members = doc.xpath('//a[contains(@href, "/Legislators/Profile")]')
+ for member in members:
+ title = member.xpath('../span')
+ role = title[0].text.lower() if title else 'member'
+ com.add_member(member.text, role)
if com['members']:
self.save_committee(com)
|
{"golden_diff": "diff --git a/openstates/ma/committees.py b/openstates/ma/committees.py\n--- a/openstates/ma/committees.py\n+++ b/openstates/ma/committees.py\n@@ -16,8 +16,6 @@\n 'House': 'lower',\n 'Joint': 'joint'}\n \n- foundComms = []\n-\n for page_type in page_types:\n url = 'http://www.malegislature.gov/Committees/' + page_type\n \n@@ -33,27 +31,15 @@\n html = self.get(url, verify=False).text\n doc = lxml.html.fromstring(html)\n \n- name = doc.xpath('//span[@class=\"committeeShortName\"]/text()')\n- if len(name) == 0:\n- self.warning(\"Had to skip this malformed page.\")\n- return\n- # Because of http://www.malegislature.gov/Committees/Senate/S29 this\n- # XXX: hack had to be pushed in. Remove me ASAP. This just skips\n- # malformed pages.\n-\n- name = name[0]\n+ name = doc.xpath('//title/text()')[0]\n com = Committee(chamber, name)\n com.add_source(url)\n \n- # get both titles and names, order is consistent\n- titles = doc.xpath('//p[@class=\"rankingMemberTitle\"]/text()')\n- names = doc.xpath('//p[@class=\"rankingMemberName\"]/a/text()')\n-\n- for title, name in zip(titles, names):\n- com.add_member(name, title)\n-\n- for member in doc.xpath('//div[@class=\"committeeRegularMembers\"]//a/text()'):\n- com.add_member(member)\n+ members = doc.xpath('//a[contains(@href, \"/Legislators/Profile\")]')\n+ for member in members:\n+ title = member.xpath('../span')\n+ role = title[0].text.lower() if title else 'member'\n+ com.add_member(member.text, role)\n \n if com['members']:\n self.save_committee(com)\n", "issue": "MA: committee scraper for 2017\nState: MA\r\n\r\nsays that it is skipping every page, I believe the site was rewritten and so will need a complete rewrite\n", "before_files": [{"content": "from billy.scrape.committees import CommitteeScraper, Committee\n\nimport lxml.html\n\n\nclass MACommitteeScraper(CommitteeScraper):\n jurisdiction = 'ma'\n\n def scrape(self, term, chambers):\n page_types = []\n if 'upper' in chambers:\n page_types += ['Senate', 'Joint']\n if 'lower' in chambers:\n page_types += ['House']\n chamber_mapping = {'Senate': 'upper',\n 'House': 'lower',\n 'Joint': 'joint'}\n\n foundComms = []\n\n for page_type in page_types:\n url = 'http://www.malegislature.gov/Committees/' + page_type\n\n html = self.get(url, verify=False).text\n doc = lxml.html.fromstring(html)\n doc.make_links_absolute('http://www.malegislature.gov')\n\n for com_url in doc.xpath('//ul[@class=\"committeeList\"]/li/a/@href'):\n chamber = chamber_mapping[page_type]\n self.scrape_committee(chamber, com_url)\n\n def scrape_committee(self, chamber, url):\n html = self.get(url, verify=False).text\n doc = lxml.html.fromstring(html)\n\n name = doc.xpath('//span[@class=\"committeeShortName\"]/text()')\n if len(name) == 0:\n self.warning(\"Had to skip this malformed page.\")\n return\n # Because of http://www.malegislature.gov/Committees/Senate/S29 this\n # XXX: hack had to be pushed in. Remove me ASAP. This just skips\n # malformed pages.\n\n name = name[0]\n com = Committee(chamber, name)\n com.add_source(url)\n\n # get both titles and names, order is consistent\n titles = doc.xpath('//p[@class=\"rankingMemberTitle\"]/text()')\n names = doc.xpath('//p[@class=\"rankingMemberName\"]/a/text()')\n\n for title, name in zip(titles, names):\n com.add_member(name, title)\n\n for member in doc.xpath('//div[@class=\"committeeRegularMembers\"]//a/text()'):\n com.add_member(member)\n\n if com['members']:\n self.save_committee(com)\n", "path": "openstates/ma/committees.py"}]}
| 1,172 | 456 |
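
Editor's note, separate from the row above: the rewritten scraper in this record keys off legislator profile links and an optional sibling `<span>` holding the member's role. The sketch below runs the same XPath ideas against a made-up HTML fragment; the real malegislature.gov markup may differ.

```python
import lxml.html

# Made-up fragment standing in for a committee page; only the link href
# pattern and the optional role <span> matter for this illustration.
html = """
<html>
  <head><title>Joint Committee on Example Affairs</title></head>
  <body>
    <div><span>Chair</span> <a href="/Legislators/Profile/AB1">Jane Doe</a></div>
    <div><a href="/Legislators/Profile/CD2">John Roe</a></div>
  </body>
</html>
"""

doc = lxml.html.fromstring(html)
committee_name = doc.xpath('//title/text()')[0]

members = []
for member in doc.xpath('//a[contains(@href, "/Legislators/Profile")]'):
    title = member.xpath('../span')
    role = title[0].text.lower() if title else 'member'
    members.append((member.text, role))

print(committee_name)
print(members)  # [('Jane Doe', 'chair'), ('John Roe', 'member')]
```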
gh_patches_debug_27810
|
rasdani/github-patches
|
git_diff
|
cocotb__cocotb-1898
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Provide friendlier handling of invalid COCOTB_LOG_LEVEL
When running with `COCOTB_LOG_LEVEL=debug` I get the shown stack trace.
We should be able to improve the user experience a bit by:
- Converting values to ALLCAPS, i.e. debug becomes DEBUG
- Provide a more helpful error message with allowed options instead of a stacktrace.
```
sim_build/fifo
-.--ns INFO cocotb.gpi ..mbed/gpi_embed.cpp:74 in set_program_name_in_venv Did not detect Python virtual environment. Using system-wide Python interpreter
-.--ns INFO cocotb.gpi ../gpi/GpiCommon.cpp:105 in gpi_print_registered_impl VPI registered
-.--ns INFO cocotb.gpi ..mbed/gpi_embed.cpp:245 in embed_sim_init Python interpreter initialized and cocotb loaded!
Traceback (most recent call last):
File "/home/philipp/.local/lib/python3.8/site-packages/cocotb/__init__.py", line 175, in _initialise_testbench
_setup_logging()
File "/home/philipp/.local/lib/python3.8/site-packages/cocotb/__init__.py", line 76, in _setup_logging
cocotb.log.default_config()
File "/home/philipp/.local/lib/python3.8/site-packages/cocotb/log.py", line 95, in default_config
log.setLevel(_default_log)
File "/usr/lib64/python3.8/logging/__init__.py", line 1409, in setLevel
self.level = _checkLevel(level)
File "/usr/lib64/python3.8/logging/__init__.py", line 197, in _checkLevel
raise TypeError("Level not an integer or a valid string: %r" % level)
TypeError: Level not an integer or a valid string: <function debug at 0x7f8653508430>
```
</issue>
<code>
[start of cocotb/log.py]
1 # Copyright (c) 2013, 2018 Potential Ventures Ltd
2 # Copyright (c) 2013 SolarFlare Communications Inc
3 # All rights reserved.
4 #
5 # Redistribution and use in source and binary forms, with or without
6 # modification, are permitted provided that the following conditions are met:
7 # * Redistributions of source code must retain the above copyright
8 # notice, this list of conditions and the following disclaimer.
9 # * Redistributions in binary form must reproduce the above copyright
10 # notice, this list of conditions and the following disclaimer in the
11 # documentation and/or other materials provided with the distribution.
12 # * Neither the name of Potential Ventures Ltd,
13 # SolarFlare Communications Inc nor the
14 # names of its contributors may be used to endorse or promote products
15 # derived from this software without specific prior written permission.
16 #
17 # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
18 # ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
19 # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
20 # DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY
21 # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
22 # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
23 # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
24 # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
25 # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
26 # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
27
28 """
29 Everything related to logging
30 """
31
32 import os
33 import sys
34 import logging
35 import warnings
36
37 from cocotb.utils import (
38 get_sim_time, get_time_from_sim_steps, want_color_output
39 )
40
41 import cocotb.ANSI as ANSI
42
43 if "COCOTB_REDUCED_LOG_FMT" in os.environ:
44 _suppress = True
45 else:
46 _suppress = False
47
48 # Column alignment
49 _LEVEL_CHARS = len("CRITICAL") # noqa
50 _RECORD_CHARS = 35 # noqa
51 _FILENAME_CHARS = 20 # noqa
52 _LINENO_CHARS = 4 # noqa
53 _FUNCNAME_CHARS = 31 # noqa
54
55
56 def default_config():
57 """ Apply the default cocotb log formatting to the root logger.
58
59 This hooks up the logger to write to stdout, using either
60 :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending
61 on whether colored output is requested. It also adds a
62 :class:`SimTimeContextFilter` filter so that
63 :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.
64
65 The logging level for cocotb logs is set based on the
66 :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.
67
68 If desired, this logging configuration can be overwritten by calling
69 ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by
70 manually resetting the root logger instance.
71 An example of this can be found in the section on :ref:`rotating-logger`.
72
73 .. versionadded:: 1.4
74 """
75 # construct an appropriate handler
76 hdlr = logging.StreamHandler(sys.stdout)
77 hdlr.addFilter(SimTimeContextFilter())
78 if want_color_output():
79 hdlr.setFormatter(SimColourLogFormatter())
80 else:
81 hdlr.setFormatter(SimLogFormatter())
82
83 logging.setLoggerClass(SimBaseLog) # For backwards compatibility
84 logging.basicConfig()
85 logging.getLogger().handlers = [hdlr] # overwrite default handlers
86
87 # apply level settings for cocotb
88 log = logging.getLogger('cocotb')
89 level = os.getenv("COCOTB_LOG_LEVEL", "INFO")
90 try:
91 _default_log = getattr(logging, level)
92 except AttributeError:
93 log.error("Unable to set logging level to %r" % level)
94 _default_log = logging.INFO
95 log.setLevel(_default_log)
96
97 # Notify GPI of log level, which it uses as an optimization to avoid
98 # calling into Python.
99 from cocotb import simulator
100 simulator.log_level(_default_log)
101
102
103 class SimBaseLog(logging.getLoggerClass()):
104 """ This class only exists for backwards compatibility """
105
106 @property
107 def logger(self):
108 warnings.warn(
109 "the .logger attribute should not be used now that `SimLog` "
110 "returns a native logger instance directly.",
111 DeprecationWarning, stacklevel=2)
112 return self
113
114 @property
115 def colour(self):
116 warnings.warn(
117 "the .colour attribute may be removed in future, use the "
118 "equivalent `cocotb.utils.want_color_output()` instead",
119 DeprecationWarning, stacklevel=2)
120 return want_color_output()
121
122
123 # this used to be a class, hence the unusual capitalization
124 def SimLog(name, ident=None):
125 """ Like logging.getLogger, but append a numeric identifier to the name """
126 if ident is not None:
127 name = "%s.0x%x" % (name, ident)
128 return logging.getLogger(name)
129
130
131 class SimTimeContextFilter(logging.Filter):
132 """
133 A filter to inject simulator times into the log records.
134
135 This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.
136
137 This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.
138
139 .. versionadded:: 1.4
140 """
141
142 # needed to make our docs render well
143 def __init__(self):
144 """"""
145 super().__init__()
146
147 def filter(self, record):
148 try:
149 record.created_sim_time = get_sim_time()
150 except RecursionError:
151 # get_sim_time may try to log - if that happens, we can't
152 # attach a simulator time to this message.
153 record.created_sim_time = None
154 return True
155
156
157 class SimLogFormatter(logging.Formatter):
158 """Log formatter to provide consistent log message handling.
159
160 This will only add simulator timestamps if the handler object this
161 formatter is attached to has a :class:`SimTimeContextFilter` filter
162 attached, which cocotb ensures by default.
163 """
164
165 # Removes the arguments from the base class. Docstring needed to make
166 # sphinx happy.
167 def __init__(self):
168 """ Takes no arguments. """
169 super().__init__()
170
171 # Justify and truncate
172 @staticmethod
173 def ljust(string, chars):
174 if len(string) > chars:
175 return ".." + string[(chars - 2) * -1:]
176 return string.ljust(chars)
177
178 @staticmethod
179 def rjust(string, chars):
180 if len(string) > chars:
181 return ".." + string[(chars - 2) * -1:]
182 return string.rjust(chars)
183
184 def _format(self, level, record, msg, coloured=False):
185 sim_time = getattr(record, 'created_sim_time', None)
186 if sim_time is None:
187 sim_time_str = " -.--ns"
188 else:
189 time_ns = get_time_from_sim_steps(sim_time, 'ns')
190 sim_time_str = "{:6.2f}ns".format(time_ns)
191 prefix = sim_time_str.rjust(11) + ' ' + level + ' '
192 if not _suppress:
193 prefix += self.ljust(record.name, _RECORD_CHARS) + \
194 self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS) + \
195 ':' + self.ljust(str(record.lineno), _LINENO_CHARS) + \
196 ' in ' + self.ljust(str(record.funcName), _FUNCNAME_CHARS) + ' '
197
198 # these lines are copied from the builtin logger
199 if record.exc_info:
200 # Cache the traceback text to avoid converting it multiple times
201 # (it's constant anyway)
202 if not record.exc_text:
203 record.exc_text = self.formatException(record.exc_info)
204 if record.exc_text:
205 if msg[-1:] != "\n":
206 msg = msg + "\n"
207 msg = msg + record.exc_text
208
209 prefix_len = len(prefix)
210 if coloured:
211 prefix_len -= (len(level) - _LEVEL_CHARS)
212 pad = "\n" + " " * (prefix_len)
213 return prefix + pad.join(msg.split('\n'))
214
215 def format(self, record):
216 """Prettify the log output, annotate with simulation time"""
217
218 msg = record.getMessage()
219 level = record.levelname.ljust(_LEVEL_CHARS)
220
221 return self._format(level, record, msg)
222
223
224 class SimColourLogFormatter(SimLogFormatter):
225 """Log formatter to provide consistent log message handling."""
226
227 loglevel2colour = {
228 logging.DEBUG : "%s",
229 logging.INFO : ANSI.COLOR_INFO + "%s" + ANSI.COLOR_DEFAULT,
230 logging.WARNING : ANSI.COLOR_WARNING + "%s" + ANSI.COLOR_DEFAULT,
231 logging.ERROR : ANSI.COLOR_ERROR + "%s" + ANSI.COLOR_DEFAULT,
232 logging.CRITICAL: ANSI.COLOR_CRITICAL + "%s" + ANSI.COLOR_DEFAULT,
233 }
234
235 def format(self, record):
236 """Prettify the log output, annotate with simulation time"""
237
238 msg = record.getMessage()
239
240 # Need to colour each line in case coloring is applied in the message
241 msg = '\n'.join([SimColourLogFormatter.loglevel2colour[record.levelno] % line for line in msg.split('\n')])
242 level = (SimColourLogFormatter.loglevel2colour[record.levelno] %
243 record.levelname.ljust(_LEVEL_CHARS))
244
245 return self._format(level, record, msg, coloured=True)
246
247
248 def _filter_from_c(logger_name, level):
249 return logging.getLogger(logger_name).isEnabledFor(level)
250
251
252 def _log_from_c(logger_name, level, filename, lineno, msg, function_name):
253 """
254 This is for use from the C world, and allows us to insert C stack
255 information.
256 """
257 logger = logging.getLogger(logger_name)
258 if logger.isEnabledFor(level):
259 record = logger.makeRecord(
260 logger.name,
261 level,
262 filename,
263 lineno,
264 msg,
265 None,
266 None,
267 function_name
268 )
269 logger.handle(record)
270
[end of cocotb/log.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/cocotb/log.py b/cocotb/log.py
--- a/cocotb/log.py
+++ b/cocotb/log.py
@@ -52,6 +52,9 @@
_LINENO_CHARS = 4 # noqa
_FUNCNAME_CHARS = 31 # noqa
+# Default log level if not overwritten by the user.
+_COCOTB_LOG_LEVEL_DEFAULT = "INFO"
+
def default_config():
""" Apply the default cocotb log formatting to the root logger.
@@ -86,18 +89,25 @@
# apply level settings for cocotb
log = logging.getLogger('cocotb')
- level = os.getenv("COCOTB_LOG_LEVEL", "INFO")
+
+ try:
+ # All log levels are upper case, convert the user input for convenience.
+ level = os.environ["COCOTB_LOG_LEVEL"].upper()
+ except KeyError:
+ level = _COCOTB_LOG_LEVEL_DEFAULT
+
try:
- _default_log = getattr(logging, level)
- except AttributeError:
- log.error("Unable to set logging level to %r" % level)
- _default_log = logging.INFO
- log.setLevel(_default_log)
+ log.setLevel(level)
+ except ValueError:
+ valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG')
+ raise ValueError("Invalid log level %r passed through the "
+ "COCOTB_LOG_LEVEL environment variable. Valid log "
+ "levels: %s" % (level, ', '.join(valid_levels)))
# Notify GPI of log level, which it uses as an optimization to avoid
# calling into Python.
from cocotb import simulator
- simulator.log_level(_default_log)
+ simulator.log_level(log.getEffectiveLevel())
class SimBaseLog(logging.getLoggerClass()):
|
{"golden_diff": "diff --git a/cocotb/log.py b/cocotb/log.py\n--- a/cocotb/log.py\n+++ b/cocotb/log.py\n@@ -52,6 +52,9 @@\n _LINENO_CHARS = 4 # noqa\n _FUNCNAME_CHARS = 31 # noqa\n \n+# Default log level if not overwritten by the user.\n+_COCOTB_LOG_LEVEL_DEFAULT = \"INFO\"\n+\n \n def default_config():\n \"\"\" Apply the default cocotb log formatting to the root logger.\n@@ -86,18 +89,25 @@\n \n # apply level settings for cocotb\n log = logging.getLogger('cocotb')\n- level = os.getenv(\"COCOTB_LOG_LEVEL\", \"INFO\")\n+\n+ try:\n+ # All log levels are upper case, convert the user input for convenience.\n+ level = os.environ[\"COCOTB_LOG_LEVEL\"].upper()\n+ except KeyError:\n+ level = _COCOTB_LOG_LEVEL_DEFAULT\n+\n try:\n- _default_log = getattr(logging, level)\n- except AttributeError:\n- log.error(\"Unable to set logging level to %r\" % level)\n- _default_log = logging.INFO\n- log.setLevel(_default_log)\n+ log.setLevel(level)\n+ except ValueError:\n+ valid_levels = ('CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG')\n+ raise ValueError(\"Invalid log level %r passed through the \"\n+ \"COCOTB_LOG_LEVEL environment variable. Valid log \"\n+ \"levels: %s\" % (level, ', '.join(valid_levels)))\n \n # Notify GPI of log level, which it uses as an optimization to avoid\n # calling into Python.\n from cocotb import simulator\n- simulator.log_level(_default_log)\n+ simulator.log_level(log.getEffectiveLevel())\n \n \n class SimBaseLog(logging.getLoggerClass()):\n", "issue": "Provide friendlier handling of invalid COCOTB_LOG_LEVEL\nWhen running with `COCOTB_LOG_LEVEL=debug` I get the shown stack trace.\r\n\r\nWe should be able to improve the user experience a bit by:\r\n- Converting values to ALLCAPS, i.e. debug becomes DEBUG\r\n- Provide a more helpful error message with allowed options instead of a stacktrace.\r\n\r\n\r\n```\r\n sim_build/fifo \r\n -.--ns INFO cocotb.gpi ..mbed/gpi_embed.cpp:74 in set_program_name_in_venv Did not detect Python virtual environment. 
Using system-wide Python interpreter\r\n -.--ns INFO cocotb.gpi ../gpi/GpiCommon.cpp:105 in gpi_print_registered_impl VPI registered\r\n -.--ns INFO cocotb.gpi ..mbed/gpi_embed.cpp:245 in embed_sim_init Python interpreter initialized and cocotb loaded!\r\nTraceback (most recent call last):\r\n File \"/home/philipp/.local/lib/python3.8/site-packages/cocotb/__init__.py\", line 175, in _initialise_testbench\r\n _setup_logging()\r\n File \"/home/philipp/.local/lib/python3.8/site-packages/cocotb/__init__.py\", line 76, in _setup_logging\r\n cocotb.log.default_config()\r\n File \"/home/philipp/.local/lib/python3.8/site-packages/cocotb/log.py\", line 95, in default_config\r\n log.setLevel(_default_log)\r\n File \"/usr/lib64/python3.8/logging/__init__.py\", line 1409, in setLevel\r\n self.level = _checkLevel(level)\r\n File \"/usr/lib64/python3.8/logging/__init__.py\", line 197, in _checkLevel\r\n raise TypeError(\"Level not an integer or a valid string: %r\" % level)\r\nTypeError: Level not an integer or a valid string: <function debug at 0x7f8653508430>\r\n```\n", "before_files": [{"content": "# Copyright (c) 2013, 2018 Potential Ventures Ltd\n# Copyright (c) 2013 SolarFlare Communications Inc\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n# * Redistributions of source code must retain the above copyright\n# notice, this list of conditions and the following disclaimer.\n# * Redistributions in binary form must reproduce the above copyright\n# notice, this list of conditions and the following disclaimer in the\n# documentation and/or other materials provided with the distribution.\n# * Neither the name of Potential Ventures Ltd,\n# SolarFlare Communications Inc nor the\n# names of its contributors may be used to endorse or promote products\n# derived from this software without specific prior written permission.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\" AND\n# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED\n# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE\n# DISCLAIMED. IN NO EVENT SHALL POTENTIAL VENTURES LTD BE LIABLE FOR ANY\n# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES\n# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;\n# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND\n# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS\n# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"\nEverything related to logging\n\"\"\"\n\nimport os\nimport sys\nimport logging\nimport warnings\n\nfrom cocotb.utils import (\n get_sim_time, get_time_from_sim_steps, want_color_output\n)\n\nimport cocotb.ANSI as ANSI\n\nif \"COCOTB_REDUCED_LOG_FMT\" in os.environ:\n _suppress = True\nelse:\n _suppress = False\n\n# Column alignment\n_LEVEL_CHARS = len(\"CRITICAL\") # noqa\n_RECORD_CHARS = 35 # noqa\n_FILENAME_CHARS = 20 # noqa\n_LINENO_CHARS = 4 # noqa\n_FUNCNAME_CHARS = 31 # noqa\n\n\ndef default_config():\n \"\"\" Apply the default cocotb log formatting to the root logger.\n\n This hooks up the logger to write to stdout, using either\n :class:`SimColourLogFormatter` or :class:`SimLogFormatter` depending\n on whether colored output is requested. 
It also adds a\n :class:`SimTimeContextFilter` filter so that\n :attr:`~logging.LogRecord.created_sim_time` is available to the formatter.\n\n The logging level for cocotb logs is set based on the\n :envvar:`COCOTB_LOG_LEVEL` environment variable, which defaults to ``INFO``.\n\n If desired, this logging configuration can be overwritten by calling\n ``logging.basicConfig(..., force=True)`` (in Python 3.8 onwards), or by\n manually resetting the root logger instance.\n An example of this can be found in the section on :ref:`rotating-logger`.\n\n .. versionadded:: 1.4\n \"\"\"\n # construct an appropriate handler\n hdlr = logging.StreamHandler(sys.stdout)\n hdlr.addFilter(SimTimeContextFilter())\n if want_color_output():\n hdlr.setFormatter(SimColourLogFormatter())\n else:\n hdlr.setFormatter(SimLogFormatter())\n\n logging.setLoggerClass(SimBaseLog) # For backwards compatibility\n logging.basicConfig()\n logging.getLogger().handlers = [hdlr] # overwrite default handlers\n\n # apply level settings for cocotb\n log = logging.getLogger('cocotb')\n level = os.getenv(\"COCOTB_LOG_LEVEL\", \"INFO\")\n try:\n _default_log = getattr(logging, level)\n except AttributeError:\n log.error(\"Unable to set logging level to %r\" % level)\n _default_log = logging.INFO\n log.setLevel(_default_log)\n\n # Notify GPI of log level, which it uses as an optimization to avoid\n # calling into Python.\n from cocotb import simulator\n simulator.log_level(_default_log)\n\n\nclass SimBaseLog(logging.getLoggerClass()):\n \"\"\" This class only exists for backwards compatibility \"\"\"\n\n @property\n def logger(self):\n warnings.warn(\n \"the .logger attribute should not be used now that `SimLog` \"\n \"returns a native logger instance directly.\",\n DeprecationWarning, stacklevel=2)\n return self\n\n @property\n def colour(self):\n warnings.warn(\n \"the .colour attribute may be removed in future, use the \"\n \"equivalent `cocotb.utils.want_color_output()` instead\",\n DeprecationWarning, stacklevel=2)\n return want_color_output()\n\n\n# this used to be a class, hence the unusual capitalization\ndef SimLog(name, ident=None):\n \"\"\" Like logging.getLogger, but append a numeric identifier to the name \"\"\"\n if ident is not None:\n name = \"%s.0x%x\" % (name, ident)\n return logging.getLogger(name)\n\n\nclass SimTimeContextFilter(logging.Filter):\n \"\"\"\n A filter to inject simulator times into the log records.\n\n This uses the approach described in the :ref:`Python logging cookbook <python:filters-contextual>`.\n\n This adds the :attr:`~logging.LogRecord.created_sim_time` attribute.\n\n .. versionadded:: 1.4\n \"\"\"\n\n # needed to make our docs render well\n def __init__(self):\n \"\"\"\"\"\"\n super().__init__()\n\n def filter(self, record):\n try:\n record.created_sim_time = get_sim_time()\n except RecursionError:\n # get_sim_time may try to log - if that happens, we can't\n # attach a simulator time to this message.\n record.created_sim_time = None\n return True\n\n\nclass SimLogFormatter(logging.Formatter):\n \"\"\"Log formatter to provide consistent log message handling.\n\n This will only add simulator timestamps if the handler object this\n formatter is attached to has a :class:`SimTimeContextFilter` filter\n attached, which cocotb ensures by default.\n \"\"\"\n\n # Removes the arguments from the base class. Docstring needed to make\n # sphinx happy.\n def __init__(self):\n \"\"\" Takes no arguments. 
\"\"\"\n super().__init__()\n\n # Justify and truncate\n @staticmethod\n def ljust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1:]\n return string.ljust(chars)\n\n @staticmethod\n def rjust(string, chars):\n if len(string) > chars:\n return \"..\" + string[(chars - 2) * -1:]\n return string.rjust(chars)\n\n def _format(self, level, record, msg, coloured=False):\n sim_time = getattr(record, 'created_sim_time', None)\n if sim_time is None:\n sim_time_str = \" -.--ns\"\n else:\n time_ns = get_time_from_sim_steps(sim_time, 'ns')\n sim_time_str = \"{:6.2f}ns\".format(time_ns)\n prefix = sim_time_str.rjust(11) + ' ' + level + ' '\n if not _suppress:\n prefix += self.ljust(record.name, _RECORD_CHARS) + \\\n self.rjust(os.path.split(record.filename)[1], _FILENAME_CHARS) + \\\n ':' + self.ljust(str(record.lineno), _LINENO_CHARS) + \\\n ' in ' + self.ljust(str(record.funcName), _FUNCNAME_CHARS) + ' '\n\n # these lines are copied from the builtin logger\n if record.exc_info:\n # Cache the traceback text to avoid converting it multiple times\n # (it's constant anyway)\n if not record.exc_text:\n record.exc_text = self.formatException(record.exc_info)\n if record.exc_text:\n if msg[-1:] != \"\\n\":\n msg = msg + \"\\n\"\n msg = msg + record.exc_text\n\n prefix_len = len(prefix)\n if coloured:\n prefix_len -= (len(level) - _LEVEL_CHARS)\n pad = \"\\n\" + \" \" * (prefix_len)\n return prefix + pad.join(msg.split('\\n'))\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n level = record.levelname.ljust(_LEVEL_CHARS)\n\n return self._format(level, record, msg)\n\n\nclass SimColourLogFormatter(SimLogFormatter):\n \"\"\"Log formatter to provide consistent log message handling.\"\"\"\n\n loglevel2colour = {\n logging.DEBUG : \"%s\",\n logging.INFO : ANSI.COLOR_INFO + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.WARNING : ANSI.COLOR_WARNING + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.ERROR : ANSI.COLOR_ERROR + \"%s\" + ANSI.COLOR_DEFAULT,\n logging.CRITICAL: ANSI.COLOR_CRITICAL + \"%s\" + ANSI.COLOR_DEFAULT,\n }\n\n def format(self, record):\n \"\"\"Prettify the log output, annotate with simulation time\"\"\"\n\n msg = record.getMessage()\n\n # Need to colour each line in case coloring is applied in the message\n msg = '\\n'.join([SimColourLogFormatter.loglevel2colour[record.levelno] % line for line in msg.split('\\n')])\n level = (SimColourLogFormatter.loglevel2colour[record.levelno] %\n record.levelname.ljust(_LEVEL_CHARS))\n\n return self._format(level, record, msg, coloured=True)\n\n\ndef _filter_from_c(logger_name, level):\n return logging.getLogger(logger_name).isEnabledFor(level)\n\n\ndef _log_from_c(logger_name, level, filename, lineno, msg, function_name):\n \"\"\"\n This is for use from the C world, and allows us to insert C stack\n information.\n \"\"\"\n logger = logging.getLogger(logger_name)\n if logger.isEnabledFor(level):\n record = logger.makeRecord(\n logger.name,\n level,\n filename,\n lineno,\n msg,\n None,\n None,\n function_name\n )\n logger.handle(record)\n", "path": "cocotb/log.py"}]}
| 3,938 | 417 |
gh_patches_debug_6239
|
rasdani/github-patches
|
git_diff
|
searx__searx-1800
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[flickr_noapi] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)
Similar to #419
Installation: current master commit
How to reproduce? Search for "kek" on https://search.snopyta.org/ and click on "Images"
```
ERROR:flask.app:Exception on / [POST]
Traceback (most recent call last):
File "/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/searx/searx/webapp.py", line 544, in index
result['title'] = highlight_content(escape(result['title'] or u''), search_query.query)
File "/usr/local/searx/searx/utils.py", line 79, in highlight_content
if content.lower().find(query.lower()) > -1:
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)
```
</issue>
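A minimal, self-contained sketch of the failure mode, with illustrative variable names that are not taken from the codebase. On the Python 2.7 interpreter shown in the traceback, mixing a UTF-8 byte string with a unicode query raises exactly this UnicodeDecodeError; Python 3 raises a TypeError for the same mix of types.

```python
# -*- coding: utf-8 -*-
# `title` stands in for result['title'] after the engine called .encode('utf-8');
# `query` stands in for the unicode search query passed to highlight_content().
title = u"hélas".encode("utf-8")   # byte string containing 0xc3 0xa9
query = u"hélas"

try:
    # Python 2: str.find(unicode) implicitly decodes the byte string as ASCII
    # and fails on 0xc3. Python 3: bytes.find(str) raises TypeError instead.
    title.lower().find(query.lower())
except (UnicodeDecodeError, TypeError) as exc:
    print(exc)

# Keeping both sides as text of the same type avoids the implicit decode.
print(u"hélas".lower().find(query.lower()))  # 0
```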
<code>
[start of searx/engines/flickr_noapi.py]
1 #!/usr/bin/env python
2
3 """
4 Flickr (Images)
5
6 @website https://www.flickr.com
7 @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)
8
9 @using-api no
10 @results HTML
11 @stable no
12 @parse url, title, thumbnail, img_src
13 """
14
15 from json import loads
16 from time import time
17 import re
18 from searx.engines import logger
19 from searx.url_utils import urlencode
20 from searx.utils import ecma_unescape, html_to_text
21
22 logger = logger.getChild('flickr-noapi')
23
24 categories = ['images']
25
26 url = 'https://www.flickr.com/'
27 search_url = url + 'search?{query}&page={page}'
28 time_range_url = '&min_upload_date={start}&max_upload_date={end}'
29 photo_url = 'https://www.flickr.com/photos/{userid}/{photoid}'
30 modelexport_re = re.compile(r"^\s*modelExport:\s*({.*}),$", re.M)
31 image_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')
32
33 paging = True
34 time_range_support = True
35 time_range_dict = {'day': 60 * 60 * 24,
36 'week': 60 * 60 * 24 * 7,
37 'month': 60 * 60 * 24 * 7 * 4,
38 'year': 60 * 60 * 24 * 7 * 52}
39
40
41 def build_flickr_url(user_id, photo_id):
42 return photo_url.format(userid=user_id, photoid=photo_id)
43
44
45 def _get_time_range_url(time_range):
46 if time_range in time_range_dict:
47 return time_range_url.format(start=time(), end=str(int(time()) - time_range_dict[time_range]))
48 return ''
49
50
51 def request(query, params):
52 params['url'] = (search_url.format(query=urlencode({'text': query}), page=params['pageno'])
53 + _get_time_range_url(params['time_range']))
54 return params
55
56
57 def response(resp):
58 results = []
59
60 matches = modelexport_re.search(resp.text)
61
62 if matches is None:
63 return results
64
65 match = matches.group(1)
66 model_export = loads(match)
67
68 if 'legend' not in model_export:
69 return results
70
71 legend = model_export['legend']
72
73 # handle empty page
74 if not legend or not legend[0]:
75 return results
76
77 for index in legend:
78 photo = model_export['main'][index[0]][int(index[1])][index[2]][index[3]][int(index[4])]
79 author = ecma_unescape(photo.get('realname', ''))
80 source = ecma_unescape(photo.get('username', '')) + ' @ Flickr'
81 title = ecma_unescape(photo.get('title', ''))
82 content = html_to_text(ecma_unescape(photo.get('description', '')))
83 img_src = None
84 # From the biggest to the lowest format
85 for image_size in image_sizes:
86 if image_size in photo['sizes']:
87 img_src = photo['sizes'][image_size]['url']
88 img_format = 'jpg ' \
89 + str(photo['sizes'][image_size]['width']) \
90 + 'x' \
91 + str(photo['sizes'][image_size]['height'])
92 break
93
94 if not img_src:
95 logger.debug('cannot find valid image size: {0}'.format(repr(photo)))
96 continue
97
98 # For a bigger thumbnail, keep only the url_z, not the url_n
99 if 'n' in photo['sizes']:
100 thumbnail_src = photo['sizes']['n']['url']
101 elif 'z' in photo['sizes']:
102 thumbnail_src = photo['sizes']['z']['url']
103 else:
104 thumbnail_src = img_src
105
106 if 'ownerNsid' not in photo:
107 # should not happen, disowned photo? Show it anyway
108 url = img_src
109 else:
110 url = build_flickr_url(photo['ownerNsid'], photo['id'])
111
112 result = {
113 'url': url,
114 'img_src': img_src,
115 'thumbnail_src': thumbnail_src,
116 'source': source,
117 'img_format': img_format,
118 'template': 'images.html'
119 }
120 try:
121 result['author'] = author.encode('utf-8')
122 result['title'] = title.encode('utf-8')
123 result['content'] = content.encode('utf-8')
124 except:
125 result['author'] = ''
126 result['title'] = ''
127 result['content'] = ''
128 results.append(result)
129
130 return results
131
[end of searx/engines/flickr_noapi.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/flickr_noapi.py b/searx/engines/flickr_noapi.py
--- a/searx/engines/flickr_noapi.py
+++ b/searx/engines/flickr_noapi.py
@@ -118,9 +118,9 @@
'template': 'images.html'
}
try:
- result['author'] = author.encode('utf-8')
- result['title'] = title.encode('utf-8')
- result['content'] = content.encode('utf-8')
+ result['author'] = author
+ result['title'] = title
+ result['content'] = content
except:
result['author'] = ''
result['title'] = ''
|
{"golden_diff": "diff --git a/searx/engines/flickr_noapi.py b/searx/engines/flickr_noapi.py\n--- a/searx/engines/flickr_noapi.py\n+++ b/searx/engines/flickr_noapi.py\n@@ -118,9 +118,9 @@\n 'template': 'images.html'\n }\n try:\n- result['author'] = author.encode('utf-8')\n- result['title'] = title.encode('utf-8')\n- result['content'] = content.encode('utf-8')\n+ result['author'] = author\n+ result['title'] = title\n+ result['content'] = content\n except:\n result['author'] = ''\n result['title'] = ''\n", "issue": "[flickr_noapi] UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)\nSimilar to #419\r\n\r\nInstallation: current master commit\r\nHow to reproduce? Search for \"kek\" on https://search.snopyta.org/ and click on \"Images\"\r\n\r\n```\r\nERROR:flask.app:Exception on / [POST]\r\nTraceback (most recent call last):\r\n File \"/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py\", line 1718, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py\", line 1813, in full_dispatch_request\r\n rv = self.dispatch_request()\r\n File \"/usr/local/searx/searx-ve/local/lib/python2.7/site-packages/flask/app.py\", line 1799, in dispatch_request\r\n return self.view_functions[rule.endpoint](**req.view_args)\r\n File \"/usr/local/searx/searx/webapp.py\", line 544, in index\r\n result['title'] = highlight_content(escape(result['title'] or u''), search_query.query)\r\n File \"/usr/local/searx/searx/utils.py\", line 79, in highlight_content\r\n if content.lower().find(query.lower()) > -1:\r\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 1: ordinal not in range(128)\r\n```\n", "before_files": [{"content": "#!/usr/bin/env python\n\n\"\"\"\n Flickr (Images)\n\n @website https://www.flickr.com\n @provide-api yes (https://secure.flickr.com/services/api/flickr.photos.search.html)\n\n @using-api no\n @results HTML\n @stable no\n @parse url, title, thumbnail, img_src\n\"\"\"\n\nfrom json import loads\nfrom time import time\nimport re\nfrom searx.engines import logger\nfrom searx.url_utils import urlencode\nfrom searx.utils import ecma_unescape, html_to_text\n\nlogger = logger.getChild('flickr-noapi')\n\ncategories = ['images']\n\nurl = 'https://www.flickr.com/'\nsearch_url = url + 'search?{query}&page={page}'\ntime_range_url = '&min_upload_date={start}&max_upload_date={end}'\nphoto_url = 'https://www.flickr.com/photos/{userid}/{photoid}'\nmodelexport_re = re.compile(r\"^\\s*modelExport:\\s*({.*}),$\", re.M)\nimage_sizes = ('o', 'k', 'h', 'b', 'c', 'z', 'n', 'm', 't', 'q', 's')\n\npaging = True\ntime_range_support = True\ntime_range_dict = {'day': 60 * 60 * 24,\n 'week': 60 * 60 * 24 * 7,\n 'month': 60 * 60 * 24 * 7 * 4,\n 'year': 60 * 60 * 24 * 7 * 52}\n\n\ndef build_flickr_url(user_id, photo_id):\n return photo_url.format(userid=user_id, photoid=photo_id)\n\n\ndef _get_time_range_url(time_range):\n if time_range in time_range_dict:\n return time_range_url.format(start=time(), end=str(int(time()) - time_range_dict[time_range]))\n return ''\n\n\ndef request(query, params):\n params['url'] = 
(search_url.format(query=urlencode({'text': query}), page=params['pageno'])\n + _get_time_range_url(params['time_range']))\n return params\n\n\ndef response(resp):\n results = []\n\n matches = modelexport_re.search(resp.text)\n\n if matches is None:\n return results\n\n match = matches.group(1)\n model_export = loads(match)\n\n if 'legend' not in model_export:\n return results\n\n legend = model_export['legend']\n\n # handle empty page\n if not legend or not legend[0]:\n return results\n\n for index in legend:\n photo = model_export['main'][index[0]][int(index[1])][index[2]][index[3]][int(index[4])]\n author = ecma_unescape(photo.get('realname', ''))\n source = ecma_unescape(photo.get('username', '')) + ' @ Flickr'\n title = ecma_unescape(photo.get('title', ''))\n content = html_to_text(ecma_unescape(photo.get('description', '')))\n img_src = None\n # From the biggest to the lowest format\n for image_size in image_sizes:\n if image_size in photo['sizes']:\n img_src = photo['sizes'][image_size]['url']\n img_format = 'jpg ' \\\n + str(photo['sizes'][image_size]['width']) \\\n + 'x' \\\n + str(photo['sizes'][image_size]['height'])\n break\n\n if not img_src:\n logger.debug('cannot find valid image size: {0}'.format(repr(photo)))\n continue\n\n # For a bigger thumbnail, keep only the url_z, not the url_n\n if 'n' in photo['sizes']:\n thumbnail_src = photo['sizes']['n']['url']\n elif 'z' in photo['sizes']:\n thumbnail_src = photo['sizes']['z']['url']\n else:\n thumbnail_src = img_src\n\n if 'ownerNsid' not in photo:\n # should not happen, disowned photo? Show it anyway\n url = img_src\n else:\n url = build_flickr_url(photo['ownerNsid'], photo['id'])\n\n result = {\n 'url': url,\n 'img_src': img_src,\n 'thumbnail_src': thumbnail_src,\n 'source': source,\n 'img_format': img_format,\n 'template': 'images.html'\n }\n try:\n result['author'] = author.encode('utf-8')\n result['title'] = title.encode('utf-8')\n result['content'] = content.encode('utf-8')\n except:\n result['author'] = ''\n result['title'] = ''\n result['content'] = ''\n results.append(result)\n\n return results\n", "path": "searx/engines/flickr_noapi.py"}]}
| 2,321 | 169 |
gh_patches_debug_35248
|
rasdani/github-patches
|
git_diff
|
internetarchive__openlibrary-4049
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Books with non-ascii titles erroring when clicked from search
### Evidence / Screenshot (if possible)
### Relevant url?
<!-- `https://openlibrary.org/...` -->
### Steps to Reproduce
<!-- What steps caused you to find the bug? -->
1. Go to https://openlibrary.org/search?q=h%C3%A9las&mode=everything
2. Click on any book with a non-ascii character in the title
<!-- What actually happened after these steps? What did you expect to happen? -->
* Actual: 500 internal error on e.g. https://openlibrary.org/works/OL11565520W?edition=
* Expected: See the book
### Details
- **Logged in (Y/N)?** Y
- **Browser type/version?** FF82
- **Operating system?** Win10
- **Environment (prod/dev/local)?** prod
<!-- If not sure, put prod -->
### Proposal & Constraints
<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->
### Related files
<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. -->
### Stakeholders
@cclauss
</issue>
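A small sketch of one way to keep the redirect target ASCII-safe, shown outside the codebase. The slug value below is a hypothetical stand-in; OpenLibrary derives it with its own `h.urlsafe()` helper, whose exact output is not reproduced here.

```python
from urllib.parse import quote_plus

# Hypothetical slug-like "middle" segment for a work whose title contains a
# non-ASCII character; in the real code this value comes from h.urlsafe(title).
middle = "hélas_untitled"

readable_path = "/works/OL11565520W/" + quote_plus(middle)
print(readable_path)  # /works/OL11565520W/h%C3%A9las_untitled
```

Percent-encoding the human-readable part of the path keeps the eventual redirect free of raw non-ASCII bytes.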
<code>
[start of openlibrary/core/processors/readableurls.py]
1 """Various web.py application processors used in OL.
2 """
3 import os
4 import web
5
6 from infogami.utils.view import render
7 from openlibrary.core import helpers as h
8
9 from six.moves import urllib
10
11
12 try:
13 from booklending_utils.openlibrary import is_exclusion
14 except ImportError:
15 def is_exclusion(obj):
16 """Processor for determining whether records require exclusion"""
17 return False
18
19 class ReadableUrlProcessor:
20 """Open Library code works with urls like /books/OL1M and
21 /books/OL1M/edit. This processor seamlessly changes the urls to
22 /books/OL1M/title and /books/OL1M/title/edit.
23
24 The changequery function is also customized to support this.
25 """
26 patterns = [
27 (r'/\w+/OL\d+M', '/type/edition', 'title', 'untitled'),
28 (r'/\w+/ia:[a-zA-Z0-9_\.-]+', '/type/edition', 'title', 'untitled'),
29 (r'/\w+/OL\d+A', '/type/author', 'name', 'noname'),
30 (r'/\w+/OL\d+W', '/type/work', 'title', 'untitled'),
31 (r'/[/\w]+/OL\d+L', '/type/list', 'name', 'unnamed')
32 ]
33
34 def __call__(self, handler):
35 # temp hack to handle languages and users during upstream-to-www migration
36 if web.ctx.path.startswith("/l/"):
37 raise web.seeother("/languages/" + web.ctx.path[len("/l/"):])
38
39 if web.ctx.path.startswith("/user/"):
40 if not web.ctx.site.get(web.ctx.path):
41 raise web.seeother("/people/" + web.ctx.path[len("/user/"):])
42
43 real_path, readable_path = get_readable_path(web.ctx.site, web.ctx.path, self.patterns, encoding=web.ctx.encoding)
44
45 #@@ web.ctx.path is either quoted or unquoted depends on whether the application is running
46 #@@ using builtin-server or lighttpd. That is probably a bug in web.py.
47 #@@ take care of that case here till that is fixed.
48 # @@ Also, the redirection must be done only for GET requests.
49 if readable_path != web.ctx.path and readable_path != urllib.parse.quote(web.safestr(web.ctx.path)) and web.ctx.method == "GET":
50 raise web.redirect(web.safeunicode(readable_path) + web.safeunicode(web.ctx.query))
51
52 web.ctx.readable_path = readable_path
53 web.ctx.path = real_path
54 web.ctx.fullpath = web.ctx.path + web.ctx.query
55 out = handler()
56 V2_TYPES = ['works', 'books', 'people', 'authors',
57 'publishers', 'languages', 'account']
58 if out and any(web.ctx.path.startswith('/%s/' % _type) for _type in V2_TYPES):
59 out.v2 = True
60
61 # Exclude noindex items
62 if web.ctx.get('exclude'):
63 web.ctx.status = "404 Not Found"
64 return render.notfound(web.ctx.path)
65
66 return out
67
68
69 def _get_object(site, key):
70 """Returns the object with the given key.
71
72 If the key has an OLID and no object is found with that key, it tries to
73 find object with the same OLID. OL database makes sures that OLIDs are
74 unique.
75 """
76 obj = site.get(key)
77
78 if obj is None and key.startswith("/a/"):
79 key = "/authors/" + key[len("/a/"):]
80 obj = key and site.get(key)
81
82 if obj is None and key.startswith("/b/"):
83 key = "/books/" + key[len("/b/"):]
84 obj = key and site.get(key)
85
86 if obj is None and key.startswith("/user/"):
87 key = "/people/" + key[len("/user/"):]
88 obj = key and site.get(key)
89
90 basename = key.split("/")[-1]
91
92 # redirect all /.*/ia:foo to /books/ia:foo
93 if obj is None and basename.startswith("ia:"):
94 key = "/books/" + basename
95 obj = site.get(key)
96
97 # redirect all /.*/OL123W to /works/OL123W
98 if obj is None and basename.startswith("OL") and basename.endswith("W"):
99 key = "/works/" + basename
100 obj = site.get(key)
101
102 # redirect all /.*/OL123M to /books/OL123M
103 if obj is None and basename.startswith("OL") and basename.endswith("M"):
104 key = "/books/" + basename
105 obj = site.get(key)
106
107 # redirect all /.*/OL123A to /authors/OL123A
108 if obj is None and basename.startswith("OL") and basename.endswith("A"):
109 key = "/authors/" + basename
110 obj = site.get(key)
111
112 # Disabled temporarily as the index is not ready the db
113
114 #if obj is None and web.re_compile(r"/.*/OL\d+[A-Z]"):
115 # olid = web.safestr(key).split("/")[-1]
116 # key = site._request("/olid_to_key", data={"olid": olid}).key
117 # obj = key and site.get(key)
118 return obj
119
120 def get_readable_path(site, path, patterns, encoding=None):
121 """Returns real_path and readable_path from the given path.
122
123 The patterns is a list of (path_regex, type, property_name, default_value)
124 tuples.
125 """
126 def match(path):
127 for pat, type, property, default_title in patterns:
128 m = web.re_compile('^' + pat).match(path)
129 if m:
130 prefix = m.group()
131 extra = web.lstrips(path, prefix)
132 tokens = extra.split("/", 2)
133
134 # `extra` starts with "/". So first token is always empty.
135 middle = web.listget(tokens, 1, "")
136 suffix = web.listget(tokens, 2, "")
137 if suffix:
138 suffix = "/" + suffix
139
140 return type, property, default_title, prefix, middle, suffix
141 return None, None, None, None, None, None
142
143 type, property, default_title, prefix, middle, suffix = match(path)
144 if type is None:
145 path = web.safeunicode(path)
146 return (path, path)
147
148 if encoding is not None \
149 or path.endswith(".json") or path.endswith(".yml") or path.endswith(".rdf"):
150 key, ext = os.path.splitext(path)
151
152 thing = _get_object(site, key)
153 if thing:
154 path = thing.key + ext
155 path = web.safeunicode(path)
156 return (path, path)
157
158 thing = _get_object(site, prefix)
159
160 # get_object may handle redirections.
161 if thing:
162 prefix = thing.key
163
164 if thing and thing.type.key == type:
165 title = thing.get(property) or default_title
166 middle = '/' + h.urlsafe(title.strip())
167 else:
168 middle = ""
169
170 if is_exclusion(thing):
171 web.ctx.exclude = True
172
173 prefix = web.safeunicode(prefix)
174 middle = web.safeunicode(middle)
175 suffix = web.safeunicode(suffix)
176
177 return (prefix + suffix, prefix + middle + suffix)
178
[end of openlibrary/core/processors/readableurls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/openlibrary/core/processors/readableurls.py b/openlibrary/core/processors/readableurls.py
--- a/openlibrary/core/processors/readableurls.py
+++ b/openlibrary/core/processors/readableurls.py
@@ -1,5 +1,6 @@
"""Various web.py application processors used in OL.
"""
+import logging
import os
import web
@@ -8,6 +9,7 @@
from six.moves import urllib
+logger = logging.getLogger("openlibrary.readableurls")
try:
from booklending_utils.openlibrary import is_exclusion
@@ -123,8 +125,9 @@
The patterns is a list of (path_regex, type, property_name, default_value)
tuples.
"""
+
def match(path):
- for pat, type, property, default_title in patterns:
+ for pat, _type, _property, default_title in patterns:
m = web.re_compile('^' + pat).match(path)
if m:
prefix = m.group()
@@ -137,11 +140,12 @@
if suffix:
suffix = "/" + suffix
- return type, property, default_title, prefix, middle, suffix
+ return _type, _property, default_title, prefix, middle, suffix
return None, None, None, None, None, None
- type, property, default_title, prefix, middle, suffix = match(path)
- if type is None:
+ _type, _property, default_title, prefix, middle, suffix = match(path)
+
+ if _type is None:
path = web.safeunicode(path)
return (path, path)
@@ -161,9 +165,14 @@
if thing:
prefix = thing.key
- if thing and thing.type.key == type:
- title = thing.get(property) or default_title
- middle = '/' + h.urlsafe(title.strip())
+ if thing and thing.type.key == _type:
+ title = thing.get(_property) or default_title
+ try:
+ # Explicitly only run for python3 to solve #4033
+ from urllib.parse import quote_plus
+ middle = '/' + quote_plus(h.urlsafe(title.strip()))
+ except ImportError:
+ middle = '/' + h.urlsafe(title.strip())
else:
middle = ""
|
{"golden_diff": "diff --git a/openlibrary/core/processors/readableurls.py b/openlibrary/core/processors/readableurls.py\n--- a/openlibrary/core/processors/readableurls.py\n+++ b/openlibrary/core/processors/readableurls.py\n@@ -1,5 +1,6 @@\n \"\"\"Various web.py application processors used in OL.\n \"\"\"\n+import logging\n import os\n import web\n \n@@ -8,6 +9,7 @@\n \n from six.moves import urllib\n \n+logger = logging.getLogger(\"openlibrary.readableurls\")\n \n try:\n from booklending_utils.openlibrary import is_exclusion\n@@ -123,8 +125,9 @@\n The patterns is a list of (path_regex, type, property_name, default_value)\n tuples.\n \"\"\"\n+\n def match(path):\n- for pat, type, property, default_title in patterns:\n+ for pat, _type, _property, default_title in patterns:\n m = web.re_compile('^' + pat).match(path)\n if m:\n prefix = m.group()\n@@ -137,11 +140,12 @@\n if suffix:\n suffix = \"/\" + suffix\n \n- return type, property, default_title, prefix, middle, suffix\n+ return _type, _property, default_title, prefix, middle, suffix\n return None, None, None, None, None, None\n \n- type, property, default_title, prefix, middle, suffix = match(path)\n- if type is None:\n+ _type, _property, default_title, prefix, middle, suffix = match(path)\n+\n+ if _type is None:\n path = web.safeunicode(path)\n return (path, path)\n \n@@ -161,9 +165,14 @@\n if thing:\n prefix = thing.key\n \n- if thing and thing.type.key == type:\n- title = thing.get(property) or default_title\n- middle = '/' + h.urlsafe(title.strip())\n+ if thing and thing.type.key == _type:\n+ title = thing.get(_property) or default_title\n+ try:\n+ # Explicitly only run for python3 to solve #4033\n+ from urllib.parse import quote_plus\n+ middle = '/' + quote_plus(h.urlsafe(title.strip()))\n+ except ImportError:\n+ middle = '/' + h.urlsafe(title.strip())\n else:\n middle = \"\"\n", "issue": "Books with non-ascii titles erroring when clicked from search\n### Evidence / Screenshot (if possible)\r\n\r\n### Relevant url?\r\n<!-- `https://openlibrary.org/...` -->\r\n\r\n### Steps to Reproduce\r\n<!-- What steps caused you to find the bug? -->\r\n1. Go to https://openlibrary.org/search?q=h%C3%A9las&mode=everything\r\n2. Click on any book with a non-ascii character in the title\r\n\r\n<!-- What actually happened after these steps? What did you expect to happen? -->\r\n* Actual: 500 internal error on e.g. https://openlibrary.org/works/OL11565520W?edition=\r\n* Expected: See the book\r\n\r\n### Details\r\n\r\n- **Logged in (Y/N)?** Y\r\n- **Browser type/version?** FF82\r\n- **Operating system?** Win10\r\n- **Environment (prod/dev/local)?** prod\r\n<!-- If not sure, put prod -->\r\n\r\n### Proposal & Constraints\r\n<!-- What is the proposed solution / implementation? Is there a precedent of this approach succeeding elsewhere? -->\r\n\r\n### Related files\r\n<!-- Files related to this issue; this is super useful for new contributors who might want to help! If you're not sure, leave this blank; a maintainer will add them. 
-->\r\n\r\n### Stakeholders\r\n@cclauss \r\n\n", "before_files": [{"content": "\"\"\"Various web.py application processors used in OL.\n\"\"\"\nimport os\nimport web\n\nfrom infogami.utils.view import render\nfrom openlibrary.core import helpers as h\n\nfrom six.moves import urllib\n\n\ntry:\n from booklending_utils.openlibrary import is_exclusion\nexcept ImportError:\n def is_exclusion(obj):\n \"\"\"Processor for determining whether records require exclusion\"\"\"\n return False\n\nclass ReadableUrlProcessor:\n \"\"\"Open Library code works with urls like /books/OL1M and\n /books/OL1M/edit. This processor seamlessly changes the urls to\n /books/OL1M/title and /books/OL1M/title/edit.\n\n The changequery function is also customized to support this.\n \"\"\"\n patterns = [\n (r'/\\w+/OL\\d+M', '/type/edition', 'title', 'untitled'),\n (r'/\\w+/ia:[a-zA-Z0-9_\\.-]+', '/type/edition', 'title', 'untitled'),\n (r'/\\w+/OL\\d+A', '/type/author', 'name', 'noname'),\n (r'/\\w+/OL\\d+W', '/type/work', 'title', 'untitled'),\n (r'/[/\\w]+/OL\\d+L', '/type/list', 'name', 'unnamed')\n ]\n\n def __call__(self, handler):\n # temp hack to handle languages and users during upstream-to-www migration\n if web.ctx.path.startswith(\"/l/\"):\n raise web.seeother(\"/languages/\" + web.ctx.path[len(\"/l/\"):])\n\n if web.ctx.path.startswith(\"/user/\"):\n if not web.ctx.site.get(web.ctx.path):\n raise web.seeother(\"/people/\" + web.ctx.path[len(\"/user/\"):])\n\n real_path, readable_path = get_readable_path(web.ctx.site, web.ctx.path, self.patterns, encoding=web.ctx.encoding)\n\n #@@ web.ctx.path is either quoted or unquoted depends on whether the application is running\n #@@ using builtin-server or lighttpd. That is probably a bug in web.py.\n #@@ take care of that case here till that is fixed.\n # @@ Also, the redirection must be done only for GET requests.\n if readable_path != web.ctx.path and readable_path != urllib.parse.quote(web.safestr(web.ctx.path)) and web.ctx.method == \"GET\":\n raise web.redirect(web.safeunicode(readable_path) + web.safeunicode(web.ctx.query))\n\n web.ctx.readable_path = readable_path\n web.ctx.path = real_path\n web.ctx.fullpath = web.ctx.path + web.ctx.query\n out = handler()\n V2_TYPES = ['works', 'books', 'people', 'authors',\n 'publishers', 'languages', 'account']\n if out and any(web.ctx.path.startswith('/%s/' % _type) for _type in V2_TYPES):\n out.v2 = True\n\n # Exclude noindex items\n if web.ctx.get('exclude'):\n web.ctx.status = \"404 Not Found\"\n return render.notfound(web.ctx.path)\n\n return out\n\n\ndef _get_object(site, key):\n \"\"\"Returns the object with the given key.\n\n If the key has an OLID and no object is found with that key, it tries to\n find object with the same OLID. 
OL database makes sures that OLIDs are\n unique.\n \"\"\"\n obj = site.get(key)\n\n if obj is None and key.startswith(\"/a/\"):\n key = \"/authors/\" + key[len(\"/a/\"):]\n obj = key and site.get(key)\n\n if obj is None and key.startswith(\"/b/\"):\n key = \"/books/\" + key[len(\"/b/\"):]\n obj = key and site.get(key)\n\n if obj is None and key.startswith(\"/user/\"):\n key = \"/people/\" + key[len(\"/user/\"):]\n obj = key and site.get(key)\n\n basename = key.split(\"/\")[-1]\n\n # redirect all /.*/ia:foo to /books/ia:foo\n if obj is None and basename.startswith(\"ia:\"):\n key = \"/books/\" + basename\n obj = site.get(key)\n\n # redirect all /.*/OL123W to /works/OL123W\n if obj is None and basename.startswith(\"OL\") and basename.endswith(\"W\"):\n key = \"/works/\" + basename\n obj = site.get(key)\n\n # redirect all /.*/OL123M to /books/OL123M\n if obj is None and basename.startswith(\"OL\") and basename.endswith(\"M\"):\n key = \"/books/\" + basename\n obj = site.get(key)\n\n # redirect all /.*/OL123A to /authors/OL123A\n if obj is None and basename.startswith(\"OL\") and basename.endswith(\"A\"):\n key = \"/authors/\" + basename\n obj = site.get(key)\n\n # Disabled temporarily as the index is not ready the db\n\n #if obj is None and web.re_compile(r\"/.*/OL\\d+[A-Z]\"):\n # olid = web.safestr(key).split(\"/\")[-1]\n # key = site._request(\"/olid_to_key\", data={\"olid\": olid}).key\n # obj = key and site.get(key)\n return obj\n\ndef get_readable_path(site, path, patterns, encoding=None):\n \"\"\"Returns real_path and readable_path from the given path.\n\n The patterns is a list of (path_regex, type, property_name, default_value)\n tuples.\n \"\"\"\n def match(path):\n for pat, type, property, default_title in patterns:\n m = web.re_compile('^' + pat).match(path)\n if m:\n prefix = m.group()\n extra = web.lstrips(path, prefix)\n tokens = extra.split(\"/\", 2)\n\n # `extra` starts with \"/\". So first token is always empty.\n middle = web.listget(tokens, 1, \"\")\n suffix = web.listget(tokens, 2, \"\")\n if suffix:\n suffix = \"/\" + suffix\n\n return type, property, default_title, prefix, middle, suffix\n return None, None, None, None, None, None\n\n type, property, default_title, prefix, middle, suffix = match(path)\n if type is None:\n path = web.safeunicode(path)\n return (path, path)\n\n if encoding is not None \\\n or path.endswith(\".json\") or path.endswith(\".yml\") or path.endswith(\".rdf\"):\n key, ext = os.path.splitext(path)\n\n thing = _get_object(site, key)\n if thing:\n path = thing.key + ext\n path = web.safeunicode(path)\n return (path, path)\n\n thing = _get_object(site, prefix)\n\n # get_object may handle redirections.\n if thing:\n prefix = thing.key\n\n if thing and thing.type.key == type:\n title = thing.get(property) or default_title\n middle = '/' + h.urlsafe(title.strip())\n else:\n middle = \"\"\n\n if is_exclusion(thing):\n web.ctx.exclude = True\n\n prefix = web.safeunicode(prefix)\n middle = web.safeunicode(middle)\n suffix = web.safeunicode(suffix)\n\n return (prefix + suffix, prefix + middle + suffix)\n", "path": "openlibrary/core/processors/readableurls.py"}]}
| 2,849 | 521 |
gh_patches_debug_23348
|
rasdani/github-patches
|
git_diff
|
wagtail__wagtail-8210
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tags over 100 characters
Found a bug? Please fill out the sections below. 👍
### Issue Summary
When adding a tag while using the ClusterTaggableManager class, if the tag name is greater than the character limit for the database column, no validation error is given.
### Steps to Reproduce
1. login to admin and edit a page with a tag content panel
2. create a tag with more than 100 characters
3. save, or publish the page
### Technical details
* Python version: Python 3.5.1
* Django version: 1.11.13
* Wagtail version: 1.13.1
</issue>
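A short sketch of the kind of check that would surface the problem in the form, written as a standalone helper rather than Wagtail's actual implementation. The 100-character limit matches taggit's default `max_length` for the tag name column; projects with a custom tag model may use a different value.

```python
from django.core.exceptions import ValidationError

MAX_TAG_LENGTH = 100  # taggit's default max_length for Tag.name

def validate_tag_lengths(tag_names, max_length=MAX_TAG_LENGTH):
    """Raise a form-level error instead of letting the database reject the row."""
    too_long = [name for name in tag_names if len(name) > max_length]
    if too_long:
        raise ValidationError(
            "Tag(s) %s are over %d characters" % (", ".join(too_long), max_length)
        )
    return tag_names
```

Hooking a check like this into the tag field's `clean()` method reports the overlong tag in the page editor instead of failing at save time.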
<code>
[start of wagtail/admin/forms/tags.py]
1 from taggit.forms import TagField as TaggitTagField
2 from taggit.models import Tag
3
4 from wagtail.admin.widgets import AdminTagWidget
5
6
7 class TagField(TaggitTagField):
8 """
9 Extends taggit's TagField with the option to prevent creating tags that do not already exist
10 """
11
12 widget = AdminTagWidget
13
14 def __init__(self, *args, **kwargs):
15 self.tag_model = kwargs.pop("tag_model", None)
16 self.free_tagging = kwargs.pop("free_tagging", None)
17
18 super().__init__(*args, **kwargs)
19
20 # pass on tag_model and free_tagging kwargs to the widget,
21 # if (and only if) they have been passed explicitly here.
22 # Otherwise, set default values for clean() to use
23 if self.tag_model is None:
24 self.tag_model = Tag
25 else:
26 self.widget.tag_model = self.tag_model
27
28 if self.free_tagging is None:
29 self.free_tagging = getattr(self.tag_model, "free_tagging", True)
30 else:
31 self.widget.free_tagging = self.free_tagging
32
33 def clean(self, value):
34 value = super().clean(value)
35
36 if not self.free_tagging:
37 # filter value to just the tags that already exist in tag_model
38 value = list(
39 self.tag_model.objects.filter(name__in=value).values_list(
40 "name", flat=True
41 )
42 )
43
44 return value
45
[end of wagtail/admin/forms/tags.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/wagtail/admin/forms/tags.py b/wagtail/admin/forms/tags.py
--- a/wagtail/admin/forms/tags.py
+++ b/wagtail/admin/forms/tags.py
@@ -1,3 +1,5 @@
+from django.core.exceptions import ValidationError
+from django.utils.translation import gettext_lazy as _
from taggit.forms import TagField as TaggitTagField
from taggit.models import Tag
@@ -31,8 +33,27 @@
self.widget.free_tagging = self.free_tagging
def clean(self, value):
+
value = super().clean(value)
+ max_tag_length = self.tag_model.name.field.max_length
+ value_too_long = ""
+ for val in value:
+ if len(val) > max_tag_length:
+ if value_too_long:
+ value_too_long += ", "
+ value_too_long += val
+ if value_too_long:
+ raise ValidationError(
+ _(
+ "Tag(s) %(value_too_long)s are over %(max_tag_length)d characters"
+ % {
+ "value_too_long": value_too_long,
+ "max_tag_length": max_tag_length,
+ }
+ )
+ )
+
if not self.free_tagging:
# filter value to just the tags that already exist in tag_model
value = list(
|
{"golden_diff": "diff --git a/wagtail/admin/forms/tags.py b/wagtail/admin/forms/tags.py\n--- a/wagtail/admin/forms/tags.py\n+++ b/wagtail/admin/forms/tags.py\n@@ -1,3 +1,5 @@\n+from django.core.exceptions import ValidationError\n+from django.utils.translation import gettext_lazy as _\n from taggit.forms import TagField as TaggitTagField\n from taggit.models import Tag\n \n@@ -31,8 +33,27 @@\n self.widget.free_tagging = self.free_tagging\n \n def clean(self, value):\n+\n value = super().clean(value)\n \n+ max_tag_length = self.tag_model.name.field.max_length\n+ value_too_long = \"\"\n+ for val in value:\n+ if len(val) > max_tag_length:\n+ if value_too_long:\n+ value_too_long += \", \"\n+ value_too_long += val\n+ if value_too_long:\n+ raise ValidationError(\n+ _(\n+ \"Tag(s) %(value_too_long)s are over %(max_tag_length)d characters\"\n+ % {\n+ \"value_too_long\": value_too_long,\n+ \"max_tag_length\": max_tag_length,\n+ }\n+ )\n+ )\n+\n if not self.free_tagging:\n # filter value to just the tags that already exist in tag_model\n value = list(\n", "issue": "Tags over 100 characters\nFound a bug? Please fill out the sections below. \ud83d\udc4d\r\n\r\n### Issue Summary\r\n\r\nWhen adding a tag while using the ClusterTaggableManager class, if the tag name is greater than the character limit for the database column no validation error is given.\r\n\r\n### Steps to Reproduce\r\n\r\n1. login to admin and edit a page with a tag content panel\r\n2. create a tag with more than 100 characters\r\n3. save, or publish the page \r\n\r\n### Technical details\r\n\r\n* Python version: Python 3.5.1\r\n* Django version: 1.11.13\r\n* Wagtail version: 1.13.1\nTags over 100 characters\nFound a bug? Please fill out the sections below. \ud83d\udc4d\r\n\r\n### Issue Summary\r\n\r\nWhen adding a tag while using the ClusterTaggableManager class, if the tag name is greater than the character limit for the database column no validation error is given.\r\n\r\n### Steps to Reproduce\r\n\r\n1. login to admin and edit a page with a tag content panel\r\n2. create a tag with more than 100 characters\r\n3. save, or publish the page \r\n\r\n### Technical details\r\n\r\n* Python version: Python 3.5.1\r\n* Django version: 1.11.13\r\n* Wagtail version: 1.13.1\n", "before_files": [{"content": "from taggit.forms import TagField as TaggitTagField\nfrom taggit.models import Tag\n\nfrom wagtail.admin.widgets import AdminTagWidget\n\n\nclass TagField(TaggitTagField):\n \"\"\"\n Extends taggit's TagField with the option to prevent creating tags that do not already exist\n \"\"\"\n\n widget = AdminTagWidget\n\n def __init__(self, *args, **kwargs):\n self.tag_model = kwargs.pop(\"tag_model\", None)\n self.free_tagging = kwargs.pop(\"free_tagging\", None)\n\n super().__init__(*args, **kwargs)\n\n # pass on tag_model and free_tagging kwargs to the widget,\n # if (and only if) they have been passed explicitly here.\n # Otherwise, set default values for clean() to use\n if self.tag_model is None:\n self.tag_model = Tag\n else:\n self.widget.tag_model = self.tag_model\n\n if self.free_tagging is None:\n self.free_tagging = getattr(self.tag_model, \"free_tagging\", True)\n else:\n self.widget.free_tagging = self.free_tagging\n\n def clean(self, value):\n value = super().clean(value)\n\n if not self.free_tagging:\n # filter value to just the tags that already exist in tag_model\n value = list(\n self.tag_model.objects.filter(name__in=value).values_list(\n \"name\", flat=True\n )\n )\n\n return value\n", "path": "wagtail/admin/forms/tags.py"}]}
| 1,225 | 300 |
gh_patches_debug_24605
|
rasdani/github-patches
|
git_diff
|
lutris__lutris-1180
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Game Sync Bug (Proton, Steamworks banner, etc.)
If you ever install a Winesteam game or use Steam Play with Proton from the beta branch, Lutris picks up some of Steam's helper applications during the library sync and shows them as game banners:

This happens because these helper applications have their own Steam IDs,
in this case:
Proton 3.7 Beta: 930400
Proton 3.7: 858280
(The Steamworks SDK from Wine Steam also has an ID, but I don't have it installed at the moment.)
As a user, you can "delete" such an entry via the context menu in the Lutris main window, but that does not stick: after a restart it is synced and shown again.
Please fix this :+1:
</issue>
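A sketch of one possible direction: treat the appids reported above as Steam tooling rather than games and skip their appmanifests during the sync. The filename handling is simplified for illustration; Lutris parses appmanifests with a regex, and the second appid below is purely illustrative.

```python
# Appids taken from the report above: Proton 3.7 Beta and Proton 3.7.
NON_GAME_STEAMIDS = {"930400", "858280"}

def filter_game_appmanifests(appmanifest_files):
    """Drop appmanifests that belong to Steam tooling rather than games."""
    games = []
    for filename in appmanifest_files:
        steamid = filename.replace("appmanifest_", "").replace(".acf", "")
        if steamid in NON_GAME_STEAMIDS:
            continue
        games.append(filename)
    return games

print(filter_game_appmanifests(["appmanifest_930400.acf", "appmanifest_123456.acf"]))
# ['appmanifest_123456.acf']
```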
<code>
[start of lutris/services/steam.py]
1 import os
2 import re
3 from collections import defaultdict
4
5 from lutris import pga
6 from lutris.util.log import logger
7 from lutris.util.steam import vdf_parse
8 from lutris.util.system import fix_path_case
9 from lutris.util.strings import slugify
10 from lutris.config import make_game_config_id, LutrisConfig
11
12 NAME = 'Steam'
13
14 APP_STATE_FLAGS = [
15 "Invalid",
16 "Uninstalled",
17 "Update Required",
18 "Fully Installed",
19 "Encrypted",
20 "Locked",
21 "Files Missing",
22 "AppRunning",
23 "Files Corrupt",
24 "Update Running",
25 "Update Paused",
26 "Update Started",
27 "Uninstalling",
28 "Backup Running",
29 "Reconfiguring",
30 "Validating",
31 "Adding Files",
32 "Preallocating",
33 "Downloading",
34 "Staging",
35 "Committing",
36 "Update Stopping"
37 ]
38
39
40 class AppManifest:
41 def __init__(self, appmanifest_path):
42 self.appmanifest_path = appmanifest_path
43 self.steamapps_path, filename = os.path.split(appmanifest_path)
44 self.steamid = re.findall(r'(\d+)', filename)[-1]
45 if os.path.exists(appmanifest_path):
46 with open(appmanifest_path, "r") as appmanifest_file:
47 self.appmanifest_data = vdf_parse(appmanifest_file, {})
48
49 def __repr__(self):
50 return "<AppManifest: %s>" % self.appmanifest_path
51
52 @property
53 def app_state(self):
54 return self.appmanifest_data.get('AppState') or {}
55
56 @property
57 def user_config(self):
58 return self.app_state.get('UserConfig') or {}
59
60 @property
61 def name(self):
62 _name = self.app_state.get('name')
63 if not _name:
64 _name = self.user_config.get('name')
65 return _name
66
67 @property
68 def slug(self):
69 return slugify(self.name)
70
71 @property
72 def installdir(self):
73 return self.app_state.get('installdir')
74
75 @property
76 def states(self):
77 """Return the states of a Steam game."""
78 states = []
79 state_flags = self.app_state.get('StateFlags', 0)
80 state_flags = bin(int(state_flags))[:1:-1]
81 for index, flag in enumerate(state_flags):
82 if flag == '1':
83 states.append(APP_STATE_FLAGS[index + 1])
84 return states
85
86 def is_installed(self):
87 return 'Fully Installed' in self.states
88
89 def get_install_path(self):
90 if not self.installdir:
91 return
92 install_path = fix_path_case(os.path.join(self.steamapps_path, "common",
93 self.installdir))
94 if install_path:
95 return install_path
96
97 def get_platform(self):
98 steamapps_paths = get_steamapps_paths()
99 if self.steamapps_path in steamapps_paths['linux']:
100 return 'linux'
101 elif self.steamapps_path in steamapps_paths['windows']:
102 return 'windows'
103 else:
104 raise ValueError("Can't find %s in %s"
105 % (self.steamapps_path, steamapps_paths))
106
107 def get_runner_name(self):
108 platform = self.get_platform()
109 if platform == 'linux':
110 return 'steam'
111 else:
112 return 'winesteam'
113
114
115 def get_appmanifests(steamapps_path):
116 """Return the list for all appmanifest files in a Steam library folder"""
117 return [f for f in os.listdir(steamapps_path)
118 if re.match(r'^appmanifest_\d+.acf$', f)]
119
120
121 def get_steamapps_paths_for_platform(platform_name):
122 from lutris.runners import winesteam, steam
123 runners = {
124 'linux': steam.steam,
125 'windows': winesteam.winesteam
126 }
127 runner = runners[platform_name]()
128 return runner.get_steamapps_dirs()
129
130
131 def get_steamapps_paths(flat=False, platform=None):
132 base_platforms = ['linux', 'windows']
133 if flat:
134 steamapps_paths = []
135 else:
136 steamapps_paths = defaultdict(list)
137
138 if platform:
139 if platform not in base_platforms:
140 raise ValueError("Illegal value for Steam platform: %s" % platform)
141 platforms = [platform]
142 else:
143 platforms = base_platforms
144
145 for platform in platforms:
146 folders = get_steamapps_paths_for_platform(platform)
147 if flat:
148 steamapps_paths += folders
149 else:
150 steamapps_paths[platform] = folders
151
152 return steamapps_paths
153
154
155 def get_appmanifest_from_appid(steamapps_path, appid):
156 """Given the steam apps path and appid, return the corresponding appmanifest"""
157 if not steamapps_path:
158 raise ValueError("steamapps_path is mandatory")
159 if not os.path.exists(steamapps_path):
160 raise IOError("steamapps_path must be a valid directory")
161 if not appid:
162 raise ValueError("Missing mandatory appid")
163 appmanifest_path = os.path.join(steamapps_path, "appmanifest_%s.acf" % appid)
164 if not os.path.exists(appmanifest_path):
165 return
166 return AppManifest(appmanifest_path)
167
168
169 def get_path_from_appmanifest(steamapps_path, appid):
170 """Return the path where a Steam game is installed."""
171 appmanifest = get_appmanifest_from_appid(steamapps_path, appid)
172 if not appmanifest:
173 return
174 return appmanifest.get_install_path()
175
176
177 def mark_as_installed(steamid, runner_name, game_info):
178 for key in ['name', 'slug']:
179 assert game_info[key]
180 logger.info("Setting %s as installed" % game_info['name'])
181 config_id = (game_info.get('config_path') or make_game_config_id(game_info['slug']))
182 game_id = pga.add_or_update(
183 steamid=int(steamid),
184 name=game_info['name'],
185 runner=runner_name,
186 slug=game_info['slug'],
187 installed=1,
188 configpath=config_id,
189 )
190
191 game_config = LutrisConfig(
192 runner_slug=runner_name,
193 game_config_id=config_id,
194 )
195 game_config.raw_game_config.update({'appid': steamid})
196 game_config.save()
197 return game_id
198
199
200 def mark_as_uninstalled(game_info):
201 for key in ('id', 'name'):
202 if key not in game_info:
203 raise ValueError("Missing %s field in %s" % (key, game_info))
204 logger.info('Setting %s as uninstalled' % game_info['name'])
205 game_id = pga.add_or_update(
206 id=game_info['id'],
207 runner='',
208 installed=0
209 )
210 return game_id
211
212
213 def sync_appmanifest_state(appmanifest_path, name=None, slug=None):
214 try:
215 appmanifest = AppManifest(appmanifest_path)
216 except Exception:
217 logger.error("Unable to parse file %s", appmanifest_path)
218 return
219 if appmanifest.is_installed():
220 game_info = {
221 'name': name or appmanifest.name,
222 'slug': slug or appmanifest.slug,
223 }
224 runner_name = appmanifest.get_runner_name()
225 mark_as_installed(appmanifest.steamid, runner_name, game_info)
226
227
228 def sync_with_lutris(platform='linux'):
229 steamapps_paths = get_steamapps_paths()
230 steam_games_in_lutris = pga.get_games_where(steamid__isnull=False, steamid__not='')
231 steamids_in_lutris = set([str(game['steamid']) for game in steam_games_in_lutris])
232 seen_ids = set() # Set of Steam appids seen while browsing AppManifests
233
234 for steamapps_path in steamapps_paths[platform]:
235 appmanifests = get_appmanifests(steamapps_path)
236 for appmanifest_file in appmanifests:
237 steamid = re.findall(r'(\d+)', appmanifest_file)[0]
238 seen_ids.add(steamid)
239 appmanifest_path = os.path.join(steamapps_path, appmanifest_file)
240 if steamid not in steamids_in_lutris:
241 # New Steam game, not seen before in Lutris,
242 if platform != 'linux':
243 # Windows games might require additional steps.
244 # TODO: Find a way to mark games as "Not fully configured"
245 # as the status.
246 logger.warning("Importing Steam game %s but game might require additional configuration")
247 sync_appmanifest_state(appmanifest_path)
248 else:
249 # Lookup previously installed Steam games
250 pga_entry = None
251 for game in steam_games_in_lutris:
252 if str(game['steamid']) == steamid and not game['installed']:
253 pga_entry = game
254 break
255 if pga_entry:
256 sync_appmanifest_state(appmanifest_path, name=pga_entry['name'], slug=pga_entry['slug'])
257 unavailable_ids = steamids_in_lutris.difference(seen_ids)
258 for steamid in unavailable_ids:
259 for game in steam_games_in_lutris:
260 runner = 'steam' if platform == 'linux' else 'winesteam'
261 if str(game['steamid']) == steamid \
262 and game['installed'] \
263 and game['runner'] == runner:
264 mark_as_uninstalled(game)
265
[end of lutris/services/steam.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/lutris/services/steam.py b/lutris/services/steam.py
--- a/lutris/services/steam.py
+++ b/lutris/services/steam.py
@@ -228,6 +228,7 @@
def sync_with_lutris(platform='linux'):
steamapps_paths = get_steamapps_paths()
steam_games_in_lutris = pga.get_games_where(steamid__isnull=False, steamid__not='')
+ proton_ids = ["858280", "930400"]
steamids_in_lutris = set([str(game['steamid']) for game in steam_games_in_lutris])
seen_ids = set() # Set of Steam appids seen while browsing AppManifests
@@ -237,7 +238,7 @@
steamid = re.findall(r'(\d+)', appmanifest_file)[0]
seen_ids.add(steamid)
appmanifest_path = os.path.join(steamapps_path, appmanifest_file)
- if steamid not in steamids_in_lutris:
+ if steamid not in steamids_in_lutris and steamid not in proton_ids:
# New Steam game, not seen before in Lutris,
if platform != 'linux':
# Windows games might require additional steps.
|
{"golden_diff": "diff --git a/lutris/services/steam.py b/lutris/services/steam.py\n--- a/lutris/services/steam.py\n+++ b/lutris/services/steam.py\n@@ -228,6 +228,7 @@\n def sync_with_lutris(platform='linux'):\n steamapps_paths = get_steamapps_paths()\n steam_games_in_lutris = pga.get_games_where(steamid__isnull=False, steamid__not='')\n+ proton_ids = [\"858280\", \"930400\"]\n steamids_in_lutris = set([str(game['steamid']) for game in steam_games_in_lutris])\n seen_ids = set() # Set of Steam appids seen while browsing AppManifests\n \n@@ -237,7 +238,7 @@\n steamid = re.findall(r'(\\d+)', appmanifest_file)[0]\n seen_ids.add(steamid)\n appmanifest_path = os.path.join(steamapps_path, appmanifest_file)\n- if steamid not in steamids_in_lutris:\n+ if steamid not in steamids_in_lutris and steamid not in proton_ids:\n # New Steam game, not seen before in Lutris,\n if platform != 'linux':\n # Windows games might require additional steps.\n", "issue": "Game Sync Bug (Proton,Steamworks Banner etc.)\nIf you ever install a Winesteam Game or use Steam Play with Proton from the beta branch, Lutris scan some necessary Steam Applications and show it as Gamebanner:\r\n\r\n\r\n\r\nthis happen cause the Application have a SteamID:\r\n\r\nin this case:\r\nProton 3.7 Beta: 930400\r\nProton 3.7: 858280\r\n\r\n(also Steamworks SDK from Wine-Steam have a ID, but i dont have it installed atm)\r\n\r\nAs a User, we can \"delete\" it via the Context Menu from the Lutris Main Window but it dont work permanent, cause after a restart it get again synced and shown.\r\n\r\nPls to fix :+1: \r\n\r\n\r\n\r\n\n", "before_files": [{"content": "import os\nimport re\nfrom collections import defaultdict\n\nfrom lutris import pga\nfrom lutris.util.log import logger\nfrom lutris.util.steam import vdf_parse\nfrom lutris.util.system import fix_path_case\nfrom lutris.util.strings import slugify\nfrom lutris.config import make_game_config_id, LutrisConfig\n\nNAME = 'Steam'\n\nAPP_STATE_FLAGS = [\n \"Invalid\",\n \"Uninstalled\",\n \"Update Required\",\n \"Fully Installed\",\n \"Encrypted\",\n \"Locked\",\n \"Files Missing\",\n \"AppRunning\",\n \"Files Corrupt\",\n \"Update Running\",\n \"Update Paused\",\n \"Update Started\",\n \"Uninstalling\",\n \"Backup Running\",\n \"Reconfiguring\",\n \"Validating\",\n \"Adding Files\",\n \"Preallocating\",\n \"Downloading\",\n \"Staging\",\n \"Committing\",\n \"Update Stopping\"\n]\n\n\nclass AppManifest:\n def __init__(self, appmanifest_path):\n self.appmanifest_path = appmanifest_path\n self.steamapps_path, filename = os.path.split(appmanifest_path)\n self.steamid = re.findall(r'(\\d+)', filename)[-1]\n if os.path.exists(appmanifest_path):\n with open(appmanifest_path, \"r\") as appmanifest_file:\n self.appmanifest_data = vdf_parse(appmanifest_file, {})\n\n def __repr__(self):\n return \"<AppManifest: %s>\" % self.appmanifest_path\n\n @property\n def app_state(self):\n return self.appmanifest_data.get('AppState') or {}\n\n @property\n def user_config(self):\n return self.app_state.get('UserConfig') or {}\n\n @property\n def name(self):\n _name = self.app_state.get('name')\n if not _name:\n _name = self.user_config.get('name')\n return _name\n\n @property\n def slug(self):\n return slugify(self.name)\n\n @property\n def installdir(self):\n return self.app_state.get('installdir')\n\n @property\n def states(self):\n \"\"\"Return the states of a Steam game.\"\"\"\n states = []\n state_flags = self.app_state.get('StateFlags', 0)\n state_flags = bin(int(state_flags))[:1:-1]\n for index, flag in 
enumerate(state_flags):\n if flag == '1':\n states.append(APP_STATE_FLAGS[index + 1])\n return states\n\n def is_installed(self):\n return 'Fully Installed' in self.states\n\n def get_install_path(self):\n if not self.installdir:\n return\n install_path = fix_path_case(os.path.join(self.steamapps_path, \"common\",\n self.installdir))\n if install_path:\n return install_path\n\n def get_platform(self):\n steamapps_paths = get_steamapps_paths()\n if self.steamapps_path in steamapps_paths['linux']:\n return 'linux'\n elif self.steamapps_path in steamapps_paths['windows']:\n return 'windows'\n else:\n raise ValueError(\"Can't find %s in %s\"\n % (self.steamapps_path, steamapps_paths))\n\n def get_runner_name(self):\n platform = self.get_platform()\n if platform == 'linux':\n return 'steam'\n else:\n return 'winesteam'\n\n\ndef get_appmanifests(steamapps_path):\n \"\"\"Return the list for all appmanifest files in a Steam library folder\"\"\"\n return [f for f in os.listdir(steamapps_path)\n if re.match(r'^appmanifest_\\d+.acf$', f)]\n\n\ndef get_steamapps_paths_for_platform(platform_name):\n from lutris.runners import winesteam, steam\n runners = {\n 'linux': steam.steam,\n 'windows': winesteam.winesteam\n }\n runner = runners[platform_name]()\n return runner.get_steamapps_dirs()\n\n\ndef get_steamapps_paths(flat=False, platform=None):\n base_platforms = ['linux', 'windows']\n if flat:\n steamapps_paths = []\n else:\n steamapps_paths = defaultdict(list)\n\n if platform:\n if platform not in base_platforms:\n raise ValueError(\"Illegal value for Steam platform: %s\" % platform)\n platforms = [platform]\n else:\n platforms = base_platforms\n\n for platform in platforms:\n folders = get_steamapps_paths_for_platform(platform)\n if flat:\n steamapps_paths += folders\n else:\n steamapps_paths[platform] = folders\n\n return steamapps_paths\n\n\ndef get_appmanifest_from_appid(steamapps_path, appid):\n \"\"\"Given the steam apps path and appid, return the corresponding appmanifest\"\"\"\n if not steamapps_path:\n raise ValueError(\"steamapps_path is mandatory\")\n if not os.path.exists(steamapps_path):\n raise IOError(\"steamapps_path must be a valid directory\")\n if not appid:\n raise ValueError(\"Missing mandatory appid\")\n appmanifest_path = os.path.join(steamapps_path, \"appmanifest_%s.acf\" % appid)\n if not os.path.exists(appmanifest_path):\n return\n return AppManifest(appmanifest_path)\n\n\ndef get_path_from_appmanifest(steamapps_path, appid):\n \"\"\"Return the path where a Steam game is installed.\"\"\"\n appmanifest = get_appmanifest_from_appid(steamapps_path, appid)\n if not appmanifest:\n return\n return appmanifest.get_install_path()\n\n\ndef mark_as_installed(steamid, runner_name, game_info):\n for key in ['name', 'slug']:\n assert game_info[key]\n logger.info(\"Setting %s as installed\" % game_info['name'])\n config_id = (game_info.get('config_path') or make_game_config_id(game_info['slug']))\n game_id = pga.add_or_update(\n steamid=int(steamid),\n name=game_info['name'],\n runner=runner_name,\n slug=game_info['slug'],\n installed=1,\n configpath=config_id,\n )\n\n game_config = LutrisConfig(\n runner_slug=runner_name,\n game_config_id=config_id,\n )\n game_config.raw_game_config.update({'appid': steamid})\n game_config.save()\n return game_id\n\n\ndef mark_as_uninstalled(game_info):\n for key in ('id', 'name'):\n if key not in game_info:\n raise ValueError(\"Missing %s field in %s\" % (key, game_info))\n logger.info('Setting %s as uninstalled' % game_info['name'])\n game_id = 
pga.add_or_update(\n id=game_info['id'],\n runner='',\n installed=0\n )\n return game_id\n\n\ndef sync_appmanifest_state(appmanifest_path, name=None, slug=None):\n try:\n appmanifest = AppManifest(appmanifest_path)\n except Exception:\n logger.error(\"Unable to parse file %s\", appmanifest_path)\n return\n if appmanifest.is_installed():\n game_info = {\n 'name': name or appmanifest.name,\n 'slug': slug or appmanifest.slug,\n }\n runner_name = appmanifest.get_runner_name()\n mark_as_installed(appmanifest.steamid, runner_name, game_info)\n\n\ndef sync_with_lutris(platform='linux'):\n steamapps_paths = get_steamapps_paths()\n steam_games_in_lutris = pga.get_games_where(steamid__isnull=False, steamid__not='')\n steamids_in_lutris = set([str(game['steamid']) for game in steam_games_in_lutris])\n seen_ids = set() # Set of Steam appids seen while browsing AppManifests\n\n for steamapps_path in steamapps_paths[platform]:\n appmanifests = get_appmanifests(steamapps_path)\n for appmanifest_file in appmanifests:\n steamid = re.findall(r'(\\d+)', appmanifest_file)[0]\n seen_ids.add(steamid)\n appmanifest_path = os.path.join(steamapps_path, appmanifest_file)\n if steamid not in steamids_in_lutris:\n # New Steam game, not seen before in Lutris,\n if platform != 'linux':\n # Windows games might require additional steps.\n # TODO: Find a way to mark games as \"Not fully configured\"\n # as the status.\n logger.warning(\"Importing Steam game %s but game might require additional configuration\")\n sync_appmanifest_state(appmanifest_path)\n else:\n # Lookup previously installed Steam games\n pga_entry = None\n for game in steam_games_in_lutris:\n if str(game['steamid']) == steamid and not game['installed']:\n pga_entry = game\n break\n if pga_entry:\n sync_appmanifest_state(appmanifest_path, name=pga_entry['name'], slug=pga_entry['slug'])\n unavailable_ids = steamids_in_lutris.difference(seen_ids)\n for steamid in unavailable_ids:\n for game in steam_games_in_lutris:\n runner = 'steam' if platform == 'linux' else 'winesteam'\n if str(game['steamid']) == steamid \\\n and game['installed'] \\\n and game['runner'] == runner:\n mark_as_uninstalled(game)\n", "path": "lutris/services/steam.py"}]}
| 3,469 | 290 |
gh_patches_debug_66972
|
rasdani/github-patches
|
git_diff
|
pandas-dev__pandas-19628
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
DateTimeIndex.__iter__().next() rounds time to microseconds, when timezone aware
#### Code Sample
```python
>> import pandas as pd
>> datetimeindex = pd.DatetimeIndex(["2018-02-08 15:00:00.168456358"])
>> datetimeindex
DatetimeIndex(['2018-02-08 15:00:00.168456358'], dtype='datetime64[ns]', freq=None)
>> datetimeindex = datetimeindex.tz_localize(datetime.timezone.utc)
>> datetimeindex
DatetimeIndex(['2018-02-08 15:00:00.168456358+00:00'], dtype='datetime64[ns, UTC+00:00]', freq=None)
>> datetimeindex.__getitem__(0)
Timestamp('2018-02-08 15:00:00.168456358+0000', tz='UTC+00:00')
>> datetimeindex.__iter__().__next__()
Timestamp('2018-02-08 15:00:00.168456+0000', tz='UTC+00:00')
```
#### Problem description
When using a localized DatetimeIndex with nanosecond precision, __getitem__ behaviour differs from __iter__().__next__ behaviour: when iterating through the DatetimeIndex, the timestamp is rounded to microseconds. This does not happen if the DatetimeIndex has no timezone.
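Until the iterator is fixed, a workaround is to index by position instead of iterating, since __getitem__ keeps the full precision — a minimal sketch (the `.nanosecond` value follows from the timestamp above):
```python
import datetime
import pandas as pd

idx = pd.DatetimeIndex(["2018-02-08 15:00:00.168456358"]).tz_localize(datetime.timezone.utc)

# __getitem__ keeps nanosecond precision even when tz-aware
assert idx[0].nanosecond == 358

# iterating by position avoids the microsecond rounding seen with iter(idx)
for ts in (idx[i] for i in range(len(idx))):
    print(ts)  # 2018-02-08 15:00:00.168456358+00:00
```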
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.4.2.final.0
python-bits: 64
OS: Linux
OS-release: 4.9.0-0.bpo.2-amd64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.22.0
pytest: None
pip: 9.0.1
setuptools: 36.5.0
Cython: None
numpy: 1.14.0
scipy: 1.0.0
pyarrow: None
xarray: None
IPython: 6.2.1
sphinx: None
patsy: None
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: 2.0.2
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
</issue>
<code>
[start of pandas/conftest.py]
1 import pytest
2
3 from distutils.version import LooseVersion
4 import numpy
5 import pandas
6 import dateutil
7 import pandas.util._test_decorators as td
8
9
10 def pytest_addoption(parser):
11 parser.addoption("--skip-slow", action="store_true",
12 help="skip slow tests")
13 parser.addoption("--skip-network", action="store_true",
14 help="skip network tests")
15 parser.addoption("--run-high-memory", action="store_true",
16 help="run high memory tests")
17 parser.addoption("--only-slow", action="store_true",
18 help="run only slow tests")
19
20
21 def pytest_runtest_setup(item):
22 if 'slow' in item.keywords and item.config.getoption("--skip-slow"):
23 pytest.skip("skipping due to --skip-slow")
24
25 if 'slow' not in item.keywords and item.config.getoption("--only-slow"):
26 pytest.skip("skipping due to --only-slow")
27
28 if 'network' in item.keywords and item.config.getoption("--skip-network"):
29 pytest.skip("skipping due to --skip-network")
30
31 if 'high_memory' in item.keywords and not item.config.getoption(
32 "--run-high-memory"):
33 pytest.skip(
34 "skipping high memory test since --run-high-memory was not set")
35
36
37 # Configurations for all tests and all test modules
38
39 @pytest.fixture(autouse=True)
40 def configure_tests():
41 pandas.set_option('chained_assignment', 'raise')
42
43
44 # For running doctests: make np and pd names available
45
46 @pytest.fixture(autouse=True)
47 def add_imports(doctest_namespace):
48 doctest_namespace['np'] = numpy
49 doctest_namespace['pd'] = pandas
50
51
52 @pytest.fixture(params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])
53 def spmatrix(request):
54 from scipy import sparse
55 return getattr(sparse, request.param + '_matrix')
56
57
58 @pytest.fixture
59 def ip():
60 """
61 Get an instance of IPython.InteractiveShell.
62
63 Will raise a skip if IPython is not installed.
64 """
65
66 pytest.importorskip('IPython', minversion="6.0.0")
67 from IPython.core.interactiveshell import InteractiveShell
68 return InteractiveShell()
69
70
71 is_dateutil_le_261 = pytest.mark.skipif(
72 LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),
73 reason="dateutil api change version")
74 is_dateutil_gt_261 = pytest.mark.skipif(
75 LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),
76 reason="dateutil stable version")
77
78
79 @pytest.fixture(params=[None, 'gzip', 'bz2', 'zip',
80 pytest.param('xz', marks=td.skip_if_no_lzma)])
81 def compression(request):
82 """
83 Fixture for trying common compression types in compression tests
84 """
85 return request.param
86
87
88 @pytest.fixture(params=[None, 'gzip', 'bz2',
89 pytest.param('xz', marks=td.skip_if_no_lzma)])
90 def compression_no_zip(request):
91 """
92 Fixture for trying common compression types in compression tests
93 except zip
94 """
95 return request.param
96
[end of pandas/conftest.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pandas/conftest.py b/pandas/conftest.py
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -93,3 +93,9 @@
except zip
"""
return request.param
+
+
+@pytest.fixture(scope='module')
+def datetime_tz_utc():
+ from datetime import timezone
+ return timezone.utc
|
{"golden_diff": "diff --git a/pandas/conftest.py b/pandas/conftest.py\n--- a/pandas/conftest.py\n+++ b/pandas/conftest.py\n@@ -93,3 +93,9 @@\n except zip\n \"\"\"\n return request.param\n+\n+\[email protected](scope='module')\n+def datetime_tz_utc():\n+ from datetime import timezone\n+ return timezone.utc\n", "issue": "DateTimeIndex.__iter__().next() rounds time to microseconds, when timezone aware\n#### Code Sample\r\n\r\n```python\r\n>> import pandas as pd\r\n>> datetimeindex = pd.DatetimeIndex([\"2018-02-08 15:00:00.168456358\"])\r\n>> datetimeindex\r\nDatetimeIndex(['2018-02-08 15:00:00.168456358'], dtype='datetime64[ns]', freq=None)\r\n>> datetimeindex = datetimeindex.tz_localize(datetime.timezone.utc)\r\n>> datetimeindex\r\nDatetimeIndex(['2018-02-08 15:00:00.168456358+00:00'], dtype='datetime64[ns, UTC+00:00]', freq=None)\r\n>> datetimeindex.__getitem__(0)\r\nTimestamp('2018-02-08 15:00:00.168456358+0000', tz='UTC+00:00')\r\n>> datetimeindex.__iter__().__next__()\r\nTimestamp('2018-02-08 15:00:00.168456+0000', tz='UTC+00:00')\r\n```\r\n#### Problem description\r\n\r\nWhen using localize DateTimeIndex with nanosecond precision, __getitem__ behavious differs from __iter__().__next__ behaviour, as when iterating thought the DateTimeIndex the date is round to microseconds. This doen not happends if the DatetimeIndex has no timezone.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n<details>\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.4.2.final.0\r\npython-bits: 64\r\nOS: Linux\r\nOS-release: 4.9.0-0.bpo.2-amd64\r\nmachine: x86_64\r\nprocessor: \r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.22.0\r\npytest: None\r\npip: 9.0.1\r\nsetuptools: 36.5.0\r\nCython: None\r\nnumpy: 1.14.0\r\nscipy: 1.0.0\r\npyarrow: None\r\nxarray: None\r\nIPython: 6.2.1\r\nsphinx: None\r\npatsy: None\r\ndateutil: 2.6.1\r\npytz: 2017.3\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: None\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: None\r\ns3fs: None\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n</details>\r\n\n", "before_files": [{"content": "import pytest\n\nfrom distutils.version import LooseVersion\nimport numpy\nimport pandas\nimport dateutil\nimport pandas.util._test_decorators as td\n\n\ndef pytest_addoption(parser):\n parser.addoption(\"--skip-slow\", action=\"store_true\",\n help=\"skip slow tests\")\n parser.addoption(\"--skip-network\", action=\"store_true\",\n help=\"skip network tests\")\n parser.addoption(\"--run-high-memory\", action=\"store_true\",\n help=\"run high memory tests\")\n parser.addoption(\"--only-slow\", action=\"store_true\",\n help=\"run only slow tests\")\n\n\ndef pytest_runtest_setup(item):\n if 'slow' in item.keywords and item.config.getoption(\"--skip-slow\"):\n pytest.skip(\"skipping due to --skip-slow\")\n\n if 'slow' not in item.keywords and item.config.getoption(\"--only-slow\"):\n pytest.skip(\"skipping due to --only-slow\")\n\n if 'network' in item.keywords and item.config.getoption(\"--skip-network\"):\n pytest.skip(\"skipping due to --skip-network\")\n\n if 'high_memory' in item.keywords and not item.config.getoption(\n \"--run-high-memory\"):\n pytest.skip(\n \"skipping high memory test since --run-high-memory was not set\")\n\n\n# Configurations for all tests and all test 
modules\n\[email protected](autouse=True)\ndef configure_tests():\n pandas.set_option('chained_assignment', 'raise')\n\n\n# For running doctests: make np and pd names available\n\[email protected](autouse=True)\ndef add_imports(doctest_namespace):\n doctest_namespace['np'] = numpy\n doctest_namespace['pd'] = pandas\n\n\[email protected](params=['bsr', 'coo', 'csc', 'csr', 'dia', 'dok', 'lil'])\ndef spmatrix(request):\n from scipy import sparse\n return getattr(sparse, request.param + '_matrix')\n\n\[email protected]\ndef ip():\n \"\"\"\n Get an instance of IPython.InteractiveShell.\n\n Will raise a skip if IPython is not installed.\n \"\"\"\n\n pytest.importorskip('IPython', minversion=\"6.0.0\")\n from IPython.core.interactiveshell import InteractiveShell\n return InteractiveShell()\n\n\nis_dateutil_le_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) > LooseVersion('2.6.1'),\n reason=\"dateutil api change version\")\nis_dateutil_gt_261 = pytest.mark.skipif(\n LooseVersion(dateutil.__version__) <= LooseVersion('2.6.1'),\n reason=\"dateutil stable version\")\n\n\[email protected](params=[None, 'gzip', 'bz2', 'zip',\n pytest.param('xz', marks=td.skip_if_no_lzma)])\ndef compression(request):\n \"\"\"\n Fixture for trying common compression types in compression tests\n \"\"\"\n return request.param\n\n\[email protected](params=[None, 'gzip', 'bz2',\n pytest.param('xz', marks=td.skip_if_no_lzma)])\ndef compression_no_zip(request):\n \"\"\"\n Fixture for trying common compression types in compression tests\n except zip\n \"\"\"\n return request.param\n", "path": "pandas/conftest.py"}]}
| 2,122 | 89 |
gh_patches_debug_14039
|
rasdani/github-patches
|
git_diff
|
conda__conda-5236
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
An unexpected error has occurred.
Current conda install:
platform : win-64
conda version : 4.3.11
conda is private : False
conda-env version : 4.3.11
conda-build version : 2.0.2
python version : 2.7.12.final.0
requests version : 2.13.0
root environment : I:\Program Files\Anaconda2 (writable)
default environment : I:\Program Files\Anaconda2
envs directories : I:\Program Files\Anaconda2\envs
C:\Users\topnet\AppData\Local\conda\conda\envs
C:\Users\topnet\.conda\envs
package cache : I:\Program Files\Anaconda2\pkgs
C:\Users\topnet\AppData\Local\conda\conda\pkgs
channel URLs : https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/win-64
https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
https://repo.continuum.io/pkgs/free/win-64
https://repo.continuum.io/pkgs/free/noarch
https://repo.continuum.io/pkgs/r/win-64
https://repo.continuum.io/pkgs/r/noarch
https://repo.continuum.io/pkgs/pro/win-64
https://repo.continuum.io/pkgs/pro/noarch
https://repo.continuum.io/pkgs/msys2/win-64
https://repo.continuum.io/pkgs/msys2/noarch
config file : C:\Users\topnet\.condarc
offline mode : False
user-agent : conda/4.3.11 requests/2.13.0 CPython/2.7.12 Windows/10 Windows/10.0.14393
`$ I:\Program Files\Anaconda2\Scripts\conda-script.py install numpy`
Traceback (most recent call last):
File "I:\Program Files\Anaconda2\lib\site-packages\conda\exceptions.py", line 616, in conda_exception_handler
return_value = func(*args, **kwargs)
File "I:\Program Files\Anaconda2\lib\site-packages\conda\cli\main.py", line 137, in _main
exit_code = args.func(args, p)
File "I:\Program Files\Anaconda2\lib\site-packages\conda\cli\main_install.py", line 80, in execute
install(args, parser, 'install')
File "I:\Program Files\Anaconda2\lib\site-packages\conda\cli\install.py", line 359, in install
execute_actions(actions, index, verbose=not context.quiet)
File "I:\Program Files\Anaconda2\lib\site-packages\conda\plan.py", line 825, in execute_actions
execute_instructions(plan, index, verbose)
File "I:\Program Files\Anaconda2\lib\site-packages\conda\instructions.py", line 258, in execute_instructions
cmd(state, arg)
File "I:\Program Files\Anaconda2\lib\site-packages\conda\instructions.py", line 111, in PROGRESSIVEFETCHEXTRACT_CMD
progressive_fetch_extract.execute()
File "I:\Program Files\Anaconda2\lib\site-packages\conda\core\package_cache.py", line 470, in execute
self._execute_action(action)
File "I:\Program Files\Anaconda2\lib\site-packages\conda\core\package_cache.py", line 486, in _execute_action
exceptions.append(CondaError(repr(e)))
File "I:\Program Files\Anaconda2\lib\site-packages\conda\__init__.py", line 43, in __repr__
return '%s: %s\n' % (self.__class__.__name__, text_type(self))
File "I:\Program Files\Anaconda2\lib\site-packages\conda\__init__.py", line 46, in __str__
return text_type(self.message % self._kwargs)
TypeError: not enough arguments for format string
thanks!
</issue>
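The failing call is `self.message % self._kwargs` in `CondaError.__str__`: here `message` is `repr(e)` of a wrapped exception and `_kwargs` is an empty dict, so as soon as that repr happens to contain printf-style `%` sequences the formatting runs out of arguments. A minimal sketch of that failure mode (the class and message text below are stand-ins, not conda's real values):
```python
class SketchError(Exception):
    """Stand-in for CondaError; only __str__ mirrors the real class."""
    def __init__(self, message, **kwargs):
        self.message = message
        self._kwargs = kwargs  # empty dict when no kwargs are passed
        super().__init__(message)

    def __str__(self):
        # mirrors: return text_type(self.message % self._kwargs)
        return str(self.message % self._kwargs)

# repr() of an underlying error that happens to contain two %s-like placeholders
err = SketchError("ValueError('unconverted data remains: %s (expected %s)',)")
try:
    str(err)
except TypeError as exc:
    print(exc)  # not enough arguments for format string
```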
<code>
[start of conda/__init__.py]
1 # (c) 2012-2016 Continuum Analytics, Inc. / http://continuum.io
2 # All Rights Reserved
3 #
4 # conda is distributed under the terms of the BSD 3-clause license.
5 # Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.
6 """OS-agnostic, system-level binary package manager."""
7 from __future__ import absolute_import, division, print_function, unicode_literals
8
9 import os
10 from os.path import dirname
11 import sys
12
13 from ._vendor.auxlib.packaging import get_version
14 from .common.compat import iteritems, text_type
15
16 __all__ = (
17 "__name__", "__version__", "__author__", "__email__", "__license__", "__summary__", "__url__",
18 "CONDA_PACKAGE_ROOT", "CondaError", "CondaMultiError", "CondaExitZero", "conda_signal_handler",
19 )
20
21 __name__ = "conda"
22 __version__ = get_version(__file__)
23 __author__ = "Continuum Analytics, Inc."
24 __email__ = "[email protected]"
25 __license__ = "BSD"
26 __summary__ = __doc__
27 __url__ = "https://github.com/conda/conda"
28
29
30 if os.getenv('CONDA_ROOT') is None:
31 os.environ[str('CONDA_ROOT')] = sys.prefix
32
33 CONDA_PACKAGE_ROOT = dirname(__file__)
34
35
36 class CondaError(Exception):
37 def __init__(self, message, caused_by=None, **kwargs):
38 self.message = message
39 self._kwargs = kwargs
40 self._caused_by = caused_by
41 super(CondaError, self).__init__(message)
42
43 def __repr__(self):
44 return '%s: %s' % (self.__class__.__name__, text_type(self))
45
46 def __str__(self):
47 return text_type(self.message % self._kwargs)
48
49 def dump_map(self):
50 result = dict((k, v) for k, v in iteritems(vars(self)) if not k.startswith('_'))
51 result.update(exception_type=text_type(type(self)),
52 exception_name=self.__class__.__name__,
53 message=text_type(self),
54 error=repr(self),
55 caused_by=repr(self._caused_by),
56 **self._kwargs)
57 return result
58
59
60 class CondaMultiError(CondaError):
61
62 def __init__(self, errors):
63 self.errors = errors
64 super(CondaError, self).__init__(None)
65
66 def __repr__(self):
67 return '\n'.join(repr(e) for e in self.errors) + '\n'
68
69 def __str__(self):
70 return '\n'.join(text_type(e) for e in self.errors) + '\n'
71
72 def dump_map(self):
73 return dict(exception_type=text_type(type(self)),
74 exception_name=self.__class__.__name__,
75 errors=tuple(error.dump_map() for error in self.errors),
76 error="Multiple Errors Encountered.",
77 )
78
79
80 class CondaExitZero(CondaError):
81 pass
82
83
84 ACTIVE_SUBPROCESSES = set()
85
86
87 def conda_signal_handler(signum, frame):
88 # This function is in the base __init__.py so that it can be monkey-patched by other code
89 # if downstream conda users so choose. The biggest danger of monkey-patching is that
90 # unlink/link transactions don't get rolled back if interrupted mid-transaction.
91 for p in ACTIVE_SUBPROCESSES:
92 if p.poll() is None:
93 p.send_signal(signum)
94
95 from .exceptions import CondaSignalInterrupt
96 raise CondaSignalInterrupt(signum)
97
[end of conda/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/conda/__init__.py b/conda/__init__.py
--- a/conda/__init__.py
+++ b/conda/__init__.py
@@ -44,7 +44,19 @@
return '%s: %s' % (self.__class__.__name__, text_type(self))
def __str__(self):
- return text_type(self.message % self._kwargs)
+ try:
+ return text_type(self.message % self._kwargs)
+ except TypeError:
+ # TypeError: not enough arguments for format string
+ debug_message = "\n".join((
+ "class: " + self.__class__.__name__,
+ "message:",
+ self.message,
+ "kwargs:",
+ text_type(self._kwargs),
+ ))
+ sys.stderr.write(debug_message)
+ raise
def dump_map(self):
result = dict((k, v) for k, v in iteritems(vars(self)) if not k.startswith('_'))
|
{"golden_diff": "diff --git a/conda/__init__.py b/conda/__init__.py\n--- a/conda/__init__.py\n+++ b/conda/__init__.py\n@@ -44,7 +44,19 @@\n return '%s: %s' % (self.__class__.__name__, text_type(self))\n \n def __str__(self):\n- return text_type(self.message % self._kwargs)\n+ try:\n+ return text_type(self.message % self._kwargs)\n+ except TypeError:\n+ # TypeError: not enough arguments for format string\n+ debug_message = \"\\n\".join((\n+ \"class: \" + self.__class__.__name__,\n+ \"message:\",\n+ self.message,\n+ \"kwargs:\",\n+ text_type(self._kwargs),\n+ ))\n+ sys.stderr.write(debug_message)\n+ raise\n \n def dump_map(self):\n result = dict((k, v) for k, v in iteritems(vars(self)) if not k.startswith('_'))\n", "issue": "An unexpected error has occurred.\nCurrent conda install:\r\n\r\n platform : win-64\r\n conda version : 4.3.11\r\n conda is private : False\r\n conda-env version : 4.3.11\r\n conda-build version : 2.0.2\r\n python version : 2.7.12.final.0\r\n requests version : 2.13.0\r\n root environment : I:\\Program Files\\Anaconda2 (writable)\r\n default environment : I:\\Program Files\\Anaconda2\r\n envs directories : I:\\Program Files\\Anaconda2\\envs\r\n C:\\Users\\topnet\\AppData\\Local\\conda\\conda\\envs\r\n C:\\Users\\topnet\\.conda\\envs\r\n package cache : I:\\Program Files\\Anaconda2\\pkgs\r\n C:\\Users\\topnet\\AppData\\Local\\conda\\conda\\pkgs\r\n channel URLs : https://conda.anaconda.org/conda-forge/win-64\r\n https://conda.anaconda.org/conda-forge/noarch\r\n https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/win-64\r\n https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch\r\n https://repo.continuum.io/pkgs/free/win-64\r\n https://repo.continuum.io/pkgs/free/noarch\r\n https://repo.continuum.io/pkgs/r/win-64\r\n https://repo.continuum.io/pkgs/r/noarch\r\n https://repo.continuum.io/pkgs/pro/win-64\r\n https://repo.continuum.io/pkgs/pro/noarch\r\n https://repo.continuum.io/pkgs/msys2/win-64\r\n https://repo.continuum.io/pkgs/msys2/noarch\r\n config file : C:\\Users\\topnet\\.condarc\r\n offline mode : False\r\n user-agent : conda/4.3.11 requests/2.13.0 CPython/2.7.12 Windows/10 Windows/10.0.14393\r\n\r\n\r\n\r\n`$ I:\\Program Files\\Anaconda2\\Scripts\\conda-script.py install numpy`\r\n\r\n\r\n\r\n\r\n Traceback (most recent call last):\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\exceptions.py\", line 616, in conda_exception_handler\r\n return_value = func(*args, **kwargs)\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\cli\\main.py\", line 137, in _main\r\n exit_code = args.func(args, p)\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\cli\\main_install.py\", line 80, in execute\r\n install(args, parser, 'install')\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\cli\\install.py\", line 359, in install\r\n execute_actions(actions, index, verbose=not context.quiet)\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\plan.py\", line 825, in execute_actions\r\n execute_instructions(plan, index, verbose)\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\instructions.py\", line 258, in execute_instructions\r\n cmd(state, arg)\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\instructions.py\", line 111, in PROGRESSIVEFETCHEXTRACT_CMD\r\n progressive_fetch_extract.execute()\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\core\\package_cache.py\", line 470, in execute\r\n self._execute_action(action)\r\n File 
\"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\core\\package_cache.py\", line 486, in _execute_action\r\n exceptions.append(CondaError(repr(e)))\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\__init__.py\", line 43, in __repr__\r\n return '%s: %s\\n' % (self.__class__.__name__, text_type(self))\r\n File \"I:\\Program Files\\Anaconda2\\lib\\site-packages\\conda\\__init__.py\", line 46, in __str__\r\n return text_type(self.message % self._kwargs)\r\n TypeError: not enough arguments for format string\r\n\r\nthanks\uff01\n", "before_files": [{"content": "# (c) 2012-2016 Continuum Analytics, Inc. / http://continuum.io\n# All Rights Reserved\n#\n# conda is distributed under the terms of the BSD 3-clause license.\n# Consult LICENSE.txt or http://opensource.org/licenses/BSD-3-Clause.\n\"\"\"OS-agnostic, system-level binary package manager.\"\"\"\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport os\nfrom os.path import dirname\nimport sys\n\nfrom ._vendor.auxlib.packaging import get_version\nfrom .common.compat import iteritems, text_type\n\n__all__ = (\n \"__name__\", \"__version__\", \"__author__\", \"__email__\", \"__license__\", \"__summary__\", \"__url__\",\n \"CONDA_PACKAGE_ROOT\", \"CondaError\", \"CondaMultiError\", \"CondaExitZero\", \"conda_signal_handler\",\n)\n\n__name__ = \"conda\"\n__version__ = get_version(__file__)\n__author__ = \"Continuum Analytics, Inc.\"\n__email__ = \"[email protected]\"\n__license__ = \"BSD\"\n__summary__ = __doc__\n__url__ = \"https://github.com/conda/conda\"\n\n\nif os.getenv('CONDA_ROOT') is None:\n os.environ[str('CONDA_ROOT')] = sys.prefix\n\nCONDA_PACKAGE_ROOT = dirname(__file__)\n\n\nclass CondaError(Exception):\n def __init__(self, message, caused_by=None, **kwargs):\n self.message = message\n self._kwargs = kwargs\n self._caused_by = caused_by\n super(CondaError, self).__init__(message)\n\n def __repr__(self):\n return '%s: %s' % (self.__class__.__name__, text_type(self))\n\n def __str__(self):\n return text_type(self.message % self._kwargs)\n\n def dump_map(self):\n result = dict((k, v) for k, v in iteritems(vars(self)) if not k.startswith('_'))\n result.update(exception_type=text_type(type(self)),\n exception_name=self.__class__.__name__,\n message=text_type(self),\n error=repr(self),\n caused_by=repr(self._caused_by),\n **self._kwargs)\n return result\n\n\nclass CondaMultiError(CondaError):\n\n def __init__(self, errors):\n self.errors = errors\n super(CondaError, self).__init__(None)\n\n def __repr__(self):\n return '\\n'.join(repr(e) for e in self.errors) + '\\n'\n\n def __str__(self):\n return '\\n'.join(text_type(e) for e in self.errors) + '\\n'\n\n def dump_map(self):\n return dict(exception_type=text_type(type(self)),\n exception_name=self.__class__.__name__,\n errors=tuple(error.dump_map() for error in self.errors),\n error=\"Multiple Errors Encountered.\",\n )\n\n\nclass CondaExitZero(CondaError):\n pass\n\n\nACTIVE_SUBPROCESSES = set()\n\n\ndef conda_signal_handler(signum, frame):\n # This function is in the base __init__.py so that it can be monkey-patched by other code\n # if downstream conda users so choose. The biggest danger of monkey-patching is that\n # unlink/link transactions don't get rolled back if interrupted mid-transaction.\n for p in ACTIVE_SUBPROCESSES:\n if p.poll() is None:\n p.send_signal(signum)\n\n from .exceptions import CondaSignalInterrupt\n raise CondaSignalInterrupt(signum)\n", "path": "conda/__init__.py"}]}
| 2,518 | 214 |
gh_patches_debug_18191
|
rasdani/github-patches
|
git_diff
|
python-discord__bot-997
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Charinfo doesn't escape backticks
The Unicode escapes are formatted in code blocks. When the embed tries to also render a literal backtick, this ends up interfering with the code blocks and creating a mess.

</issue>
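The diff further below fixes this by running the displayed character through `discord.utils.escape_markdown`; a small sketch of the effect (the f-string loosely mirrors `get_info`, the sample character is arbitrary):
```python
from discord.utils import escape_markdown  # available in discord.py 1.x

char = "`"  # a literal backtick in the user's input
u_code = "\\u0060"

broken = f"`{u_code.ljust(10)}`: GRAVE ACCENT - {char}"                  # stray ` closes the code span early
fixed = f"`{u_code.ljust(10)}`: GRAVE ACCENT - {escape_markdown(char)}"  # renders as \`, keeping the code span intact

print(broken)
print(fixed)
```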
<code>
[start of bot/cogs/utils.py]
1 import difflib
2 import logging
3 import re
4 import unicodedata
5 from email.parser import HeaderParser
6 from io import StringIO
7 from typing import Tuple, Union
8
9 from discord import Colour, Embed
10 from discord.ext.commands import BadArgument, Cog, Context, command
11
12 from bot.bot import Bot
13 from bot.constants import Channels, MODERATION_ROLES, STAFF_ROLES
14 from bot.decorators import in_whitelist, with_role
15
16 log = logging.getLogger(__name__)
17
18 ZEN_OF_PYTHON = """\
19 Beautiful is better than ugly.
20 Explicit is better than implicit.
21 Simple is better than complex.
22 Complex is better than complicated.
23 Flat is better than nested.
24 Sparse is better than dense.
25 Readability counts.
26 Special cases aren't special enough to break the rules.
27 Although practicality beats purity.
28 Errors should never pass silently.
29 Unless explicitly silenced.
30 In the face of ambiguity, refuse the temptation to guess.
31 There should be one-- and preferably only one --obvious way to do it.
32 Although that way may not be obvious at first unless you're Dutch.
33 Now is better than never.
34 Although never is often better than *right* now.
35 If the implementation is hard to explain, it's a bad idea.
36 If the implementation is easy to explain, it may be a good idea.
37 Namespaces are one honking great idea -- let's do more of those!
38 """
39
40 ICON_URL = "https://www.python.org/static/opengraph-icon-200x200.png"
41
42
43 class Utils(Cog):
44 """A selection of utilities which don't have a clear category."""
45
46 def __init__(self, bot: Bot):
47 self.bot = bot
48
49 self.base_pep_url = "http://www.python.org/dev/peps/pep-"
50 self.base_github_pep_url = "https://raw.githubusercontent.com/python/peps/master/pep-"
51
52 @command(name='pep', aliases=('get_pep', 'p'))
53 async def pep_command(self, ctx: Context, pep_number: str) -> None:
54 """Fetches information about a PEP and sends it to the channel."""
55 if pep_number.isdigit():
56 pep_number = int(pep_number)
57 else:
58 await ctx.send_help(ctx.command)
59 return
60
61 # Handle PEP 0 directly because it's not in .rst or .txt so it can't be accessed like other PEPs.
62 if pep_number == 0:
63 return await self.send_pep_zero(ctx)
64
65 possible_extensions = ['.txt', '.rst']
66 found_pep = False
67 for extension in possible_extensions:
68 # Attempt to fetch the PEP
69 pep_url = f"{self.base_github_pep_url}{pep_number:04}{extension}"
70 log.trace(f"Requesting PEP {pep_number} with {pep_url}")
71 response = await self.bot.http_session.get(pep_url)
72
73 if response.status == 200:
74 log.trace("PEP found")
75 found_pep = True
76
77 pep_content = await response.text()
78
79 # Taken from https://github.com/python/peps/blob/master/pep0/pep.py#L179
80 pep_header = HeaderParser().parse(StringIO(pep_content))
81
82 # Assemble the embed
83 pep_embed = Embed(
84 title=f"**PEP {pep_number} - {pep_header['Title']}**",
85 description=f"[Link]({self.base_pep_url}{pep_number:04})",
86 )
87
88 pep_embed.set_thumbnail(url=ICON_URL)
89
90 # Add the interesting information
91 fields_to_check = ("Status", "Python-Version", "Created", "Type")
92 for field in fields_to_check:
93 # Check for a PEP metadata field that is present but has an empty value
94 # embed field values can't contain an empty string
95 if pep_header.get(field, ""):
96 pep_embed.add_field(name=field, value=pep_header[field])
97
98 elif response.status != 404:
99 # any response except 200 and 404 is expected
100 found_pep = True # actually not, but it's easier to display this way
101 log.trace(f"The user requested PEP {pep_number}, but the response had an unexpected status code: "
102 f"{response.status}.\n{response.text}")
103
104 error_message = "Unexpected HTTP error during PEP search. Please let us know."
105 pep_embed = Embed(title="Unexpected error", description=error_message)
106 pep_embed.colour = Colour.red()
107 break
108
109 if not found_pep:
110 log.trace("PEP was not found")
111 not_found = f"PEP {pep_number} does not exist."
112 pep_embed = Embed(title="PEP not found", description=not_found)
113 pep_embed.colour = Colour.red()
114
115 await ctx.message.channel.send(embed=pep_embed)
116
117 @command()
118 @in_whitelist(channels=(Channels.bot_commands,), roles=STAFF_ROLES)
119 async def charinfo(self, ctx: Context, *, characters: str) -> None:
120 """Shows you information on up to 25 unicode characters."""
121 match = re.match(r"<(a?):(\w+):(\d+)>", characters)
122 if match:
123 embed = Embed(
124 title="Non-Character Detected",
125 description=(
126 "Only unicode characters can be processed, but a custom Discord emoji "
127 "was found. Please remove it and try again."
128 )
129 )
130 embed.colour = Colour.red()
131 await ctx.send(embed=embed)
132 return
133
134 if len(characters) > 25:
135 embed = Embed(title=f"Too many characters ({len(characters)}/25)")
136 embed.colour = Colour.red()
137 await ctx.send(embed=embed)
138 return
139
140 def get_info(char: str) -> Tuple[str, str]:
141 digit = f"{ord(char):x}"
142 if len(digit) <= 4:
143 u_code = f"\\u{digit:>04}"
144 else:
145 u_code = f"\\U{digit:>08}"
146 url = f"https://www.compart.com/en/unicode/U+{digit:>04}"
147 name = f"[{unicodedata.name(char, '')}]({url})"
148 info = f"`{u_code.ljust(10)}`: {name} - {char}"
149 return info, u_code
150
151 charlist, rawlist = zip(*(get_info(c) for c in characters))
152
153 embed = Embed(description="\n".join(charlist))
154 embed.set_author(name="Character Info")
155
156 if len(characters) > 1:
157 embed.add_field(name='Raw', value=f"`{''.join(rawlist)}`", inline=False)
158
159 await ctx.send(embed=embed)
160
161 @command()
162 async def zen(self, ctx: Context, *, search_value: Union[int, str, None] = None) -> None:
163 """
164 Show the Zen of Python.
165
166 Without any arguments, the full Zen will be produced.
167 If an integer is provided, the line with that index will be produced.
168 If a string is provided, the line which matches best will be produced.
169 """
170 embed = Embed(
171 colour=Colour.blurple(),
172 title="The Zen of Python",
173 description=ZEN_OF_PYTHON
174 )
175
176 if search_value is None:
177 embed.title += ", by Tim Peters"
178 await ctx.send(embed=embed)
179 return
180
181 zen_lines = ZEN_OF_PYTHON.splitlines()
182
183 # handle if it's an index int
184 if isinstance(search_value, int):
185 upper_bound = len(zen_lines) - 1
186 lower_bound = -1 * upper_bound
187 if not (lower_bound <= search_value <= upper_bound):
188 raise BadArgument(f"Please provide an index between {lower_bound} and {upper_bound}.")
189
190 embed.title += f" (line {search_value % len(zen_lines)}):"
191 embed.description = zen_lines[search_value]
192 await ctx.send(embed=embed)
193 return
194
195 # Try to handle first exact word due difflib.SequenceMatched may use some other similar word instead
196 # exact word.
197 for i, line in enumerate(zen_lines):
198 for word in line.split():
199 if word.lower() == search_value.lower():
200 embed.title += f" (line {i}):"
201 embed.description = line
202 await ctx.send(embed=embed)
203 return
204
205 # handle if it's a search string and not exact word
206 matcher = difflib.SequenceMatcher(None, search_value.lower())
207
208 best_match = ""
209 match_index = 0
210 best_ratio = 0
211
212 for index, line in enumerate(zen_lines):
213 matcher.set_seq2(line.lower())
214
215 # the match ratio needs to be adjusted because, naturally,
216 # longer lines will have worse ratios than shorter lines when
217 # fuzzy searching for keywords. this seems to work okay.
218 adjusted_ratio = (len(line) - 5) ** 0.5 * matcher.ratio()
219
220 if adjusted_ratio > best_ratio:
221 best_ratio = adjusted_ratio
222 best_match = line
223 match_index = index
224
225 if not best_match:
226 raise BadArgument("I didn't get a match! Please try again with a different search term.")
227
228 embed.title += f" (line {match_index}):"
229 embed.description = best_match
230 await ctx.send(embed=embed)
231
232 @command(aliases=("poll",))
233 @with_role(*MODERATION_ROLES)
234 async def vote(self, ctx: Context, title: str, *options: str) -> None:
235 """
236 Build a quick voting poll with matching reactions with the provided options.
237
238 A maximum of 20 options can be provided, as Discord supports a max of 20
239 reactions on a single message.
240 """
241 if len(options) < 2:
242 raise BadArgument("Please provide at least 2 options.")
243 if len(options) > 20:
244 raise BadArgument("I can only handle 20 options!")
245
246 codepoint_start = 127462 # represents "regional_indicator_a" unicode value
247 options = {chr(i): f"{chr(i)} - {v}" for i, v in enumerate(options, start=codepoint_start)}
248 embed = Embed(title=title, description="\n".join(options.values()))
249 message = await ctx.send(embed=embed)
250 for reaction in options:
251 await message.add_reaction(reaction)
252
253 async def send_pep_zero(self, ctx: Context) -> None:
254 """Send information about PEP 0."""
255 pep_embed = Embed(
256 title="**PEP 0 - Index of Python Enhancement Proposals (PEPs)**",
257 description="[Link](https://www.python.org/dev/peps/)"
258 )
259 pep_embed.set_thumbnail(url=ICON_URL)
260 pep_embed.add_field(name="Status", value="Active")
261 pep_embed.add_field(name="Created", value="13-Jul-2000")
262 pep_embed.add_field(name="Type", value="Informational")
263
264 await ctx.send(embed=pep_embed)
265
266
267 def setup(bot: Bot) -> None:
268 """Load the Utils cog."""
269 bot.add_cog(Utils(bot))
270
[end of bot/cogs/utils.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/bot/cogs/utils.py b/bot/cogs/utils.py
--- a/bot/cogs/utils.py
+++ b/bot/cogs/utils.py
@@ -6,7 +6,7 @@
from io import StringIO
from typing import Tuple, Union
-from discord import Colour, Embed
+from discord import Colour, Embed, utils
from discord.ext.commands import BadArgument, Cog, Context, command
from bot.bot import Bot
@@ -145,7 +145,7 @@
u_code = f"\\U{digit:>08}"
url = f"https://www.compart.com/en/unicode/U+{digit:>04}"
name = f"[{unicodedata.name(char, '')}]({url})"
- info = f"`{u_code.ljust(10)}`: {name} - {char}"
+ info = f"`{u_code.ljust(10)}`: {name} - {utils.escape_markdown(char)}"
return info, u_code
charlist, rawlist = zip(*(get_info(c) for c in characters))
|
{"golden_diff": "diff --git a/bot/cogs/utils.py b/bot/cogs/utils.py\n--- a/bot/cogs/utils.py\n+++ b/bot/cogs/utils.py\n@@ -6,7 +6,7 @@\n from io import StringIO\n from typing import Tuple, Union\n \n-from discord import Colour, Embed\n+from discord import Colour, Embed, utils\n from discord.ext.commands import BadArgument, Cog, Context, command\n \n from bot.bot import Bot\n@@ -145,7 +145,7 @@\n u_code = f\"\\\\U{digit:>08}\"\n url = f\"https://www.compart.com/en/unicode/U+{digit:>04}\"\n name = f\"[{unicodedata.name(char, '')}]({url})\"\n- info = f\"`{u_code.ljust(10)}`: {name} - {char}\"\n+ info = f\"`{u_code.ljust(10)}`: {name} - {utils.escape_markdown(char)}\"\n return info, u_code\n \n charlist, rawlist = zip(*(get_info(c) for c in characters))\n", "issue": "Charinfo doesn't escape backticks\nThe Unicode escapes are formatted in code blocks. When the embed tries to also render a literal backtick, this ends up interfering with the code blocks and creating a mess.\r\n\r\n\r\n\n", "before_files": [{"content": "import difflib\nimport logging\nimport re\nimport unicodedata\nfrom email.parser import HeaderParser\nfrom io import StringIO\nfrom typing import Tuple, Union\n\nfrom discord import Colour, Embed\nfrom discord.ext.commands import BadArgument, Cog, Context, command\n\nfrom bot.bot import Bot\nfrom bot.constants import Channels, MODERATION_ROLES, STAFF_ROLES\nfrom bot.decorators import in_whitelist, with_role\n\nlog = logging.getLogger(__name__)\n\nZEN_OF_PYTHON = \"\"\"\\\nBeautiful is better than ugly.\nExplicit is better than implicit.\nSimple is better than complex.\nComplex is better than complicated.\nFlat is better than nested.\nSparse is better than dense.\nReadability counts.\nSpecial cases aren't special enough to break the rules.\nAlthough practicality beats purity.\nErrors should never pass silently.\nUnless explicitly silenced.\nIn the face of ambiguity, refuse the temptation to guess.\nThere should be one-- and preferably only one --obvious way to do it.\nAlthough that way may not be obvious at first unless you're Dutch.\nNow is better than never.\nAlthough never is often better than *right* now.\nIf the implementation is hard to explain, it's a bad idea.\nIf the implementation is easy to explain, it may be a good idea.\nNamespaces are one honking great idea -- let's do more of those!\n\"\"\"\n\nICON_URL = \"https://www.python.org/static/opengraph-icon-200x200.png\"\n\n\nclass Utils(Cog):\n \"\"\"A selection of utilities which don't have a clear category.\"\"\"\n\n def __init__(self, bot: Bot):\n self.bot = bot\n\n self.base_pep_url = \"http://www.python.org/dev/peps/pep-\"\n self.base_github_pep_url = \"https://raw.githubusercontent.com/python/peps/master/pep-\"\n\n @command(name='pep', aliases=('get_pep', 'p'))\n async def pep_command(self, ctx: Context, pep_number: str) -> None:\n \"\"\"Fetches information about a PEP and sends it to the channel.\"\"\"\n if pep_number.isdigit():\n pep_number = int(pep_number)\n else:\n await ctx.send_help(ctx.command)\n return\n\n # Handle PEP 0 directly because it's not in .rst or .txt so it can't be accessed like other PEPs.\n if pep_number == 0:\n return await self.send_pep_zero(ctx)\n\n possible_extensions = ['.txt', '.rst']\n found_pep = False\n for extension in possible_extensions:\n # Attempt to fetch the PEP\n pep_url = f\"{self.base_github_pep_url}{pep_number:04}{extension}\"\n log.trace(f\"Requesting PEP {pep_number} with {pep_url}\")\n response = await self.bot.http_session.get(pep_url)\n\n if response.status == 200:\n 
log.trace(\"PEP found\")\n found_pep = True\n\n pep_content = await response.text()\n\n # Taken from https://github.com/python/peps/blob/master/pep0/pep.py#L179\n pep_header = HeaderParser().parse(StringIO(pep_content))\n\n # Assemble the embed\n pep_embed = Embed(\n title=f\"**PEP {pep_number} - {pep_header['Title']}**\",\n description=f\"[Link]({self.base_pep_url}{pep_number:04})\",\n )\n\n pep_embed.set_thumbnail(url=ICON_URL)\n\n # Add the interesting information\n fields_to_check = (\"Status\", \"Python-Version\", \"Created\", \"Type\")\n for field in fields_to_check:\n # Check for a PEP metadata field that is present but has an empty value\n # embed field values can't contain an empty string\n if pep_header.get(field, \"\"):\n pep_embed.add_field(name=field, value=pep_header[field])\n\n elif response.status != 404:\n # any response except 200 and 404 is expected\n found_pep = True # actually not, but it's easier to display this way\n log.trace(f\"The user requested PEP {pep_number}, but the response had an unexpected status code: \"\n f\"{response.status}.\\n{response.text}\")\n\n error_message = \"Unexpected HTTP error during PEP search. Please let us know.\"\n pep_embed = Embed(title=\"Unexpected error\", description=error_message)\n pep_embed.colour = Colour.red()\n break\n\n if not found_pep:\n log.trace(\"PEP was not found\")\n not_found = f\"PEP {pep_number} does not exist.\"\n pep_embed = Embed(title=\"PEP not found\", description=not_found)\n pep_embed.colour = Colour.red()\n\n await ctx.message.channel.send(embed=pep_embed)\n\n @command()\n @in_whitelist(channels=(Channels.bot_commands,), roles=STAFF_ROLES)\n async def charinfo(self, ctx: Context, *, characters: str) -> None:\n \"\"\"Shows you information on up to 25 unicode characters.\"\"\"\n match = re.match(r\"<(a?):(\\w+):(\\d+)>\", characters)\n if match:\n embed = Embed(\n title=\"Non-Character Detected\",\n description=(\n \"Only unicode characters can be processed, but a custom Discord emoji \"\n \"was found. 
Please remove it and try again.\"\n )\n )\n embed.colour = Colour.red()\n await ctx.send(embed=embed)\n return\n\n if len(characters) > 25:\n embed = Embed(title=f\"Too many characters ({len(characters)}/25)\")\n embed.colour = Colour.red()\n await ctx.send(embed=embed)\n return\n\n def get_info(char: str) -> Tuple[str, str]:\n digit = f\"{ord(char):x}\"\n if len(digit) <= 4:\n u_code = f\"\\\\u{digit:>04}\"\n else:\n u_code = f\"\\\\U{digit:>08}\"\n url = f\"https://www.compart.com/en/unicode/U+{digit:>04}\"\n name = f\"[{unicodedata.name(char, '')}]({url})\"\n info = f\"`{u_code.ljust(10)}`: {name} - {char}\"\n return info, u_code\n\n charlist, rawlist = zip(*(get_info(c) for c in characters))\n\n embed = Embed(description=\"\\n\".join(charlist))\n embed.set_author(name=\"Character Info\")\n\n if len(characters) > 1:\n embed.add_field(name='Raw', value=f\"`{''.join(rawlist)}`\", inline=False)\n\n await ctx.send(embed=embed)\n\n @command()\n async def zen(self, ctx: Context, *, search_value: Union[int, str, None] = None) -> None:\n \"\"\"\n Show the Zen of Python.\n\n Without any arguments, the full Zen will be produced.\n If an integer is provided, the line with that index will be produced.\n If a string is provided, the line which matches best will be produced.\n \"\"\"\n embed = Embed(\n colour=Colour.blurple(),\n title=\"The Zen of Python\",\n description=ZEN_OF_PYTHON\n )\n\n if search_value is None:\n embed.title += \", by Tim Peters\"\n await ctx.send(embed=embed)\n return\n\n zen_lines = ZEN_OF_PYTHON.splitlines()\n\n # handle if it's an index int\n if isinstance(search_value, int):\n upper_bound = len(zen_lines) - 1\n lower_bound = -1 * upper_bound\n if not (lower_bound <= search_value <= upper_bound):\n raise BadArgument(f\"Please provide an index between {lower_bound} and {upper_bound}.\")\n\n embed.title += f\" (line {search_value % len(zen_lines)}):\"\n embed.description = zen_lines[search_value]\n await ctx.send(embed=embed)\n return\n\n # Try to handle first exact word due difflib.SequenceMatched may use some other similar word instead\n # exact word.\n for i, line in enumerate(zen_lines):\n for word in line.split():\n if word.lower() == search_value.lower():\n embed.title += f\" (line {i}):\"\n embed.description = line\n await ctx.send(embed=embed)\n return\n\n # handle if it's a search string and not exact word\n matcher = difflib.SequenceMatcher(None, search_value.lower())\n\n best_match = \"\"\n match_index = 0\n best_ratio = 0\n\n for index, line in enumerate(zen_lines):\n matcher.set_seq2(line.lower())\n\n # the match ratio needs to be adjusted because, naturally,\n # longer lines will have worse ratios than shorter lines when\n # fuzzy searching for keywords. this seems to work okay.\n adjusted_ratio = (len(line) - 5) ** 0.5 * matcher.ratio()\n\n if adjusted_ratio > best_ratio:\n best_ratio = adjusted_ratio\n best_match = line\n match_index = index\n\n if not best_match:\n raise BadArgument(\"I didn't get a match! 
Please try again with a different search term.\")\n\n embed.title += f\" (line {match_index}):\"\n embed.description = best_match\n await ctx.send(embed=embed)\n\n @command(aliases=(\"poll\",))\n @with_role(*MODERATION_ROLES)\n async def vote(self, ctx: Context, title: str, *options: str) -> None:\n \"\"\"\n Build a quick voting poll with matching reactions with the provided options.\n\n A maximum of 20 options can be provided, as Discord supports a max of 20\n reactions on a single message.\n \"\"\"\n if len(options) < 2:\n raise BadArgument(\"Please provide at least 2 options.\")\n if len(options) > 20:\n raise BadArgument(\"I can only handle 20 options!\")\n\n codepoint_start = 127462 # represents \"regional_indicator_a\" unicode value\n options = {chr(i): f\"{chr(i)} - {v}\" for i, v in enumerate(options, start=codepoint_start)}\n embed = Embed(title=title, description=\"\\n\".join(options.values()))\n message = await ctx.send(embed=embed)\n for reaction in options:\n await message.add_reaction(reaction)\n\n async def send_pep_zero(self, ctx: Context) -> None:\n \"\"\"Send information about PEP 0.\"\"\"\n pep_embed = Embed(\n title=\"**PEP 0 - Index of Python Enhancement Proposals (PEPs)**\",\n description=\"[Link](https://www.python.org/dev/peps/)\"\n )\n pep_embed.set_thumbnail(url=ICON_URL)\n pep_embed.add_field(name=\"Status\", value=\"Active\")\n pep_embed.add_field(name=\"Created\", value=\"13-Jul-2000\")\n pep_embed.add_field(name=\"Type\", value=\"Informational\")\n\n await ctx.send(embed=pep_embed)\n\n\ndef setup(bot: Bot) -> None:\n \"\"\"Load the Utils cog.\"\"\"\n bot.add_cog(Utils(bot))\n", "path": "bot/cogs/utils.py"}]}
| 3,790 | 239 |
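An editorial aside on the record above: its golden diff boils down to escaping Markdown-sensitive characters (backticks in particular) before the raw character is placed next to an inline code span in the embed. A minimal sketch of that idea, assuming discord.py's `discord.utils.escape_markdown` as used in the diff; the helper name `char_line` and the sample output comment are illustrative only:

```python
from discord import utils  # discord.py helper that backslash-escapes Markdown characters such as backticks


def char_line(char: str) -> str:
    u_code = f"\\u{ord(char):04x}"
    # Escaping the raw character keeps a literal backtick from closing the
    # surrounding inline code span in the embed description.
    return f"`{u_code.ljust(10)}`: {utils.escape_markdown(char)}"


print(char_line("`"))  # `\u0060    `: \`
```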
gh_patches_debug_18828
|
rasdani/github-patches
|
git_diff
|
platformsh__platformsh-docs-2079
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add keywords for search
### Where on docs.platform.sh should be changed?
/configuration/app/app-reference.html
### What exactly should be updated?
We'd like specific pages to be findable by searching for specific words. For example, the app reference should come up when searching for `.platform.app.yaml` (this may also involve a problem with escaping characters like `.`). Add keywords or similar metadata to make these pages findable.
### Additional context
_No response_
</issue>
<code>
[start of search/main.py]
1 import os
2 import glob
3 import json
4 import meilisearch
5 from platformshconfig import Config
6
7 class Search:
8 def __init__(self):
9 self.default = {
10 "host": "http://127.0.0.1",
11 "key": None,
12 "port": 7700
13 }
14
15 self.scrape_dir = "output"
16 self.scrape_config = "config/scrape.json"
17 self.docs_index = "docs"
18 self.primaryKey = "documentId"
19 self.index_name = "Docs"
20
21 # Below are Platform.sh custom settings for how the search engine functions.
22
23 # Data available to the dropdown React app in docs, used to fill out autocomplete results.
24 self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']
25 # Data actually searchable by our queries.
26 self.searchable_attributes = ['title', 'pageUrl', 'section', 'url', 'text']
27
28 # Show results for one query with the listed pages, when they by default would not show up as best results.
29 # Note: these aren't automatically two-way, which is why they're all defined twice.
30 self.synonyms = {
31 "cron": ["crons"],
32 "crons": ["cron tasks", "cron jobs"],
33 "e-mail": ["email"],
34 "routes.yaml": ["routes"],
35 "routes": ["routes.yaml"],
36 "services": ["services.yaml"],
37 "services.yaml": ["services"],
38 "application": [".platform.app.yaml", "app.yaml", "applications.yaml"],
39 ".platform.app.yaml": ["application"],
40 "app.yaml": ["application"],
41 "applications.yaml": ["application", "multi-app"],
42 "multi-app": ["applications.yaml"],
43 "regions": ["public ip addresses"],
44 "public ip addresses": ["regions"],
45 "ssl": ["https", "tls"],
46 "https": ["ssl"],
47 }
48
49 # Ranking rules:
50 #
51 # - Default order: ["words", "typo", "proximity", "attribute", "sort", "exactness"]
52 #
53 # - words: number of times query is in document (greater number gets priority)
54 # - typo: fewer typos > more typos
55 # - proximity: smaller distance between multiple occurences of query in same document > larger distances
56 # - attribute: sorted according to order of importance of attributes (searchable_attributes). terms in
57 # more important attributes first.
58 # - sort: queries are sorted at query time
59 # - exactness: similarity of matched words in document with query
60
61 self.ranking_rules = ["rank:asc", "attribute", "typo", "words", "proximity", "exactness"]
62
63 self.updated_settings = {
64 "rankingRules": self.ranking_rules,
65 "searchableAttributes": self.searchable_attributes,
66 "displayedAttributes": self.displayed_attributes
67 }
68
69 # Group results by page
70 self.distinct_attribute = "pageUrl"
71
72 def getConnectionString(self):
73 """
74 Sets the Meilisearch host string, depending on the environment.
75
76 Returns:
77 string: Meilisearch host string.
78 """
79 if os.environ.get('PORT'):
80 return "{}:{}".format(self.default["host"], os.environ['PORT'])
81 else:
82 return "{}:{}".format(self.default["host"], self.default["port"])
83
84 def getMasterKey(self):
85 """
86 Retrieves the Meilisearch master key, either from the Platform.sh environment or locally.
87 """
88 config = Config()
89 if config.is_valid_platform():
90 return config.projectEntropy
91 elif os.environ.get("MEILI_MASTER_KEY"):
92 return os.environ["MEILI_MASTER_KEY"]
93 else:
94 return self.default["key"]
95
96 def add_documents(self, index):
97 """
98 Cycle through the individual site indexes in /outputs so their individual documents can be added to Meilisearch.
99 """
100 documents = [f for f in glob.glob("{}/*.json".format(self.scrape_dir))]
101 for doc in documents:
102 self.add(doc, index)
103
104 def add(self, doc, index):
105 """
106 Add an individual site's index to the Meilisearch service.
107 """
108 with open(doc) as scraped_index:
109 data = json.load(scraped_index)
110 index.add_documents(data)
111
112 def update(self):
113 """
114 Updates the Meilisearch index.
115 """
116 # Create a Meilisearch client.
117 client = meilisearch.Client(self.getConnectionString(), self.getMasterKey())
118
119 # Delete previous index
120 if len(client.get_indexes()):
121 client.get_index(self.docs_index).delete()
122
123 # Create a new index
124 index = client.create_index(uid=self.docs_index, options={'primaryKey': self.primaryKey, 'uid': self.index_name})
125
126 # Add synonyms for the index
127 index.update_synonyms(self.synonyms)
128
129 # Update its settings: what can be searched, what's displayable, and how results should be ranked.
130 index.update_settings(self.updated_settings)
131
132 # Update distinct attribute.
133 index.update_distinct_attribute(self.distinct_attribute)
134
135 # Add documents to the index
136 self.add_documents(index)
137
138 if __name__ == "__main__":
139 meili = Search()
140 meili.update()
141
[end of search/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/search/main.py b/search/main.py
--- a/search/main.py
+++ b/search/main.py
@@ -21,9 +21,9 @@
# Below are Platform.sh custom settings for how the search engine functions.
# Data available to the dropdown React app in docs, used to fill out autocomplete results.
- self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']
+ self.displayed_attributes = ['keywords', 'title', 'text', 'url', 'site', 'section']
# Data actually searchable by our queries.
- self.searchable_attributes = ['title', 'pageUrl', 'section', 'url', 'text']
+ self.searchable_attributes = ['keywords', 'title', 'pageUrl', 'section', 'text', 'url']
# Show results for one query with the listed pages, when they by default would not show up as best results.
# Note: these aren't automatically two-way, which is why they're all defined twice.
|
{"golden_diff": "diff --git a/search/main.py b/search/main.py\n--- a/search/main.py\n+++ b/search/main.py\n@@ -21,9 +21,9 @@\n # Below are Platform.sh custom settings for how the search engine functions.\n \n # Data available to the dropdown React app in docs, used to fill out autocomplete results.\n- self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']\n+ self.displayed_attributes = ['keywords', 'title', 'text', 'url', 'site', 'section']\n # Data actually searchable by our queries.\n- self.searchable_attributes = ['title', 'pageUrl', 'section', 'url', 'text']\n+ self.searchable_attributes = ['keywords', 'title', 'pageUrl', 'section', 'text', 'url']\n \n # Show results for one query with the listed pages, when they by default would not show up as best results.\n # Note: these aren't automatically two-way, which is why they're all defined twice.\n", "issue": "Add keywords for search\n### Where on docs.platform.sh should be changed?\n\n/configuration/app/app-reference.html\n\n### What exactly should be updated?\n\nWe'd like specific pages to be findable by searching for specific words. For example, the app reference when searching for `.platform.app.yaml` (this may also involve a problem with escaping characters like `.`). Add keywords or other to make these pages findable.\n\n### Additional context\n\n_No response_\n", "before_files": [{"content": "import os\nimport glob\nimport json\nimport meilisearch\nfrom platformshconfig import Config\n\nclass Search:\n def __init__(self):\n self.default = {\n \"host\": \"http://127.0.0.1\",\n \"key\": None,\n \"port\": 7700\n }\n\n self.scrape_dir = \"output\"\n self.scrape_config = \"config/scrape.json\"\n self.docs_index = \"docs\"\n self.primaryKey = \"documentId\"\n self.index_name = \"Docs\"\n\n # Below are Platform.sh custom settings for how the search engine functions.\n\n # Data available to the dropdown React app in docs, used to fill out autocomplete results.\n self.displayed_attributes = ['title', 'text', 'url', 'site', 'section']\n # Data actually searchable by our queries.\n self.searchable_attributes = ['title', 'pageUrl', 'section', 'url', 'text']\n\n # Show results for one query with the listed pages, when they by default would not show up as best results.\n # Note: these aren't automatically two-way, which is why they're all defined twice.\n self.synonyms = {\n \"cron\": [\"crons\"],\n \"crons\": [\"cron tasks\", \"cron jobs\"],\n \"e-mail\": [\"email\"],\n \"routes.yaml\": [\"routes\"],\n \"routes\": [\"routes.yaml\"],\n \"services\": [\"services.yaml\"],\n \"services.yaml\": [\"services\"],\n \"application\": [\".platform.app.yaml\", \"app.yaml\", \"applications.yaml\"],\n \".platform.app.yaml\": [\"application\"],\n \"app.yaml\": [\"application\"],\n \"applications.yaml\": [\"application\", \"multi-app\"],\n \"multi-app\": [\"applications.yaml\"],\n \"regions\": [\"public ip addresses\"],\n \"public ip addresses\": [\"regions\"],\n \"ssl\": [\"https\", \"tls\"],\n \"https\": [\"ssl\"],\n }\n\n # Ranking rules:\n #\n # - Default order: [\"words\", \"typo\", \"proximity\", \"attribute\", \"sort\", \"exactness\"]\n #\n # - words: number of times query is in document (greater number gets priority)\n # - typo: fewer typos > more typos\n # - proximity: smaller distance between multiple occurences of query in same document > larger distances\n # - attribute: sorted according to order of importance of attributes (searchable_attributes). 
terms in\n # more important attributes first.\n # - sort: queries are sorted at query time\n # - exactness: similarity of matched words in document with query\n\n self.ranking_rules = [\"rank:asc\", \"attribute\", \"typo\", \"words\", \"proximity\", \"exactness\"]\n\n self.updated_settings = {\n \"rankingRules\": self.ranking_rules,\n \"searchableAttributes\": self.searchable_attributes,\n \"displayedAttributes\": self.displayed_attributes\n }\n\n # Group results by page\n self.distinct_attribute = \"pageUrl\"\n\n def getConnectionString(self):\n \"\"\"\n Sets the Meilisearch host string, depending on the environment.\n\n Returns:\n string: Meilisearch host string.\n \"\"\"\n if os.environ.get('PORT'):\n return \"{}:{}\".format(self.default[\"host\"], os.environ['PORT'])\n else:\n return \"{}:{}\".format(self.default[\"host\"], self.default[\"port\"])\n\n def getMasterKey(self):\n \"\"\"\n Retrieves the Meilisearch master key, either from the Platform.sh environment or locally.\n \"\"\"\n config = Config()\n if config.is_valid_platform():\n return config.projectEntropy\n elif os.environ.get(\"MEILI_MASTER_KEY\"):\n return os.environ[\"MEILI_MASTER_KEY\"]\n else:\n return self.default[\"key\"]\n\n def add_documents(self, index):\n \"\"\"\n Cycle through the individual site indexes in /outputs so their individual documents can be added to Meilisearch.\n \"\"\"\n documents = [f for f in glob.glob(\"{}/*.json\".format(self.scrape_dir))]\n for doc in documents:\n self.add(doc, index)\n\n def add(self, doc, index):\n \"\"\"\n Add an individual site's index to the Meilisearch service.\n \"\"\"\n with open(doc) as scraped_index:\n data = json.load(scraped_index)\n index.add_documents(data)\n\n def update(self):\n \"\"\"\n Updates the Meilisearch index.\n \"\"\"\n # Create a Meilisearch client.\n client = meilisearch.Client(self.getConnectionString(), self.getMasterKey())\n\n # Delete previous index\n if len(client.get_indexes()):\n client.get_index(self.docs_index).delete()\n\n # Create a new index\n index = client.create_index(uid=self.docs_index, options={'primaryKey': self.primaryKey, 'uid': self.index_name})\n\n # Add synonyms for the index\n index.update_synonyms(self.synonyms)\n\n # Update its settings: what can be searched, what's displayable, and how results should be ranked.\n index.update_settings(self.updated_settings)\n\n # Update distinct attribute.\n index.update_distinct_attribute(self.distinct_attribute)\n\n # Add documents to the index\n self.add_documents(index)\n\nif __name__ == \"__main__\":\n meili = Search()\n meili.update()\n", "path": "search/main.py"}]}
| 2,082 | 221 |
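A short illustration of what the golden diff above changes: only the attribute lists gain a `keywords` entry, which presumes the scraped documents themselves carry such a field. A sketch of the resulting settings payload, reusing the names from `search/main.py`; the sample document below is hypothetical and would be handed to `index.add_documents([...])` as in the class above:

```python
# Attribute lists as they look after the patch (sketch only).
displayed_attributes = ["keywords", "title", "text", "url", "site", "section"]
searchable_attributes = ["keywords", "title", "pageUrl", "section", "text", "url"]

updated_settings = {
    "rankingRules": ["rank:asc", "attribute", "typo", "words", "proximity", "exactness"],
    "searchableAttributes": searchable_attributes,
    "displayedAttributes": displayed_attributes,
}

# Hypothetical scraped document that a query for ".platform.app.yaml" could now match.
doc = {
    "documentId": "app-reference",
    "title": "App reference",
    "pageUrl": "/configuration/app/app-reference.html",
    "keywords": [".platform.app.yaml", "app.yaml", "applications.yaml"],
}
```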
gh_patches_debug_37279
|
rasdani/github-patches
|
git_diff
|
saleor__saleor-12359
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bug: Missing "app id" in app extension token
### What are you trying to achieve?
The app (via app-sdk) checks the `app` field in the user JWT token (sent from the dashboard via postMessage to AppBridge) and compares it to the registered app ID that the app persists. If the app ID is different, the app rejects the call.
However, there are also extensions that have their own token (they can have different permissions). **This token lacks the `app` field**.
From the app's point of view, an extension is part of the app, so its token is expected to carry the app ID as well. Otherwise, the app would have to persist all registered extensions in order to work, which is quite hard to implement.
### Steps to reproduce the problem
## App token payload
```json
{
"iat": 1679048767,
"owner": "saleor",
"iss": "https://automation-dashboard.staging.saleor.cloud/graphql/",
"exp": 1679135167,
"token": "M2irFmzVASR3",
"email": "[email protected]",
"type": "thirdparty",
"user_id": "VXNlcjoxMDMz",
"is_staff": true,
"app": "QXBwOjY0",
"permissions": [
"MANAGE_PRODUCTS"
],
"user_permissions": [
"MANAGE_GIFT_CARD",
"MANAGE_MENUS",
"MANAGE_PAGES",
"MANAGE_PAGE_TYPES_AND_ATTRIBUTES",
"MANAGE_PLUGINS",
"MANAGE_TAXES",
"MANAGE_USERS",
"MANAGE_CHECKOUTS",
"MANAGE_PRODUCT_TYPES_AND_ATTRIBUTES",
"MANAGE_TRANSLATIONS",
"MANAGE_APPS",
"MANAGE_OBSERVABILITY",
"MANAGE_STAFF",
"HANDLE_TAXES",
"MANAGE_CHANNELS",
"HANDLE_CHECKOUTS",
"MANAGE_SETTINGS",
"HANDLE_PAYMENTS",
"MANAGE_ORDERS",
"MANAGE_PRODUCTS",
"MANAGE_SHIPPING",
"MANAGE_DISCOUNTS",
"IMPERSONATE_USER"
]
}
```
## Extension token payload
```json
"owner": "saleor",
"iss": "https://automation-dashboard.staging.saleor.cloud/graphql/",
"exp": 1679137338,
"token": "M2irFmzVASR3",
"email": "[email protected]",
"type": "thirdparty",
"user_id": "VXNlcjoxMDMz",
"is_staff": true,
"app_extension": "QXBwRXh0ZW5zaW9uOjQy",
"permissions": [],
"user_permissions": [
"MANAGE_GIFT_CARD",
"MANAGE_MENUS",
"MANAGE_PAGES",
"MANAGE_PAGE_TYPES_AND_ATTRIBUTES",
"MANAGE_PLUGINS",
"MANAGE_TAXES",
"MANAGE_USERS",
"MANAGE_CHECKOUTS",
"MANAGE_PRODUCT_TYPES_AND_ATTRIBUTES",
"MANAGE_TRANSLATIONS",
"MANAGE_APPS",
"MANAGE_OBSERVABILITY",
"MANAGE_STAFF",
"HANDLE_TAXES",
"MANAGE_CHANNELS",
"HANDLE_CHECKOUTS",
"MANAGE_SETTINGS",
"HANDLE_PAYMENTS",
"MANAGE_ORDERS",
"MANAGE_PRODUCTS",
"MANAGE_SHIPPING",
"MANAGE_DISCOUNTS",
"IMPERSONATE_USER"
]
}
```
### What did you expect to happen?
Extension token should contain `app` field
### Logs
_No response_
### Environment
Saleor version: …
OS and version: …
</issue>
<code>
[start of saleor/core/jwt.py]
1 from datetime import datetime, timedelta
2 from typing import Any, Dict, Iterable, Optional
3
4 import graphene
5 import jwt
6 from django.conf import settings
7
8 from ..account.models import User
9 from ..app.models import App, AppExtension
10 from ..permission.enums import (
11 get_permission_names,
12 get_permissions_from_codenames,
13 get_permissions_from_names,
14 )
15 from ..permission.models import Permission
16 from .jwt_manager import get_jwt_manager
17
18 JWT_ACCESS_TYPE = "access"
19 JWT_REFRESH_TYPE = "refresh"
20 JWT_THIRDPARTY_ACCESS_TYPE = "thirdparty"
21 JWT_REFRESH_TOKEN_COOKIE_NAME = "refreshToken"
22
23 PERMISSIONS_FIELD = "permissions"
24 USER_PERMISSION_FIELD = "user_permissions"
25 JWT_SALEOR_OWNER_NAME = "saleor"
26 JWT_OWNER_FIELD = "owner"
27
28
29 def jwt_base_payload(
30 exp_delta: Optional[timedelta], token_owner: str
31 ) -> Dict[str, Any]:
32 utc_now = datetime.utcnow()
33
34 payload = {
35 "iat": utc_now,
36 JWT_OWNER_FIELD: token_owner,
37 "iss": get_jwt_manager().get_issuer(),
38 }
39 if exp_delta:
40 payload["exp"] = utc_now + exp_delta
41 return payload
42
43
44 def jwt_user_payload(
45 user: User,
46 token_type: str,
47 exp_delta: Optional[timedelta],
48 additional_payload: Optional[Dict[str, Any]] = None,
49 token_owner: str = JWT_SALEOR_OWNER_NAME,
50 ) -> Dict[str, Any]:
51 payload = jwt_base_payload(exp_delta, token_owner)
52 payload.update(
53 {
54 "token": user.jwt_token_key,
55 "email": user.email,
56 "type": token_type,
57 "user_id": graphene.Node.to_global_id("User", user.id),
58 "is_staff": user.is_staff,
59 }
60 )
61 if additional_payload:
62 payload.update(additional_payload)
63 return payload
64
65
66 def jwt_encode(payload: Dict[str, Any]) -> str:
67 jwt_manager = get_jwt_manager()
68 return jwt_manager.encode(payload)
69
70
71 def jwt_decode_with_exception_handler(
72 token: str, verify_expiration=settings.JWT_EXPIRE
73 ) -> Optional[Dict[str, Any]]:
74 try:
75 return jwt_decode(token, verify_expiration=verify_expiration)
76 except jwt.PyJWTError:
77 return None
78
79
80 def jwt_decode(
81 token: str, verify_expiration=settings.JWT_EXPIRE, verify_aud: bool = False
82 ) -> Dict[str, Any]:
83 jwt_manager = get_jwt_manager()
84 return jwt_manager.decode(token, verify_expiration, verify_aud=verify_aud)
85
86
87 def create_token(payload: Dict[str, Any], exp_delta: timedelta) -> str:
88 payload.update(jwt_base_payload(exp_delta, token_owner=JWT_SALEOR_OWNER_NAME))
89 return jwt_encode(payload)
90
91
92 def create_access_token(
93 user: User, additional_payload: Optional[Dict[str, Any]] = None
94 ) -> str:
95 payload = jwt_user_payload(
96 user, JWT_ACCESS_TYPE, settings.JWT_TTL_ACCESS, additional_payload
97 )
98 return jwt_encode(payload)
99
100
101 def create_refresh_token(
102 user: User, additional_payload: Optional[Dict[str, Any]] = None
103 ) -> str:
104 payload = jwt_user_payload(
105 user,
106 JWT_REFRESH_TYPE,
107 settings.JWT_TTL_REFRESH,
108 additional_payload,
109 )
110 return jwt_encode(payload)
111
112
113 def get_user_from_payload(payload: Dict[str, Any], request=None) -> Optional[User]:
114 # TODO: dataloader
115 user = User.objects.filter(email=payload["email"], is_active=True).first()
116 user_jwt_token = payload.get("token")
117 if not user_jwt_token or not user:
118 raise jwt.InvalidTokenError(
119 "Invalid token. Create new one by using tokenCreate mutation."
120 )
121 if user.jwt_token_key != user_jwt_token:
122 raise jwt.InvalidTokenError(
123 "Invalid token. Create new one by using tokenCreate mutation."
124 )
125 return user
126
127
128 def is_saleor_token(token: str) -> bool:
129 """Confirm that token was generated by Saleor not by plugin."""
130 try:
131 payload = jwt.decode(token, options={"verify_signature": False})
132 except jwt.PyJWTError:
133 return False
134 owner = payload.get(JWT_OWNER_FIELD)
135 if not owner or owner != JWT_SALEOR_OWNER_NAME:
136 return False
137 return True
138
139
140 def get_user_from_access_payload(payload: dict, request=None) -> Optional[User]:
141 jwt_type = payload.get("type")
142 if jwt_type not in [JWT_ACCESS_TYPE, JWT_THIRDPARTY_ACCESS_TYPE]:
143 raise jwt.InvalidTokenError(
144 "Invalid token. Create new one by using tokenCreate mutation."
145 )
146 permissions = payload.get(PERMISSIONS_FIELD, None)
147 user = get_user_from_payload(payload, request)
148 if user:
149 if permissions is not None:
150 token_permissions = get_permissions_from_names(permissions)
151 token_codenames = [perm.codename for perm in token_permissions]
152 user.effective_permissions = get_permissions_from_codenames(token_codenames)
153 user.is_staff = True if user.effective_permissions else False
154
155 if payload.get("is_staff"):
156 user.is_staff = True
157 return user
158
159
160 def _create_access_token_for_third_party_actions(
161 permissions: Iterable["Permission"],
162 user: "User",
163 type: str,
164 object_id: int,
165 object_payload_key: str,
166 audience: Optional[str],
167 ):
168 app_permission_enums = get_permission_names(permissions)
169
170 permissions = user.effective_permissions
171 user_permission_enums = get_permission_names(permissions)
172 additional_payload = {
173 object_payload_key: graphene.Node.to_global_id(type, object_id),
174 PERMISSIONS_FIELD: list(app_permission_enums & user_permission_enums),
175 USER_PERMISSION_FIELD: list(user_permission_enums),
176 }
177 if audience:
178 additional_payload["aud"] = audience
179
180 payload = jwt_user_payload(
181 user,
182 JWT_THIRDPARTY_ACCESS_TYPE,
183 exp_delta=settings.JWT_TTL_APP_ACCESS,
184 additional_payload=additional_payload,
185 )
186 return jwt_encode(payload)
187
188
189 def create_access_token_for_app(app: "App", user: "User"):
190 """Create access token for app.
191
192 App can use user's JWT token to proceed given operation in Saleor.
193 The token which can be used by App has additional field defining the permissions
194 assigned to it. The permissions set is the intersection of user permissions and
195 app permissions.
196 """
197 app_permissions = app.permissions.all()
198 return _create_access_token_for_third_party_actions(
199 permissions=app_permissions,
200 user=user,
201 type="App",
202 object_id=app.id,
203 object_payload_key="app",
204 audience=app.audience,
205 )
206
207
208 def create_access_token_for_app_extension(
209 app_extension: "AppExtension",
210 permissions: Iterable["Permission"],
211 user: "User",
212 app: "App",
213 ):
214 return _create_access_token_for_third_party_actions(
215 permissions=permissions,
216 user=user,
217 type="AppExtension",
218 object_id=app_extension.id,
219 object_payload_key="app_extension",
220 audience=app.audience,
221 )
222
[end of saleor/core/jwt.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/saleor/core/jwt.py b/saleor/core/jwt.py
--- a/saleor/core/jwt.py
+++ b/saleor/core/jwt.py
@@ -20,6 +20,7 @@
JWT_THIRDPARTY_ACCESS_TYPE = "thirdparty"
JWT_REFRESH_TOKEN_COOKIE_NAME = "refreshToken"
+APP_KEY_FIELD = "app"
PERMISSIONS_FIELD = "permissions"
USER_PERMISSION_FIELD = "user_permissions"
JWT_SALEOR_OWNER_NAME = "saleor"
@@ -160,22 +161,22 @@
def _create_access_token_for_third_party_actions(
permissions: Iterable["Permission"],
user: "User",
- type: str,
- object_id: int,
- object_payload_key: str,
- audience: Optional[str],
+ app: "App",
+ extra: Optional[Dict[str, Any]] = None,
):
app_permission_enums = get_permission_names(permissions)
permissions = user.effective_permissions
user_permission_enums = get_permission_names(permissions)
additional_payload = {
- object_payload_key: graphene.Node.to_global_id(type, object_id),
+ APP_KEY_FIELD: graphene.Node.to_global_id("App", app.id),
PERMISSIONS_FIELD: list(app_permission_enums & user_permission_enums),
USER_PERMISSION_FIELD: list(user_permission_enums),
}
- if audience:
- additional_payload["aud"] = audience
+ if app.audience:
+ additional_payload["aud"] = app.audience
+ if extra:
+ additional_payload.update(extra)
payload = jwt_user_payload(
user,
@@ -196,12 +197,7 @@
"""
app_permissions = app.permissions.all()
return _create_access_token_for_third_party_actions(
- permissions=app_permissions,
- user=user,
- type="App",
- object_id=app.id,
- object_payload_key="app",
- audience=app.audience,
+ permissions=app_permissions, user=user, app=app
)
@@ -211,11 +207,10 @@
user: "User",
app: "App",
):
+ app_extension_id = graphene.Node.to_global_id("AppExtension", app_extension.id)
return _create_access_token_for_third_party_actions(
permissions=permissions,
user=user,
- type="AppExtension",
- object_id=app_extension.id,
- object_payload_key="app_extension",
- audience=app.audience,
+ app=app,
+ extra={"app_extension": app_extension_id},
)
|
{"golden_diff": "diff --git a/saleor/core/jwt.py b/saleor/core/jwt.py\n--- a/saleor/core/jwt.py\n+++ b/saleor/core/jwt.py\n@@ -20,6 +20,7 @@\n JWT_THIRDPARTY_ACCESS_TYPE = \"thirdparty\"\n JWT_REFRESH_TOKEN_COOKIE_NAME = \"refreshToken\"\n \n+APP_KEY_FIELD = \"app\"\n PERMISSIONS_FIELD = \"permissions\"\n USER_PERMISSION_FIELD = \"user_permissions\"\n JWT_SALEOR_OWNER_NAME = \"saleor\"\n@@ -160,22 +161,22 @@\n def _create_access_token_for_third_party_actions(\n permissions: Iterable[\"Permission\"],\n user: \"User\",\n- type: str,\n- object_id: int,\n- object_payload_key: str,\n- audience: Optional[str],\n+ app: \"App\",\n+ extra: Optional[Dict[str, Any]] = None,\n ):\n app_permission_enums = get_permission_names(permissions)\n \n permissions = user.effective_permissions\n user_permission_enums = get_permission_names(permissions)\n additional_payload = {\n- object_payload_key: graphene.Node.to_global_id(type, object_id),\n+ APP_KEY_FIELD: graphene.Node.to_global_id(\"App\", app.id),\n PERMISSIONS_FIELD: list(app_permission_enums & user_permission_enums),\n USER_PERMISSION_FIELD: list(user_permission_enums),\n }\n- if audience:\n- additional_payload[\"aud\"] = audience\n+ if app.audience:\n+ additional_payload[\"aud\"] = app.audience\n+ if extra:\n+ additional_payload.update(extra)\n \n payload = jwt_user_payload(\n user,\n@@ -196,12 +197,7 @@\n \"\"\"\n app_permissions = app.permissions.all()\n return _create_access_token_for_third_party_actions(\n- permissions=app_permissions,\n- user=user,\n- type=\"App\",\n- object_id=app.id,\n- object_payload_key=\"app\",\n- audience=app.audience,\n+ permissions=app_permissions, user=user, app=app\n )\n \n \n@@ -211,11 +207,10 @@\n user: \"User\",\n app: \"App\",\n ):\n+ app_extension_id = graphene.Node.to_global_id(\"AppExtension\", app_extension.id)\n return _create_access_token_for_third_party_actions(\n permissions=permissions,\n user=user,\n- type=\"AppExtension\",\n- object_id=app_extension.id,\n- object_payload_key=\"app_extension\",\n- audience=app.audience,\n+ app=app,\n+ extra={\"app_extension\": app_extension_id},\n )\n", "issue": "Bug: Missing \"app id\" in app extension token\n### What are you trying to achieve?\r\n\r\nApp (via app-sdk) is checking `app` field in user JWT token (sent from dashboard, via postmessage to appbridge) and compares it to registered app ID (that app persists). If app ID is different, app is rejecting the call.\r\n\r\nHowever, there are also extensions that have their own token (they can have different permissions). **This token lacks `app` field**. \r\n\r\nFor app, extension is its part, so its expected to have app ID in the token as well. 
Otherwise, app would require to persist all registered extensions to work, which is quite hard to implement.\r\n\r\n### Steps to reproduce the problem\r\n\r\n## App token payload\r\n\r\n```json\r\n{\r\n \"iat\": 1679048767,\r\n \"owner\": \"saleor\",\r\n \"iss\": \"https://automation-dashboard.staging.saleor.cloud/graphql/\",\r\n \"exp\": 1679135167,\r\n \"token\": \"M2irFmzVASR3\",\r\n \"email\": \"[email protected]\",\r\n \"type\": \"thirdparty\",\r\n \"user_id\": \"VXNlcjoxMDMz\",\r\n \"is_staff\": true,\r\n \"app\": \"QXBwOjY0\",\r\n \"permissions\": [\r\n \"MANAGE_PRODUCTS\"\r\n ],\r\n \"user_permissions\": [\r\n \"MANAGE_GIFT_CARD\",\r\n \"MANAGE_MENUS\",\r\n \"MANAGE_PAGES\",\r\n \"MANAGE_PAGE_TYPES_AND_ATTRIBUTES\",\r\n \"MANAGE_PLUGINS\",\r\n \"MANAGE_TAXES\",\r\n \"MANAGE_USERS\",\r\n \"MANAGE_CHECKOUTS\",\r\n \"MANAGE_PRODUCT_TYPES_AND_ATTRIBUTES\",\r\n \"MANAGE_TRANSLATIONS\",\r\n \"MANAGE_APPS\",\r\n \"MANAGE_OBSERVABILITY\",\r\n \"MANAGE_STAFF\",\r\n \"HANDLE_TAXES\",\r\n \"MANAGE_CHANNELS\",\r\n \"HANDLE_CHECKOUTS\",\r\n \"MANAGE_SETTINGS\",\r\n \"HANDLE_PAYMENTS\",\r\n \"MANAGE_ORDERS\",\r\n \"MANAGE_PRODUCTS\",\r\n \"MANAGE_SHIPPING\",\r\n \"MANAGE_DISCOUNTS\",\r\n \"IMPERSONATE_USER\"\r\n ]\r\n}\r\n```\r\n\r\n\r\n## Extension token payload\r\n\r\n```json\r\n \"owner\": \"saleor\",\r\n \"iss\": \"https://automation-dashboard.staging.saleor.cloud/graphql/\",\r\n \"exp\": 1679137338,\r\n \"token\": \"M2irFmzVASR3\",\r\n \"email\": \"[email protected]\",\r\n \"type\": \"thirdparty\",\r\n \"user_id\": \"VXNlcjoxMDMz\",\r\n \"is_staff\": true,\r\n \"app_extension\": \"QXBwRXh0ZW5zaW9uOjQy\",\r\n \"permissions\": [],\r\n \"user_permissions\": [\r\n \"MANAGE_GIFT_CARD\",\r\n \"MANAGE_MENUS\",\r\n \"MANAGE_PAGES\",\r\n \"MANAGE_PAGE_TYPES_AND_ATTRIBUTES\",\r\n \"MANAGE_PLUGINS\",\r\n \"MANAGE_TAXES\",\r\n \"MANAGE_USERS\",\r\n \"MANAGE_CHECKOUTS\",\r\n \"MANAGE_PRODUCT_TYPES_AND_ATTRIBUTES\",\r\n \"MANAGE_TRANSLATIONS\",\r\n \"MANAGE_APPS\",\r\n \"MANAGE_OBSERVABILITY\",\r\n \"MANAGE_STAFF\",\r\n \"HANDLE_TAXES\",\r\n \"MANAGE_CHANNELS\",\r\n \"HANDLE_CHECKOUTS\",\r\n \"MANAGE_SETTINGS\",\r\n \"HANDLE_PAYMENTS\",\r\n \"MANAGE_ORDERS\",\r\n \"MANAGE_PRODUCTS\",\r\n \"MANAGE_SHIPPING\",\r\n \"MANAGE_DISCOUNTS\",\r\n \"IMPERSONATE_USER\"\r\n ]\r\n}\r\n```\r\n\r\n### What did you expect to happen?\r\n\r\nExtension token should contain `app` field\r\n\r\n### Logs\r\n\r\n_No response_\r\n\r\n### Environment\r\n\r\nSaleor version: \u2026\r\nOS and version: \u2026\r\n\n", "before_files": [{"content": "from datetime import datetime, timedelta\nfrom typing import Any, Dict, Iterable, Optional\n\nimport graphene\nimport jwt\nfrom django.conf import settings\n\nfrom ..account.models import User\nfrom ..app.models import App, AppExtension\nfrom ..permission.enums import (\n get_permission_names,\n get_permissions_from_codenames,\n get_permissions_from_names,\n)\nfrom ..permission.models import Permission\nfrom .jwt_manager import get_jwt_manager\n\nJWT_ACCESS_TYPE = \"access\"\nJWT_REFRESH_TYPE = \"refresh\"\nJWT_THIRDPARTY_ACCESS_TYPE = \"thirdparty\"\nJWT_REFRESH_TOKEN_COOKIE_NAME = \"refreshToken\"\n\nPERMISSIONS_FIELD = \"permissions\"\nUSER_PERMISSION_FIELD = \"user_permissions\"\nJWT_SALEOR_OWNER_NAME = \"saleor\"\nJWT_OWNER_FIELD = \"owner\"\n\n\ndef jwt_base_payload(\n exp_delta: Optional[timedelta], token_owner: str\n) -> Dict[str, Any]:\n utc_now = datetime.utcnow()\n\n payload = {\n \"iat\": utc_now,\n JWT_OWNER_FIELD: token_owner,\n \"iss\": get_jwt_manager().get_issuer(),\n }\n if 
exp_delta:\n payload[\"exp\"] = utc_now + exp_delta\n return payload\n\n\ndef jwt_user_payload(\n user: User,\n token_type: str,\n exp_delta: Optional[timedelta],\n additional_payload: Optional[Dict[str, Any]] = None,\n token_owner: str = JWT_SALEOR_OWNER_NAME,\n) -> Dict[str, Any]:\n payload = jwt_base_payload(exp_delta, token_owner)\n payload.update(\n {\n \"token\": user.jwt_token_key,\n \"email\": user.email,\n \"type\": token_type,\n \"user_id\": graphene.Node.to_global_id(\"User\", user.id),\n \"is_staff\": user.is_staff,\n }\n )\n if additional_payload:\n payload.update(additional_payload)\n return payload\n\n\ndef jwt_encode(payload: Dict[str, Any]) -> str:\n jwt_manager = get_jwt_manager()\n return jwt_manager.encode(payload)\n\n\ndef jwt_decode_with_exception_handler(\n token: str, verify_expiration=settings.JWT_EXPIRE\n) -> Optional[Dict[str, Any]]:\n try:\n return jwt_decode(token, verify_expiration=verify_expiration)\n except jwt.PyJWTError:\n return None\n\n\ndef jwt_decode(\n token: str, verify_expiration=settings.JWT_EXPIRE, verify_aud: bool = False\n) -> Dict[str, Any]:\n jwt_manager = get_jwt_manager()\n return jwt_manager.decode(token, verify_expiration, verify_aud=verify_aud)\n\n\ndef create_token(payload: Dict[str, Any], exp_delta: timedelta) -> str:\n payload.update(jwt_base_payload(exp_delta, token_owner=JWT_SALEOR_OWNER_NAME))\n return jwt_encode(payload)\n\n\ndef create_access_token(\n user: User, additional_payload: Optional[Dict[str, Any]] = None\n) -> str:\n payload = jwt_user_payload(\n user, JWT_ACCESS_TYPE, settings.JWT_TTL_ACCESS, additional_payload\n )\n return jwt_encode(payload)\n\n\ndef create_refresh_token(\n user: User, additional_payload: Optional[Dict[str, Any]] = None\n) -> str:\n payload = jwt_user_payload(\n user,\n JWT_REFRESH_TYPE,\n settings.JWT_TTL_REFRESH,\n additional_payload,\n )\n return jwt_encode(payload)\n\n\ndef get_user_from_payload(payload: Dict[str, Any], request=None) -> Optional[User]:\n # TODO: dataloader\n user = User.objects.filter(email=payload[\"email\"], is_active=True).first()\n user_jwt_token = payload.get(\"token\")\n if not user_jwt_token or not user:\n raise jwt.InvalidTokenError(\n \"Invalid token. Create new one by using tokenCreate mutation.\"\n )\n if user.jwt_token_key != user_jwt_token:\n raise jwt.InvalidTokenError(\n \"Invalid token. Create new one by using tokenCreate mutation.\"\n )\n return user\n\n\ndef is_saleor_token(token: str) -> bool:\n \"\"\"Confirm that token was generated by Saleor not by plugin.\"\"\"\n try:\n payload = jwt.decode(token, options={\"verify_signature\": False})\n except jwt.PyJWTError:\n return False\n owner = payload.get(JWT_OWNER_FIELD)\n if not owner or owner != JWT_SALEOR_OWNER_NAME:\n return False\n return True\n\n\ndef get_user_from_access_payload(payload: dict, request=None) -> Optional[User]:\n jwt_type = payload.get(\"type\")\n if jwt_type not in [JWT_ACCESS_TYPE, JWT_THIRDPARTY_ACCESS_TYPE]:\n raise jwt.InvalidTokenError(\n \"Invalid token. 
Create new one by using tokenCreate mutation.\"\n )\n permissions = payload.get(PERMISSIONS_FIELD, None)\n user = get_user_from_payload(payload, request)\n if user:\n if permissions is not None:\n token_permissions = get_permissions_from_names(permissions)\n token_codenames = [perm.codename for perm in token_permissions]\n user.effective_permissions = get_permissions_from_codenames(token_codenames)\n user.is_staff = True if user.effective_permissions else False\n\n if payload.get(\"is_staff\"):\n user.is_staff = True\n return user\n\n\ndef _create_access_token_for_third_party_actions(\n permissions: Iterable[\"Permission\"],\n user: \"User\",\n type: str,\n object_id: int,\n object_payload_key: str,\n audience: Optional[str],\n):\n app_permission_enums = get_permission_names(permissions)\n\n permissions = user.effective_permissions\n user_permission_enums = get_permission_names(permissions)\n additional_payload = {\n object_payload_key: graphene.Node.to_global_id(type, object_id),\n PERMISSIONS_FIELD: list(app_permission_enums & user_permission_enums),\n USER_PERMISSION_FIELD: list(user_permission_enums),\n }\n if audience:\n additional_payload[\"aud\"] = audience\n\n payload = jwt_user_payload(\n user,\n JWT_THIRDPARTY_ACCESS_TYPE,\n exp_delta=settings.JWT_TTL_APP_ACCESS,\n additional_payload=additional_payload,\n )\n return jwt_encode(payload)\n\n\ndef create_access_token_for_app(app: \"App\", user: \"User\"):\n \"\"\"Create access token for app.\n\n App can use user's JWT token to proceed given operation in Saleor.\n The token which can be used by App has additional field defining the permissions\n assigned to it. The permissions set is the intersection of user permissions and\n app permissions.\n \"\"\"\n app_permissions = app.permissions.all()\n return _create_access_token_for_third_party_actions(\n permissions=app_permissions,\n user=user,\n type=\"App\",\n object_id=app.id,\n object_payload_key=\"app\",\n audience=app.audience,\n )\n\n\ndef create_access_token_for_app_extension(\n app_extension: \"AppExtension\",\n permissions: Iterable[\"Permission\"],\n user: \"User\",\n app: \"App\",\n):\n return _create_access_token_for_third_party_actions(\n permissions=permissions,\n user=user,\n type=\"AppExtension\",\n object_id=app_extension.id,\n object_payload_key=\"app_extension\",\n audience=app.audience,\n )\n", "path": "saleor/core/jwt.py"}]}
| 3,463 | 575 |
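To make the app-side check described in the issue above concrete, here is a minimal sketch of how an app might validate the `app` claim against the ID it stored at registration. It uses PyJWT's `jwt.decode` with signature verification disabled purely for brevity (mirroring `is_saleor_token` in the listing); a real app must verify the signature:

```python
import jwt  # PyJWT

STORED_APP_ID = "QXBwOjY0"  # persisted by the app when it registered with Saleor


def token_belongs_to_this_app(token: str) -> bool:
    # Illustration only: never skip signature verification in production.
    payload = jwt.decode(token, options={"verify_signature": False})
    # Before the patch above, extension tokens carried only "app_extension",
    # so this comparison failed; afterwards they carry "app" as well.
    return payload.get("app") == STORED_APP_ID
```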
gh_patches_debug_16950
|
rasdani/github-patches
|
git_diff
|
pyg-team__pytorch_geometric-8016
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Strange behaviour with the to_dense_adj function
### 🐛 Describe the bug
While using to_dense_adj with edge attributes, I observed that the `idx` values generated are not unique (line 94 in to_dense_adj.py). As such, the scatter_add function sums up overlapping values, generating an output greater than the original range of edge_attr values.

The required tensors can be downloaded from [here](https://filesender.switch.ch/filesender2/download.php?token=d4b1599a-6eee-4b06-8640-be16fb784ab5&files_ids=490595)
Any help or insights are highly appreciated.
Thanks,
Chinmay
### Environment
* PyG version:2.3.1
* PyTorch version: 2.0.1+cu117
* OS: Ubuntu 20.04
* Python version:3.8.10
* CUDA/cuDNN version:11.7
* How you installed PyTorch and PyG (`conda`, `pip`, source):pip
* Any other relevant information (*e.g.*, version of `torch-scatter`):
</issue>
<code>
[start of torch_geometric/utils/to_dense_adj.py]
1 from typing import Optional
2
3 import torch
4 from torch import Tensor
5
6 from torch_geometric.typing import OptTensor
7 from torch_geometric.utils import cumsum, scatter
8
9
10 def to_dense_adj(
11 edge_index: Tensor,
12 batch: OptTensor = None,
13 edge_attr: OptTensor = None,
14 max_num_nodes: Optional[int] = None,
15 batch_size: Optional[int] = None,
16 ) -> Tensor:
17 r"""Converts batched sparse adjacency matrices given by edge indices and
18 edge attributes to a single dense batched adjacency matrix.
19
20 Args:
21 edge_index (LongTensor): The edge indices.
22 batch (LongTensor, optional): Batch vector
23 :math:`\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N`, which assigns each
24 node to a specific example. (default: :obj:`None`)
25 edge_attr (Tensor, optional): Edge weights or multi-dimensional edge
26 features. (default: :obj:`None`)
27 max_num_nodes (int, optional): The size of the output node dimension.
28 (default: :obj:`None`)
29 batch_size (int, optional) The batch size. (default: :obj:`None`)
30
31 :rtype: :class:`Tensor`
32
33 Examples:
34
35 >>> edge_index = torch.tensor([[0, 0, 1, 2, 3],
36 ... [0, 1, 0, 3, 0]])
37 >>> batch = torch.tensor([0, 0, 1, 1])
38 >>> to_dense_adj(edge_index, batch)
39 tensor([[[1., 1.],
40 [1., 0.]],
41 [[0., 1.],
42 [1., 0.]]])
43
44 >>> to_dense_adj(edge_index, batch, max_num_nodes=4)
45 tensor([[[1., 1., 0., 0.],
46 [1., 0., 0., 0.],
47 [0., 0., 0., 0.],
48 [0., 0., 0., 0.]],
49 [[0., 1., 0., 0.],
50 [1., 0., 0., 0.],
51 [0., 0., 0., 0.],
52 [0., 0., 0., 0.]]])
53
54 >>> edge_attr = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])
55 >>> to_dense_adj(edge_index, batch, edge_attr)
56 tensor([[[1., 2.],
57 [3., 0.]],
58 [[0., 4.],
59 [5., 0.]]])
60 """
61 if batch is None:
62 num_nodes = int(edge_index.max()) + 1 if edge_index.numel() > 0 else 0
63 batch = edge_index.new_zeros(num_nodes)
64
65 if batch_size is None:
66 batch_size = int(batch.max()) + 1 if batch.numel() > 0 else 1
67
68 one = batch.new_ones(batch.size(0))
69 num_nodes = scatter(one, batch, dim=0, dim_size=batch_size, reduce='sum')
70 cum_nodes = cumsum(num_nodes)
71
72 idx0 = batch[edge_index[0]]
73 idx1 = edge_index[0] - cum_nodes[batch][edge_index[0]]
74 idx2 = edge_index[1] - cum_nodes[batch][edge_index[1]]
75
76 if max_num_nodes is None:
77 max_num_nodes = int(num_nodes.max())
78
79 elif ((idx1.numel() > 0 and idx1.max() >= max_num_nodes)
80 or (idx2.numel() > 0 and idx2.max() >= max_num_nodes)):
81 mask = (idx1 < max_num_nodes) & (idx2 < max_num_nodes)
82 idx0 = idx0[mask]
83 idx1 = idx1[mask]
84 idx2 = idx2[mask]
85 edge_attr = None if edge_attr is None else edge_attr[mask]
86
87 if edge_attr is None:
88 edge_attr = torch.ones(idx0.numel(), device=edge_index.device)
89
90 size = [batch_size, max_num_nodes, max_num_nodes]
91 size += list(edge_attr.size())[1:]
92 flattened_size = batch_size * max_num_nodes * max_num_nodes
93
94 idx = idx0 * max_num_nodes * max_num_nodes + idx1 * max_num_nodes + idx2
95 adj = scatter(edge_attr, idx, dim=0, dim_size=flattened_size, reduce='sum')
96 adj = adj.view(size)
97
98 return adj
99
[end of torch_geometric/utils/to_dense_adj.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torch_geometric/utils/to_dense_adj.py b/torch_geometric/utils/to_dense_adj.py
--- a/torch_geometric/utils/to_dense_adj.py
+++ b/torch_geometric/utils/to_dense_adj.py
@@ -23,7 +23,10 @@
:math:`\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N`, which assigns each
node to a specific example. (default: :obj:`None`)
edge_attr (Tensor, optional): Edge weights or multi-dimensional edge
- features. (default: :obj:`None`)
+ features.
+ If :obj:`edge_index` contains duplicated edges, the dense adjacency
+ matrix output holds the summed up entries of :obj:`edge_attr` for
+ duplicated edges. (default: :obj:`None`)
max_num_nodes (int, optional): The size of the output node dimension.
(default: :obj:`None`)
batch_size (int, optional) The batch size. (default: :obj:`None`)
|
{"golden_diff": "diff --git a/torch_geometric/utils/to_dense_adj.py b/torch_geometric/utils/to_dense_adj.py\n--- a/torch_geometric/utils/to_dense_adj.py\n+++ b/torch_geometric/utils/to_dense_adj.py\n@@ -23,7 +23,10 @@\n :math:`\\mathbf{b} \\in {\\{ 0, \\ldots, B-1\\}}^N`, which assigns each\n node to a specific example. (default: :obj:`None`)\n edge_attr (Tensor, optional): Edge weights or multi-dimensional edge\n- features. (default: :obj:`None`)\n+ features.\n+ If :obj:`edge_index` contains duplicated edges, the dense adjacency\n+ matrix output holds the summed up entries of :obj:`edge_attr` for\n+ duplicated edges. (default: :obj:`None`)\n max_num_nodes (int, optional): The size of the output node dimension.\n (default: :obj:`None`)\n batch_size (int, optional) The batch size. (default: :obj:`None`)\n", "issue": "Strange behaviour with the to_dense_adj function\n### \ud83d\udc1b Describe the bug\n\nWhile using to_dense_adj with edge attributes, I observed that the `idx` values generated are not unique ((line 94 in to_dense_adj.py). As such, the scatter_add function sums up overlapping values and generating an output greater than the original range of edge_attr values.\r\n\r\n\r\n\r\nThe required tensors can be downloaded from [here](https://filesender.switch.ch/filesender2/download.php?token=d4b1599a-6eee-4b06-8640-be16fb784ab5&files_ids=490595)\r\n\r\nAny help or insights are highly appreciated. \r\n\r\nThanks,\r\nChinmay\n\n### Environment\n\n* PyG version:2.3.1\r\n* PyTorch version: 2.0.1+cu117\r\n* OS: Ubuntu 20.04\r\n* Python version:3.8.10\r\n* CUDA/cuDNN version:11.7\r\n* How you installed PyTorch and PyG (`conda`, `pip`, source):pip\r\n* Any other relevant information (*e.g.*, version of `torch-scatter`):\r\n\n", "before_files": [{"content": "from typing import Optional\n\nimport torch\nfrom torch import Tensor\n\nfrom torch_geometric.typing import OptTensor\nfrom torch_geometric.utils import cumsum, scatter\n\n\ndef to_dense_adj(\n edge_index: Tensor,\n batch: OptTensor = None,\n edge_attr: OptTensor = None,\n max_num_nodes: Optional[int] = None,\n batch_size: Optional[int] = None,\n) -> Tensor:\n r\"\"\"Converts batched sparse adjacency matrices given by edge indices and\n edge attributes to a single dense batched adjacency matrix.\n\n Args:\n edge_index (LongTensor): The edge indices.\n batch (LongTensor, optional): Batch vector\n :math:`\\mathbf{b} \\in {\\{ 0, \\ldots, B-1\\}}^N`, which assigns each\n node to a specific example. (default: :obj:`None`)\n edge_attr (Tensor, optional): Edge weights or multi-dimensional edge\n features. (default: :obj:`None`)\n max_num_nodes (int, optional): The size of the output node dimension.\n (default: :obj:`None`)\n batch_size (int, optional) The batch size. (default: :obj:`None`)\n\n :rtype: :class:`Tensor`\n\n Examples:\n\n >>> edge_index = torch.tensor([[0, 0, 1, 2, 3],\n ... 
[0, 1, 0, 3, 0]])\n >>> batch = torch.tensor([0, 0, 1, 1])\n >>> to_dense_adj(edge_index, batch)\n tensor([[[1., 1.],\n [1., 0.]],\n [[0., 1.],\n [1., 0.]]])\n\n >>> to_dense_adj(edge_index, batch, max_num_nodes=4)\n tensor([[[1., 1., 0., 0.],\n [1., 0., 0., 0.],\n [0., 0., 0., 0.],\n [0., 0., 0., 0.]],\n [[0., 1., 0., 0.],\n [1., 0., 0., 0.],\n [0., 0., 0., 0.],\n [0., 0., 0., 0.]]])\n\n >>> edge_attr = torch.tensor([1.0, 2.0, 3.0, 4.0, 5.0])\n >>> to_dense_adj(edge_index, batch, edge_attr)\n tensor([[[1., 2.],\n [3., 0.]],\n [[0., 4.],\n [5., 0.]]])\n \"\"\"\n if batch is None:\n num_nodes = int(edge_index.max()) + 1 if edge_index.numel() > 0 else 0\n batch = edge_index.new_zeros(num_nodes)\n\n if batch_size is None:\n batch_size = int(batch.max()) + 1 if batch.numel() > 0 else 1\n\n one = batch.new_ones(batch.size(0))\n num_nodes = scatter(one, batch, dim=0, dim_size=batch_size, reduce='sum')\n cum_nodes = cumsum(num_nodes)\n\n idx0 = batch[edge_index[0]]\n idx1 = edge_index[0] - cum_nodes[batch][edge_index[0]]\n idx2 = edge_index[1] - cum_nodes[batch][edge_index[1]]\n\n if max_num_nodes is None:\n max_num_nodes = int(num_nodes.max())\n\n elif ((idx1.numel() > 0 and idx1.max() >= max_num_nodes)\n or (idx2.numel() > 0 and idx2.max() >= max_num_nodes)):\n mask = (idx1 < max_num_nodes) & (idx2 < max_num_nodes)\n idx0 = idx0[mask]\n idx1 = idx1[mask]\n idx2 = idx2[mask]\n edge_attr = None if edge_attr is None else edge_attr[mask]\n\n if edge_attr is None:\n edge_attr = torch.ones(idx0.numel(), device=edge_index.device)\n\n size = [batch_size, max_num_nodes, max_num_nodes]\n size += list(edge_attr.size())[1:]\n flattened_size = batch_size * max_num_nodes * max_num_nodes\n\n idx = idx0 * max_num_nodes * max_num_nodes + idx1 * max_num_nodes + idx2\n adj = scatter(edge_attr, idx, dim=0, dim_size=flattened_size, reduce='sum')\n adj = adj.view(size)\n\n return adj\n", "path": "torch_geometric/utils/to_dense_adj.py"}]}
| 2,058 | 234 |
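A small worked example of the behaviour the patch above documents: when `edge_index` repeats an edge, the flattened indices collide and the `reduce='sum'` scatter adds the duplicated attributes into a single dense entry. Sketch assuming a recent PyTorch Geometric install:

```python
import torch
from torch_geometric.utils import to_dense_adj

# The edge 0 -> 1 appears twice with different attribute values.
edge_index = torch.tensor([[0, 0, 1],
                           [1, 1, 0]])
edge_attr = torch.tensor([1.0, 2.0, 5.0])

adj = to_dense_adj(edge_index, edge_attr=edge_attr)
print(adj)
# tensor([[[0., 3.],
#          [5., 0.]]])   # adj[0, 0, 1] == 3.0 because 1.0 and 2.0 were summed
```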
gh_patches_debug_40467
|
rasdani/github-patches
|
git_diff
|
Parsl__parsl-1292
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tasks per app graph appears as a sawtooth, not as rectangles
See attached plot.
This looks like it plots the number of data points at the point a task starts, and then the next point after a task ends, with linear interpolation between the two points. This is an incorrect visualisation: a task does not fade from existing to not existing over the entire duration of execution; instead it exists at full strength for the full duration of existence, and should be represented on the graph as a rectangular, not saw tooth, plot.

</issue>
<code>
[start of parsl/monitoring/visualization/plots/default/workflow_plots.py]
1 import numpy as np
2 import plotly.graph_objs as go
3 import plotly.figure_factory as ff
4 from plotly.offline import plot
5 import networkx as nx
6 import datetime
7
8 from parsl.monitoring.visualization.utils import timestamp_to_int, num_to_timestamp, DB_DATE_FORMAT
9
10
11 def task_gantt_plot(df_task, time_completed=None):
12
13 df_task = df_task.sort_values(by=['task_time_submitted'], ascending=False)
14
15 # df_task['task_time_submitted'] = pd.to_datetime(df_task['task_time_submitted'], unit='s')
16 # df_task['task_time_returned'] = pd.to_datetime(df_task['task_time_returned'], unit='s')
17
18 # df_task = df_task.rename(index=str, columns={"task_id": "Task",
19 # "task_time_submitted": "Start",
20 # "task_time_returned": "Finish",
21 # })
22 # parsl_tasks = df_task.to_dict('records')
23 parsl_tasks = []
24 for i, task in df_task.iterrows():
25 time_running, time_returned = task['task_time_running'], task['task_time_returned']
26 if task['task_time_returned'] is None:
27 time_returned = datetime.datetime.now()
28 if time_completed is not None:
29 time_returned = time_completed
30 if task['task_time_running'] is None:
31 time_running = task['task_time_submitted']
32 description = "Task ID: {}, app: {}".format(task['task_id'], task['task_func_name'])
33 dic1 = dict(Task=description, Start=task['task_time_submitted'],
34 Finish=time_running, Resource="Pending")
35 dic2 = dict(Task=description, Start=time_running,
36 Finish=time_returned, Resource="Running")
37 parsl_tasks.extend([dic1, dic2])
38 colors = {'Pending': 'rgb(168, 168, 168)', 'Running': 'rgb(0, 0, 255)'}
39 fig = ff.create_gantt(parsl_tasks,
40 title="",
41 colors=colors,
42 group_tasks=True,
43 show_colorbar=True,
44 index_col='Resource',
45 )
46 fig['layout']['yaxis']['title'] = 'Task'
47 fig['layout']['yaxis']['showticklabels'] = False
48 fig['layout']['xaxis']['title'] = 'Time'
49 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
50
51
52 def task_per_app_plot(df_task, df_status):
53
54 def y_axis_setup(array):
55 count = 0
56 items = []
57 for n in array:
58 if n:
59 count += 1
60 elif count > 0:
61 count -= 1
62 items.append(count)
63 return items
64
65 # Fill up dict "apps" like: {app1: [#task1, #task2], app2: [#task4], app3: [#task3]}
66 apps_dict = dict()
67 for i in range(len(df_task)):
68 row = df_task.iloc[i]
69 if row['task_func_name'] in apps_dict:
70 apps_dict[row['task_func_name']].append(row['task_id'])
71 else:
72 apps_dict[row['task_func_name']] = [row['task_id']]
73
74 fig = go.Figure(
75 data=[go.Scatter(x=df_status[df_status['task_id'].isin(tasks)]['timestamp'],
76 y=y_axis_setup(df_status[df_status['task_id'].isin(
77 tasks)]['task_status_name'] == 'running'),
78 name=app)
79 for app, tasks in apps_dict.items()] +
80 [go.Scatter(x=df_status['timestamp'],
81 y=y_axis_setup(
82 df_status['task_status_name'] == 'running'),
83 name='all')],
84 layout=go.Layout(xaxis=dict(tickformat='%m-%d\n%H:%M:%S',
85 autorange=True,
86 title='Time'),
87 yaxis=dict(tickformat=',d',
88 title='Tasks'),
89 hovermode='closest',
90 title='Tasks per app'))
91
92 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
93
94
95 def total_tasks_plot(df_task, df_status, columns=20):
96
97 min_time = timestamp_to_int(min(df_status['timestamp']))
98 max_time = timestamp_to_int(max(df_status['timestamp']))
99 time_step = (max_time - min_time) / columns
100
101 x_axis = []
102 for i in np.arange(min_time, max_time + time_step, time_step):
103 x_axis.append(num_to_timestamp(i).strftime(DB_DATE_FORMAT))
104
105 # Fill up dict "apps" like: {app1: [#task1, #task2], app2: [#task4], app3: [#task3]}
106 apps_dict = dict()
107 for i in range(len(df_task)):
108 row = df_task.iloc[i]
109 if row['task_func_name'] in apps_dict:
110 apps_dict[row['task_func_name']].append(row['task_id'])
111 else:
112 apps_dict[row['task_func_name']] = [row['task_id']]
113
114 def y_axis_setup(value):
115 items = []
116 for app, tasks in apps_dict.items():
117 tmp = []
118 task = df_status[df_status['task_id'].isin(tasks)]
119 for i in range(len(x_axis) - 1):
120 x = task['timestamp'] >= x_axis[i]
121 y = task['timestamp'] < x_axis[i + 1]
122 tmp.append(sum(task.loc[x & y]['task_status_name'] == value))
123 items = np.sum([items, tmp], axis=0)
124
125 return items
126
127 y_axis_done = y_axis_setup('done')
128 y_axis_failed = y_axis_setup('failed')
129
130 fig = go.Figure(data=[go.Bar(x=x_axis[:-1],
131 y=y_axis_done,
132 name='done'),
133 go.Bar(x=x_axis[:-1],
134 y=y_axis_failed,
135 name='failed')],
136 layout=go.Layout(xaxis=dict(tickformat='%m-%d\n%H:%M:%S',
137 autorange=True,
138 title='Time'),
139 yaxis=dict(tickformat=',d',
140 title='Running tasks.' ' Bin width: ' + num_to_timestamp(time_step).strftime('%Mm%Ss')),
141 annotations=[
142 dict(
143 x=0,
144 y=1.07,
145 showarrow=False,
146 text='Total Done: ' +
147 str(sum(y_axis_done)),
148 xref='paper',
149 yref='paper'
150 ),
151 dict(
152 x=0,
153 y=1.05,
154 showarrow=False,
155 text='Total Failed: ' +
156 str(sum(y_axis_failed)),
157 xref='paper',
158 yref='paper'
159 ),
160 ],
161 barmode='stack',
162 title="Total tasks"))
163
164 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
165
166
167 def workflow_dag_plot(df_tasks, group_by_apps=True):
168 G = nx.DiGraph(directed=True)
169 nodes = df_tasks['task_id'].unique()
170 dic = df_tasks.set_index('task_id').to_dict()
171 G.add_nodes_from(nodes)
172
173 # Add edges or links between the nodes:
174 edges = []
175 for k, v in dic['task_depends'].items():
176 if v:
177 adj = v.split(",")
178 for e in adj:
179 edges.append((int(e), k))
180 G.add_edges_from(edges)
181
182 node_positions = nx.nx_pydot.pydot_layout(G, prog='dot')
183 node_traces = []
184
185 if group_by_apps:
186 groups_list = {app: i for i, app in enumerate(
187 df_tasks['task_func_name'].unique())}
188 else:
189 groups_list = {'Pending': (0, 'gray'), "Running": (1, 'blue'), 'Completed': (2, 'green')}
190
191 for k, _ in groups_list.items():
192 node_trace = go.Scatter(
193 x=[],
194 y=[],
195 text=[],
196 mode='markers',
197 textposition='top center',
198 textfont=dict(
199 family='arial',
200 size=18,
201 color='rgb(0,0,0)'
202 ),
203 hoverinfo='text',
204 name=k, # legend app_name here
205 marker=dict(
206 showscale=False,
207 # color='rgb(200,0,0)',
208 size=11,
209 line=dict(width=1, color='rgb(0,0,0)')))
210 node_traces.append(node_trace)
211
212 for node in node_positions:
213 x, y = node_positions[node]
214 if group_by_apps:
215 name = dic['task_func_name'][node]
216 index = groups_list[name]
217 else:
218 if dic['task_time_returned'][node] is not None:
219 name = 'Completed'
220 elif dic['task_time_running'][node] is not None:
221 name = "Running"
222 elif dic['task_time_submitted'][node] is not None:
223 name = "Pending"
224 index, color = groups_list[name]
225 node_traces[index]['marker']['color'] = color
226 node_traces[index]['x'] += tuple([x])
227 node_traces[index]['y'] += tuple([y])
228 node_traces[index]['text'] += tuple(
229 ["{}:{}".format(dic['task_func_name'][node], node)])
230
231 # The edges will be drawn as lines:
232 edge_trace = go.Scatter(
233 x=[],
234 y=[],
235 line=dict(width=1, color='rgb(160,160,160)'),
236 hoverinfo='none',
237 # showlegend=False,
238 name='Dependency',
239 mode='lines')
240
241 for edge in G.edges:
242 x0, y0 = node_positions[edge[0]]
243 x1, y1 = node_positions[edge[1]]
244 edge_trace['x'] += tuple([x0, x1, None])
245 edge_trace['y'] += tuple([y0, y1, None])
246
247 # Create figure:
248 fig = go.Figure(data=[edge_trace] + node_traces,
249 layout=go.Layout(
250 title='Workflow DAG',
251 titlefont=dict(size=16),
252 showlegend=True,
253 hovermode='closest',
254 margin=dict(b=20, l=5, r=5, t=40), # noqa: E741
255 xaxis=dict(showgrid=False, zeroline=False,
256 showticklabels=False),
257 yaxis=dict(showgrid=False, zeroline=False, showticklabels=False)))
258 return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
259
[end of parsl/monitoring/visualization/plots/default/workflow_plots.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/parsl/monitoring/visualization/plots/default/workflow_plots.py b/parsl/monitoring/visualization/plots/default/workflow_plots.py
--- a/parsl/monitoring/visualization/plots/default/workflow_plots.py
+++ b/parsl/monitoring/visualization/plots/default/workflow_plots.py
@@ -1,4 +1,5 @@
import numpy as np
+import pandas as pd
import plotly.graph_objs as go
import plotly.figure_factory as ff
from plotly.offline import plot
@@ -49,47 +50,39 @@
return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
-def task_per_app_plot(df_task, df_status):
-
- def y_axis_setup(array):
- count = 0
- items = []
- for n in array:
- if n:
- count += 1
- elif count > 0:
- count -= 1
- items.append(count)
- return items
-
- # Fill up dict "apps" like: {app1: [#task1, #task2], app2: [#task4], app3: [#task3]}
- apps_dict = dict()
- for i in range(len(df_task)):
- row = df_task.iloc[i]
- if row['task_func_name'] in apps_dict:
- apps_dict[row['task_func_name']].append(row['task_id'])
- else:
- apps_dict[row['task_func_name']] = [row['task_id']]
-
- fig = go.Figure(
- data=[go.Scatter(x=df_status[df_status['task_id'].isin(tasks)]['timestamp'],
- y=y_axis_setup(df_status[df_status['task_id'].isin(
- tasks)]['task_status_name'] == 'running'),
- name=app)
- for app, tasks in apps_dict.items()] +
- [go.Scatter(x=df_status['timestamp'],
- y=y_axis_setup(
- df_status['task_status_name'] == 'running'),
- name='all')],
- layout=go.Layout(xaxis=dict(tickformat='%m-%d\n%H:%M:%S',
- autorange=True,
- title='Time'),
- yaxis=dict(tickformat=',d',
- title='Tasks'),
- hovermode='closest',
- title='Tasks per app'))
-
- return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
+def task_per_app_plot(task, status):
+
+ try:
+ task['epoch_time_running'] = (pd.to_datetime(
+ task['task_time_running']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
+ task['epoch_time_returned'] = (pd.to_datetime(
+ task['task_time_returned']) - pd.Timestamp("1970-01-01")) // pd.Timedelta('1s')
+ start = task['epoch_time_running'].min()
+ end = task['epoch_time_returned'].max()
+ tasks_per_app = {}
+ all_tasks = [0] * (end - start + 1)
+ for i, row in task.iterrows():
+ if row['task_func_name'] not in tasks_per_app:
+ tasks_per_app[row['task_func_name']] = [0] * (end - start + 1)
+ for j in range(int(row['epoch_time_running']) + 1, int(row['epoch_time_returned']) + 1):
+ tasks_per_app[row['task_func_name']][j - start] += 1
+ all_tasks[j - start] += 1
+ fig = go.Figure(
+ data=[go.Scatter(x=list(range(0, end - start + 1)),
+ y=tasks_per_app[app],
+ name=app,
+ ) for app in tasks_per_app] +
+ [go.Scatter(x=list(range(0, end - start + 1)),
+ y=all_tasks,
+ name='All',
+ )],
+ layout=go.Layout(xaxis=dict(autorange=True,
+ title='Time (seconds)'),
+ yaxis=dict(title='Number of tasks'),
+ title="Tasks per app"))
+ return plot(fig, show_link=False, output_type="div", include_plotlyjs=False)
+ except Exception as e:
+ return "The tasks per app plot cannot be generated because of exception {}.".format(e)
def total_tasks_plot(df_task, df_status, columns=20):
|
{"golden_diff": "diff --git a/parsl/monitoring/visualization/plots/default/workflow_plots.py b/parsl/monitoring/visualization/plots/default/workflow_plots.py\n--- a/parsl/monitoring/visualization/plots/default/workflow_plots.py\n+++ b/parsl/monitoring/visualization/plots/default/workflow_plots.py\n@@ -1,4 +1,5 @@\n import numpy as np\n+import pandas as pd\n import plotly.graph_objs as go\n import plotly.figure_factory as ff\n from plotly.offline import plot\n@@ -49,47 +50,39 @@\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n \n \n-def task_per_app_plot(df_task, df_status):\n-\n- def y_axis_setup(array):\n- count = 0\n- items = []\n- for n in array:\n- if n:\n- count += 1\n- elif count > 0:\n- count -= 1\n- items.append(count)\n- return items\n-\n- # Fill up dict \"apps\" like: {app1: [#task1, #task2], app2: [#task4], app3: [#task3]}\n- apps_dict = dict()\n- for i in range(len(df_task)):\n- row = df_task.iloc[i]\n- if row['task_func_name'] in apps_dict:\n- apps_dict[row['task_func_name']].append(row['task_id'])\n- else:\n- apps_dict[row['task_func_name']] = [row['task_id']]\n-\n- fig = go.Figure(\n- data=[go.Scatter(x=df_status[df_status['task_id'].isin(tasks)]['timestamp'],\n- y=y_axis_setup(df_status[df_status['task_id'].isin(\n- tasks)]['task_status_name'] == 'running'),\n- name=app)\n- for app, tasks in apps_dict.items()] +\n- [go.Scatter(x=df_status['timestamp'],\n- y=y_axis_setup(\n- df_status['task_status_name'] == 'running'),\n- name='all')],\n- layout=go.Layout(xaxis=dict(tickformat='%m-%d\\n%H:%M:%S',\n- autorange=True,\n- title='Time'),\n- yaxis=dict(tickformat=',d',\n- title='Tasks'),\n- hovermode='closest',\n- title='Tasks per app'))\n-\n- return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n+def task_per_app_plot(task, status):\n+\n+ try:\n+ task['epoch_time_running'] = (pd.to_datetime(\n+ task['task_time_running']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n+ task['epoch_time_returned'] = (pd.to_datetime(\n+ task['task_time_returned']) - pd.Timestamp(\"1970-01-01\")) // pd.Timedelta('1s')\n+ start = task['epoch_time_running'].min()\n+ end = task['epoch_time_returned'].max()\n+ tasks_per_app = {}\n+ all_tasks = [0] * (end - start + 1)\n+ for i, row in task.iterrows():\n+ if row['task_func_name'] not in tasks_per_app:\n+ tasks_per_app[row['task_func_name']] = [0] * (end - start + 1)\n+ for j in range(int(row['epoch_time_running']) + 1, int(row['epoch_time_returned']) + 1):\n+ tasks_per_app[row['task_func_name']][j - start] += 1\n+ all_tasks[j - start] += 1\n+ fig = go.Figure(\n+ data=[go.Scatter(x=list(range(0, end - start + 1)),\n+ y=tasks_per_app[app],\n+ name=app,\n+ ) for app in tasks_per_app] +\n+ [go.Scatter(x=list(range(0, end - start + 1)),\n+ y=all_tasks,\n+ name='All',\n+ )],\n+ layout=go.Layout(xaxis=dict(autorange=True,\n+ title='Time (seconds)'),\n+ yaxis=dict(title='Number of tasks'),\n+ title=\"Tasks per app\"))\n+ return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n+ except Exception as e:\n+ return \"The tasks per app plot cannot be generated because of exception {}.\".format(e)\n \n \n def total_tasks_plot(df_task, df_status, columns=20):\n", "issue": "Tasks per app graph appears as a sawtooth, not as rectangles\nSee attached plot.\r\n\r\nThis looks like it plots the number of data points at the point a task starts, and then the next point after a task ends, with linear interpolation between the two points. 
This is an incorrect visualisation: a task does not fade from existing to not existing over the entire duration of execution; instead it exists at full strength for the full duration of existence, and should be represented on the graph as a rectangular, not saw tooth, plot.\r\n\r\n\r\n\n", "before_files": [{"content": "import numpy as np\nimport plotly.graph_objs as go\nimport plotly.figure_factory as ff\nfrom plotly.offline import plot\nimport networkx as nx\nimport datetime\n\nfrom parsl.monitoring.visualization.utils import timestamp_to_int, num_to_timestamp, DB_DATE_FORMAT\n\n\ndef task_gantt_plot(df_task, time_completed=None):\n\n df_task = df_task.sort_values(by=['task_time_submitted'], ascending=False)\n\n # df_task['task_time_submitted'] = pd.to_datetime(df_task['task_time_submitted'], unit='s')\n # df_task['task_time_returned'] = pd.to_datetime(df_task['task_time_returned'], unit='s')\n\n # df_task = df_task.rename(index=str, columns={\"task_id\": \"Task\",\n # \"task_time_submitted\": \"Start\",\n # \"task_time_returned\": \"Finish\",\n # })\n # parsl_tasks = df_task.to_dict('records')\n parsl_tasks = []\n for i, task in df_task.iterrows():\n time_running, time_returned = task['task_time_running'], task['task_time_returned']\n if task['task_time_returned'] is None:\n time_returned = datetime.datetime.now()\n if time_completed is not None:\n time_returned = time_completed\n if task['task_time_running'] is None:\n time_running = task['task_time_submitted']\n description = \"Task ID: {}, app: {}\".format(task['task_id'], task['task_func_name'])\n dic1 = dict(Task=description, Start=task['task_time_submitted'],\n Finish=time_running, Resource=\"Pending\")\n dic2 = dict(Task=description, Start=time_running,\n Finish=time_returned, Resource=\"Running\")\n parsl_tasks.extend([dic1, dic2])\n colors = {'Pending': 'rgb(168, 168, 168)', 'Running': 'rgb(0, 0, 255)'}\n fig = ff.create_gantt(parsl_tasks,\n title=\"\",\n colors=colors,\n group_tasks=True,\n show_colorbar=True,\n index_col='Resource',\n )\n fig['layout']['yaxis']['title'] = 'Task'\n fig['layout']['yaxis']['showticklabels'] = False\n fig['layout']['xaxis']['title'] = 'Time'\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef task_per_app_plot(df_task, df_status):\n\n def y_axis_setup(array):\n count = 0\n items = []\n for n in array:\n if n:\n count += 1\n elif count > 0:\n count -= 1\n items.append(count)\n return items\n\n # Fill up dict \"apps\" like: {app1: [#task1, #task2], app2: [#task4], app3: [#task3]}\n apps_dict = dict()\n for i in range(len(df_task)):\n row = df_task.iloc[i]\n if row['task_func_name'] in apps_dict:\n apps_dict[row['task_func_name']].append(row['task_id'])\n else:\n apps_dict[row['task_func_name']] = [row['task_id']]\n\n fig = go.Figure(\n data=[go.Scatter(x=df_status[df_status['task_id'].isin(tasks)]['timestamp'],\n y=y_axis_setup(df_status[df_status['task_id'].isin(\n tasks)]['task_status_name'] == 'running'),\n name=app)\n for app, tasks in apps_dict.items()] +\n [go.Scatter(x=df_status['timestamp'],\n y=y_axis_setup(\n df_status['task_status_name'] == 'running'),\n name='all')],\n layout=go.Layout(xaxis=dict(tickformat='%m-%d\\n%H:%M:%S',\n autorange=True,\n title='Time'),\n yaxis=dict(tickformat=',d',\n title='Tasks'),\n hovermode='closest',\n title='Tasks per app'))\n\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef total_tasks_plot(df_task, df_status, columns=20):\n\n min_time = 
timestamp_to_int(min(df_status['timestamp']))\n max_time = timestamp_to_int(max(df_status['timestamp']))\n time_step = (max_time - min_time) / columns\n\n x_axis = []\n for i in np.arange(min_time, max_time + time_step, time_step):\n x_axis.append(num_to_timestamp(i).strftime(DB_DATE_FORMAT))\n\n # Fill up dict \"apps\" like: {app1: [#task1, #task2], app2: [#task4], app3: [#task3]}\n apps_dict = dict()\n for i in range(len(df_task)):\n row = df_task.iloc[i]\n if row['task_func_name'] in apps_dict:\n apps_dict[row['task_func_name']].append(row['task_id'])\n else:\n apps_dict[row['task_func_name']] = [row['task_id']]\n\n def y_axis_setup(value):\n items = []\n for app, tasks in apps_dict.items():\n tmp = []\n task = df_status[df_status['task_id'].isin(tasks)]\n for i in range(len(x_axis) - 1):\n x = task['timestamp'] >= x_axis[i]\n y = task['timestamp'] < x_axis[i + 1]\n tmp.append(sum(task.loc[x & y]['task_status_name'] == value))\n items = np.sum([items, tmp], axis=0)\n\n return items\n\n y_axis_done = y_axis_setup('done')\n y_axis_failed = y_axis_setup('failed')\n\n fig = go.Figure(data=[go.Bar(x=x_axis[:-1],\n y=y_axis_done,\n name='done'),\n go.Bar(x=x_axis[:-1],\n y=y_axis_failed,\n name='failed')],\n layout=go.Layout(xaxis=dict(tickformat='%m-%d\\n%H:%M:%S',\n autorange=True,\n title='Time'),\n yaxis=dict(tickformat=',d',\n title='Running tasks.' ' Bin width: ' + num_to_timestamp(time_step).strftime('%Mm%Ss')),\n annotations=[\n dict(\n x=0,\n y=1.07,\n showarrow=False,\n text='Total Done: ' +\n str(sum(y_axis_done)),\n xref='paper',\n yref='paper'\n ),\n dict(\n x=0,\n y=1.05,\n showarrow=False,\n text='Total Failed: ' +\n str(sum(y_axis_failed)),\n xref='paper',\n yref='paper'\n ),\n ],\n barmode='stack',\n title=\"Total tasks\"))\n\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n\n\ndef workflow_dag_plot(df_tasks, group_by_apps=True):\n G = nx.DiGraph(directed=True)\n nodes = df_tasks['task_id'].unique()\n dic = df_tasks.set_index('task_id').to_dict()\n G.add_nodes_from(nodes)\n\n # Add edges or links between the nodes:\n edges = []\n for k, v in dic['task_depends'].items():\n if v:\n adj = v.split(\",\")\n for e in adj:\n edges.append((int(e), k))\n G.add_edges_from(edges)\n\n node_positions = nx.nx_pydot.pydot_layout(G, prog='dot')\n node_traces = []\n\n if group_by_apps:\n groups_list = {app: i for i, app in enumerate(\n df_tasks['task_func_name'].unique())}\n else:\n groups_list = {'Pending': (0, 'gray'), \"Running\": (1, 'blue'), 'Completed': (2, 'green')}\n\n for k, _ in groups_list.items():\n node_trace = go.Scatter(\n x=[],\n y=[],\n text=[],\n mode='markers',\n textposition='top center',\n textfont=dict(\n family='arial',\n size=18,\n color='rgb(0,0,0)'\n ),\n hoverinfo='text',\n name=k, # legend app_name here\n marker=dict(\n showscale=False,\n # color='rgb(200,0,0)',\n size=11,\n line=dict(width=1, color='rgb(0,0,0)')))\n node_traces.append(node_trace)\n\n for node in node_positions:\n x, y = node_positions[node]\n if group_by_apps:\n name = dic['task_func_name'][node]\n index = groups_list[name]\n else:\n if dic['task_time_returned'][node] is not None:\n name = 'Completed'\n elif dic['task_time_running'][node] is not None:\n name = \"Running\"\n elif dic['task_time_submitted'][node] is not None:\n name = \"Pending\"\n index, color = groups_list[name]\n node_traces[index]['marker']['color'] = color\n node_traces[index]['x'] += tuple([x])\n node_traces[index]['y'] += tuple([y])\n node_traces[index]['text'] += tuple(\n 
[\"{}:{}\".format(dic['task_func_name'][node], node)])\n\n # The edges will be drawn as lines:\n edge_trace = go.Scatter(\n x=[],\n y=[],\n line=dict(width=1, color='rgb(160,160,160)'),\n hoverinfo='none',\n # showlegend=False,\n name='Dependency',\n mode='lines')\n\n for edge in G.edges:\n x0, y0 = node_positions[edge[0]]\n x1, y1 = node_positions[edge[1]]\n edge_trace['x'] += tuple([x0, x1, None])\n edge_trace['y'] += tuple([y0, y1, None])\n\n # Create figure:\n fig = go.Figure(data=[edge_trace] + node_traces,\n layout=go.Layout(\n title='Workflow DAG',\n titlefont=dict(size=16),\n showlegend=True,\n hovermode='closest',\n margin=dict(b=20, l=5, r=5, t=40), # noqa: E741\n xaxis=dict(showgrid=False, zeroline=False,\n showticklabels=False),\n yaxis=dict(showgrid=False, zeroline=False, showticklabels=False)))\n return plot(fig, show_link=False, output_type=\"div\", include_plotlyjs=False)\n", "path": "parsl/monitoring/visualization/plots/default/workflow_plots.py"}]}
| 3,699 | 1,012 |
gh_patches_debug_1417
|
rasdani/github-patches
|
git_diff
|
getmoto__moto-1400
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
mock_xray_client cannot be used as a context manager
PR #1255 added support for `aws_xray_sdk` which is great.
But there is a problem with it: `moto.mock_xray_client` is *only* a function decorator, and unlike all other `mock_*` methods it cannot be used as a context manager or directly with `start()`...`stop()`.
As a result, it is not possible to write a `py.test` fixture which would add support for mocking `xray_client`.
Also, `mock_xray_client` does not return the result of the function it decorates. Given it is meant to be used to decorate test functions it is most likely not a big issue, but I think it is still worth fixing.
I will prepare a PR for the return value issue soon.
Also, I am thinking about refactoring `mock_xray_client` to base it on the existing infrastructure (`BaseBackend`, `base_decorator`), but I am not yet familiar enough with `moto` internals to be sure of the best way to implement it.
Installed version: `moto-ext==1.1.25`
The problem seemingly persists in current `master` branch.
</issue>
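The fixture the issue wants would follow the same pattern used with moto's other `mock_*` helpers. A hedged sketch of that desired usage is below; the `start()`/`stop()` calls are the API being asked for, which `mock_xray_client` does not provide today.

```python
# Hedged sketch of the desired pytest fixture. Assumes mock_xray_client() were
# refactored to return a controllable mock with start()/stop(), like moto's
# other mock_* helpers; the current decorator-only implementation cannot do this.
import pytest
from moto import mock_xray_client  # currently decorator-only

@pytest.fixture
def xray_mock():
    mock = mock_xray_client()  # hypothetical call signature
    mock.start()
    try:
        yield mock
    finally:
        mock.stop()
```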
<code>
[start of moto/xray/mock_client.py]
1 from functools import wraps
2 import os
3 from moto.xray import xray_backends
4 import aws_xray_sdk.core
5 from aws_xray_sdk.core.context import Context as AWSContext
6 from aws_xray_sdk.core.emitters.udp_emitter import UDPEmitter
7
8
9 class MockEmitter(UDPEmitter):
10 """
11 Replaces the code that sends UDP to local X-Ray daemon
12 """
13 def __init__(self, daemon_address='127.0.0.1:2000'):
14 address = os.getenv('AWS_XRAY_DAEMON_ADDRESS_YEAH_NOT_TODAY_MATE', daemon_address)
15 self._ip, self._port = self._parse_address(address)
16
17 def _xray_backend(self, region):
18 return xray_backends[region]
19
20 def send_entity(self, entity):
21 # Hack to get region
22 # region = entity.subsegments[0].aws['region']
23 # xray = self._xray_backend(region)
24
25 # TODO store X-Ray data, pretty sure X-Ray needs refactor for this
26 pass
27
28 def _send_data(self, data):
29 raise RuntimeError('Should not be running this')
30
31
32 def mock_xray_client(f):
33 """
34 Mocks the X-Ray sdk by pwning its evil singleton with our methods
35
36 The X-Ray SDK has normally been imported and `patched()` called long before we start mocking.
37 This means the Context() will be very unhappy if an env var isnt present, so we set that, save
38 the old context, then supply our new context.
39 We also patch the Emitter by subclassing the UDPEmitter class replacing its methods and pushing
40 that itno the recorder instance.
41 """
42 @wraps(f)
43 def _wrapped(*args, **kwargs):
44 print("Starting X-Ray Patch")
45
46 old_xray_context_var = os.environ.get('AWS_XRAY_CONTEXT_MISSING')
47 os.environ['AWS_XRAY_CONTEXT_MISSING'] = 'LOG_ERROR'
48 old_xray_context = aws_xray_sdk.core.xray_recorder._context
49 old_xray_emitter = aws_xray_sdk.core.xray_recorder._emitter
50 aws_xray_sdk.core.xray_recorder._context = AWSContext()
51 aws_xray_sdk.core.xray_recorder._emitter = MockEmitter()
52
53 try:
54 f(*args, **kwargs)
55 finally:
56
57 if old_xray_context_var is None:
58 del os.environ['AWS_XRAY_CONTEXT_MISSING']
59 else:
60 os.environ['AWS_XRAY_CONTEXT_MISSING'] = old_xray_context_var
61
62 aws_xray_sdk.core.xray_recorder._emitter = old_xray_emitter
63 aws_xray_sdk.core.xray_recorder._context = old_xray_context
64
65 return _wrapped
66
67
68 class XRaySegment(object):
69 """
70 XRay is request oriented, when a request comes in, normally middleware like django (or automatically in lambda) will mark
71 the start of a segment, this stay open during the lifetime of the request. During that time subsegments may be generated
72 by calling other SDK aware services or using some boto functions. Once the request is finished, middleware will also stop
73 the segment, thus causing it to be emitted via UDP.
74
75 During testing we're going to have to control the start and end of a segment via context managers.
76 """
77 def __enter__(self):
78 aws_xray_sdk.core.xray_recorder.begin_segment(name='moto_mock', traceid=None, parent_id=None, sampling=1)
79
80 return self
81
82 def __exit__(self, exc_type, exc_val, exc_tb):
83 aws_xray_sdk.core.xray_recorder.end_segment()
84
[end of moto/xray/mock_client.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/moto/xray/mock_client.py b/moto/xray/mock_client.py
--- a/moto/xray/mock_client.py
+++ b/moto/xray/mock_client.py
@@ -51,7 +51,7 @@
aws_xray_sdk.core.xray_recorder._emitter = MockEmitter()
try:
- f(*args, **kwargs)
+ return f(*args, **kwargs)
finally:
if old_xray_context_var is None:
|
{"golden_diff": "diff --git a/moto/xray/mock_client.py b/moto/xray/mock_client.py\n--- a/moto/xray/mock_client.py\n+++ b/moto/xray/mock_client.py\n@@ -51,7 +51,7 @@\n aws_xray_sdk.core.xray_recorder._emitter = MockEmitter()\n \n try:\n- f(*args, **kwargs)\n+ return f(*args, **kwargs)\n finally:\n \n if old_xray_context_var is None:\n", "issue": "mock_xray_client cannot be used as a context manager\nPR #1255 added support for `aws_xray_sdk` which is great.\r\nBut there is a problem with it: `moto.mock_xray_client` is *only* a function decorator, and unlike all other `mock_*` methods it cannot be used as a context manager or directly with `start()`...`stop()`.\r\nAs a result, it is not possible to write a `py.test` fixture which would add support for mocking `xray_client`.\r\n\r\nAlso, `mock_xray_client` does not return the result of the function it decorates. Given it is meant to be used to decorate test functions it is most likely not a big issue, but I think it is still worth fixing.\r\n\r\nI will prepare a PR for the return value issue soon.\r\nAlso I am thinking about refactoring `mock_xray_client` to base it on the existing infrastructure (`BaseBackend`, `base_decorator`) but am not yet enough familiar with `moto` internals to be sure which would be the best way to implement it.\r\n\r\nInstalled version: `moto-ext==1.1.25`\r\nThe problem seemingly persists in current `master` branch.\n", "before_files": [{"content": "from functools import wraps\nimport os\nfrom moto.xray import xray_backends\nimport aws_xray_sdk.core\nfrom aws_xray_sdk.core.context import Context as AWSContext\nfrom aws_xray_sdk.core.emitters.udp_emitter import UDPEmitter\n\n\nclass MockEmitter(UDPEmitter):\n \"\"\"\n Replaces the code that sends UDP to local X-Ray daemon\n \"\"\"\n def __init__(self, daemon_address='127.0.0.1:2000'):\n address = os.getenv('AWS_XRAY_DAEMON_ADDRESS_YEAH_NOT_TODAY_MATE', daemon_address)\n self._ip, self._port = self._parse_address(address)\n\n def _xray_backend(self, region):\n return xray_backends[region]\n\n def send_entity(self, entity):\n # Hack to get region\n # region = entity.subsegments[0].aws['region']\n # xray = self._xray_backend(region)\n\n # TODO store X-Ray data, pretty sure X-Ray needs refactor for this\n pass\n\n def _send_data(self, data):\n raise RuntimeError('Should not be running this')\n\n\ndef mock_xray_client(f):\n \"\"\"\n Mocks the X-Ray sdk by pwning its evil singleton with our methods\n\n The X-Ray SDK has normally been imported and `patched()` called long before we start mocking.\n This means the Context() will be very unhappy if an env var isnt present, so we set that, save\n the old context, then supply our new context.\n We also patch the Emitter by subclassing the UDPEmitter class replacing its methods and pushing\n that itno the recorder instance.\n \"\"\"\n @wraps(f)\n def _wrapped(*args, **kwargs):\n print(\"Starting X-Ray Patch\")\n\n old_xray_context_var = os.environ.get('AWS_XRAY_CONTEXT_MISSING')\n os.environ['AWS_XRAY_CONTEXT_MISSING'] = 'LOG_ERROR'\n old_xray_context = aws_xray_sdk.core.xray_recorder._context\n old_xray_emitter = aws_xray_sdk.core.xray_recorder._emitter\n aws_xray_sdk.core.xray_recorder._context = AWSContext()\n aws_xray_sdk.core.xray_recorder._emitter = MockEmitter()\n\n try:\n f(*args, **kwargs)\n finally:\n\n if old_xray_context_var is None:\n del os.environ['AWS_XRAY_CONTEXT_MISSING']\n else:\n os.environ['AWS_XRAY_CONTEXT_MISSING'] = old_xray_context_var\n\n aws_xray_sdk.core.xray_recorder._emitter = old_xray_emitter\n 
aws_xray_sdk.core.xray_recorder._context = old_xray_context\n\n return _wrapped\n\n\nclass XRaySegment(object):\n \"\"\"\n XRay is request oriented, when a request comes in, normally middleware like django (or automatically in lambda) will mark\n the start of a segment, this stay open during the lifetime of the request. During that time subsegments may be generated\n by calling other SDK aware services or using some boto functions. Once the request is finished, middleware will also stop\n the segment, thus causing it to be emitted via UDP.\n\n During testing we're going to have to control the start and end of a segment via context managers.\n \"\"\"\n def __enter__(self):\n aws_xray_sdk.core.xray_recorder.begin_segment(name='moto_mock', traceid=None, parent_id=None, sampling=1)\n\n return self\n\n def __exit__(self, exc_type, exc_val, exc_tb):\n aws_xray_sdk.core.xray_recorder.end_segment()\n", "path": "moto/xray/mock_client.py"}]}
| 1,739 | 106 |
gh_patches_debug_36728
|
rasdani/github-patches
|
git_diff
|
mkdocs__mkdocs-2427
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Re-building w/ symbolic links stopped working, regression after #2385
Since a444c43 in master, when using the local development server via `mkdocs serve`, updating files that are symbolically linked no longer triggers a rebuild (and therefore does not reload browser tabs).
At first glance this is due to the switch to watchdog for detecting file-system changes, which needs extra guidance to handle this file type.
Preparing a PR with a patch.
Ref: a444c43474f91dea089922dd8fb188d1db3a4535
restore re-building with symbolic-links, closes #2425
previously (1.1.2 + master at 23e2051) building was triggered by changes
of file-content that was symbolically linked within docs_dir while
`mkdocs serve` was running.
since migrating from livereload>=2.6.1 to watchdog>=2.0.0 to detect
file-system changes (triggering the re-build) it stopped working.
this is because watchdog does not support symbolic links out of the box,
e.g. see [1].
change is to provide additional observe instructions on the realpath [2]
for the following cases:
1. docs_dir & config_file_path path deviation:
when the absolute path to either the `docs_dir` or the `config_file` is
different from its realpath, the realpath is added for observing (as
well).
2. symbolic links within docs_dir:
if a file within docs_dir is a symbolic link, the files real path
is added for observing. sub-directories (that are not symbolically
linked) are traversed up to a depth of nine levels (only if the
recursive flag is enabled, otherwise no traversal into sub-directories).
Ref: 23e205153f01d24d50fe9ba18e5186cdbc2c2dbe
[1]: https://github.com/gorakhargosh/watchdog/issues/365
[2]: <https://docs.python.org/3.8/library/os.path.html#os.path.realpath>
</issue>
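The technique described in the commit message — watchdog does not follow symbolic links, so the realpath of each symlink target has to be scheduled for observation explicitly — can be sketched in isolation as below. The `observer`/`handler` objects and the traversal policy are assumptions; the project's real implementation is in the patch further down.

```python
# Hedged sketch: add extra watchdog watches for symlink targets under a docs
# directory. `observer` is a configured watchdog.observers.Observer and
# `handler` a FileSystemEventHandler; both are assumed to exist already.
import os
import pathlib

def watch_symlink_targets(root, observer, handler):
    seen = set()
    for entry in pathlib.Path(root).rglob("*"):
        if not entry.is_symlink():
            continue
        target = os.path.realpath(entry)
        if target in seen or not os.path.exists(target):
            continue
        seen.add(target)
        # Directories can be watched directly; for a file target, watch its parent.
        watch_path = target if os.path.isdir(target) else os.path.dirname(target)
        observer.schedule(handler, watch_path, recursive=True)
```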
<code>
[start of mkdocs/livereload/__init__.py]
1 import functools
2 import io
3 import logging
4 import mimetypes
5 import os
6 import os.path
7 import re
8 import socketserver
9 import threading
10 import time
11 import warnings
12 import wsgiref.simple_server
13
14 import watchdog.events
15 import watchdog.observers
16
17
18 class _LoggerAdapter(logging.LoggerAdapter):
19 def process(self, msg, kwargs):
20 return time.strftime("[%H:%M:%S] ") + msg, kwargs
21
22
23 log = _LoggerAdapter(logging.getLogger(__name__), {})
24
25
26 class LiveReloadServer(socketserver.ThreadingMixIn, wsgiref.simple_server.WSGIServer):
27 daemon_threads = True
28 poll_response_timeout = 60
29
30 def __init__(
31 self,
32 builder,
33 host,
34 port,
35 root,
36 mount_path="/",
37 build_delay=0.25,
38 shutdown_delay=0.25,
39 **kwargs,
40 ):
41 self.builder = builder
42 self.server_name = host
43 self.server_port = port
44 self.root = os.path.abspath(root)
45 self.mount_path = ("/" + mount_path.lstrip("/")).rstrip("/") + "/"
46 self.url = f"http://{self.server_name}:{self.server_port}{self.mount_path}"
47 self.build_delay = build_delay
48 self.shutdown_delay = shutdown_delay
49 # To allow custom error pages.
50 self.error_handler = lambda code: None
51
52 super().__init__((host, port), _Handler, **kwargs)
53 self.set_app(self.serve_request)
54
55 self._wanted_epoch = _timestamp() # The version of the site that started building.
56 self._visible_epoch = self._wanted_epoch # Latest fully built version of the site.
57 self._epoch_cond = threading.Condition() # Must be held when accessing _visible_epoch.
58
59 self._to_rebuild = {} # Used as an ordered set of functions to call.
60 self._rebuild_cond = threading.Condition() # Must be held when accessing _to_rebuild.
61
62 self._shutdown = False
63 self.serve_thread = threading.Thread(target=lambda: self.serve_forever(shutdown_delay))
64 self.observer = watchdog.observers.Observer(timeout=shutdown_delay)
65
66 def watch(self, path, func=None, recursive=True):
67 """Add the 'path' to watched paths, call the function and reload when any file changes under it."""
68 path = os.path.abspath(path)
69 if func in (None, self.builder):
70 func = self.builder
71 else:
72 warnings.warn(
73 "Plugins should not pass the 'func' parameter of watch(). "
74 "The ability to execute custom callbacks will be removed soon.",
75 DeprecationWarning,
76 stacklevel=2,
77 )
78
79 def callback(event):
80 if event.is_directory:
81 return
82 # Text editors always cause a "file close" event in addition to "modified" when saving
83 # a file. Some editors also have "swap" functionality that keeps writing into another
84 # file that's never closed. Prevent such write events from causing a rebuild.
85 if isinstance(event, watchdog.events.FileModifiedEvent):
86 # But FileClosedEvent is implemented only on Linux, otherwise we mustn't skip this:
87 if type(self.observer).__name__ == "InotifyObserver":
88 return
89 log.debug(str(event))
90 with self._rebuild_cond:
91 self._to_rebuild[func] = True
92 self._rebuild_cond.notify_all()
93
94 handler = watchdog.events.FileSystemEventHandler()
95 handler.on_any_event = callback
96 self.observer.schedule(handler, path, recursive=recursive)
97
98 def serve(self):
99 self.observer.start()
100
101 log.info(f"Serving on {self.url}")
102 self.serve_thread.start()
103
104 self._build_loop()
105
106 def _build_loop(self):
107 while True:
108 with self._rebuild_cond:
109 while not self._rebuild_cond.wait_for(
110 lambda: self._to_rebuild or self._shutdown, timeout=self.shutdown_delay
111 ):
112 # We could have used just one wait instead of a loop + timeout, but we need
113 # occasional breaks, otherwise on Windows we can't receive KeyboardInterrupt.
114 pass
115 if self._shutdown:
116 break
117 log.info("Detected file changes")
118 while self._rebuild_cond.wait(timeout=self.build_delay):
119 log.debug("Waiting for file changes to stop happening")
120
121 self._wanted_epoch = _timestamp()
122 funcs = list(self._to_rebuild)
123 self._to_rebuild.clear()
124
125 for func in funcs:
126 func()
127
128 with self._epoch_cond:
129 log.info("Reloading browsers")
130 self._visible_epoch = self._wanted_epoch
131 self._epoch_cond.notify_all()
132
133 def shutdown(self):
134 self.observer.stop()
135 with self._rebuild_cond:
136 self._shutdown = True
137 self._rebuild_cond.notify_all()
138
139 if self.serve_thread.is_alive():
140 super().shutdown()
141 self.serve_thread.join()
142 self.observer.join()
143
144 def serve_request(self, environ, start_response):
145 try:
146 result = self._serve_request(environ, start_response)
147 except Exception:
148 code = 500
149 msg = "500 Internal Server Error"
150 log.exception(msg)
151 else:
152 if result is not None:
153 return result
154 code = 404
155 msg = "404 Not Found"
156
157 error_content = None
158 try:
159 error_content = self.error_handler(code)
160 except Exception:
161 log.exception("Failed to render an error message!")
162 if error_content is None:
163 error_content = msg.encode()
164
165 start_response(msg, [("Content-Type", "text/html")])
166 return [error_content]
167
168 def _serve_request(self, environ, start_response):
169 path = environ["PATH_INFO"]
170
171 m = re.fullmatch(r"/livereload/([0-9]+)/[0-9]+", path)
172 if m:
173 epoch = int(m[1])
174 start_response("200 OK", [("Content-Type", "text/plain")])
175
176 def condition():
177 return self._visible_epoch > epoch
178
179 with self._epoch_cond:
180 if not condition():
181 # Stall the browser, respond as soon as there's something new.
182 # If there's not, respond anyway after a minute.
183 self._log_poll_request(environ.get("HTTP_REFERER"), request_id=path)
184 self._epoch_cond.wait_for(condition, timeout=self.poll_response_timeout)
185 return [b"%d" % self._visible_epoch]
186
187 if path == "/js/livereload.js":
188 file_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "livereload.js")
189 elif path.startswith(self.mount_path):
190 if path.endswith("/"):
191 path += "index.html"
192 path = path[len(self.mount_path):]
193 file_path = os.path.join(self.root, path.lstrip("/"))
194 elif path == "/":
195 start_response("302 Found", [("Location", self.mount_path)])
196 return []
197 else:
198 return None # Not found
199
200 # Wait until the ongoing rebuild (if any) finishes, so we're not serving a half-built site.
201 with self._epoch_cond:
202 self._epoch_cond.wait_for(lambda: self._visible_epoch == self._wanted_epoch)
203 epoch = self._visible_epoch
204
205 try:
206 file = open(file_path, "rb")
207 except OSError:
208 return None # Not found
209
210 if path.endswith(".html"):
211 with file:
212 content = file.read()
213 content = self._inject_js_into_html(content, epoch)
214 file = io.BytesIO(content)
215 content_length = len(content)
216 else:
217 content_length = os.path.getsize(file_path)
218
219 content_type = self._guess_type(file_path)
220 start_response(
221 "200 OK", [("Content-Type", content_type), ("Content-Length", str(content_length))]
222 )
223 return wsgiref.util.FileWrapper(file)
224
225 @classmethod
226 def _inject_js_into_html(cls, content, epoch):
227 try:
228 body_end = content.rindex(b"</body>")
229 except ValueError:
230 body_end = len(content)
231 # The page will reload if the livereload poller returns a newer epoch than what it knows.
232 # The other timestamp becomes just a unique identifier for the initiating page.
233 return (
234 b'%b<script src="/js/livereload.js"></script><script>livereload(%d, %d);</script>%b'
235 % (content[:body_end], epoch, _timestamp(), content[body_end:])
236 )
237
238 @classmethod
239 @functools.lru_cache() # "Cache" to not repeat the same message for the same browser tab.
240 def _log_poll_request(cls, url, request_id):
241 log.info(f"Browser connected: {url}")
242
243 def _guess_type(cls, path):
244 # MkDocs only ensures a few common types (as seen in livereload_tests.py::test_mime_types).
245 # Other uncommon types will not be accepted.
246 if path.endswith((".js", ".JS")):
247 return "application/javascript"
248 if path.endswith(".gz"):
249 return "application/gzip"
250
251 guess, _ = mimetypes.guess_type(path)
252 if guess:
253 return guess
254 return "application/octet-stream"
255
256
257 class _Handler(wsgiref.simple_server.WSGIRequestHandler):
258 def log_request(self, code="-", size="-"):
259 level = logging.DEBUG if str(code) == "200" else logging.WARNING
260 log.log(level, f'"{self.requestline}" code {code}')
261
262 def log_message(self, format, *args):
263 log.debug(format, *args)
264
265
266 def _timestamp():
267 return round(time.monotonic() * 1000)
268
[end of mkdocs/livereload/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mkdocs/livereload/__init__.py b/mkdocs/livereload/__init__.py
--- a/mkdocs/livereload/__init__.py
+++ b/mkdocs/livereload/__init__.py
@@ -4,6 +4,7 @@
import mimetypes
import os
import os.path
+import pathlib
import re
import socketserver
import threading
@@ -76,8 +77,10 @@
stacklevel=2,
)
- def callback(event):
- if event.is_directory:
+ def callback(event, allowed_path=None):
+ if isinstance(event, watchdog.events.DirCreatedEvent):
+ return
+ if allowed_path is not None and event.src_path != allowed_path:
return
# Text editors always cause a "file close" event in addition to "modified" when saving
# a file. Some editors also have "swap" functionality that keeps writing into another
@@ -91,9 +94,43 @@
self._to_rebuild[func] = True
self._rebuild_cond.notify_all()
- handler = watchdog.events.FileSystemEventHandler()
- handler.on_any_event = callback
- self.observer.schedule(handler, path, recursive=recursive)
+ dir_handler = watchdog.events.FileSystemEventHandler()
+ dir_handler.on_any_event = callback
+
+ seen = set()
+
+ def schedule(path):
+ seen.add(path)
+ if os.path.isfile(path):
+ # Watchdog doesn't support watching files, so watch its directory and filter by path
+ handler = watchdog.events.FileSystemEventHandler()
+ handler.on_any_event = lambda event: callback(event, allowed_path=path)
+
+ parent = os.path.dirname(path)
+ log.debug(f"Watching file '{path}' through directory '{parent}'")
+ self.observer.schedule(handler, parent)
+ else:
+ log.debug(f"Watching directory '{path}'")
+ self.observer.schedule(dir_handler, path, recursive=recursive)
+
+ schedule(os.path.realpath(path))
+
+ def watch_symlink_targets(path_obj): # path is os.DirEntry or pathlib.Path
+ if path_obj.is_symlink():
+ # The extra `readlink` is needed due to https://bugs.python.org/issue9949
+ target = os.path.realpath(os.readlink(os.fspath(path_obj)))
+ if target in seen or not os.path.exists(target):
+ return
+ schedule(target)
+
+ path_obj = pathlib.Path(target)
+
+ if path_obj.is_dir() and recursive:
+ with os.scandir(os.fspath(path_obj)) as scan:
+ for entry in scan:
+ watch_symlink_targets(entry)
+
+ watch_symlink_targets(pathlib.Path(path))
def serve(self):
self.observer.start()
|
{"golden_diff": "diff --git a/mkdocs/livereload/__init__.py b/mkdocs/livereload/__init__.py\n--- a/mkdocs/livereload/__init__.py\n+++ b/mkdocs/livereload/__init__.py\n@@ -4,6 +4,7 @@\n import mimetypes\n import os\n import os.path\n+import pathlib\n import re\n import socketserver\n import threading\n@@ -76,8 +77,10 @@\n stacklevel=2,\n )\n \n- def callback(event):\n- if event.is_directory:\n+ def callback(event, allowed_path=None):\n+ if isinstance(event, watchdog.events.DirCreatedEvent):\n+ return\n+ if allowed_path is not None and event.src_path != allowed_path:\n return\n # Text editors always cause a \"file close\" event in addition to \"modified\" when saving\n # a file. Some editors also have \"swap\" functionality that keeps writing into another\n@@ -91,9 +94,43 @@\n self._to_rebuild[func] = True\n self._rebuild_cond.notify_all()\n \n- handler = watchdog.events.FileSystemEventHandler()\n- handler.on_any_event = callback\n- self.observer.schedule(handler, path, recursive=recursive)\n+ dir_handler = watchdog.events.FileSystemEventHandler()\n+ dir_handler.on_any_event = callback\n+\n+ seen = set()\n+\n+ def schedule(path):\n+ seen.add(path)\n+ if os.path.isfile(path):\n+ # Watchdog doesn't support watching files, so watch its directory and filter by path\n+ handler = watchdog.events.FileSystemEventHandler()\n+ handler.on_any_event = lambda event: callback(event, allowed_path=path)\n+\n+ parent = os.path.dirname(path)\n+ log.debug(f\"Watching file '{path}' through directory '{parent}'\")\n+ self.observer.schedule(handler, parent)\n+ else:\n+ log.debug(f\"Watching directory '{path}'\")\n+ self.observer.schedule(dir_handler, path, recursive=recursive)\n+\n+ schedule(os.path.realpath(path))\n+\n+ def watch_symlink_targets(path_obj): # path is os.DirEntry or pathlib.Path\n+ if path_obj.is_symlink():\n+ # The extra `readlink` is needed due to https://bugs.python.org/issue9949\n+ target = os.path.realpath(os.readlink(os.fspath(path_obj)))\n+ if target in seen or not os.path.exists(target):\n+ return\n+ schedule(target)\n+\n+ path_obj = pathlib.Path(target)\n+\n+ if path_obj.is_dir() and recursive:\n+ with os.scandir(os.fspath(path_obj)) as scan:\n+ for entry in scan:\n+ watch_symlink_targets(entry)\n+\n+ watch_symlink_targets(pathlib.Path(path))\n \n def serve(self):\n self.observer.start()\n", "issue": "Re-building w/ symbolic links stopped working, regression after #2385\nSince a444c43 in master using the local development server via `mkdocs serve` updating files that are symbolically linked is not triggering to rebuild (and therefore not reloading browser tabs).\r\n\r\nOn first glance this is due to the switch to watchdog for detecting file-system changes which needs more guidance to handle this file-type.\r\n\r\nPreparing a PR with a patch.\r\n\r\nRef: a444c43474f91dea089922dd8fb188d1db3a4535\nrestore re-building with symbolic-links, closes #2425\npreviously (1.1.2 + master at 23e2051) building was triggered by changes\r\nof file-content that was symbolically linked within docs_dir while\r\n`mkdocs serve` was running.\r\n\r\nsince migrating from livereload>=2.6.1 to watchdog>=2.0.0 to detect\r\nfile-system changes (triggering the re-build) it stopped working.\r\n\r\nthis is because watchdog does not support symbolic links out of the box,\r\ne.g. see [1].\r\n\r\nchange is to provide additional observe instructions on the realpath [2]\r\nfor the following cases:\r\n\r\n1. 
docs_dir & config_file_path path deviation:\r\n\r\n when the absolute path to either the `docs_dir` or the `config_file` is\r\n different from its realpath, the realpath is added for observing (as\r\n well).\r\n\r\n2. symbolic links within docs_dir:\r\n\r\n if a file within docs_dir is a symbolic link, the files real path\r\n is added for observing. sub-directories (that are not symbolically\r\n linked) are traversed up to a depth of nine levels (only if the\r\n recursive flag is enabled, otherwise no traversal into sub-directories).\r\n\r\nRef: 23e205153f01d24d50fe9ba18e5186cdbc2c2dbe\r\n[1]: https://github.com/gorakhargosh/watchdog/issues/365\r\n[2]: <https://docs.python.org/3.8/library/os.path.html#os.path.realpath>\n", "before_files": [{"content": "import functools\nimport io\nimport logging\nimport mimetypes\nimport os\nimport os.path\nimport re\nimport socketserver\nimport threading\nimport time\nimport warnings\nimport wsgiref.simple_server\n\nimport watchdog.events\nimport watchdog.observers\n\n\nclass _LoggerAdapter(logging.LoggerAdapter):\n def process(self, msg, kwargs):\n return time.strftime(\"[%H:%M:%S] \") + msg, kwargs\n\n\nlog = _LoggerAdapter(logging.getLogger(__name__), {})\n\n\nclass LiveReloadServer(socketserver.ThreadingMixIn, wsgiref.simple_server.WSGIServer):\n daemon_threads = True\n poll_response_timeout = 60\n\n def __init__(\n self,\n builder,\n host,\n port,\n root,\n mount_path=\"/\",\n build_delay=0.25,\n shutdown_delay=0.25,\n **kwargs,\n ):\n self.builder = builder\n self.server_name = host\n self.server_port = port\n self.root = os.path.abspath(root)\n self.mount_path = (\"/\" + mount_path.lstrip(\"/\")).rstrip(\"/\") + \"/\"\n self.url = f\"http://{self.server_name}:{self.server_port}{self.mount_path}\"\n self.build_delay = build_delay\n self.shutdown_delay = shutdown_delay\n # To allow custom error pages.\n self.error_handler = lambda code: None\n\n super().__init__((host, port), _Handler, **kwargs)\n self.set_app(self.serve_request)\n\n self._wanted_epoch = _timestamp() # The version of the site that started building.\n self._visible_epoch = self._wanted_epoch # Latest fully built version of the site.\n self._epoch_cond = threading.Condition() # Must be held when accessing _visible_epoch.\n\n self._to_rebuild = {} # Used as an ordered set of functions to call.\n self._rebuild_cond = threading.Condition() # Must be held when accessing _to_rebuild.\n\n self._shutdown = False\n self.serve_thread = threading.Thread(target=lambda: self.serve_forever(shutdown_delay))\n self.observer = watchdog.observers.Observer(timeout=shutdown_delay)\n\n def watch(self, path, func=None, recursive=True):\n \"\"\"Add the 'path' to watched paths, call the function and reload when any file changes under it.\"\"\"\n path = os.path.abspath(path)\n if func in (None, self.builder):\n func = self.builder\n else:\n warnings.warn(\n \"Plugins should not pass the 'func' parameter of watch(). \"\n \"The ability to execute custom callbacks will be removed soon.\",\n DeprecationWarning,\n stacklevel=2,\n )\n\n def callback(event):\n if event.is_directory:\n return\n # Text editors always cause a \"file close\" event in addition to \"modified\" when saving\n # a file. Some editors also have \"swap\" functionality that keeps writing into another\n # file that's never closed. 
Prevent such write events from causing a rebuild.\n if isinstance(event, watchdog.events.FileModifiedEvent):\n # But FileClosedEvent is implemented only on Linux, otherwise we mustn't skip this:\n if type(self.observer).__name__ == \"InotifyObserver\":\n return\n log.debug(str(event))\n with self._rebuild_cond:\n self._to_rebuild[func] = True\n self._rebuild_cond.notify_all()\n\n handler = watchdog.events.FileSystemEventHandler()\n handler.on_any_event = callback\n self.observer.schedule(handler, path, recursive=recursive)\n\n def serve(self):\n self.observer.start()\n\n log.info(f\"Serving on {self.url}\")\n self.serve_thread.start()\n\n self._build_loop()\n\n def _build_loop(self):\n while True:\n with self._rebuild_cond:\n while not self._rebuild_cond.wait_for(\n lambda: self._to_rebuild or self._shutdown, timeout=self.shutdown_delay\n ):\n # We could have used just one wait instead of a loop + timeout, but we need\n # occasional breaks, otherwise on Windows we can't receive KeyboardInterrupt.\n pass\n if self._shutdown:\n break\n log.info(\"Detected file changes\")\n while self._rebuild_cond.wait(timeout=self.build_delay):\n log.debug(\"Waiting for file changes to stop happening\")\n\n self._wanted_epoch = _timestamp()\n funcs = list(self._to_rebuild)\n self._to_rebuild.clear()\n\n for func in funcs:\n func()\n\n with self._epoch_cond:\n log.info(\"Reloading browsers\")\n self._visible_epoch = self._wanted_epoch\n self._epoch_cond.notify_all()\n\n def shutdown(self):\n self.observer.stop()\n with self._rebuild_cond:\n self._shutdown = True\n self._rebuild_cond.notify_all()\n\n if self.serve_thread.is_alive():\n super().shutdown()\n self.serve_thread.join()\n self.observer.join()\n\n def serve_request(self, environ, start_response):\n try:\n result = self._serve_request(environ, start_response)\n except Exception:\n code = 500\n msg = \"500 Internal Server Error\"\n log.exception(msg)\n else:\n if result is not None:\n return result\n code = 404\n msg = \"404 Not Found\"\n\n error_content = None\n try:\n error_content = self.error_handler(code)\n except Exception:\n log.exception(\"Failed to render an error message!\")\n if error_content is None:\n error_content = msg.encode()\n\n start_response(msg, [(\"Content-Type\", \"text/html\")])\n return [error_content]\n\n def _serve_request(self, environ, start_response):\n path = environ[\"PATH_INFO\"]\n\n m = re.fullmatch(r\"/livereload/([0-9]+)/[0-9]+\", path)\n if m:\n epoch = int(m[1])\n start_response(\"200 OK\", [(\"Content-Type\", \"text/plain\")])\n\n def condition():\n return self._visible_epoch > epoch\n\n with self._epoch_cond:\n if not condition():\n # Stall the browser, respond as soon as there's something new.\n # If there's not, respond anyway after a minute.\n self._log_poll_request(environ.get(\"HTTP_REFERER\"), request_id=path)\n self._epoch_cond.wait_for(condition, timeout=self.poll_response_timeout)\n return [b\"%d\" % self._visible_epoch]\n\n if path == \"/js/livereload.js\":\n file_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), \"livereload.js\")\n elif path.startswith(self.mount_path):\n if path.endswith(\"/\"):\n path += \"index.html\"\n path = path[len(self.mount_path):]\n file_path = os.path.join(self.root, path.lstrip(\"/\"))\n elif path == \"/\":\n start_response(\"302 Found\", [(\"Location\", self.mount_path)])\n return []\n else:\n return None # Not found\n\n # Wait until the ongoing rebuild (if any) finishes, so we're not serving a half-built site.\n with self._epoch_cond:\n 
self._epoch_cond.wait_for(lambda: self._visible_epoch == self._wanted_epoch)\n epoch = self._visible_epoch\n\n try:\n file = open(file_path, \"rb\")\n except OSError:\n return None # Not found\n\n if path.endswith(\".html\"):\n with file:\n content = file.read()\n content = self._inject_js_into_html(content, epoch)\n file = io.BytesIO(content)\n content_length = len(content)\n else:\n content_length = os.path.getsize(file_path)\n\n content_type = self._guess_type(file_path)\n start_response(\n \"200 OK\", [(\"Content-Type\", content_type), (\"Content-Length\", str(content_length))]\n )\n return wsgiref.util.FileWrapper(file)\n\n @classmethod\n def _inject_js_into_html(cls, content, epoch):\n try:\n body_end = content.rindex(b\"</body>\")\n except ValueError:\n body_end = len(content)\n # The page will reload if the livereload poller returns a newer epoch than what it knows.\n # The other timestamp becomes just a unique identifier for the initiating page.\n return (\n b'%b<script src=\"/js/livereload.js\"></script><script>livereload(%d, %d);</script>%b'\n % (content[:body_end], epoch, _timestamp(), content[body_end:])\n )\n\n @classmethod\n @functools.lru_cache() # \"Cache\" to not repeat the same message for the same browser tab.\n def _log_poll_request(cls, url, request_id):\n log.info(f\"Browser connected: {url}\")\n\n def _guess_type(cls, path):\n # MkDocs only ensures a few common types (as seen in livereload_tests.py::test_mime_types).\n # Other uncommon types will not be accepted.\n if path.endswith((\".js\", \".JS\")):\n return \"application/javascript\"\n if path.endswith(\".gz\"):\n return \"application/gzip\"\n\n guess, _ = mimetypes.guess_type(path)\n if guess:\n return guess\n return \"application/octet-stream\"\n\n\nclass _Handler(wsgiref.simple_server.WSGIRequestHandler):\n def log_request(self, code=\"-\", size=\"-\"):\n level = logging.DEBUG if str(code) == \"200\" else logging.WARNING\n log.log(level, f'\"{self.requestline}\" code {code}')\n\n def log_message(self, format, *args):\n log.debug(format, *args)\n\n\ndef _timestamp():\n return round(time.monotonic() * 1000)\n", "path": "mkdocs/livereload/__init__.py"}]}
| 3,817 | 616 |
gh_patches_debug_3766
|
rasdani/github-patches
|
git_diff
|
python__typeshed-8843
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
scripts/create_baseline_stubs.py instructions are outdated
I would prefer:
```
1. Manually review the generated stubs in {stub_dir}
2. Run tests locally if you want (see CONTRIBUTING.md)
3. Commit the changes on a new branch and create a typeshed PR
```
The CI will check everything anyway, and is set up so that you don't have to run anything locally. This would also be consistent with CONTRIBUTING.md.
_Originally posted by @Akuli in https://github.com/python/typeshed/issues/8686#issuecomment-1237374669_
</issue>
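For concreteness, a minimal sketch of how the preferred wording could be emitted by the script's closing print calls (reusing the existing `stub_dir` variable; the strings below are the suggestion from the issue, not the script's current behaviour):

```
print("\nDone!\n\nSuggested next steps:")
print(f"  1. Manually review the generated stubs in {stub_dir}")
print("  2. Run tests locally if you want (see CONTRIBUTING.md)")
print("  3. Commit the changes on a new branch and create a typeshed PR")
```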
<code>
[start of scripts/create_baseline_stubs.py]
1 #!/usr/bin/env python3
2
3 """Script to generate unannotated baseline stubs using stubgen.
4
5 Basic usage:
6 $ python3 scripts/create_baseline_stubs.py <project on PyPI>
7
8 Run with -h for more help.
9 """
10
11 from __future__ import annotations
12
13 import argparse
14 import os
15 import re
16 import subprocess
17 import sys
18
19 if sys.version_info >= (3, 8):
20 from importlib.metadata import distribution
21
22 PYRIGHT_CONFIG = "pyrightconfig.stricter.json"
23
24
25 def search_pip_freeze_output(project: str, output: str) -> tuple[str, str] | None:
26 # Look for lines such as "typed-ast==1.4.2". '-' matches '_' and
27 # '_' matches '-' in project name, so that "typed_ast" matches
28 # "typed-ast", and vice versa.
29 regex = "^(" + re.sub(r"[-_]", "[-_]", project) + ")==(.*)"
30 m = re.search(regex, output, flags=re.IGNORECASE | re.MULTILINE)
31 if not m:
32 return None
33 return m.group(1), m.group(2)
34
35
36 def get_installed_package_info(project: str) -> tuple[str, str] | None:
37 """Find package information from pip freeze output.
38
39 Match project name somewhat fuzzily (case sensitive; '-' matches '_', and
40 vice versa).
41
42 Return (normalized project name, installed version) if successful.
43 """
44 r = subprocess.run(["pip", "freeze"], capture_output=True, text=True, check=True)
45 return search_pip_freeze_output(project, r.stdout)
46
47
48 def run_stubgen(package: str, output: str) -> None:
49 print(f"Running stubgen: stubgen -o {output} -p {package}")
50 subprocess.run(["stubgen", "-o", output, "-p", package], check=True)
51
52
53 def run_black(stub_dir: str) -> None:
54 print(f"Running black: black {stub_dir}")
55 subprocess.run(["black", stub_dir])
56
57
58 def run_isort(stub_dir: str) -> None:
59 print(f"Running isort: isort {stub_dir}")
60 subprocess.run(["python3", "-m", "isort", stub_dir])
61
62
63 def create_metadata(stub_dir: str, version: str) -> None:
64 """Create a METADATA.toml file."""
65 match = re.match(r"[0-9]+.[0-9]+", version)
66 if match is None:
67 sys.exit(f"Error: Cannot parse version number: {version}")
68 filename = os.path.join(stub_dir, "METADATA.toml")
69 version = match.group(0)
70 if os.path.exists(filename):
71 return
72 print(f"Writing {filename}")
73 with open(filename, "w") as file:
74 file.write(
75 f"""\
76 version = "{version}.*"
77
78 [tool.stubtest]
79 ignore_missing_stub = false
80 """
81 )
82
83
84 def add_pyright_exclusion(stub_dir: str) -> None:
85 """Exclude stub_dir from strict pyright checks."""
86 with open(PYRIGHT_CONFIG) as f:
87 lines = f.readlines()
88 i = 0
89 while i < len(lines) and not lines[i].strip().startswith('"exclude": ['):
90 i += 1
91 assert i < len(lines), f"Error parsing {PYRIGHT_CONFIG}"
92 while not lines[i].strip().startswith("]"):
93 i += 1
94 # Must use forward slash in the .json file
95 line_to_add = f' "{stub_dir}",'.replace("\\", "/")
96 initial = i - 1
97 while lines[i].lower() > line_to_add.lower():
98 i -= 1
99 if lines[i + 1].strip().rstrip(",") == line_to_add.strip().rstrip(","):
100 print(f"{PYRIGHT_CONFIG} already up-to-date")
101 return
102 if i == initial:
103 # Special case: when adding to the end of the list, commas need tweaking
104 line_to_add = line_to_add.rstrip(",")
105 lines[i] = lines[i].rstrip() + ",\n"
106 lines.insert(i + 1, line_to_add + "\n")
107 print(f"Updating {PYRIGHT_CONFIG}")
108 with open(PYRIGHT_CONFIG, "w") as f:
109 f.writelines(lines)
110
111
112 def main() -> None:
113 parser = argparse.ArgumentParser(
114 description="""Generate baseline stubs automatically for an installed pip package
115 using stubgen. Also run black and isort. If the name of
116 the project is different from the runtime Python package name, you may
117 need to use --package (example: --package yaml PyYAML)."""
118 )
119 parser.add_argument("project", help="name of PyPI project for which to generate stubs under stubs/")
120 parser.add_argument("--package", help="generate stubs for this Python package (default is autodetected)")
121 args = parser.parse_args()
122 project = args.project
123 package = args.package
124
125 if not re.match(r"[a-zA-Z0-9-_.]+$", project):
126 sys.exit(f"Invalid character in project name: {project!r}")
127
128 if not package:
129 package = project # default
130 # Try to find which packages are provided by the project
131 # Use default if that fails or if several packages are found
132 #
133 # The importlib.metadata module is used for projects whose name is different
134 # from the runtime Python package name (example: PyYAML/yaml)
135 if sys.version_info >= (3, 8):
136 dist = distribution(project).read_text("top_level.txt")
137 if dist is not None:
138 packages = [name for name in dist.split() if not name.startswith("_")]
139 if len(packages) == 1:
140 package = packages[0]
141 print(f'Using detected package "{package}" for project "{project}"', file=sys.stderr)
142 print("Suggestion: Try again with --package argument if that's not what you wanted", file=sys.stderr)
143
144 if not os.path.isdir("stubs") or not os.path.isdir("stdlib"):
145 sys.exit("Error: Current working directory must be the root of typeshed repository")
146
147 # Get normalized project name and version of installed package.
148 info = get_installed_package_info(project)
149 if info is None:
150 print(f'Error: "{project}" is not installed', file=sys.stderr)
151 print("", file=sys.stderr)
152 print(f'Suggestion: Run "python3 -m pip install {project}" and try again', file=sys.stderr)
153 sys.exit(1)
154 project, version = info
155
156 stub_dir = os.path.join("stubs", project)
157 package_dir = os.path.join(stub_dir, package)
158 if os.path.exists(package_dir):
159 sys.exit(f"Error: {package_dir} already exists (delete it first)")
160
161 run_stubgen(package, stub_dir)
162
163 run_isort(stub_dir)
164 run_black(stub_dir)
165
166 create_metadata(stub_dir, version)
167
168 # Since the generated stubs won't have many type annotations, we
169 # have to exclude them from strict pyright checks.
170 add_pyright_exclusion(stub_dir)
171
172 print("\nDone!\n\nSuggested next steps:")
173 print(f" 1. Manually review the generated stubs in {stub_dir}")
174 print(" 2. Optionally run tests and autofixes (see tests/README.md for details")
175 print(" 3. Commit the changes on a new branch and create a typeshed PR (don't force-push!)")
176
177
178 if __name__ == "__main__":
179 main()
180
[end of scripts/create_baseline_stubs.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scripts/create_baseline_stubs.py b/scripts/create_baseline_stubs.py
--- a/scripts/create_baseline_stubs.py
+++ b/scripts/create_baseline_stubs.py
@@ -47,7 +47,7 @@
def run_stubgen(package: str, output: str) -> None:
print(f"Running stubgen: stubgen -o {output} -p {package}")
- subprocess.run(["stubgen", "-o", output, "-p", package], check=True)
+ subprocess.run(["stubgen", "-o", output, "-p", package, "--export-less"], check=True)
def run_black(stub_dir: str) -> None:
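For reference, `--export-less` is stubgen's flag for not implicitly re-exporting names imported from other modules in the same package, so the generated stubs start out narrower; the patched helper amounts to the following sketch:

```
import subprocess

def run_stubgen(package: str, output: str) -> None:
    # --export-less: avoid implicit re-exports in the generated stubs
    subprocess.run(["stubgen", "-o", output, "-p", package, "--export-less"], check=True)
```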
|
{"golden_diff": "diff --git a/scripts/create_baseline_stubs.py b/scripts/create_baseline_stubs.py\n--- a/scripts/create_baseline_stubs.py\n+++ b/scripts/create_baseline_stubs.py\n@@ -47,7 +47,7 @@\n \n def run_stubgen(package: str, output: str) -> None:\n print(f\"Running stubgen: stubgen -o {output} -p {package}\")\n- subprocess.run([\"stubgen\", \"-o\", output, \"-p\", package], check=True)\n+ subprocess.run([\"stubgen\", \"-o\", output, \"-p\", package, \"--export-less\"], check=True)\n \n \n def run_black(stub_dir: str) -> None:\n", "issue": "scripts/create_baseline_stubs.py instructions are outdated\nI would prefer:\r\n ```\r\n 1. Manually review the generated stubs in {stub_dir}\r\n 2. Run tests locally if you want (see CONTRIBUTING.md)\r\n 3. Commit the changes on a new branch and create a typeshed PR\r\n ```\r\n\r\nThe CI will check everything anyway, and is set up so that you don't have to run anything locally. This would also be consistent with CONTRIBUTING.md.\r\n\r\n_Originally posted by @Akuli in https://github.com/python/typeshed/issues/8686#issuecomment-1237374669_\r\n \n", "before_files": [{"content": "#!/usr/bin/env python3\n\n\"\"\"Script to generate unannotated baseline stubs using stubgen.\n\nBasic usage:\n$ python3 scripts/create_baseline_stubs.py <project on PyPI>\n\nRun with -h for more help.\n\"\"\"\n\nfrom __future__ import annotations\n\nimport argparse\nimport os\nimport re\nimport subprocess\nimport sys\n\nif sys.version_info >= (3, 8):\n from importlib.metadata import distribution\n\nPYRIGHT_CONFIG = \"pyrightconfig.stricter.json\"\n\n\ndef search_pip_freeze_output(project: str, output: str) -> tuple[str, str] | None:\n # Look for lines such as \"typed-ast==1.4.2\". '-' matches '_' and\n # '_' matches '-' in project name, so that \"typed_ast\" matches\n # \"typed-ast\", and vice versa.\n regex = \"^(\" + re.sub(r\"[-_]\", \"[-_]\", project) + \")==(.*)\"\n m = re.search(regex, output, flags=re.IGNORECASE | re.MULTILINE)\n if not m:\n return None\n return m.group(1), m.group(2)\n\n\ndef get_installed_package_info(project: str) -> tuple[str, str] | None:\n \"\"\"Find package information from pip freeze output.\n\n Match project name somewhat fuzzily (case sensitive; '-' matches '_', and\n vice versa).\n\n Return (normalized project name, installed version) if successful.\n \"\"\"\n r = subprocess.run([\"pip\", \"freeze\"], capture_output=True, text=True, check=True)\n return search_pip_freeze_output(project, r.stdout)\n\n\ndef run_stubgen(package: str, output: str) -> None:\n print(f\"Running stubgen: stubgen -o {output} -p {package}\")\n subprocess.run([\"stubgen\", \"-o\", output, \"-p\", package], check=True)\n\n\ndef run_black(stub_dir: str) -> None:\n print(f\"Running black: black {stub_dir}\")\n subprocess.run([\"black\", stub_dir])\n\n\ndef run_isort(stub_dir: str) -> None:\n print(f\"Running isort: isort {stub_dir}\")\n subprocess.run([\"python3\", \"-m\", \"isort\", stub_dir])\n\n\ndef create_metadata(stub_dir: str, version: str) -> None:\n \"\"\"Create a METADATA.toml file.\"\"\"\n match = re.match(r\"[0-9]+.[0-9]+\", version)\n if match is None:\n sys.exit(f\"Error: Cannot parse version number: {version}\")\n filename = os.path.join(stub_dir, \"METADATA.toml\")\n version = match.group(0)\n if os.path.exists(filename):\n return\n print(f\"Writing {filename}\")\n with open(filename, \"w\") as file:\n file.write(\n f\"\"\"\\\nversion = \"{version}.*\"\n\n[tool.stubtest]\nignore_missing_stub = false\n\"\"\"\n )\n\n\ndef add_pyright_exclusion(stub_dir: str) 
-> None:\n \"\"\"Exclude stub_dir from strict pyright checks.\"\"\"\n with open(PYRIGHT_CONFIG) as f:\n lines = f.readlines()\n i = 0\n while i < len(lines) and not lines[i].strip().startswith('\"exclude\": ['):\n i += 1\n assert i < len(lines), f\"Error parsing {PYRIGHT_CONFIG}\"\n while not lines[i].strip().startswith(\"]\"):\n i += 1\n # Must use forward slash in the .json file\n line_to_add = f' \"{stub_dir}\",'.replace(\"\\\\\", \"/\")\n initial = i - 1\n while lines[i].lower() > line_to_add.lower():\n i -= 1\n if lines[i + 1].strip().rstrip(\",\") == line_to_add.strip().rstrip(\",\"):\n print(f\"{PYRIGHT_CONFIG} already up-to-date\")\n return\n if i == initial:\n # Special case: when adding to the end of the list, commas need tweaking\n line_to_add = line_to_add.rstrip(\",\")\n lines[i] = lines[i].rstrip() + \",\\n\"\n lines.insert(i + 1, line_to_add + \"\\n\")\n print(f\"Updating {PYRIGHT_CONFIG}\")\n with open(PYRIGHT_CONFIG, \"w\") as f:\n f.writelines(lines)\n\n\ndef main() -> None:\n parser = argparse.ArgumentParser(\n description=\"\"\"Generate baseline stubs automatically for an installed pip package\n using stubgen. Also run black and isort. If the name of\n the project is different from the runtime Python package name, you may\n need to use --package (example: --package yaml PyYAML).\"\"\"\n )\n parser.add_argument(\"project\", help=\"name of PyPI project for which to generate stubs under stubs/\")\n parser.add_argument(\"--package\", help=\"generate stubs for this Python package (default is autodetected)\")\n args = parser.parse_args()\n project = args.project\n package = args.package\n\n if not re.match(r\"[a-zA-Z0-9-_.]+$\", project):\n sys.exit(f\"Invalid character in project name: {project!r}\")\n\n if not package:\n package = project # default\n # Try to find which packages are provided by the project\n # Use default if that fails or if several packages are found\n #\n # The importlib.metadata module is used for projects whose name is different\n # from the runtime Python package name (example: PyYAML/yaml)\n if sys.version_info >= (3, 8):\n dist = distribution(project).read_text(\"top_level.txt\")\n if dist is not None:\n packages = [name for name in dist.split() if not name.startswith(\"_\")]\n if len(packages) == 1:\n package = packages[0]\n print(f'Using detected package \"{package}\" for project \"{project}\"', file=sys.stderr)\n print(\"Suggestion: Try again with --package argument if that's not what you wanted\", file=sys.stderr)\n\n if not os.path.isdir(\"stubs\") or not os.path.isdir(\"stdlib\"):\n sys.exit(\"Error: Current working directory must be the root of typeshed repository\")\n\n # Get normalized project name and version of installed package.\n info = get_installed_package_info(project)\n if info is None:\n print(f'Error: \"{project}\" is not installed', file=sys.stderr)\n print(\"\", file=sys.stderr)\n print(f'Suggestion: Run \"python3 -m pip install {project}\" and try again', file=sys.stderr)\n sys.exit(1)\n project, version = info\n\n stub_dir = os.path.join(\"stubs\", project)\n package_dir = os.path.join(stub_dir, package)\n if os.path.exists(package_dir):\n sys.exit(f\"Error: {package_dir} already exists (delete it first)\")\n\n run_stubgen(package, stub_dir)\n\n run_isort(stub_dir)\n run_black(stub_dir)\n\n create_metadata(stub_dir, version)\n\n # Since the generated stubs won't have many type annotations, we\n # have to exclude them from strict pyright checks.\n add_pyright_exclusion(stub_dir)\n\n print(\"\\nDone!\\n\\nSuggested next 
steps:\")\n print(f\" 1. Manually review the generated stubs in {stub_dir}\")\n print(\" 2. Optionally run tests and autofixes (see tests/README.md for details\")\n print(\" 3. Commit the changes on a new branch and create a typeshed PR (don't force-push!)\")\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "scripts/create_baseline_stubs.py"}]}
| 2,761 | 143 |
gh_patches_debug_17680
|
rasdani/github-patches
|
git_diff
|
akvo__akvo-rsr-2151
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to fetch project updates in Up app
The current UAT build for the 3.13 release breaks the project updates download when Up data is loaded or refreshed:
https://github.com/akvo/akvo-rsr-up/issues/186
Environment:
Request Method: GET
Request URL: http://rsr.uat.akvo.org/rest/v1/project_up/2210/?format=xml&image_thumb_name=up&image_thumb_up_width=100
Django Version: 1.7.7
Python Version: 2.7.3
Installed Applications:
('nested_inline',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.humanize',
'django.contrib.messages',
'django.contrib.sessions',
'django.contrib.staticfiles',
'django.contrib.webdesign',
'akvo.codelists',
'akvo.rsr',
'akvo.api',
'registration',
'template_utils',
'paypal.standard.ipn',
'sorl.thumbnail',
'django_counter',
'mollie.ideal',
'django_sorting',
'pagination',
'embed_video',
'django_markup',
'django_filters',
'tastypie',
'rest_framework',
'rest_framework.authtoken',
'rest_framework_swagger',
'pipeline',
'bootstrap3',
'rules',
'django_crontab',
'raven.contrib.django.raven_compat')
Installed Middleware:
('akvo.rsr.middleware.HostDispatchMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.locale.LocaleMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.middleware.http.ConditionalGetMiddleware',
'django_sorting.middleware.SortingMiddleware',
'pagination.middleware.PaginationMiddleware',
'django.middleware.common.CommonMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.middleware.doc.XViewMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'akvo.rsr.middleware.ExceptionLoggingMiddleware',
'akvo.rsr.middleware.RSRVersionHeaderMiddleware',
'django_statsd.middleware.GraphiteRequestTimingMiddleware',
'django_statsd.middleware.GraphiteMiddleware',
'django_statsd.middleware.TastyPieRequestTimingMiddleware')
Traceback:
File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py" in get_response
response = wrapped_callback(request, callback_args, _callback_kwargs) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/viewsets.py" in view
return self.dispatch(request, args, *kwargs) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/views/decorators/csrf.py" in wrapped_view
return view_func(args, *kwargs) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
response = self.handle_exception(exc) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/views.py" in dispatch
response = handler(request, args, *kwargs) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/mixins.py" in retrieve
self.object = self.get_object() File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/generics.py" in get_object
queryset = self.filter_queryset(self.get_queryset()) File "/var/akvo/rsr/code/akvo/rest/views/project.py" in get_queryset
return super(ProjectViewSet, self).get_queryset() File "/var/akvo/rsr/code/akvo/rest/viewsets.py" in get_queryset
queryset = super(PublicProjectViewSet, self).get_queryset() File "/var/akvo/rsr/code/akvo/rest/viewsets.py" in get_queryset
queryset = queryset.filter(_*lookup) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/query.py" in filter
return self._filter_or_exclude(False, args, *kwargs) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/query.py" in _filter_or_exclude
clone.query.add_q(Q(args, *kwargs)) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in add_q
clause, require_inner = self._add_q(where_part, self.used_aliases) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in _add_q
current_negated=current_negated, connector=connector) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in build_filter
lookups, parts, reffed_aggregate = self.solve_lookup_type(arg) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in solve_lookup_type
_, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta()) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in names_to_path
self.raise_field_error(opts, name) File "/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py" in raise_field_error
"Choices are: %s" % (name, ", ".join(available)))
Exception Type: FieldError at /rest/v1/project_up/2210/
Exception Value: Cannot resolve keyword u'image_thumb_name' into field. Choices are: background, benchmarks, budget, budget_items, capital_spend_percentage, categories, collaboration_type, comments, conditions, contacts, country_budget_items, country_budget_vocabulary, created_at, crsadd, currency, current_image, current_image_caption, current_image_credit, current_status, custom_fields, date_end_actual, date_end_planned, date_start_actual, date_start_planned, default_aid_type, default_finance_type, default_flow_type, default_tied_status, documents, donate_button, fss, funds, funds_needed, goals, goals_overview, hierarchy, humanitarian, humanitarian_scopes, iati_activity_id, iati_checks, iati_project_exports, iati_project_import_logs, iati_project_imports, iatiexport, iatiimportjob, id, invoices, is_impact_project, is_public, keywords, language, last_modified_at, last_update, last_update_id, legacy_data, links, locations, notes, partners, partnerships, paymentgatewayselector, planned_disbursements, policy_markers, primary_location, primary_location_id, primary_organisation, primary_organisation_id, project_plan, project_plan_summary, project_scope, project_updates, publishingstatus, recipient_countries, recipient_regions, related_projects, related_to_projects, results, sectors, status, subtitle, sustainability, target_group, title, transactions, validations
</issue>
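For orientation, the traceback shows thumbnail-rendering query parameters from the Up app (`image_thumb_name`, `image_thumb_up_width`) being fed into `queryset.filter()` as if they were model field lookups. A minimal sketch of screening them out of the legacy-filter emulation in `akvo/rest/viewsets.py` (shown below), assuming the same reserved and excluded key lists:

```
QS_PARAMS = ['filter', 'exclude', 'select_related', 'prefetch_related']
EXCLUDE_PARAMS = ['limit', 'format', 'page', 'ordering', 'partner_type', 'sync_owner', 'reporting_org']

def django_filter_filters(query_params):
    # Ignore reserved keys and image_thumb_* rendering hints instead of
    # treating them as model field lookups.
    return {
        key: value
        for key, value in query_params.items()
        if key not in QS_PARAMS + EXCLUDE_PARAMS and not key.startswith('image_thumb_')
    }
```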
<code>
[start of akvo/rest/viewsets.py]
1 # -*- coding: utf-8 -*-
2
3 # Akvo RSR is covered by the GNU Affero General Public License.
4 # See more details in the license.txt file located at the root folder of the Akvo RSR module.
5 # For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
6
7 from django.db.models.fields.related import ForeignKey, ForeignObject
8
9 from akvo.rest.models import TastyTokenAuthentication
10
11 from rest_framework import authentication, filters, permissions, viewsets
12
13 from .filters import RSRGenericFilterBackend
14
15
16 class SafeMethodsPermissions(permissions.DjangoObjectPermissions):
17 """
18 Base class to allow any safe methods ('GET', 'OPTIONS' and 'HEAD') without needing to
19 authenticate.
20 """
21
22 def has_permission(self, request, view):
23 if request.method in permissions.SAFE_METHODS:
24 return True
25 return super(SafeMethodsPermissions, self).has_permission(request, view)
26
27
28 class BaseRSRViewSet(viewsets.ModelViewSet):
29 """
30 Base class used for the view sets for RSR models. Provides unified auth and perms settings.
31 """
32 authentication_classes = (authentication.SessionAuthentication, TastyTokenAuthentication, )
33 permission_classes = (SafeMethodsPermissions, )
34 filter_backends = (filters.OrderingFilter, RSRGenericFilterBackend,)
35 ordering_fields = '__all__'
36
37 def get_queryset(self):
38
39 def django_filter_filters(request):
40 """
41 Support emulating the DjangoFilterBackend-based filtering that some views used to have
42 """
43 # query string keys reserved by the RSRGenericFilterBackend
44 qs_params = ['filter', 'exclude', 'select_related', 'prefetch_related', ]
45 # query string keys used by core DRF, OrderingFilter and Akvo custom views
46 exclude_params = ['limit', 'format', 'page', 'ordering', 'partner_type', 'sync_owner',
47 'reporting_org']
48 filters = {}
49 for key in request.QUERY_PARAMS.keys():
50 if key not in qs_params + exclude_params:
51 filters.update({key: request.QUERY_PARAMS.get(key)})
52 return filters
53
54 def get_lookups_from_filters(legacy_filters):
55 """
56 Cast the values in DjangoFilterBackend-styled query string filters to correct types to
57 be able to use them in regular queryset-filter() calls
58 """
59 # types of lookups supported by the views using DjangoFilterBackend
60 LEGACY_FIELD_LOOKUPS = ['exact', 'contains', 'icontains', 'gt', 'gte', 'lt',
61 'lte', ]
62 query_set_lookups = []
63 for key, value in legacy_filters.items():
64 parts = key.split('__')
65 if parts[-1] in LEGACY_FIELD_LOOKUPS:
66 parts = parts[:-1]
67 model = queryset.model
68 for part in parts:
69 field_object, related_model, direct, m2m = model._meta.get_field_by_name(
70 part)
71 if direct:
72 if issubclass(field_object.__class__, ForeignObject):
73 model = field_object.related.parent_model
74 else:
75 value = field_object.to_python(value)
76 break
77 else:
78 model = related_model
79 query_set_lookups += [{key: value}]
80 return query_set_lookups
81
82 queryset = super(BaseRSRViewSet, self).get_queryset()
83
84 # support for old DjangoFilterBackend-based filtering
85 # find all "old styled" filters
86 legacy_filters = django_filter_filters(self.request)
87 # create lookup dicts from the filters found
88 lookups = get_lookups_from_filters(legacy_filters)
89 for lookup in lookups:
90 queryset = queryset.filter(**lookup)
91
92 return queryset
93
94
95 class PublicProjectViewSet(BaseRSRViewSet):
96 """
97 Only public projects or objects related to public projects will be shown.
98 """
99 # project_relation is the default string for constructing a field lookup to the is_public field
100 # on the related Project. Override this in when the viewset is for a model that doesn't have a
101 # direct FK to Project or the FK field isn't named project. E.g. IndicatorViewSet:
102 # project_relation = 'result__project__'
103 # The lookup is used to filter out objects associated with private projects, see below.
104 project_relation = 'project__'
105
106 def get_queryset(self):
107
108 request = self.request
109 user = request.user
110
111 queryset = super(PublicProjectViewSet, self).get_queryset()
112
113 def projects_filter_for_non_privileged_users(user, queryset):
114 # Construct the public projects filter field lookup.
115 project_filter = self.project_relation + 'is_public'
116
117 # Filter the object list into two querysets;
118 # One where the related Projects are public and one where they are private
119 public_objects = queryset.filter(**{project_filter: True}).distinct()
120 private_objects = queryset.filter(**{project_filter: False}).distinct()
121
122 # In case of an anonymous user, only return the public objects
123 if user.is_anonymous():
124 queryset = public_objects
125
126 # Otherwise, check to which objects the user has (change) permission
127 elif private_objects:
128 permission = type(private_objects[0])._meta.db_table.replace('_', '.change_')
129 permitted_obj_pks = []
130
131 # Loop through all 'private' objects to see if the user has permission to change
132 # it. If so add its PK to the list of permitted objects.
133 for obj in private_objects:
134 if user.has_perm(permission, obj):
135 permitted_obj_pks.append(obj.pk)
136
137 queryset = public_objects | queryset.filter(pk__in=permitted_obj_pks).distinct()
138
139 return queryset
140
141 # filter projects if user is "non-privileged"
142 if user.is_anonymous() or not (user.is_superuser or user.is_admin):
143 queryset = projects_filter_for_non_privileged_users(user, queryset)
144
145 return queryset
146
[end of akvo/rest/viewsets.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rest/viewsets.py b/akvo/rest/viewsets.py
--- a/akvo/rest/viewsets.py
+++ b/akvo/rest/viewsets.py
@@ -44,10 +44,10 @@
qs_params = ['filter', 'exclude', 'select_related', 'prefetch_related', ]
# query string keys used by core DRF, OrderingFilter and Akvo custom views
exclude_params = ['limit', 'format', 'page', 'ordering', 'partner_type', 'sync_owner',
- 'reporting_org']
+ 'reporting_org', ]
filters = {}
for key in request.QUERY_PARAMS.keys():
- if key not in qs_params + exclude_params:
+ if key not in qs_params + exclude_params and not key.startswith('image_thumb_'):
filters.update({key: request.QUERY_PARAMS.get(key)})
return filters
|
{"golden_diff": "diff --git a/akvo/rest/viewsets.py b/akvo/rest/viewsets.py\n--- a/akvo/rest/viewsets.py\n+++ b/akvo/rest/viewsets.py\n@@ -44,10 +44,10 @@\n qs_params = ['filter', 'exclude', 'select_related', 'prefetch_related', ]\n # query string keys used by core DRF, OrderingFilter and Akvo custom views\n exclude_params = ['limit', 'format', 'page', 'ordering', 'partner_type', 'sync_owner',\n- 'reporting_org']\n+ 'reporting_org', ]\n filters = {}\n for key in request.QUERY_PARAMS.keys():\n- if key not in qs_params + exclude_params:\n+ if key not in qs_params + exclude_params and not key.startswith('image_thumb_'):\n filters.update({key: request.QUERY_PARAMS.get(key)})\n return filters\n", "issue": "Unable to fetch project updates in Up app\nThe current UAT build for the 3.13 release breaks the project updates download when Up data is loaded or refreshed:\n\nhttps://github.com/akvo/akvo-rsr-up/issues/186\n\nEnvironment:\n\nRequest Method: GET\nRequest URL: http://rsr.uat.akvo.org/rest/v1/project_up/2210/?format=xml&image_thumb_name=up&image_thumb_up_width=100\n\nDjango Version: 1.7.7\nPython Version: 2.7.3\nInstalled Applications:\n('nested_inline',\n'django.contrib.admin',\n'django.contrib.auth',\n'django.contrib.contenttypes',\n'django.contrib.humanize',\n'django.contrib.messages',\n'django.contrib.sessions',\n'django.contrib.staticfiles',\n'django.contrib.webdesign',\n'akvo.codelists',\n'akvo.rsr',\n'akvo.api',\n'registration',\n'template_utils',\n'paypal.standard.ipn',\n'sorl.thumbnail',\n'django_counter',\n'mollie.ideal',\n'django_sorting',\n'pagination',\n'embed_video',\n'django_markup',\n'django_filters',\n'tastypie',\n'rest_framework',\n'rest_framework.authtoken',\n'rest_framework_swagger',\n'pipeline',\n'bootstrap3',\n'rules',\n'django_crontab',\n'raven.contrib.django.raven_compat')\nInstalled Middleware:\n('akvo.rsr.middleware.HostDispatchMiddleware',\n'django.contrib.sessions.middleware.SessionMiddleware',\n'django.middleware.locale.LocaleMiddleware',\n'django.middleware.csrf.CsrfViewMiddleware',\n'django.middleware.http.ConditionalGetMiddleware',\n'django_sorting.middleware.SortingMiddleware',\n'pagination.middleware.PaginationMiddleware',\n'django.middleware.common.CommonMiddleware',\n'django.contrib.auth.middleware.AuthenticationMiddleware',\n'django.middleware.doc.XViewMiddleware',\n'django.contrib.messages.middleware.MessageMiddleware',\n'akvo.rsr.middleware.ExceptionLoggingMiddleware',\n'akvo.rsr.middleware.RSRVersionHeaderMiddleware',\n'django_statsd.middleware.GraphiteRequestTimingMiddleware',\n'django_statsd.middleware.GraphiteMiddleware',\n'django_statsd.middleware.TastyPieRequestTimingMiddleware')\n\nTraceback:\nFile \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/core/handlers/base.py\" in get_response\n\nresponse = wrapped_callback(request, callback_args, _callback_kwargs) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/viewsets.py\" in view\nreturn self.dispatch(request, args, *kwargs) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/views/decorators/csrf.py\" in wrapped_view\nreturn view_func(args, *kwargs) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/views.py\" in dispatch\nresponse = self.handle_exception(exc) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/views.py\" in dispatch\nresponse = 
handler(request, args, *kwargs) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/mixins.py\" in retrieve\nself.object = self.get_object() File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/rest_framework/generics.py\" in get_object\nqueryset = self.filter_queryset(self.get_queryset()) File \"/var/akvo/rsr/code/akvo/rest/views/project.py\" in get_queryset\nreturn super(ProjectViewSet, self).get_queryset() File \"/var/akvo/rsr/code/akvo/rest/viewsets.py\" in get_queryset\nqueryset = super(PublicProjectViewSet, self).get_queryset() File \"/var/akvo/rsr/code/akvo/rest/viewsets.py\" in get_queryset\nqueryset = queryset.filter(_*lookup) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/query.py\" in filter\nreturn self._filter_or_exclude(False, args, *kwargs) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/query.py\" in _filter_or_exclude\nclone.query.add_q(Q(args, *kwargs)) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py\" in add_q\nclause, require_inner = self._add_q(where_part, self.used_aliases) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py\" in _add_q\ncurrent_negated=current_negated, connector=connector) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py\" in build_filter\nlookups, parts, reffed_aggregate = self.solve_lookup_type(arg) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py\" in solve_lookup_type\n_, field, _, lookup_parts = self.names_to_path(lookup_splitted, self.get_meta()) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py\" in names_to_path\nself.raise_field_error(opts, name) File \"/var/akvo/rsr/versions/deploy-RSR_Deploy-251/venv/local/lib/python2.7/site-packages/django/db/models/sql/query.py\" in raise_field_error\n\"Choices are: %s\" % (name, \", \".join(available)))\nException Type: FieldError at /rest/v1/project_up/2210/\nException Value: Cannot resolve keyword u'image_thumb_name' into field. 
Choices are: background, benchmarks, budget, budget_items, capital_spend_percentage, categories, collaboration_type, comments, conditions, contacts, country_budget_items, country_budget_vocabulary, created_at, crsadd, currency, current_image, current_image_caption, current_image_credit, current_status, custom_fields, date_end_actual, date_end_planned, date_start_actual, date_start_planned, default_aid_type, default_finance_type, default_flow_type, default_tied_status, documents, donate_button, fss, funds, funds_needed, goals, goals_overview, hierarchy, humanitarian, humanitarian_scopes, iati_activity_id, iati_checks, iati_project_exports, iati_project_import_logs, iati_project_imports, iatiexport, iatiimportjob, id, invoices, is_impact_project, is_public, keywords, language, last_modified_at, last_update, last_update_id, legacy_data, links, locations, notes, partners, partnerships, paymentgatewayselector, planned_disbursements, policy_markers, primary_location, primary_location_id, primary_organisation, primary_organisation_id, project_plan, project_plan_summary, project_scope, project_updates, publishingstatus, recipient_countries, recipient_regions, related_projects, related_to_projects, results, sectors, status, subtitle, sustainability, target_group, title, transactions, validations\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n# Akvo RSR is covered by the GNU Affero General Public License.\n# See more details in the license.txt file located at the root folder of the Akvo RSR module.\n# For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\nfrom django.db.models.fields.related import ForeignKey, ForeignObject\n\nfrom akvo.rest.models import TastyTokenAuthentication\n\nfrom rest_framework import authentication, filters, permissions, viewsets\n\nfrom .filters import RSRGenericFilterBackend\n\n\nclass SafeMethodsPermissions(permissions.DjangoObjectPermissions):\n \"\"\"\n Base class to allow any safe methods ('GET', 'OPTIONS' and 'HEAD') without needing to\n authenticate.\n \"\"\"\n\n def has_permission(self, request, view):\n if request.method in permissions.SAFE_METHODS:\n return True\n return super(SafeMethodsPermissions, self).has_permission(request, view)\n\n\nclass BaseRSRViewSet(viewsets.ModelViewSet):\n \"\"\"\n Base class used for the view sets for RSR models. 
Provides unified auth and perms settings.\n \"\"\"\n authentication_classes = (authentication.SessionAuthentication, TastyTokenAuthentication, )\n permission_classes = (SafeMethodsPermissions, )\n filter_backends = (filters.OrderingFilter, RSRGenericFilterBackend,)\n ordering_fields = '__all__'\n\n def get_queryset(self):\n\n def django_filter_filters(request):\n \"\"\"\n Support emulating the DjangoFilterBackend-based filtering that some views used to have\n \"\"\"\n # query string keys reserved by the RSRGenericFilterBackend\n qs_params = ['filter', 'exclude', 'select_related', 'prefetch_related', ]\n # query string keys used by core DRF, OrderingFilter and Akvo custom views\n exclude_params = ['limit', 'format', 'page', 'ordering', 'partner_type', 'sync_owner',\n 'reporting_org']\n filters = {}\n for key in request.QUERY_PARAMS.keys():\n if key not in qs_params + exclude_params:\n filters.update({key: request.QUERY_PARAMS.get(key)})\n return filters\n\n def get_lookups_from_filters(legacy_filters):\n \"\"\"\n Cast the values in DjangoFilterBackend-styled query string filters to correct types to\n be able to use them in regular queryset-filter() calls\n \"\"\"\n # types of lookups supported by the views using DjangoFilterBackend\n LEGACY_FIELD_LOOKUPS = ['exact', 'contains', 'icontains', 'gt', 'gte', 'lt',\n 'lte', ]\n query_set_lookups = []\n for key, value in legacy_filters.items():\n parts = key.split('__')\n if parts[-1] in LEGACY_FIELD_LOOKUPS:\n parts = parts[:-1]\n model = queryset.model\n for part in parts:\n field_object, related_model, direct, m2m = model._meta.get_field_by_name(\n part)\n if direct:\n if issubclass(field_object.__class__, ForeignObject):\n model = field_object.related.parent_model\n else:\n value = field_object.to_python(value)\n break\n else:\n model = related_model\n query_set_lookups += [{key: value}]\n return query_set_lookups\n\n queryset = super(BaseRSRViewSet, self).get_queryset()\n\n # support for old DjangoFilterBackend-based filtering\n # find all \"old styled\" filters\n legacy_filters = django_filter_filters(self.request)\n # create lookup dicts from the filters found\n lookups = get_lookups_from_filters(legacy_filters)\n for lookup in lookups:\n queryset = queryset.filter(**lookup)\n\n return queryset\n\n\nclass PublicProjectViewSet(BaseRSRViewSet):\n \"\"\"\n Only public projects or objects related to public projects will be shown.\n \"\"\"\n # project_relation is the default string for constructing a field lookup to the is_public field\n # on the related Project. Override this in when the viewset is for a model that doesn't have a\n # direct FK to Project or the FK field isn't named project. E.g. 
IndicatorViewSet:\n # project_relation = 'result__project__'\n # The lookup is used to filter out objects associated with private projects, see below.\n project_relation = 'project__'\n\n def get_queryset(self):\n\n request = self.request\n user = request.user\n\n queryset = super(PublicProjectViewSet, self).get_queryset()\n\n def projects_filter_for_non_privileged_users(user, queryset):\n # Construct the public projects filter field lookup.\n project_filter = self.project_relation + 'is_public'\n\n # Filter the object list into two querysets;\n # One where the related Projects are public and one where they are private\n public_objects = queryset.filter(**{project_filter: True}).distinct()\n private_objects = queryset.filter(**{project_filter: False}).distinct()\n\n # In case of an anonymous user, only return the public objects\n if user.is_anonymous():\n queryset = public_objects\n\n # Otherwise, check to which objects the user has (change) permission\n elif private_objects:\n permission = type(private_objects[0])._meta.db_table.replace('_', '.change_')\n permitted_obj_pks = []\n\n # Loop through all 'private' objects to see if the user has permission to change\n # it. If so add its PK to the list of permitted objects.\n for obj in private_objects:\n if user.has_perm(permission, obj):\n permitted_obj_pks.append(obj.pk)\n\n queryset = public_objects | queryset.filter(pk__in=permitted_obj_pks).distinct()\n\n return queryset\n\n # filter projects if user is \"non-privileged\"\n if user.is_anonymous() or not (user.is_superuser or user.is_admin):\n queryset = projects_filter_for_non_privileged_users(user, queryset)\n\n return queryset\n", "path": "akvo/rest/viewsets.py"}]}
| 3,781 | 195 |
gh_patches_debug_14030
|
rasdani/github-patches
|
git_diff
|
DataDog__dd-trace-py-5182
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Intermittent `RuntimeError: the memalloc module was not started` error
### Which version of dd-trace-py are you using?
`ddtrace==0.57.0`
### What is the result that you get?
`RuntimeError: the memalloc module was not started`
### What is the result that you expected?
No errors.
This seems to be happening a few times a day.
We have tried setting `DD_PROFILING_HEAP_ENABLED=False` and `DD_PROFILING_MEMALLOC=0` in the environment, but the errors continue to appear.
Configuration in Django:
```
import os
from ddtrace import config, tracer
# DataDog Setup
tracer.configure(hostname=os.environ.get("HOST_IP"))
tracer.configure(enabled=True)
tracer.set_tags(
{"env": os.environ.get("ENVIRONMENT"), "namespace": os.environ.get("NAMESPACE")}
)
config.django["analytics_enabled"] = True
config.django["cache_service_name"] = "xxx-cache"
config.django["database_service_name_prefix"] = "xxx"
config.django["distributed_tracing_enabled"] = True
config.django["instrument_middleware"] = True
config.django["service_name"] = "xxx"
```
</issue>
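For context, the error comes from calling into the `_memalloc` extension after it has been stopped (or before it was started). A minimal, self-contained sketch of tolerating that race, mirroring the guard the collector already uses in `_stop_service` (see `memalloc.py` below):

```
def safe_iter_events(memalloc_module):
    # Return an empty event set if the memalloc extension is not running,
    # instead of letting RuntimeError bubble up from the periodic collector.
    try:
        return memalloc_module.iter_events()
    except RuntimeError:
        return (), 0, 0
```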
<code>
[start of ddtrace/profiling/collector/memalloc.py]
1 # -*- encoding: utf-8 -*-
2 import logging
3 import math
4 import os
5 import threading
6 import typing
7
8 import attr
9
10
11 try:
12 from ddtrace.profiling.collector import _memalloc
13 except ImportError:
14 _memalloc = None # type: ignore[assignment]
15
16 from ddtrace.internal.utils import attr as attr_utils
17 from ddtrace.internal.utils import formats
18 from ddtrace.profiling import _threading
19 from ddtrace.profiling import collector
20 from ddtrace.profiling import event
21
22
23 LOG = logging.getLogger(__name__)
24
25
26 @event.event_class
27 class MemoryAllocSampleEvent(event.StackBasedEvent):
28 """A sample storing memory allocation tracked."""
29
30 size = attr.ib(default=0, type=int)
31 """Allocation size in bytes."""
32
33 capture_pct = attr.ib(default=None, type=float)
34 """The capture percentage."""
35
36 nevents = attr.ib(default=0, type=int)
37 """The total number of allocation events sampled."""
38
39
40 @event.event_class
41 class MemoryHeapSampleEvent(event.StackBasedEvent):
42 """A sample storing memory allocation tracked."""
43
44 size = attr.ib(default=0, type=int)
45 """Allocation size in bytes."""
46
47 sample_size = attr.ib(default=0, type=int)
48 """The sampling size."""
49
50
51 def _get_default_heap_sample_size(
52 default_heap_sample_size=1024 * 1024, # type: int
53 ):
54 # type: (...) -> int
55 heap_sample_size = os.environ.get("DD_PROFILING_HEAP_SAMPLE_SIZE")
56 if heap_sample_size is not None:
57 return int(heap_sample_size)
58
59 if not formats.asbool(os.environ.get("DD_PROFILING_HEAP_ENABLED", "1")):
60 return 0
61
62 try:
63 from ddtrace.vendor import psutil
64
65 total_mem = psutil.swap_memory().total + psutil.virtual_memory().total
66 except Exception:
67 LOG.warning(
68 "Unable to get total memory available, using default value of %d KB",
69 default_heap_sample_size / 1024,
70 exc_info=True,
71 )
72 return default_heap_sample_size
73
74 # This is TRACEBACK_ARRAY_MAX_COUNT
75 max_samples = 2 ** 16
76
77 return max(math.ceil(total_mem / max_samples), default_heap_sample_size)
78
79
80 @attr.s
81 class MemoryCollector(collector.PeriodicCollector):
82 """Memory allocation collector."""
83
84 _DEFAULT_MAX_EVENTS = 16
85 _DEFAULT_INTERVAL = 0.5
86
87 # Arbitrary interval to empty the _memalloc event buffer
88 _interval = attr.ib(default=_DEFAULT_INTERVAL, repr=False)
89
90 # TODO make this dynamic based on the 1. interval and 2. the max number of events allowed in the Recorder
91 _max_events = attr.ib(
92 factory=attr_utils.from_env(
93 "_DD_PROFILING_MEMORY_EVENTS_BUFFER",
94 _DEFAULT_MAX_EVENTS,
95 int,
96 )
97 )
98 max_nframe = attr.ib(factory=attr_utils.from_env("DD_PROFILING_MAX_FRAMES", 64, int))
99 heap_sample_size = attr.ib(type=int, factory=_get_default_heap_sample_size)
100 ignore_profiler = attr.ib(factory=attr_utils.from_env("DD_PROFILING_IGNORE_PROFILER", False, formats.asbool))
101
102 def _start_service(self):
103 # type: (...) -> None
104 """Start collecting memory profiles."""
105 if _memalloc is None:
106 raise collector.CollectorUnavailable
107
108 _memalloc.start(self.max_nframe, self._max_events, self.heap_sample_size)
109
110 super(MemoryCollector, self)._start_service()
111
112 def _stop_service(self):
113 # type: (...) -> None
114 super(MemoryCollector, self)._stop_service()
115
116 if _memalloc is not None:
117 try:
118 _memalloc.stop()
119 except RuntimeError:
120 pass
121
122 def _get_thread_id_ignore_set(self):
123 # type: () -> typing.Set[int]
124 # This method is not perfect and prone to race condition in theory, but very little in practice.
125 # Anyhow it's not a big deal — it's a best effort feature.
126 return {
127 thread.ident
128 for thread in threading.enumerate()
129 if getattr(thread, "_ddtrace_profiling_ignore", False) and thread.ident is not None
130 }
131
132 def snapshot(self):
133 thread_id_ignore_set = self._get_thread_id_ignore_set()
134 return (
135 tuple(
136 MemoryHeapSampleEvent(
137 thread_id=thread_id,
138 thread_name=_threading.get_thread_name(thread_id),
139 thread_native_id=_threading.get_thread_native_id(thread_id),
140 frames=stack,
141 nframes=nframes,
142 size=size,
143 sample_size=self.heap_sample_size,
144 )
145 for (stack, nframes, thread_id), size in _memalloc.heap()
146 if not self.ignore_profiler or thread_id not in thread_id_ignore_set
147 ),
148 )
149
150 def collect(self):
151 events, count, alloc_count = _memalloc.iter_events()
152 capture_pct = 100 * count / alloc_count
153 thread_id_ignore_set = self._get_thread_id_ignore_set()
154 # TODO: The event timestamp is slightly off since it's going to be the time we copy the data from the
155 # _memalloc buffer to our Recorder. This is fine for now, but we might want to store the nanoseconds
156 # timestamp in C and then return it via iter_events.
157 return (
158 tuple(
159 MemoryAllocSampleEvent(
160 thread_id=thread_id,
161 thread_name=_threading.get_thread_name(thread_id),
162 thread_native_id=_threading.get_thread_native_id(thread_id),
163 frames=stack,
164 nframes=nframes,
165 size=size,
166 capture_pct=capture_pct,
167 nevents=alloc_count,
168 )
169 for (stack, nframes, thread_id), size, domain in events
170 if not self.ignore_profiler or thread_id not in thread_id_ignore_set
171 ),
172 )
173
[end of ddtrace/profiling/collector/memalloc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/ddtrace/profiling/collector/memalloc.py b/ddtrace/profiling/collector/memalloc.py
--- a/ddtrace/profiling/collector/memalloc.py
+++ b/ddtrace/profiling/collector/memalloc.py
@@ -148,7 +148,13 @@
)
def collect(self):
- events, count, alloc_count = _memalloc.iter_events()
+ try:
+ events, count, alloc_count = _memalloc.iter_events()
+ except RuntimeError:
+ # DEV: This can happen if either _memalloc has not been started or has been stopped.
+ LOG.debug("Unable to collect memory events from process %d", os.getpid(), exc_info=True)
+ return tuple()
+
capture_pct = 100 * count / alloc_count
thread_id_ignore_set = self._get_thread_id_ignore_set()
# TODO: The event timestamp is slightly off since it's going to be the time we copy the data from the
|
{"golden_diff": "diff --git a/ddtrace/profiling/collector/memalloc.py b/ddtrace/profiling/collector/memalloc.py\n--- a/ddtrace/profiling/collector/memalloc.py\n+++ b/ddtrace/profiling/collector/memalloc.py\n@@ -148,7 +148,13 @@\n )\n \n def collect(self):\n- events, count, alloc_count = _memalloc.iter_events()\n+ try:\n+ events, count, alloc_count = _memalloc.iter_events()\n+ except RuntimeError:\n+ # DEV: This can happen if either _memalloc has not been started or has been stopped.\n+ LOG.debug(\"Unable to collect memory events from process %d\", os.getpid(), exc_info=True)\n+ return tuple()\n+\n capture_pct = 100 * count / alloc_count\n thread_id_ignore_set = self._get_thread_id_ignore_set()\n # TODO: The event timestamp is slightly off since it's going to be the time we copy the data from the\n", "issue": "Intermittent `RuntimeError: the memalloc module was not started` error\n### Which version of dd-trace-py are you using?\r\n\r\n`ddtrace==0.57.0`\r\n\r\n### What is the result that you get?\r\n\r\n`RuntimeError: the memalloc module was not started`\r\n\r\n\r\n\r\n### What is the result that you expected?\r\n\r\nNo errors.\r\n\r\nThis seems to be happening a few times a day.\r\n\r\nWe have tried setting `DD_PROFILING_HEAP_ENABLED=False` and `DD_PROFILING_MEMALLOC=0` in the environment, but the errors continue to appear.\r\n\r\n\r\nConfiguration in Django:\r\n```\r\nimport os\r\nfrom ddtrace import config, tracer\r\n\r\n# DataDog Setup\r\ntracer.configure(hostname=os.environ.get(\"HOST_IP\"))\r\ntracer.configure(enabled=True)\r\ntracer.set_tags(\r\n {\"env\": os.environ.get(\"ENVIRONMENT\"), \"namespace\": os.environ.get(\"NAMESPACE\")}\r\n)\r\nconfig.django[\"analytics_enabled\"] = True\r\nconfig.django[\"cache_service_name\"] = \"xxx-cache\"\r\nconfig.django[\"database_service_name_prefix\"] = \"xxx\"\r\nconfig.django[\"distributed_tracing_enabled\"] = True\r\nconfig.django[\"instrument_middleware\"] = True\r\nconfig.django[\"service_name\"] = \"xxx\"\r\n\r\n```\r\n\n", "before_files": [{"content": "# -*- encoding: utf-8 -*-\nimport logging\nimport math\nimport os\nimport threading\nimport typing\n\nimport attr\n\n\ntry:\n from ddtrace.profiling.collector import _memalloc\nexcept ImportError:\n _memalloc = None # type: ignore[assignment]\n\nfrom ddtrace.internal.utils import attr as attr_utils\nfrom ddtrace.internal.utils import formats\nfrom ddtrace.profiling import _threading\nfrom ddtrace.profiling import collector\nfrom ddtrace.profiling import event\n\n\nLOG = logging.getLogger(__name__)\n\n\[email protected]_class\nclass MemoryAllocSampleEvent(event.StackBasedEvent):\n \"\"\"A sample storing memory allocation tracked.\"\"\"\n\n size = attr.ib(default=0, type=int)\n \"\"\"Allocation size in bytes.\"\"\"\n\n capture_pct = attr.ib(default=None, type=float)\n \"\"\"The capture percentage.\"\"\"\n\n nevents = attr.ib(default=0, type=int)\n \"\"\"The total number of allocation events sampled.\"\"\"\n\n\[email protected]_class\nclass MemoryHeapSampleEvent(event.StackBasedEvent):\n \"\"\"A sample storing memory allocation tracked.\"\"\"\n\n size = attr.ib(default=0, type=int)\n \"\"\"Allocation size in bytes.\"\"\"\n\n sample_size = attr.ib(default=0, type=int)\n \"\"\"The sampling size.\"\"\"\n\n\ndef _get_default_heap_sample_size(\n default_heap_sample_size=1024 * 1024, # type: int\n):\n # type: (...) 
-> int\n heap_sample_size = os.environ.get(\"DD_PROFILING_HEAP_SAMPLE_SIZE\")\n if heap_sample_size is not None:\n return int(heap_sample_size)\n\n if not formats.asbool(os.environ.get(\"DD_PROFILING_HEAP_ENABLED\", \"1\")):\n return 0\n\n try:\n from ddtrace.vendor import psutil\n\n total_mem = psutil.swap_memory().total + psutil.virtual_memory().total\n except Exception:\n LOG.warning(\n \"Unable to get total memory available, using default value of %d KB\",\n default_heap_sample_size / 1024,\n exc_info=True,\n )\n return default_heap_sample_size\n\n # This is TRACEBACK_ARRAY_MAX_COUNT\n max_samples = 2 ** 16\n\n return max(math.ceil(total_mem / max_samples), default_heap_sample_size)\n\n\[email protected]\nclass MemoryCollector(collector.PeriodicCollector):\n \"\"\"Memory allocation collector.\"\"\"\n\n _DEFAULT_MAX_EVENTS = 16\n _DEFAULT_INTERVAL = 0.5\n\n # Arbitrary interval to empty the _memalloc event buffer\n _interval = attr.ib(default=_DEFAULT_INTERVAL, repr=False)\n\n # TODO make this dynamic based on the 1. interval and 2. the max number of events allowed in the Recorder\n _max_events = attr.ib(\n factory=attr_utils.from_env(\n \"_DD_PROFILING_MEMORY_EVENTS_BUFFER\",\n _DEFAULT_MAX_EVENTS,\n int,\n )\n )\n max_nframe = attr.ib(factory=attr_utils.from_env(\"DD_PROFILING_MAX_FRAMES\", 64, int))\n heap_sample_size = attr.ib(type=int, factory=_get_default_heap_sample_size)\n ignore_profiler = attr.ib(factory=attr_utils.from_env(\"DD_PROFILING_IGNORE_PROFILER\", False, formats.asbool))\n\n def _start_service(self):\n # type: (...) -> None\n \"\"\"Start collecting memory profiles.\"\"\"\n if _memalloc is None:\n raise collector.CollectorUnavailable\n\n _memalloc.start(self.max_nframe, self._max_events, self.heap_sample_size)\n\n super(MemoryCollector, self)._start_service()\n\n def _stop_service(self):\n # type: (...) -> None\n super(MemoryCollector, self)._stop_service()\n\n if _memalloc is not None:\n try:\n _memalloc.stop()\n except RuntimeError:\n pass\n\n def _get_thread_id_ignore_set(self):\n # type: () -> typing.Set[int]\n # This method is not perfect and prone to race condition in theory, but very little in practice.\n # Anyhow it's not a big deal \u2014 it's a best effort feature.\n return {\n thread.ident\n for thread in threading.enumerate()\n if getattr(thread, \"_ddtrace_profiling_ignore\", False) and thread.ident is not None\n }\n\n def snapshot(self):\n thread_id_ignore_set = self._get_thread_id_ignore_set()\n return (\n tuple(\n MemoryHeapSampleEvent(\n thread_id=thread_id,\n thread_name=_threading.get_thread_name(thread_id),\n thread_native_id=_threading.get_thread_native_id(thread_id),\n frames=stack,\n nframes=nframes,\n size=size,\n sample_size=self.heap_sample_size,\n )\n for (stack, nframes, thread_id), size in _memalloc.heap()\n if not self.ignore_profiler or thread_id not in thread_id_ignore_set\n ),\n )\n\n def collect(self):\n events, count, alloc_count = _memalloc.iter_events()\n capture_pct = 100 * count / alloc_count\n thread_id_ignore_set = self._get_thread_id_ignore_set()\n # TODO: The event timestamp is slightly off since it's going to be the time we copy the data from the\n # _memalloc buffer to our Recorder. 
This is fine for now, but we might want to store the nanoseconds\n # timestamp in C and then return it via iter_events.\n return (\n tuple(\n MemoryAllocSampleEvent(\n thread_id=thread_id,\n thread_name=_threading.get_thread_name(thread_id),\n thread_native_id=_threading.get_thread_native_id(thread_id),\n frames=stack,\n nframes=nframes,\n size=size,\n capture_pct=capture_pct,\n nevents=alloc_count,\n )\n for (stack, nframes, thread_id), size, domain in events\n if not self.ignore_profiler or thread_id not in thread_id_ignore_set\n ),\n )\n", "path": "ddtrace/profiling/collector/memalloc.py"}]}
| 2,595 | 222 |
gh_patches_debug_42167
|
rasdani/github-patches
|
git_diff
|
sunpy__sunpy-3970
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
sunpy.data.manager does not allow for a local path object in manager.override_file()
<!-- These comments are hidden when you submit the issue, so you do not need to remove them!
Please be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst
Please be sure to check out our code of conduct:
https://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->
<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.
If a similar issue is closed, have a quick look to see if you are satisfied by the resolution.
If not please go ahead and open an issue! -->
### Description
It would be great if `sunpy.data.manager` could take a plain local file path in `manager.override_file()`, instead of requiring a `file://` URI.
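
Something like the following is what I have in mind. This is a minimal sketch only: the URL and hash are placeholders, and the plain-path call at the end is the requested behaviour rather than something the current release is guaranteed to support.

```python
from sunpy.data import manager

@manager.require("sample_file", ["http://example.com/sample_file.txt"], "<sha256-of-file>")
def use_sample_file():
    # Returns the pathlib.Path of whichever file the manager resolved.
    return manager.get("sample_file")

# Works today: override with an explicit file:// URI
with manager.override_file("sample_file", uri="file:///tmp/local_copy.txt"):
    use_sample_file()

# Requested: accept a plain local path (str or pathlib.Path) directly
with manager.override_file("sample_file", uri="/tmp/local_copy.txt"):
    use_sample_file()
```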
</issue>
<code>
[start of sunpy/data/data_manager/manager.py]
1 from typing import Dict
2 import pathlib
3 import functools
4 from contextlib import contextmanager
5 import warnings
6
7 from sunpy.util.util import hash_file
8 from sunpy.util.exceptions import SunpyUserWarning
9
10 __all__ = ['DataManager']
11
12
13 class DataManager:
14 """
15 This class provides a remote data manager for managing remote files.
16
17 Parameters
18 ----------
19 cache: `sunpy.data.data_manager.cache.Cache`
20 Cache object to be used by `~sunpy.data.data_manager.manager.DataManager`.
21 """
22
23 def __init__(self, cache):
24 self._cache = cache
25
26 self._file_cache = {}
27
28 self._skip_hash_check = False
29 self._skip_file: Dict[str, str] = {}
30
31 def require(self, name, urls, sha_hash):
32 """
33 Decorator for informing the data manager about the requirement of
34 a file by a function.
35
36 Parameters
37 ----------
38 name: `str`
39 The name to reference the file with.
40 urls: `list` or `str`
41 A list of urls to download the file from.
42 sha_hash: `str`
43 SHA-1 hash of file.
44 """
45 if isinstance(urls, str):
46 urls = [urls]
47
48 def decorator(func):
49 @functools.wraps(func)
50 def wrapper(*args, **kwargs):
51 replace = self._skip_file.get(name, None)
52 if replace:
53 if replace['uri'].startswith('file://'):
54 file_path = replace['uri'][len('file://'):]
55 file_hash = hash_file(file_path)
56 else:
57 file_path, file_hash, _ = self._cache._download_and_hash([replace['uri']])
58 if replace['hash'] and file_hash != replace['hash']:
59 # if hash provided to replace function doesn't match the hash of the file
60 # raise error
61 raise ValueError(
62 "Hash provided to override_file does not match hash of the file.")
63 elif self._skip_hash_check:
64 file_path = self._cache.download(urls, redownload=True)
65 else:
66 details = self._cache.get_by_hash(sha_hash)
67 if not details:
68 # In case we are matching by hash and file does not exist
69 # That might mean the wrong hash is supplied to decorator
70 # We match by urls to make sure that is not the case
71 if self._cache_has_file(urls):
72 raise ValueError(" Hash provided does not match the hash in database.")
73 file_path = self._cache.download(urls)
74 if hash_file(file_path) != sha_hash:
75 # the hash of the file downloaded does not match provided hash
76 # this means the file has changed on the server.
77 # the function should be updated to use the new hash. Raise an error to notify.
78 raise RuntimeError(
79 "Remote file on the server has changed. Update hash of the function.")
80 else:
81 # This is to handle the case when the local file appears to be tampered/corrupted
82 if hash_file(details['file_path']) != details['file_hash']:
83 warnings.warn("Hashes do not match, the file will be redownloaded (could be be tampered/corrupted)",
84 SunpyUserWarning)
85 file_path = self._cache.download(urls, redownload=True)
86 # Recheck the hash again, if this fails, we will exit.
87 if hash_file(file_path) != details['file_hash']:
88 raise RuntimeError("Redownloaded file also has the incorrect hash."
89 "The remote file on the server might have changed.")
90 else:
91 file_path = details['file_path']
92
93 self._file_cache[name] = file_path
94 return func(*args, **kwargs)
95 return wrapper
96
97 return decorator
98
99 @contextmanager
100 def override_file(self, name, uri, sha_hash=None):
101 """
102 Replaces the file by the name with the file provided by the url/path.
103
104 Parameters
105 ----------
106 name: `str`
107 Name of the file provided in the `require` decorator.
108 uri: `str`
109 URI of the file which replaces original file. Scheme should be
110 one of ``http``, ``https``, ``ftp`` or ``file``.
111 sha_hash: `str`, optional
112 SHA256 hash of the file to compared to after downloading.
113 """
114 try:
115 self._skip_file[name] = {
116 'uri': uri,
117 'hash': sha_hash,
118 }
119 yield
120 finally:
121 _ = self._skip_file.pop(name, None)
122
123 @contextmanager
124 def skip_hash_check(self):
125 """
126 Disables hash checking temporarily
127
128 Examples
129 --------
130 >>> with remote_data_manager.skip_hash_check(): # doctest: +SKIP
131 ... myfunction() # doctest: +SKIP
132 """
133 try:
134 self._skip_hash_check = True
135 yield
136 finally:
137 self._skip_hash_check = False
138
139 def get(self, name):
140 """
141 Get the file by name.
142
143 Parameters
144 ----------
145 name: `str`
146 Name of the file given to the data manager, same as the one provided
147 in `~sunpy.data.data_manager.manager.DataManager.require`.
148
149 Returns
150 -------
151 `pathlib.Path`
152 Path of the file.
153
154 Raises
155 ------
156 `KeyError`
157 If ``name`` is not in the cache.
158 """
159 return pathlib.Path(self._file_cache[name])
160
161 def _cache_has_file(self, urls):
162 for url in urls:
163 if self._cache._get_by_url(url):
164 return True
165 return False
166
[end of sunpy/data/data_manager/manager.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sunpy/data/data_manager/manager.py b/sunpy/data/data_manager/manager.py
--- a/sunpy/data/data_manager/manager.py
+++ b/sunpy/data/data_manager/manager.py
@@ -1,11 +1,12 @@
-from typing import Dict
import pathlib
+import warnings
import functools
+from typing import Dict
from contextlib import contextmanager
-import warnings
+from urllib.parse import urlparse
-from sunpy.util.util import hash_file
from sunpy.util.exceptions import SunpyUserWarning
+from sunpy.util.util import hash_file
__all__ = ['DataManager']
@@ -50,8 +51,14 @@
def wrapper(*args, **kwargs):
replace = self._skip_file.get(name, None)
if replace:
- if replace['uri'].startswith('file://'):
- file_path = replace['uri'][len('file://'):]
+ uri_parse = urlparse(replace['uri'])
+ if uri_parse.scheme in ("", "file"):
+ # If a relative file uri is specified (i.e.
+ # `file://sunpy/test`) this maintains compatibility
+ # with the original behaviour where this would be
+ # interpreted as `./sunpy/test` if no scheme is
+ # specified netloc will be '' by default.
+ file_path = uri_parse.netloc + uri_parse.path
file_hash = hash_file(file_path)
else:
file_path, file_hash, _ = self._cache._download_and_hash([replace['uri']])
@@ -74,11 +81,13 @@
if hash_file(file_path) != sha_hash:
# the hash of the file downloaded does not match provided hash
# this means the file has changed on the server.
- # the function should be updated to use the new hash. Raise an error to notify.
+ # the function should be updated to use the new
+ # hash. Raise an error to notify.
raise RuntimeError(
"Remote file on the server has changed. Update hash of the function.")
else:
- # This is to handle the case when the local file appears to be tampered/corrupted
+ # This is to handle the case when the local file
+ # appears to be tampered/corrupted
if hash_file(details['file_path']) != details['file_hash']:
warnings.warn("Hashes do not match, the file will be redownloaded (could be be tampered/corrupted)",
SunpyUserWarning)
@@ -106,8 +115,10 @@
name: `str`
Name of the file provided in the `require` decorator.
uri: `str`
- URI of the file which replaces original file. Scheme should be
- one of ``http``, ``https``, ``ftp`` or ``file``.
+ URI of the file which replaces original file. Scheme should be one
+ of ``http``, ``https``, ``ftp`` or ``file``. If no scheme is given
+ the uri will be interpreted as a local path. i.e.
+ ``file:///tmp/test`` and ``/tmp/test`` are the same.
sha_hash: `str`, optional
SHA256 hash of the file to compared to after downloading.
"""
|
{"golden_diff": "diff --git a/sunpy/data/data_manager/manager.py b/sunpy/data/data_manager/manager.py\n--- a/sunpy/data/data_manager/manager.py\n+++ b/sunpy/data/data_manager/manager.py\n@@ -1,11 +1,12 @@\n-from typing import Dict\n import pathlib\n+import warnings\n import functools\n+from typing import Dict\n from contextlib import contextmanager\n-import warnings\n+from urllib.parse import urlparse\n \n-from sunpy.util.util import hash_file\n from sunpy.util.exceptions import SunpyUserWarning\n+from sunpy.util.util import hash_file\n \n __all__ = ['DataManager']\n \n@@ -50,8 +51,14 @@\n def wrapper(*args, **kwargs):\n replace = self._skip_file.get(name, None)\n if replace:\n- if replace['uri'].startswith('file://'):\n- file_path = replace['uri'][len('file://'):]\n+ uri_parse = urlparse(replace['uri'])\n+ if uri_parse.scheme in (\"\", \"file\"):\n+ # If a relative file uri is specified (i.e.\n+ # `file://sunpy/test`) this maintains compatibility\n+ # with the original behaviour where this would be\n+ # interpreted as `./sunpy/test` if no scheme is\n+ # specified netloc will be '' by default.\n+ file_path = uri_parse.netloc + uri_parse.path\n file_hash = hash_file(file_path)\n else:\n file_path, file_hash, _ = self._cache._download_and_hash([replace['uri']])\n@@ -74,11 +81,13 @@\n if hash_file(file_path) != sha_hash:\n # the hash of the file downloaded does not match provided hash\n # this means the file has changed on the server.\n- # the function should be updated to use the new hash. Raise an error to notify.\n+ # the function should be updated to use the new\n+ # hash. Raise an error to notify.\n raise RuntimeError(\n \"Remote file on the server has changed. Update hash of the function.\")\n else:\n- # This is to handle the case when the local file appears to be tampered/corrupted\n+ # This is to handle the case when the local file\n+ # appears to be tampered/corrupted\n if hash_file(details['file_path']) != details['file_hash']:\n warnings.warn(\"Hashes do not match, the file will be redownloaded (could be be tampered/corrupted)\",\n SunpyUserWarning)\n@@ -106,8 +115,10 @@\n name: `str`\n Name of the file provided in the `require` decorator.\n uri: `str`\n- URI of the file which replaces original file. Scheme should be\n- one of ``http``, ``https``, ``ftp`` or ``file``.\n+ URI of the file which replaces original file. Scheme should be one\n+ of ``http``, ``https``, ``ftp`` or ``file``. If no scheme is given\n+ the uri will be interpreted as a local path. i.e.\n+ ``file:///tmp/test`` and ``/tmp/test`` are the same.\n sha_hash: `str`, optional\n SHA256 hash of the file to compared to after downloading.\n \"\"\"\n", "issue": "sunpy.data.manager does not allow for local path object in manager. override_file()\n<!-- This comments are hidden when you submit the issue so you do not need to remove them!\r\nPlease be sure to check out our contributing guidelines: https://github.com/sunpy/sunpy/blob/master/CONTRIBUTING.rst\r\nPlease be sure to check out our code of conduct:\r\nhttps://github.com/sunpy/sunpy/blob/master/CODE_OF_CONDUCT.rst -->\r\n\r\n<!-- Please have a search on our GitHub repository to see if a similar issue has already been posted.\r\nIf a similar issue is closed, have a quick look to see if you are satisfied by the resolution.\r\nIf not please go ahead and open an issue! 
-->\r\n\r\n### Description\r\nIt would be great if `sunpy.data.manager` could take a local file path\r\n\r\n\r\n\n", "before_files": [{"content": "from typing import Dict\nimport pathlib\nimport functools\nfrom contextlib import contextmanager\nimport warnings\n\nfrom sunpy.util.util import hash_file\nfrom sunpy.util.exceptions import SunpyUserWarning\n\n__all__ = ['DataManager']\n\n\nclass DataManager:\n \"\"\"\n This class provides a remote data manager for managing remote files.\n\n Parameters\n ----------\n cache: `sunpy.data.data_manager.cache.Cache`\n Cache object to be used by `~sunpy.data.data_manager.manager.DataManager`.\n \"\"\"\n\n def __init__(self, cache):\n self._cache = cache\n\n self._file_cache = {}\n\n self._skip_hash_check = False\n self._skip_file: Dict[str, str] = {}\n\n def require(self, name, urls, sha_hash):\n \"\"\"\n Decorator for informing the data manager about the requirement of\n a file by a function.\n\n Parameters\n ----------\n name: `str`\n The name to reference the file with.\n urls: `list` or `str`\n A list of urls to download the file from.\n sha_hash: `str`\n SHA-1 hash of file.\n \"\"\"\n if isinstance(urls, str):\n urls = [urls]\n\n def decorator(func):\n @functools.wraps(func)\n def wrapper(*args, **kwargs):\n replace = self._skip_file.get(name, None)\n if replace:\n if replace['uri'].startswith('file://'):\n file_path = replace['uri'][len('file://'):]\n file_hash = hash_file(file_path)\n else:\n file_path, file_hash, _ = self._cache._download_and_hash([replace['uri']])\n if replace['hash'] and file_hash != replace['hash']:\n # if hash provided to replace function doesn't match the hash of the file\n # raise error\n raise ValueError(\n \"Hash provided to override_file does not match hash of the file.\")\n elif self._skip_hash_check:\n file_path = self._cache.download(urls, redownload=True)\n else:\n details = self._cache.get_by_hash(sha_hash)\n if not details:\n # In case we are matching by hash and file does not exist\n # That might mean the wrong hash is supplied to decorator\n # We match by urls to make sure that is not the case\n if self._cache_has_file(urls):\n raise ValueError(\" Hash provided does not match the hash in database.\")\n file_path = self._cache.download(urls)\n if hash_file(file_path) != sha_hash:\n # the hash of the file downloaded does not match provided hash\n # this means the file has changed on the server.\n # the function should be updated to use the new hash. Raise an error to notify.\n raise RuntimeError(\n \"Remote file on the server has changed. 
Update hash of the function.\")\n else:\n # This is to handle the case when the local file appears to be tampered/corrupted\n if hash_file(details['file_path']) != details['file_hash']:\n warnings.warn(\"Hashes do not match, the file will be redownloaded (could be be tampered/corrupted)\",\n SunpyUserWarning)\n file_path = self._cache.download(urls, redownload=True)\n # Recheck the hash again, if this fails, we will exit.\n if hash_file(file_path) != details['file_hash']:\n raise RuntimeError(\"Redownloaded file also has the incorrect hash.\"\n \"The remote file on the server might have changed.\")\n else:\n file_path = details['file_path']\n\n self._file_cache[name] = file_path\n return func(*args, **kwargs)\n return wrapper\n\n return decorator\n\n @contextmanager\n def override_file(self, name, uri, sha_hash=None):\n \"\"\"\n Replaces the file by the name with the file provided by the url/path.\n\n Parameters\n ----------\n name: `str`\n Name of the file provided in the `require` decorator.\n uri: `str`\n URI of the file which replaces original file. Scheme should be\n one of ``http``, ``https``, ``ftp`` or ``file``.\n sha_hash: `str`, optional\n SHA256 hash of the file to compared to after downloading.\n \"\"\"\n try:\n self._skip_file[name] = {\n 'uri': uri,\n 'hash': sha_hash,\n }\n yield\n finally:\n _ = self._skip_file.pop(name, None)\n\n @contextmanager\n def skip_hash_check(self):\n \"\"\"\n Disables hash checking temporarily\n\n Examples\n --------\n >>> with remote_data_manager.skip_hash_check(): # doctest: +SKIP\n ... myfunction() # doctest: +SKIP\n \"\"\"\n try:\n self._skip_hash_check = True\n yield\n finally:\n self._skip_hash_check = False\n\n def get(self, name):\n \"\"\"\n Get the file by name.\n\n Parameters\n ----------\n name: `str`\n Name of the file given to the data manager, same as the one provided\n in `~sunpy.data.data_manager.manager.DataManager.require`.\n\n Returns\n -------\n `pathlib.Path`\n Path of the file.\n\n Raises\n ------\n `KeyError`\n If ``name`` is not in the cache.\n \"\"\"\n return pathlib.Path(self._file_cache[name])\n\n def _cache_has_file(self, urls):\n for url in urls:\n if self._cache._get_by_url(url):\n return True\n return False\n", "path": "sunpy/data/data_manager/manager.py"}]}
| 2,307 | 724 |
gh_patches_debug_57933
|
rasdani/github-patches
|
git_diff
|
scrapy__scrapy-3668
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
about the signal retry_complete
I didn't find the signal in the signal list; how can I use it?
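
For reference, this is roughly how I connect to the signals that are defined. It is a minimal sketch using a standard signal (`spider_closed`), since `retry_complete` does not appear to be defined in `scrapy.signals` even though the retry middleware docstring mentions it:

```python
from scrapy import signals

class RetryReportExtension:
    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        # 'retry_complete' is only mentioned in the retry middleware docstring;
        # it is not in scrapy.signals, so a documented signal is used here.
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider):
        # 'retry/count' is the stat incremented by RetryMiddleware._retry().
        spider.logger.info("Retries recorded: %s",
                           spider.crawler.stats.get_value("retry/count"))
```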
</issue>
<code>
[start of scrapy/downloadermiddlewares/retry.py]
1 """
2 An extension to retry failed requests that are potentially caused by temporary
3 problems such as a connection timeout or HTTP 500 error.
4
5 You can change the behaviour of this middleware by modifing the scraping settings:
6 RETRY_TIMES - how many times to retry a failed page
7 RETRY_HTTP_CODES - which HTTP response codes to retry
8
9 Failed pages are collected on the scraping process and rescheduled at the end,
10 once the spider has finished crawling all regular (non failed) pages. Once
11 there is no more failed pages to retry this middleware sends a signal
12 (retry_complete), so other extensions could connect to that signal.
13 """
14 import logging
15
16 from twisted.internet import defer
17 from twisted.internet.error import TimeoutError, DNSLookupError, \
18 ConnectionRefusedError, ConnectionDone, ConnectError, \
19 ConnectionLost, TCPTimedOutError
20 from twisted.web.client import ResponseFailed
21
22 from scrapy.exceptions import NotConfigured
23 from scrapy.utils.response import response_status_message
24 from scrapy.core.downloader.handlers.http11 import TunnelError
25 from scrapy.utils.python import global_object_name
26
27 logger = logging.getLogger(__name__)
28
29
30 class RetryMiddleware(object):
31
32 # IOError is raised by the HttpCompression middleware when trying to
33 # decompress an empty response
34 EXCEPTIONS_TO_RETRY = (defer.TimeoutError, TimeoutError, DNSLookupError,
35 ConnectionRefusedError, ConnectionDone, ConnectError,
36 ConnectionLost, TCPTimedOutError, ResponseFailed,
37 IOError, TunnelError)
38
39 def __init__(self, settings):
40 if not settings.getbool('RETRY_ENABLED'):
41 raise NotConfigured
42 self.max_retry_times = settings.getint('RETRY_TIMES')
43 self.retry_http_codes = set(int(x) for x in settings.getlist('RETRY_HTTP_CODES'))
44 self.priority_adjust = settings.getint('RETRY_PRIORITY_ADJUST')
45
46 @classmethod
47 def from_crawler(cls, crawler):
48 return cls(crawler.settings)
49
50 def process_response(self, request, response, spider):
51 if request.meta.get('dont_retry', False):
52 return response
53 if response.status in self.retry_http_codes:
54 reason = response_status_message(response.status)
55 return self._retry(request, reason, spider) or response
56 return response
57
58 def process_exception(self, request, exception, spider):
59 if isinstance(exception, self.EXCEPTIONS_TO_RETRY) \
60 and not request.meta.get('dont_retry', False):
61 return self._retry(request, exception, spider)
62
63 def _retry(self, request, reason, spider):
64 retries = request.meta.get('retry_times', 0) + 1
65
66 retry_times = self.max_retry_times
67
68 if 'max_retry_times' in request.meta:
69 retry_times = request.meta['max_retry_times']
70
71 stats = spider.crawler.stats
72 if retries <= retry_times:
73 logger.debug("Retrying %(request)s (failed %(retries)d times): %(reason)s",
74 {'request': request, 'retries': retries, 'reason': reason},
75 extra={'spider': spider})
76 retryreq = request.copy()
77 retryreq.meta['retry_times'] = retries
78 retryreq.dont_filter = True
79 retryreq.priority = request.priority + self.priority_adjust
80
81 if isinstance(reason, Exception):
82 reason = global_object_name(reason.__class__)
83
84 stats.inc_value('retry/count')
85 stats.inc_value('retry/reason_count/%s' % reason)
86 return retryreq
87 else:
88 stats.inc_value('retry/max_reached')
89 logger.debug("Gave up retrying %(request)s (failed %(retries)d times): %(reason)s",
90 {'request': request, 'retries': retries, 'reason': reason},
91 extra={'spider': spider})
92
[end of scrapy/downloadermiddlewares/retry.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/scrapy/downloadermiddlewares/retry.py b/scrapy/downloadermiddlewares/retry.py
--- a/scrapy/downloadermiddlewares/retry.py
+++ b/scrapy/downloadermiddlewares/retry.py
@@ -7,9 +7,7 @@
RETRY_HTTP_CODES - which HTTP response codes to retry
Failed pages are collected on the scraping process and rescheduled at the end,
-once the spider has finished crawling all regular (non failed) pages. Once
-there is no more failed pages to retry this middleware sends a signal
-(retry_complete), so other extensions could connect to that signal.
+once the spider has finished crawling all regular (non failed) pages.
"""
import logging
|
{"golden_diff": "diff --git a/scrapy/downloadermiddlewares/retry.py b/scrapy/downloadermiddlewares/retry.py\n--- a/scrapy/downloadermiddlewares/retry.py\n+++ b/scrapy/downloadermiddlewares/retry.py\n@@ -7,9 +7,7 @@\n RETRY_HTTP_CODES - which HTTP response codes to retry\n \n Failed pages are collected on the scraping process and rescheduled at the end,\n-once the spider has finished crawling all regular (non failed) pages. Once\n-there is no more failed pages to retry this middleware sends a signal\n-(retry_complete), so other extensions could connect to that signal.\n+once the spider has finished crawling all regular (non failed) pages.\n \"\"\"\n import logging\n", "issue": "about the signal retry_complete\nI didn't find the singnal in the singnal list,how can I use it\n", "before_files": [{"content": "\"\"\"\nAn extension to retry failed requests that are potentially caused by temporary\nproblems such as a connection timeout or HTTP 500 error.\n\nYou can change the behaviour of this middleware by modifing the scraping settings:\nRETRY_TIMES - how many times to retry a failed page\nRETRY_HTTP_CODES - which HTTP response codes to retry\n\nFailed pages are collected on the scraping process and rescheduled at the end,\nonce the spider has finished crawling all regular (non failed) pages. Once\nthere is no more failed pages to retry this middleware sends a signal\n(retry_complete), so other extensions could connect to that signal.\n\"\"\"\nimport logging\n\nfrom twisted.internet import defer\nfrom twisted.internet.error import TimeoutError, DNSLookupError, \\\n ConnectionRefusedError, ConnectionDone, ConnectError, \\\n ConnectionLost, TCPTimedOutError\nfrom twisted.web.client import ResponseFailed\n\nfrom scrapy.exceptions import NotConfigured\nfrom scrapy.utils.response import response_status_message\nfrom scrapy.core.downloader.handlers.http11 import TunnelError\nfrom scrapy.utils.python import global_object_name\n\nlogger = logging.getLogger(__name__)\n\n\nclass RetryMiddleware(object):\n\n # IOError is raised by the HttpCompression middleware when trying to\n # decompress an empty response\n EXCEPTIONS_TO_RETRY = (defer.TimeoutError, TimeoutError, DNSLookupError,\n ConnectionRefusedError, ConnectionDone, ConnectError,\n ConnectionLost, TCPTimedOutError, ResponseFailed,\n IOError, TunnelError)\n\n def __init__(self, settings):\n if not settings.getbool('RETRY_ENABLED'):\n raise NotConfigured\n self.max_retry_times = settings.getint('RETRY_TIMES')\n self.retry_http_codes = set(int(x) for x in settings.getlist('RETRY_HTTP_CODES'))\n self.priority_adjust = settings.getint('RETRY_PRIORITY_ADJUST')\n\n @classmethod\n def from_crawler(cls, crawler):\n return cls(crawler.settings)\n\n def process_response(self, request, response, spider):\n if request.meta.get('dont_retry', False):\n return response\n if response.status in self.retry_http_codes:\n reason = response_status_message(response.status)\n return self._retry(request, reason, spider) or response\n return response\n\n def process_exception(self, request, exception, spider):\n if isinstance(exception, self.EXCEPTIONS_TO_RETRY) \\\n and not request.meta.get('dont_retry', False):\n return self._retry(request, exception, spider)\n\n def _retry(self, request, reason, spider):\n retries = request.meta.get('retry_times', 0) + 1\n\n retry_times = self.max_retry_times\n\n if 'max_retry_times' in request.meta:\n retry_times = request.meta['max_retry_times']\n\n stats = spider.crawler.stats\n if retries <= retry_times:\n logger.debug(\"Retrying 
%(request)s (failed %(retries)d times): %(reason)s\",\n {'request': request, 'retries': retries, 'reason': reason},\n extra={'spider': spider})\n retryreq = request.copy()\n retryreq.meta['retry_times'] = retries\n retryreq.dont_filter = True\n retryreq.priority = request.priority + self.priority_adjust\n\n if isinstance(reason, Exception):\n reason = global_object_name(reason.__class__)\n\n stats.inc_value('retry/count')\n stats.inc_value('retry/reason_count/%s' % reason)\n return retryreq\n else:\n stats.inc_value('retry/max_reached')\n logger.debug(\"Gave up retrying %(request)s (failed %(retries)d times): %(reason)s\",\n {'request': request, 'retries': retries, 'reason': reason},\n extra={'spider': spider})\n", "path": "scrapy/downloadermiddlewares/retry.py"}]}
| 1,536 | 145 |
gh_patches_debug_38420
|
rasdani/github-patches
|
git_diff
|
mlflow__mlflow-9290
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[DOC-FIX] No documentation body for mlflow.search_model_versions
### Willingness to contribute
No. I cannot contribute a documentation fix at this time.
### URL(s) with the issue
(https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.search_model_versions)
### Description of proposal (what needs changing)
There is no documentation body for mlflow.search_model_versions(), unlike for mlflow.search_registered_models().
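
For reference, a minimal usage sketch of the undocumented function, based on the signature in the code listing below (the tracking URI and model name are placeholders):

```python
import mlflow

mlflow.set_tracking_uri("sqlite:///mlruns.db")

# Same fluent-style call pattern as mlflow.search_registered_models(),
# but it returns ModelVersion objects rather than RegisteredModel objects.
results = mlflow.search_model_versions(filter_string="name = 'MyModel'")
for mv in results:
    print(mv.name, mv.version, mv.run_id)
```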
</issue>
<code>
[start of mlflow/tracking/_model_registry/fluent.py]
1 from mlflow.tracking.client import MlflowClient
2 from mlflow.exceptions import MlflowException
3 from mlflow.entities.model_registry import ModelVersion
4 from mlflow.entities.model_registry import RegisteredModel
5 from mlflow.protos.databricks_pb2 import RESOURCE_ALREADY_EXISTS, ALREADY_EXISTS, ErrorCode
6 from mlflow.store.artifact.runs_artifact_repo import RunsArtifactRepository
7 from mlflow.utils.logging_utils import eprint
8 from mlflow.utils import get_results_from_paginated_fn
9 from mlflow.tracking._model_registry import DEFAULT_AWAIT_MAX_SLEEP_SECONDS
10 from mlflow.store.model_registry import (
11 SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT,
12 SEARCH_MODEL_VERSION_MAX_RESULTS_DEFAULT,
13 )
14 from typing import Any, Dict, Optional, List
15
16
17 def register_model(
18 model_uri,
19 name,
20 await_registration_for=DEFAULT_AWAIT_MAX_SLEEP_SECONDS,
21 *,
22 tags: Optional[Dict[str, Any]] = None,
23 ) -> ModelVersion:
24 """
25 Create a new model version in model registry for the model files specified by ``model_uri``.
26 Note that this method assumes the model registry backend URI is the same as that of the
27 tracking backend.
28
29 :param model_uri: URI referring to the MLmodel directory. Use a ``runs:/`` URI if you want to
30 record the run ID with the model in model registry. ``models:/`` URIs are
31 currently not supported.
32 :param name: Name of the registered model under which to create a new model version. If a
33 registered model with the given name does not exist, it will be created
34 automatically.
35 :param await_registration_for: Number of seconds to wait for the model version to finish
36 being created and is in ``READY`` status. By default, the function
37 waits for five minutes. Specify 0 or None to skip waiting.
38 :param tags: A dictionary of key-value pairs that are converted into
39 :py:class:`mlflow.entities.model_registry.ModelVersionTag` objects.
40 :return: Single :py:class:`mlflow.entities.model_registry.ModelVersion` object created by
41 backend.
42
43 .. test-code-block:: python
44 :caption: Example
45
46 import mlflow.sklearn
47 from mlflow.models import infer_signature
48 from sklearn.datasets import make_regression
49 from sklearn.ensemble import RandomForestRegressor
50
51 mlflow.set_tracking_uri("sqlite:////tmp/mlruns.db")
52 params = {"n_estimators": 3, "random_state": 42}
53 X, y = make_regression(n_features=4, n_informative=2, random_state=0, shuffle=False)
54
55 # Log MLflow entities
56 with mlflow.start_run() as run:
57 rfr = RandomForestRegressor(**params).fit(X, y)
58 signature = infer_signature(X, rfr.predict(X))
59 mlflow.log_params(params)
60 mlflow.sklearn.log_model(rfr, artifact_path="sklearn-model", signature=signature)
61
62 model_uri = "runs:/{}/sklearn-model".format(run.info.run_id)
63 mv = mlflow.register_model(model_uri, "RandomForestRegressionModel")
64 print("Name: {}".format(mv.name))
65 print("Version: {}".format(mv.version))
66
67 .. code-block:: text
68 :caption: Output
69
70 Name: RandomForestRegressionModel
71 Version: 1
72 """
73 return _register_model(
74 model_uri=model_uri, name=name, await_registration_for=await_registration_for, tags=tags
75 )
76
77
78 def _register_model(
79 model_uri,
80 name,
81 await_registration_for=DEFAULT_AWAIT_MAX_SLEEP_SECONDS,
82 *,
83 tags: Optional[Dict[str, Any]] = None,
84 local_model_path=None,
85 ) -> ModelVersion:
86 client = MlflowClient()
87 try:
88 create_model_response = client.create_registered_model(name)
89 eprint(f"Successfully registered model '{create_model_response.name}'.")
90 except MlflowException as e:
91 if e.error_code in (
92 ErrorCode.Name(RESOURCE_ALREADY_EXISTS),
93 ErrorCode.Name(ALREADY_EXISTS),
94 ):
95 eprint(
96 "Registered model '%s' already exists. Creating a new version of this model..."
97 % name
98 )
99 else:
100 raise e
101
102 run_id = None
103 source = model_uri
104 if RunsArtifactRepository.is_runs_uri(model_uri):
105 source = RunsArtifactRepository.get_underlying_uri(model_uri)
106 (run_id, _) = RunsArtifactRepository.parse_runs_uri(model_uri)
107
108 create_version_response = client._create_model_version(
109 name=name,
110 source=source,
111 run_id=run_id,
112 tags=tags,
113 await_creation_for=await_registration_for,
114 local_model_path=local_model_path,
115 )
116 eprint(
117 f"Created version '{create_version_response.version}' of model "
118 f"'{create_version_response.name}'."
119 )
120 return create_version_response
121
122
123 def search_registered_models(
124 max_results: Optional[int] = None,
125 filter_string: Optional[str] = None,
126 order_by: Optional[List[str]] = None,
127 ) -> List[RegisteredModel]:
128 """
129 Search for registered models that satisfy the filter criteria.
130
131 :param filter_string: Filter query string
132 (e.g., ``"name = 'a_model_name' and tag.key = 'value1'"``),
133 defaults to searching for all registered models. The following identifiers, comparators,
134 and logical operators are supported.
135
136 Identifiers
137 - ``name``: registered model name.
138 - ``tags.<tag_key>``: registered model tag. If ``tag_key`` contains spaces, it must be
139 wrapped with backticks (e.g., ``"tags.`extra key`"``).
140
141 Comparators
142 - ``=``: Equal to.
143 - ``!=``: Not equal to.
144 - ``LIKE``: Case-sensitive pattern match.
145 - ``ILIKE``: Case-insensitive pattern match.
146
147 Logical operators
148 - ``AND``: Combines two sub-queries and returns True if both of them are True.
149
150 :param max_results: If passed, specifies the maximum number of models desired. If not
151 passed, all models will be returned.
152 :param order_by: List of column names with ASC|DESC annotation, to be used for ordering
153 matching search results.
154 :return: A list of :py:class:`mlflow.entities.model_registry.RegisteredModel` objects
155 that satisfy the search expressions.
156
157 .. test-code-block:: python
158 :caption: Example
159
160 import mlflow
161 from sklearn.linear_model import LogisticRegression
162
163 with mlflow.start_run():
164 mlflow.sklearn.log_model(
165 LogisticRegression(),
166 "Cordoba",
167 registered_model_name="CordobaWeatherForecastModel",
168 )
169 mlflow.sklearn.log_model(
170 LogisticRegression(),
171 "Boston",
172 registered_model_name="BostonWeatherForecastModel",
173 )
174
175 # Get search results filtered by the registered model name
176 filter_string = "name = 'CordobaWeatherForecastModel'"
177 results = mlflow.search_registered_models(filter_string=filter_string)
178 print("-" * 80)
179 for res in results:
180 for mv in res.latest_versions:
181 print("name={}; run_id={}; version={}".format(mv.name, mv.run_id, mv.version))
182
183 # Get search results filtered by the registered model name that matches
184 # prefix pattern
185 filter_string = "name LIKE 'Boston%'"
186 results = mlflow.search_registered_models(filter_string=filter_string)
187 print("-" * 80)
188 for res in results:
189 for mv in res.latest_versions:
190 print("name={}; run_id={}; version={}".format(mv.name, mv.run_id, mv.version))
191
192 # Get all registered models and order them by ascending order of the names
193 results = mlflow.search_registered_models(order_by=["name ASC"])
194 print("-" * 80)
195 for res in results:
196 for mv in res.latest_versions:
197 print("name={}; run_id={}; version={}".format(mv.name, mv.run_id, mv.version))
198
199 .. code-block:: text
200 :caption: Output
201
202 --------------------------------------------------------------------------------
203 name=CordobaWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1
204 --------------------------------------------------------------------------------
205 name=BostonWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1
206 --------------------------------------------------------------------------------
207 name=BostonWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1
208 name=CordobaWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1
209
210 """
211
212 def pagination_wrapper_func(number_to_get, next_page_token):
213 return MlflowClient().search_registered_models(
214 max_results=number_to_get,
215 filter_string=filter_string,
216 order_by=order_by,
217 page_token=next_page_token,
218 )
219
220 return get_results_from_paginated_fn(
221 pagination_wrapper_func,
222 SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT,
223 max_results,
224 )
225
226
227 def search_model_versions(
228 max_results: Optional[int] = None,
229 filter_string: Optional[str] = None,
230 order_by: Optional[List[str]] = None,
231 ) -> List[ModelVersion]:
232 def pagination_wrapper_func(number_to_get, next_page_token):
233 return MlflowClient().search_model_versions(
234 max_results=number_to_get,
235 filter_string=filter_string,
236 order_by=order_by,
237 page_token=next_page_token,
238 )
239
240 return get_results_from_paginated_fn(
241 paginated_fn=pagination_wrapper_func,
242 max_results_per_page=SEARCH_MODEL_VERSION_MAX_RESULTS_DEFAULT,
243 max_results=max_results,
244 )
245
[end of mlflow/tracking/_model_registry/fluent.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mlflow/tracking/_model_registry/fluent.py b/mlflow/tracking/_model_registry/fluent.py
--- a/mlflow/tracking/_model_registry/fluent.py
+++ b/mlflow/tracking/_model_registry/fluent.py
@@ -229,6 +229,77 @@
filter_string: Optional[str] = None,
order_by: Optional[List[str]] = None,
) -> List[ModelVersion]:
+ """
+ Search for model versions that satisfy the filter criteria.
+
+ :param filter_string: Filter query string
+ (e.g., ``"name = 'a_model_name' and tag.key = 'value1'"``),
+ defaults to searching for all model versions. The following identifiers, comparators,
+ and logical operators are supported.
+
+ Identifiers
+ - ``name``: model name.
+ - ``source_path``: model version source path.
+ - ``run_id``: The id of the mlflow run that generates the model version.
+ - ``tags.<tag_key>``: model version tag. If ``tag_key`` contains spaces, it must be
+ wrapped with backticks (e.g., ``"tags.`extra key`"``).
+
+ Comparators
+ - ``=``: Equal to.
+ - ``!=``: Not equal to.
+ - ``LIKE``: Case-sensitive pattern match.
+ - ``ILIKE``: Case-insensitive pattern match.
+ - ``IN``: In a value list. Only ``run_id`` identifier supports ``IN`` comparator.
+
+ Logical operators
+ - ``AND``: Combines two sub-queries and returns True if both of them are True.
+
+ :param max_results: If passed, specifies the maximum number of models desired. If not
+ passed, all models will be returned.
+ :param order_by: List of column names with ASC|DESC annotation, to be used for ordering
+ matching search results.
+ :return: A list of :py:class:`mlflow.entities.model_registry.ModelVersion` objects
+ that satisfy the search expressions.
+
+ .. test-code-block:: python
+ :caption: Example
+
+ import mlflow
+ from sklearn.linear_model import LogisticRegression
+
+ for _ in range(2):
+ with mlflow.start_run():
+ mlflow.sklearn.log_model(
+ LogisticRegression(),
+ "Cordoba",
+ registered_model_name="CordobaWeatherForecastModel",
+ )
+
+ # Get all versions of the model filtered by name
+ filter_string = "name = 'CordobaWeatherForecastModel'"
+ results = mlflow.search_model_versions(filter_string=filter_string)
+ print("-" * 80)
+ for res in results:
+ print("name={}; run_id={}; version={}".format(res.name, res.run_id, res.version))
+
+ # Get the version of the model filtered by run_id
+ filter_string = "run_id = 'ae9a606a12834c04a8ef1006d0cff779'"
+ results = mlflow.search_model_versions(filter_string=filter_string)
+ print("-" * 80)
+ for res in results:
+ print("name={}; run_id={}; version={}".format(res.name, res.run_id, res.version))
+
+ .. code-block:: text
+ :caption: Output
+
+ --------------------------------------------------------------------------------
+ name=CordobaWeatherForecastModel; run_id=ae9a606a12834c04a8ef1006d0cff779; version=2
+ name=CordobaWeatherForecastModel; run_id=d8f028b5fedf4faf8e458f7693dfa7ce; version=1
+ --------------------------------------------------------------------------------
+ name=CordobaWeatherForecastModel; run_id=ae9a606a12834c04a8ef1006d0cff779; version=2
+
+ """
+
def pagination_wrapper_func(number_to_get, next_page_token):
return MlflowClient().search_model_versions(
max_results=number_to_get,
|
{"golden_diff": "diff --git a/mlflow/tracking/_model_registry/fluent.py b/mlflow/tracking/_model_registry/fluent.py\n--- a/mlflow/tracking/_model_registry/fluent.py\n+++ b/mlflow/tracking/_model_registry/fluent.py\n@@ -229,6 +229,77 @@\n filter_string: Optional[str] = None,\n order_by: Optional[List[str]] = None,\n ) -> List[ModelVersion]:\n+ \"\"\"\n+ Search for model versions that satisfy the filter criteria.\n+\n+ :param filter_string: Filter query string\n+ (e.g., ``\"name = 'a_model_name' and tag.key = 'value1'\"``),\n+ defaults to searching for all model versions. The following identifiers, comparators,\n+ and logical operators are supported.\n+\n+ Identifiers\n+ - ``name``: model name.\n+ - ``source_path``: model version source path.\n+ - ``run_id``: The id of the mlflow run that generates the model version.\n+ - ``tags.<tag_key>``: model version tag. If ``tag_key`` contains spaces, it must be\n+ wrapped with backticks (e.g., ``\"tags.`extra key`\"``).\n+\n+ Comparators\n+ - ``=``: Equal to.\n+ - ``!=``: Not equal to.\n+ - ``LIKE``: Case-sensitive pattern match.\n+ - ``ILIKE``: Case-insensitive pattern match.\n+ - ``IN``: In a value list. Only ``run_id`` identifier supports ``IN`` comparator.\n+\n+ Logical operators\n+ - ``AND``: Combines two sub-queries and returns True if both of them are True.\n+\n+ :param max_results: If passed, specifies the maximum number of models desired. If not\n+ passed, all models will be returned.\n+ :param order_by: List of column names with ASC|DESC annotation, to be used for ordering\n+ matching search results.\n+ :return: A list of :py:class:`mlflow.entities.model_registry.ModelVersion` objects\n+ that satisfy the search expressions.\n+\n+ .. test-code-block:: python\n+ :caption: Example\n+\n+ import mlflow\n+ from sklearn.linear_model import LogisticRegression\n+\n+ for _ in range(2):\n+ with mlflow.start_run():\n+ mlflow.sklearn.log_model(\n+ LogisticRegression(),\n+ \"Cordoba\",\n+ registered_model_name=\"CordobaWeatherForecastModel\",\n+ )\n+\n+ # Get all versions of the model filtered by name\n+ filter_string = \"name = 'CordobaWeatherForecastModel'\"\n+ results = mlflow.search_model_versions(filter_string=filter_string)\n+ print(\"-\" * 80)\n+ for res in results:\n+ print(\"name={}; run_id={}; version={}\".format(res.name, res.run_id, res.version))\n+\n+ # Get the version of the model filtered by run_id\n+ filter_string = \"run_id = 'ae9a606a12834c04a8ef1006d0cff779'\"\n+ results = mlflow.search_model_versions(filter_string=filter_string)\n+ print(\"-\" * 80)\n+ for res in results:\n+ print(\"name={}; run_id={}; version={}\".format(res.name, res.run_id, res.version))\n+\n+ .. code-block:: text\n+ :caption: Output\n+\n+ --------------------------------------------------------------------------------\n+ name=CordobaWeatherForecastModel; run_id=ae9a606a12834c04a8ef1006d0cff779; version=2\n+ name=CordobaWeatherForecastModel; run_id=d8f028b5fedf4faf8e458f7693dfa7ce; version=1\n+ --------------------------------------------------------------------------------\n+ name=CordobaWeatherForecastModel; run_id=ae9a606a12834c04a8ef1006d0cff779; version=2\n+\n+ \"\"\"\n+\n def pagination_wrapper_func(number_to_get, next_page_token):\n return MlflowClient().search_model_versions(\n max_results=number_to_get,\n", "issue": "[DOC-FIX] No documentation body for mlflow.search_model_versions\n### Willingness to contribute\n\nNo. 
I cannot contribute a documentation fix at this time.\n\n### URL(s) with the issue\n\n(https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.search_model_versions)\n\n### Description of proposal (what needs changing)\n\nThere is no documentation body for mlflow. search_model_versions() unlike for mlflow.mlflow.search_registered_models().\n", "before_files": [{"content": "from mlflow.tracking.client import MlflowClient\nfrom mlflow.exceptions import MlflowException\nfrom mlflow.entities.model_registry import ModelVersion\nfrom mlflow.entities.model_registry import RegisteredModel\nfrom mlflow.protos.databricks_pb2 import RESOURCE_ALREADY_EXISTS, ALREADY_EXISTS, ErrorCode\nfrom mlflow.store.artifact.runs_artifact_repo import RunsArtifactRepository\nfrom mlflow.utils.logging_utils import eprint\nfrom mlflow.utils import get_results_from_paginated_fn\nfrom mlflow.tracking._model_registry import DEFAULT_AWAIT_MAX_SLEEP_SECONDS\nfrom mlflow.store.model_registry import (\n SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT,\n SEARCH_MODEL_VERSION_MAX_RESULTS_DEFAULT,\n)\nfrom typing import Any, Dict, Optional, List\n\n\ndef register_model(\n model_uri,\n name,\n await_registration_for=DEFAULT_AWAIT_MAX_SLEEP_SECONDS,\n *,\n tags: Optional[Dict[str, Any]] = None,\n) -> ModelVersion:\n \"\"\"\n Create a new model version in model registry for the model files specified by ``model_uri``.\n Note that this method assumes the model registry backend URI is the same as that of the\n tracking backend.\n\n :param model_uri: URI referring to the MLmodel directory. Use a ``runs:/`` URI if you want to\n record the run ID with the model in model registry. ``models:/`` URIs are\n currently not supported.\n :param name: Name of the registered model under which to create a new model version. If a\n registered model with the given name does not exist, it will be created\n automatically.\n :param await_registration_for: Number of seconds to wait for the model version to finish\n being created and is in ``READY`` status. By default, the function\n waits for five minutes. Specify 0 or None to skip waiting.\n :param tags: A dictionary of key-value pairs that are converted into\n :py:class:`mlflow.entities.model_registry.ModelVersionTag` objects.\n :return: Single :py:class:`mlflow.entities.model_registry.ModelVersion` object created by\n backend.\n\n .. test-code-block:: python\n :caption: Example\n\n import mlflow.sklearn\n from mlflow.models import infer_signature\n from sklearn.datasets import make_regression\n from sklearn.ensemble import RandomForestRegressor\n\n mlflow.set_tracking_uri(\"sqlite:////tmp/mlruns.db\")\n params = {\"n_estimators\": 3, \"random_state\": 42}\n X, y = make_regression(n_features=4, n_informative=2, random_state=0, shuffle=False)\n\n # Log MLflow entities\n with mlflow.start_run() as run:\n rfr = RandomForestRegressor(**params).fit(X, y)\n signature = infer_signature(X, rfr.predict(X))\n mlflow.log_params(params)\n mlflow.sklearn.log_model(rfr, artifact_path=\"sklearn-model\", signature=signature)\n\n model_uri = \"runs:/{}/sklearn-model\".format(run.info.run_id)\n mv = mlflow.register_model(model_uri, \"RandomForestRegressionModel\")\n print(\"Name: {}\".format(mv.name))\n print(\"Version: {}\".format(mv.version))\n\n .. 
code-block:: text\n :caption: Output\n\n Name: RandomForestRegressionModel\n Version: 1\n \"\"\"\n return _register_model(\n model_uri=model_uri, name=name, await_registration_for=await_registration_for, tags=tags\n )\n\n\ndef _register_model(\n model_uri,\n name,\n await_registration_for=DEFAULT_AWAIT_MAX_SLEEP_SECONDS,\n *,\n tags: Optional[Dict[str, Any]] = None,\n local_model_path=None,\n) -> ModelVersion:\n client = MlflowClient()\n try:\n create_model_response = client.create_registered_model(name)\n eprint(f\"Successfully registered model '{create_model_response.name}'.\")\n except MlflowException as e:\n if e.error_code in (\n ErrorCode.Name(RESOURCE_ALREADY_EXISTS),\n ErrorCode.Name(ALREADY_EXISTS),\n ):\n eprint(\n \"Registered model '%s' already exists. Creating a new version of this model...\"\n % name\n )\n else:\n raise e\n\n run_id = None\n source = model_uri\n if RunsArtifactRepository.is_runs_uri(model_uri):\n source = RunsArtifactRepository.get_underlying_uri(model_uri)\n (run_id, _) = RunsArtifactRepository.parse_runs_uri(model_uri)\n\n create_version_response = client._create_model_version(\n name=name,\n source=source,\n run_id=run_id,\n tags=tags,\n await_creation_for=await_registration_for,\n local_model_path=local_model_path,\n )\n eprint(\n f\"Created version '{create_version_response.version}' of model \"\n f\"'{create_version_response.name}'.\"\n )\n return create_version_response\n\n\ndef search_registered_models(\n max_results: Optional[int] = None,\n filter_string: Optional[str] = None,\n order_by: Optional[List[str]] = None,\n) -> List[RegisteredModel]:\n \"\"\"\n Search for registered models that satisfy the filter criteria.\n\n :param filter_string: Filter query string\n (e.g., ``\"name = 'a_model_name' and tag.key = 'value1'\"``),\n defaults to searching for all registered models. The following identifiers, comparators,\n and logical operators are supported.\n\n Identifiers\n - ``name``: registered model name.\n - ``tags.<tag_key>``: registered model tag. If ``tag_key`` contains spaces, it must be\n wrapped with backticks (e.g., ``\"tags.`extra key`\"``).\n\n Comparators\n - ``=``: Equal to.\n - ``!=``: Not equal to.\n - ``LIKE``: Case-sensitive pattern match.\n - ``ILIKE``: Case-insensitive pattern match.\n\n Logical operators\n - ``AND``: Combines two sub-queries and returns True if both of them are True.\n\n :param max_results: If passed, specifies the maximum number of models desired. If not\n passed, all models will be returned.\n :param order_by: List of column names with ASC|DESC annotation, to be used for ordering\n matching search results.\n :return: A list of :py:class:`mlflow.entities.model_registry.RegisteredModel` objects\n that satisfy the search expressions.\n\n .. 
test-code-block:: python\n :caption: Example\n\n import mlflow\n from sklearn.linear_model import LogisticRegression\n\n with mlflow.start_run():\n mlflow.sklearn.log_model(\n LogisticRegression(),\n \"Cordoba\",\n registered_model_name=\"CordobaWeatherForecastModel\",\n )\n mlflow.sklearn.log_model(\n LogisticRegression(),\n \"Boston\",\n registered_model_name=\"BostonWeatherForecastModel\",\n )\n\n # Get search results filtered by the registered model name\n filter_string = \"name = 'CordobaWeatherForecastModel'\"\n results = mlflow.search_registered_models(filter_string=filter_string)\n print(\"-\" * 80)\n for res in results:\n for mv in res.latest_versions:\n print(\"name={}; run_id={}; version={}\".format(mv.name, mv.run_id, mv.version))\n\n # Get search results filtered by the registered model name that matches\n # prefix pattern\n filter_string = \"name LIKE 'Boston%'\"\n results = mlflow.search_registered_models(filter_string=filter_string)\n print(\"-\" * 80)\n for res in results:\n for mv in res.latest_versions:\n print(\"name={}; run_id={}; version={}\".format(mv.name, mv.run_id, mv.version))\n\n # Get all registered models and order them by ascending order of the names\n results = mlflow.search_registered_models(order_by=[\"name ASC\"])\n print(\"-\" * 80)\n for res in results:\n for mv in res.latest_versions:\n print(\"name={}; run_id={}; version={}\".format(mv.name, mv.run_id, mv.version))\n\n .. code-block:: text\n :caption: Output\n\n --------------------------------------------------------------------------------\n name=CordobaWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1\n --------------------------------------------------------------------------------\n name=BostonWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1\n --------------------------------------------------------------------------------\n name=BostonWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1\n name=CordobaWeatherForecastModel; run_id=248c66a666744b4887bdeb2f9cf7f1c6; version=1\n\n \"\"\"\n\n def pagination_wrapper_func(number_to_get, next_page_token):\n return MlflowClient().search_registered_models(\n max_results=number_to_get,\n filter_string=filter_string,\n order_by=order_by,\n page_token=next_page_token,\n )\n\n return get_results_from_paginated_fn(\n pagination_wrapper_func,\n SEARCH_REGISTERED_MODEL_MAX_RESULTS_DEFAULT,\n max_results,\n )\n\n\ndef search_model_versions(\n max_results: Optional[int] = None,\n filter_string: Optional[str] = None,\n order_by: Optional[List[str]] = None,\n) -> List[ModelVersion]:\n def pagination_wrapper_func(number_to_get, next_page_token):\n return MlflowClient().search_model_versions(\n max_results=number_to_get,\n filter_string=filter_string,\n order_by=order_by,\n page_token=next_page_token,\n )\n\n return get_results_from_paginated_fn(\n paginated_fn=pagination_wrapper_func,\n max_results_per_page=SEARCH_MODEL_VERSION_MAX_RESULTS_DEFAULT,\n max_results=max_results,\n )\n", "path": "mlflow/tracking/_model_registry/fluent.py"}]}
| 3,441 | 949 |
gh_patches_debug_43646
|
rasdani/github-patches
|
git_diff
|
feast-dev__feast-2610
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Keep labels in Field api
I found that the new `Field` API takes the place of `Feature` in Feast 0.21+, but `Field` only has 'name' and 'dtype' parameters. The 'labels' parameter has disappeared.
In my use case 'labels' is very important: it stores default values, descriptions, and other metadata. For example:
```python
comic_feature_view = FeatureView(
name="comic_featureV1",
entities=["item_id"],
ttl=Duration(seconds=86400 * 1),
features=[
Feature(name="channel_id", dtype=ValueType.INT32, labels={"default": "14", "desc":"channel"}),
Feature(name="keyword_weight", dtype=ValueType.FLOAT, labels={"default": "0.0", "desc":"keyword's weight"}),
Feature(name="comic_vectorv1", dtype=ValueType.FLOAT, labels={"default": ";".join(["0.0" for i in range(32)]), "desc":"deepwalk vector","faiss_index":"/data/faiss_index/comic_featureV1__comic_vectorv1.index"}),
Feature(name="comic_vectorv2", dtype=ValueType.FLOAT, labels={"default": ";".join(["0.0" for i in range(32)]), "desc":"word2vec vector","faiss_index":"/data/faiss_index/comic_featureV1__comic_vectorv2.index"}),
Feature(name="gender", dtype=ValueType.INT32, labels={"default": "0", "desc":" 0-femal 1-male"}),
Feature(name="pub_time", dtype=ValueType.STRING, labels={"default": "1970-01-01 00:00:00", "desc":"comic's publish time"}),
Feature(name="update_time", dtype=ValueType.STRING, labels={"default": "1970-01-01 00:00:00", "desc":"comic's update time"}),
Feature(name="view_cnt", dtype=ValueType.INT64, labels={"default": "0", "desc":"comic's hot score"}),
Feature(name="collect_cnt", dtype=ValueType.INT64, labels={"default": "0", "desc":"collect count"}),
Feature(name="source_id", dtype=ValueType.INT32, labels={"default": "0", "desc":"comic is from(0-unknown,1-japen,2-usa,3- other)"}),
```
```

So please keep the 'labels' parameter in the `Field` API.
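
To make the request concrete, here is a sketch of the kind of thing I mean. It is purely hypothetical and not the current Feast API; per `field.py` below, `Field` today only accepts `name` and `dtype`:

```python
from typing import Dict, Optional

from feast import Field
from feast.types import FeastType, Float32

class LabeledField(Field):
    """Hypothetical Field variant that carries a free-form labels dict."""

    def __init__(self, *, name: str, dtype: FeastType,
                 labels: Optional[Dict[str, str]] = None):
        super().__init__(name=name, dtype=dtype)
        # Stored alongside the field; not persisted by the current registry.
        self.labels = labels or {}

keyword_weight = LabeledField(
    name="keyword_weight",
    dtype=Float32,
    labels={"default": "0.0", "desc": "keyword's weight"},
)
```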
</issue>
<code>
[start of sdk/python/feast/field.py]
1 # Copyright 2022 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from feast.feature import Feature
16 from feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto
17 from feast.types import FeastType, from_value_type
18 from feast.value_type import ValueType
19
20
21 class Field:
22 """
23 A Field represents a set of values with the same structure.
24
25 Attributes:
26 name: The name of the field.
27 dtype: The type of the field, such as string or float.
28 """
29
30 name: str
31 dtype: FeastType
32
33 def __init__(
34 self, *, name: str, dtype: FeastType,
35 ):
36 """
37 Creates a Field object.
38
39 Args:
40 name: The name of the field.
41 dtype: The type of the field, such as string or float.
42 """
43 self.name = name
44 self.dtype = dtype
45
46 def __eq__(self, other):
47 if self.name != other.name or self.dtype != other.dtype:
48 return False
49 return True
50
51 def __hash__(self):
52 return hash((self.name, hash(self.dtype)))
53
54 def __lt__(self, other):
55 return self.name < other.name
56
57 def __repr__(self):
58 return f"{self.name}-{self.dtype}"
59
60 def __str__(self):
61 return f"Field(name={self.name}, dtype={self.dtype})"
62
63 def to_proto(self) -> FieldProto:
64 """Converts a Field object to its protobuf representation."""
65 value_type = self.dtype.to_value_type()
66 return FieldProto(name=self.name, value_type=value_type.value)
67
68 @classmethod
69 def from_proto(cls, field_proto: FieldProto):
70 """
71 Creates a Field object from a protobuf representation.
72
73 Args:
74 field_proto: FieldProto protobuf object
75 """
76 value_type = ValueType(field_proto.value_type)
77 return cls(name=field_proto.name, dtype=from_value_type(value_type=value_type))
78
79 @classmethod
80 def from_feature(cls, feature: Feature):
81 """
82 Creates a Field object from a Feature object.
83
84 Args:
85 feature: Feature object to convert.
86 """
87 return cls(name=feature.name, dtype=from_value_type(feature.dtype))
88
[end of sdk/python/feast/field.py]
[start of sdk/python/feast/feature.py]
1 # Copyright 2020 The Feast Authors
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 from typing import Dict, Optional
16
17 from feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FeatureSpecProto
18 from feast.protos.feast.types.Value_pb2 import ValueType as ValueTypeProto
19 from feast.value_type import ValueType
20
21
22 class Feature:
23 """
24 A Feature represents a class of serveable feature.
25
26 Args:
27 name: Name of the feature.
28 dtype: The type of the feature, such as string or float.
29 labels (optional): User-defined metadata in dictionary form.
30 """
31
32 def __init__(
33 self, name: str, dtype: ValueType, labels: Optional[Dict[str, str]] = None,
34 ):
35 """Creates a Feature object."""
36 self._name = name
37 if not isinstance(dtype, ValueType):
38 raise ValueError("dtype is not a valid ValueType")
39 if dtype is ValueType.UNKNOWN:
40 raise ValueError(f"dtype cannot be {dtype}")
41 self._dtype = dtype
42 if labels is None:
43 self._labels = dict()
44 else:
45 self._labels = labels
46
47 def __eq__(self, other):
48 if self.name != other.name or self.dtype != other.dtype:
49 return False
50 return True
51
52 def __lt__(self, other):
53 return self.name < other.name
54
55 def __repr__(self):
56 # return string representation of the reference
57 return f"{self.name}-{self.dtype}"
58
59 def __str__(self):
60 # readable string of the reference
61 return f"Feature<{self.__repr__()}>"
62
63 @property
64 def name(self):
65 """
66 Gets the name of this feature.
67 """
68 return self._name
69
70 @property
71 def dtype(self) -> ValueType:
72 """
73 Gets the data type of this feature.
74 """
75 return self._dtype
76
77 @property
78 def labels(self) -> Dict[str, str]:
79 """
80 Gets the labels of this feature.
81 """
82 return self._labels
83
84 def to_proto(self) -> FeatureSpecProto:
85 """
86 Converts Feature object to its Protocol Buffer representation.
87
88 Returns:
89 A FeatureSpecProto protobuf.
90 """
91 value_type = ValueTypeProto.Enum.Value(self.dtype.name)
92
93 return FeatureSpecProto(
94 name=self.name, value_type=value_type, labels=self.labels,
95 )
96
97 @classmethod
98 def from_proto(cls, feature_proto: FeatureSpecProto):
99 """
100 Args:
101 feature_proto: FeatureSpecV2 protobuf object
102
103 Returns:
104 Feature object
105 """
106 feature = cls(
107 name=feature_proto.name,
108 dtype=ValueType(feature_proto.value_type),
109 labels=dict(feature_proto.labels),
110 )
111
112 return feature
113
[end of sdk/python/feast/feature.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py
--- a/sdk/python/feast/feature.py
+++ b/sdk/python/feast/feature.py
@@ -91,7 +91,7 @@
value_type = ValueTypeProto.Enum.Value(self.dtype.name)
return FeatureSpecProto(
- name=self.name, value_type=value_type, labels=self.labels,
+ name=self.name, value_type=value_type, tags=self.labels,
)
@classmethod
@@ -106,7 +106,7 @@
feature = cls(
name=feature_proto.name,
dtype=ValueType(feature_proto.value_type),
- labels=dict(feature_proto.labels),
+ labels=dict(feature_proto.tags),
)
return feature
diff --git a/sdk/python/feast/field.py b/sdk/python/feast/field.py
--- a/sdk/python/feast/field.py
+++ b/sdk/python/feast/field.py
@@ -12,6 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
+from typing import Dict, Optional
+
from feast.feature import Feature
from feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto
from feast.types import FeastType, from_value_type
@@ -25,13 +27,15 @@
Attributes:
name: The name of the field.
dtype: The type of the field, such as string or float.
+ tags: User-defined metadata in dictionary form.
"""
name: str
dtype: FeastType
+ tags: Dict[str, str]
def __init__(
- self, *, name: str, dtype: FeastType,
+ self, *, name: str, dtype: FeastType, tags: Optional[Dict[str, str]] = None,
):
"""
Creates a Field object.
@@ -39,12 +43,18 @@
Args:
name: The name of the field.
dtype: The type of the field, such as string or float.
+ tags (optional): User-defined metadata in dictionary form.
"""
self.name = name
self.dtype = dtype
+ self.tags = tags or {}
def __eq__(self, other):
- if self.name != other.name or self.dtype != other.dtype:
+ if (
+ self.name != other.name
+ or self.dtype != other.dtype
+ or self.tags != other.tags
+ ):
return False
return True
@@ -58,12 +68,12 @@
return f"{self.name}-{self.dtype}"
def __str__(self):
- return f"Field(name={self.name}, dtype={self.dtype})"
+ return f"Field(name={self.name}, dtype={self.dtype}, tags={self.tags})"
def to_proto(self) -> FieldProto:
"""Converts a Field object to its protobuf representation."""
value_type = self.dtype.to_value_type()
- return FieldProto(name=self.name, value_type=value_type.value)
+ return FieldProto(name=self.name, value_type=value_type.value, tags=self.tags)
@classmethod
def from_proto(cls, field_proto: FieldProto):
@@ -74,7 +84,11 @@
field_proto: FieldProto protobuf object
"""
value_type = ValueType(field_proto.value_type)
- return cls(name=field_proto.name, dtype=from_value_type(value_type=value_type))
+ return cls(
+ name=field_proto.name,
+ dtype=from_value_type(value_type=value_type),
+ tags=dict(field_proto.tags),
+ )
@classmethod
def from_feature(cls, feature: Feature):
@@ -84,4 +98,6 @@
Args:
feature: Feature object to convert.
"""
- return cls(name=feature.name, dtype=from_value_type(feature.dtype))
+ return cls(
+ name=feature.name, dtype=from_value_type(feature.dtype), tags=feature.labels
+ )
|
{"golden_diff": "diff --git a/sdk/python/feast/feature.py b/sdk/python/feast/feature.py\n--- a/sdk/python/feast/feature.py\n+++ b/sdk/python/feast/feature.py\n@@ -91,7 +91,7 @@\n value_type = ValueTypeProto.Enum.Value(self.dtype.name)\n \n return FeatureSpecProto(\n- name=self.name, value_type=value_type, labels=self.labels,\n+ name=self.name, value_type=value_type, tags=self.labels,\n )\n \n @classmethod\n@@ -106,7 +106,7 @@\n feature = cls(\n name=feature_proto.name,\n dtype=ValueType(feature_proto.value_type),\n- labels=dict(feature_proto.labels),\n+ labels=dict(feature_proto.tags),\n )\n \n return feature\ndiff --git a/sdk/python/feast/field.py b/sdk/python/feast/field.py\n--- a/sdk/python/feast/field.py\n+++ b/sdk/python/feast/field.py\n@@ -12,6 +12,8 @@\n # See the License for the specific language governing permissions and\n # limitations under the License.\n \n+from typing import Dict, Optional\n+\n from feast.feature import Feature\n from feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto\n from feast.types import FeastType, from_value_type\n@@ -25,13 +27,15 @@\n Attributes:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n+ tags: User-defined metadata in dictionary form.\n \"\"\"\n \n name: str\n dtype: FeastType\n+ tags: Dict[str, str]\n \n def __init__(\n- self, *, name: str, dtype: FeastType,\n+ self, *, name: str, dtype: FeastType, tags: Optional[Dict[str, str]] = None,\n ):\n \"\"\"\n Creates a Field object.\n@@ -39,12 +43,18 @@\n Args:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n+ tags (optional): User-defined metadata in dictionary form.\n \"\"\"\n self.name = name\n self.dtype = dtype\n+ self.tags = tags or {}\n \n def __eq__(self, other):\n- if self.name != other.name or self.dtype != other.dtype:\n+ if (\n+ self.name != other.name\n+ or self.dtype != other.dtype\n+ or self.tags != other.tags\n+ ):\n return False\n return True\n \n@@ -58,12 +68,12 @@\n return f\"{self.name}-{self.dtype}\"\n \n def __str__(self):\n- return f\"Field(name={self.name}, dtype={self.dtype})\"\n+ return f\"Field(name={self.name}, dtype={self.dtype}, tags={self.tags})\"\n \n def to_proto(self) -> FieldProto:\n \"\"\"Converts a Field object to its protobuf representation.\"\"\"\n value_type = self.dtype.to_value_type()\n- return FieldProto(name=self.name, value_type=value_type.value)\n+ return FieldProto(name=self.name, value_type=value_type.value, tags=self.tags)\n \n @classmethod\n def from_proto(cls, field_proto: FieldProto):\n@@ -74,7 +84,11 @@\n field_proto: FieldProto protobuf object\n \"\"\"\n value_type = ValueType(field_proto.value_type)\n- return cls(name=field_proto.name, dtype=from_value_type(value_type=value_type))\n+ return cls(\n+ name=field_proto.name,\n+ dtype=from_value_type(value_type=value_type),\n+ tags=dict(field_proto.tags),\n+ )\n \n @classmethod\n def from_feature(cls, feature: Feature):\n@@ -84,4 +98,6 @@\n Args:\n feature: Feature object to convert.\n \"\"\"\n- return cls(name=feature.name, dtype=from_value_type(feature.dtype))\n+ return cls(\n+ name=feature.name, dtype=from_value_type(feature.dtype), tags=feature.labels\n+ )\n", "issue": "Keep labels in Field api\nI found that new api 'Field' will take place of 'Feature' in 0.21+ feast. but `Field` only have 'name' and 'dtype' parameters. The parameter 'labels' is disappeared. \r\nIn my use case 'labels' is very import. 'labels' stores the default value, descriptions,and other things. 
for example\r\n\r\n```python\r\ncomic_feature_view = FeatureView(\r\n name=\"comic_featureV1\",\r\n entities=[\"item_id\"],\r\n ttl=Duration(seconds=86400 * 1),\r\n features=[\r\n Feature(name=\"channel_id\", dtype=ValueType.INT32, labels={\"default\": \"14\", \"desc\":\"channel\"}),\r\n Feature(name=\"keyword_weight\", dtype=ValueType.FLOAT, labels={\"default\": \"0.0\", \"desc\":\"keyword's weight\"}),\r\n Feature(name=\"comic_vectorv1\", dtype=ValueType.FLOAT, labels={\"default\": \";\".join([\"0.0\" for i in range(32)]), \"desc\":\"deepwalk vector\",\"faiss_index\":\"/data/faiss_index/comic_featureV1__comic_vectorv1.index\"}),\r\n Feature(name=\"comic_vectorv2\", dtype=ValueType.FLOAT, labels={\"default\": \";\".join([\"0.0\" for i in range(32)]), \"desc\":\"word2vec vector\",\"faiss_index\":\"/data/faiss_index/comic_featureV1__comic_vectorv2.index\"}),\r\n Feature(name=\"gender\", dtype=ValueType.INT32, labels={\"default\": \"0\", \"desc\":\" 0-femal 1-male\"}),\r\n Feature(name=\"pub_time\", dtype=ValueType.STRING, labels={\"default\": \"1970-01-01 00:00:00\", \"desc\":\"comic's publish time\"}),\r\n Feature(name=\"update_time\", dtype=ValueType.STRING, labels={\"default\": \"1970-01-01 00:00:00\", \"desc\":\"comic's update time\"}),\r\n Feature(name=\"view_cnt\", dtype=ValueType.INT64, labels={\"default\": \"0\", \"desc\":\"comic's hot score\"}),\r\n Feature(name=\"collect_cnt\", dtype=ValueType.INT64, labels={\"default\": \"0\", \"desc\":\"collect count\"}),\r\n Feature(name=\"source_id\", dtype=ValueType.INT32, labels={\"default\": \"0\", \"desc\":\"comic is from(0-unknown\uff0c1-japen\uff0c2-usa\uff0c3- other)\"}),\r\n```\r\n\r\nSo please keep the parameter 'labels' in Field api\r\n\n", "before_files": [{"content": "# Copyright 2022 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom feast.feature import Feature\nfrom feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FieldProto\nfrom feast.types import FeastType, from_value_type\nfrom feast.value_type import ValueType\n\n\nclass Field:\n \"\"\"\n A Field represents a set of values with the same structure.\n\n Attributes:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n \"\"\"\n\n name: str\n dtype: FeastType\n\n def __init__(\n self, *, name: str, dtype: FeastType,\n ):\n \"\"\"\n Creates a Field object.\n\n Args:\n name: The name of the field.\n dtype: The type of the field, such as string or float.\n \"\"\"\n self.name = name\n self.dtype = dtype\n\n def __eq__(self, other):\n if self.name != other.name or self.dtype != other.dtype:\n return False\n return True\n\n def __hash__(self):\n return hash((self.name, hash(self.dtype)))\n\n def __lt__(self, other):\n return self.name < other.name\n\n def __repr__(self):\n return f\"{self.name}-{self.dtype}\"\n\n def __str__(self):\n return f\"Field(name={self.name}, dtype={self.dtype})\"\n\n def to_proto(self) -> FieldProto:\n \"\"\"Converts a Field object to its protobuf representation.\"\"\"\n value_type = 
self.dtype.to_value_type()\n return FieldProto(name=self.name, value_type=value_type.value)\n\n @classmethod\n def from_proto(cls, field_proto: FieldProto):\n \"\"\"\n Creates a Field object from a protobuf representation.\n\n Args:\n field_proto: FieldProto protobuf object\n \"\"\"\n value_type = ValueType(field_proto.value_type)\n return cls(name=field_proto.name, dtype=from_value_type(value_type=value_type))\n\n @classmethod\n def from_feature(cls, feature: Feature):\n \"\"\"\n Creates a Field object from a Feature object.\n\n Args:\n feature: Feature object to convert.\n \"\"\"\n return cls(name=feature.name, dtype=from_value_type(feature.dtype))\n", "path": "sdk/python/feast/field.py"}, {"content": "# Copyright 2020 The Feast Authors\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom typing import Dict, Optional\n\nfrom feast.protos.feast.core.Feature_pb2 import FeatureSpecV2 as FeatureSpecProto\nfrom feast.protos.feast.types.Value_pb2 import ValueType as ValueTypeProto\nfrom feast.value_type import ValueType\n\n\nclass Feature:\n \"\"\"\n A Feature represents a class of serveable feature.\n\n Args:\n name: Name of the feature.\n dtype: The type of the feature, such as string or float.\n labels (optional): User-defined metadata in dictionary form.\n \"\"\"\n\n def __init__(\n self, name: str, dtype: ValueType, labels: Optional[Dict[str, str]] = None,\n ):\n \"\"\"Creates a Feature object.\"\"\"\n self._name = name\n if not isinstance(dtype, ValueType):\n raise ValueError(\"dtype is not a valid ValueType\")\n if dtype is ValueType.UNKNOWN:\n raise ValueError(f\"dtype cannot be {dtype}\")\n self._dtype = dtype\n if labels is None:\n self._labels = dict()\n else:\n self._labels = labels\n\n def __eq__(self, other):\n if self.name != other.name or self.dtype != other.dtype:\n return False\n return True\n\n def __lt__(self, other):\n return self.name < other.name\n\n def __repr__(self):\n # return string representation of the reference\n return f\"{self.name}-{self.dtype}\"\n\n def __str__(self):\n # readable string of the reference\n return f\"Feature<{self.__repr__()}>\"\n\n @property\n def name(self):\n \"\"\"\n Gets the name of this feature.\n \"\"\"\n return self._name\n\n @property\n def dtype(self) -> ValueType:\n \"\"\"\n Gets the data type of this feature.\n \"\"\"\n return self._dtype\n\n @property\n def labels(self) -> Dict[str, str]:\n \"\"\"\n Gets the labels of this feature.\n \"\"\"\n return self._labels\n\n def to_proto(self) -> FeatureSpecProto:\n \"\"\"\n Converts Feature object to its Protocol Buffer representation.\n\n Returns:\n A FeatureSpecProto protobuf.\n \"\"\"\n value_type = ValueTypeProto.Enum.Value(self.dtype.name)\n\n return FeatureSpecProto(\n name=self.name, value_type=value_type, labels=self.labels,\n )\n\n @classmethod\n def from_proto(cls, feature_proto: FeatureSpecProto):\n \"\"\"\n Args:\n feature_proto: FeatureSpecV2 protobuf object\n\n Returns:\n Feature object\n \"\"\"\n feature = cls(\n name=feature_proto.name,\n 
dtype=ValueType(feature_proto.value_type),\n labels=dict(feature_proto.labels),\n )\n\n return feature\n", "path": "sdk/python/feast/feature.py"}]}
| 2,816 | 891 |
gh_patches_debug_35104
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-912
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
"git add -N" prevents restoring stashed changes.
To reproduce, start with this simple `pre-commit-config.yaml` in an otherwise empty repo:
```yaml
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: master
hooks:
- id: end-of-file-fixer
```
The hook used doesn't really matter; `end-of-file-fixer` is just an example.
Run the following:
```bash
echo "new" > newfile
echo "\n\n\n" > needs-fixing
git add -N newfile
# newfile is now staged as an empty file
git add needs-fixing
git commit -m "fix"
```
The following output is generated:
```
[WARNING] Unstaged files detected.
[INFO] Stashing unstaged files to /home/henniss/.cache/pre-commit/patch1544663784.
Fix End of Files.........................................................Failed
hookid: end-of-file-fixer
Files were modified by this hook. Additional output:
Fixing needs-fixing
[WARNING] Stashed changes conflicted with hook auto-fixes... Rolling back fixes...
An unexpected error has occurred: CalledProcessError: Command: ('/usr/lib/git-core/git', '-c', 'core.autocrlf=false', 'apply', '--whitespace=nowarn', '/home/henniss/.cache/pre-commit/patch1544663784')
Return code: 1
Expected return code: 0
Output: (none)
Errors:
error: newfile: already exists in working directory
Check the log at /home/henniss/.cache/pre-commit/pre-commit.log
```
`cat newfile` now shows that it is empty. The unstaged changes aren't restored.
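
For what it's worth, intent-to-add paths can be recognized from `git status --porcelain` output, where recent git versions report them with an `A` in the second (worktree) column — that detail is an assumption worth verifying. A rough standalone sketch, not pre-commit's code, of listing them before stashing:

```python
# Rough sketch: list intent-to-add ("git add -N") paths via porcelain status.
# Assumes recent git reports them with 'A' in the worktree (second) column;
# rename/copy entries carry an extra NUL-separated field that must be skipped.
import subprocess


def intent_to_add_files():
    out = subprocess.check_output(('git', 'status', '--porcelain', '-z')).decode()
    parts = [p for p in out.split('\0') if p]
    paths = []
    i = 0
    while i < len(parts):
        entry = parts[i]
        status, filename = entry[:2], entry[3:]
        if status[0] in ('C', 'R'):  # rename/copy: the original path follows as its own field
            i += 1
        if status[1] == 'A':
            paths.append(filename)
        i += 1
    return paths
```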
</issue>
<code>
[start of pre_commit/git.py]
1 from __future__ import unicode_literals
2
3 import logging
4 import os.path
5 import sys
6
7 from pre_commit.util import cmd_output
8
9
10 logger = logging.getLogger(__name__)
11
12
13 def zsplit(s):
14 s = s.strip('\0')
15 if s:
16 return s.split('\0')
17 else:
18 return []
19
20
21 def no_git_env():
22 # Too many bugs dealing with environment variables and GIT:
23 # https://github.com/pre-commit/pre-commit/issues/300
24 # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running
25 # pre-commit hooks
26 # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE
27 # while running pre-commit hooks in submodules.
28 # GIT_DIR: Causes git clone to clone wrong thing
29 # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit
30 return {
31 k: v for k, v in os.environ.items()
32 if not k.startswith('GIT_') or k in {'GIT_SSH'}
33 }
34
35
36 def get_root():
37 return cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()
38
39
40 def get_git_dir(git_root='.'):
41 opts = ('--git-common-dir', '--git-dir')
42 _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)
43 for line, opt in zip(out.splitlines(), opts):
44 if line != opt: # pragma: no branch (git < 2.5)
45 return os.path.normpath(os.path.join(git_root, line))
46 else:
47 raise AssertionError('unreachable: no git dir')
48
49
50 def get_remote_url(git_root):
51 ret = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)[1]
52 return ret.strip()
53
54
55 def is_in_merge_conflict():
56 git_dir = get_git_dir('.')
57 return (
58 os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and
59 os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))
60 )
61
62
63 def parse_merge_msg_for_conflicts(merge_msg):
64 # Conflicted files start with tabs
65 return [
66 line.lstrip(b'#').strip().decode('UTF-8')
67 for line in merge_msg.splitlines()
68 # '#\t' for git 2.4.1
69 if line.startswith((b'\t', b'#\t'))
70 ]
71
72
73 def get_conflicted_files():
74 logger.info('Checking merge-conflict files only.')
75 # Need to get the conflicted files from the MERGE_MSG because they could
76 # have resolved the conflict by choosing one side or the other
77 with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:
78 merge_msg = f.read()
79 merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)
80
81 # This will get the rest of the changes made after the merge.
82 # If they resolved the merge conflict by choosing a mesh of both sides
83 # this will also include the conflicted files
84 tree_hash = cmd_output('git', 'write-tree')[1].strip()
85 merge_diff_filenames = zsplit(cmd_output(
86 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
87 '-m', tree_hash, 'HEAD', 'MERGE_HEAD',
88 )[1])
89 return set(merge_conflict_filenames) | set(merge_diff_filenames)
90
91
92 def get_staged_files():
93 return zsplit(cmd_output(
94 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',
95 # Everything except for D
96 '--diff-filter=ACMRTUXB',
97 )[1])
98
99
100 def get_all_files():
101 return zsplit(cmd_output('git', 'ls-files', '-z')[1])
102
103
104 def get_changed_files(new, old):
105 return zsplit(cmd_output(
106 'git', 'diff', '--name-only', '--no-ext-diff', '-z',
107 '{}...{}'.format(old, new),
108 )[1])
109
110
111 def head_rev(remote):
112 _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')
113 return out.split()[0]
114
115
116 def has_diff(*args, **kwargs):
117 repo = kwargs.pop('repo', '.')
118 assert not kwargs, kwargs
119 cmd = ('git', 'diff', '--quiet', '--no-ext-diff') + args
120 return cmd_output(*cmd, cwd=repo, retcode=None)[0]
121
122
123 def commit(repo='.'):
124 env = no_git_env()
125 name, email = 'pre-commit', '[email protected]'
126 env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name
127 env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email
128 cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')
129 cmd_output(*cmd, cwd=repo, env=env)
130
131
132 def git_path(name, repo='.'):
133 _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)
134 return os.path.join(repo, out.strip())
135
136
137 def check_for_cygwin_mismatch():
138 """See https://github.com/pre-commit/pre-commit/issues/354"""
139 if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)
140 is_cygwin_python = sys.platform == 'cygwin'
141 toplevel = cmd_output('git', 'rev-parse', '--show-toplevel')[1]
142 is_cygwin_git = toplevel.startswith('/')
143
144 if is_cygwin_python ^ is_cygwin_git:
145 exe_type = {True: '(cygwin)', False: '(windows)'}
146 logger.warn(
147 'pre-commit has detected a mix of cygwin python / git\n'
148 'This combination is not supported, it is likely you will '
149 'receive an error later in the program.\n'
150 'Make sure to use cygwin git+python while using cygwin\n'
151 'These can be installed through the cygwin installer.\n'
152 ' - python {}\n'
153 ' - git {}\n'.format(
154 exe_type[is_cygwin_python], exe_type[is_cygwin_git],
155 ),
156 )
157
[end of pre_commit/git.py]
[start of pre_commit/staged_files_only.py]
1 from __future__ import unicode_literals
2
3 import contextlib
4 import io
5 import logging
6 import os.path
7 import time
8
9 from pre_commit.util import CalledProcessError
10 from pre_commit.util import cmd_output
11 from pre_commit.util import mkdirp
12
13
14 logger = logging.getLogger('pre_commit')
15
16
17 def _git_apply(patch):
18 args = ('apply', '--whitespace=nowarn', patch)
19 try:
20 cmd_output('git', *args, encoding=None)
21 except CalledProcessError:
22 # Retry with autocrlf=false -- see #570
23 cmd_output('git', '-c', 'core.autocrlf=false', *args, encoding=None)
24
25
26 @contextlib.contextmanager
27 def staged_files_only(patch_dir):
28 """Clear any unstaged changes from the git working directory inside this
29 context.
30 """
31 # Determine if there are unstaged files
32 tree = cmd_output('git', 'write-tree')[1].strip()
33 retcode, diff_stdout_binary, _ = cmd_output(
34 'git', 'diff-index', '--ignore-submodules', '--binary',
35 '--exit-code', '--no-color', '--no-ext-diff', tree, '--',
36 retcode=None,
37 encoding=None,
38 )
39 if retcode and diff_stdout_binary.strip():
40 patch_filename = 'patch{}'.format(int(time.time()))
41 patch_filename = os.path.join(patch_dir, patch_filename)
42 logger.warning('Unstaged files detected.')
43 logger.info(
44 'Stashing unstaged files to {}.'.format(patch_filename),
45 )
46 # Save the current unstaged changes as a patch
47 mkdirp(patch_dir)
48 with io.open(patch_filename, 'wb') as patch_file:
49 patch_file.write(diff_stdout_binary)
50
51 # Clear the working directory of unstaged changes
52 cmd_output('git', 'checkout', '--', '.')
53 try:
54 yield
55 finally:
56 # Try to apply the patch we saved
57 try:
58 _git_apply(patch_filename)
59 except CalledProcessError:
60 logger.warning(
61 'Stashed changes conflicted with hook auto-fixes... '
62 'Rolling back fixes...',
63 )
64 # We failed to apply the patch, presumably due to fixes made
65 # by hooks.
66 # Roll back the changes made by hooks.
67 cmd_output('git', 'checkout', '--', '.')
68 _git_apply(patch_filename)
69 logger.info('Restored changes from {}.'.format(patch_filename))
70 else:
71 # There weren't any staged files so we don't need to do anything
72 # special
73 yield
74
[end of pre_commit/staged_files_only.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/git.py b/pre_commit/git.py
--- a/pre_commit/git.py
+++ b/pre_commit/git.py
@@ -97,6 +97,20 @@
)[1])
+def intent_to_add_files():
+ _, stdout_binary, _ = cmd_output('git', 'status', '--porcelain', '-z')
+ parts = list(reversed(zsplit(stdout_binary)))
+ intent_to_add = []
+ while parts:
+ line = parts.pop()
+ status, filename = line[:3], line[3:]
+ if status[0] in {'C', 'R'}: # renames / moves have an additional arg
+ parts.pop()
+ if status[1] == 'A':
+ intent_to_add.append(filename)
+ return intent_to_add
+
+
def get_all_files():
return zsplit(cmd_output('git', 'ls-files', '-z')[1])
diff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py
--- a/pre_commit/staged_files_only.py
+++ b/pre_commit/staged_files_only.py
@@ -6,9 +6,11 @@
import os.path
import time
+from pre_commit import git
from pre_commit.util import CalledProcessError
from pre_commit.util import cmd_output
from pre_commit.util import mkdirp
+from pre_commit.xargs import xargs
logger = logging.getLogger('pre_commit')
@@ -24,11 +26,22 @@
@contextlib.contextmanager
-def staged_files_only(patch_dir):
- """Clear any unstaged changes from the git working directory inside this
- context.
- """
- # Determine if there are unstaged files
+def _intent_to_add_cleared():
+ intent_to_add = git.intent_to_add_files()
+ if intent_to_add:
+ logger.warning('Unstaged intent-to-add files detected.')
+
+ xargs(('git', 'rm', '--cached', '--'), intent_to_add)
+ try:
+ yield
+ finally:
+ xargs(('git', 'add', '--intent-to-add', '--'), intent_to_add)
+ else:
+ yield
+
+
[email protected]
+def _unstaged_changes_cleared(patch_dir):
tree = cmd_output('git', 'write-tree')[1].strip()
retcode, diff_stdout_binary, _ = cmd_output(
'git', 'diff-index', '--ignore-submodules', '--binary',
@@ -71,3 +84,12 @@
# There weren't any staged files so we don't need to do anything
# special
yield
+
+
[email protected]
+def staged_files_only(patch_dir):
+ """Clear any unstaged changes from the git working directory inside this
+ context.
+ """
+ with _intent_to_add_cleared(), _unstaged_changes_cleared(patch_dir):
+ yield
|
{"golden_diff": "diff --git a/pre_commit/git.py b/pre_commit/git.py\n--- a/pre_commit/git.py\n+++ b/pre_commit/git.py\n@@ -97,6 +97,20 @@\n )[1])\n \n \n+def intent_to_add_files():\n+ _, stdout_binary, _ = cmd_output('git', 'status', '--porcelain', '-z')\n+ parts = list(reversed(zsplit(stdout_binary)))\n+ intent_to_add = []\n+ while parts:\n+ line = parts.pop()\n+ status, filename = line[:3], line[3:]\n+ if status[0] in {'C', 'R'}: # renames / moves have an additional arg\n+ parts.pop()\n+ if status[1] == 'A':\n+ intent_to_add.append(filename)\n+ return intent_to_add\n+\n+\n def get_all_files():\n return zsplit(cmd_output('git', 'ls-files', '-z')[1])\n \ndiff --git a/pre_commit/staged_files_only.py b/pre_commit/staged_files_only.py\n--- a/pre_commit/staged_files_only.py\n+++ b/pre_commit/staged_files_only.py\n@@ -6,9 +6,11 @@\n import os.path\n import time\n \n+from pre_commit import git\n from pre_commit.util import CalledProcessError\n from pre_commit.util import cmd_output\n from pre_commit.util import mkdirp\n+from pre_commit.xargs import xargs\n \n \n logger = logging.getLogger('pre_commit')\n@@ -24,11 +26,22 @@\n \n \n @contextlib.contextmanager\n-def staged_files_only(patch_dir):\n- \"\"\"Clear any unstaged changes from the git working directory inside this\n- context.\n- \"\"\"\n- # Determine if there are unstaged files\n+def _intent_to_add_cleared():\n+ intent_to_add = git.intent_to_add_files()\n+ if intent_to_add:\n+ logger.warning('Unstaged intent-to-add files detected.')\n+\n+ xargs(('git', 'rm', '--cached', '--'), intent_to_add)\n+ try:\n+ yield\n+ finally:\n+ xargs(('git', 'add', '--intent-to-add', '--'), intent_to_add)\n+ else:\n+ yield\n+\n+\[email protected]\n+def _unstaged_changes_cleared(patch_dir):\n tree = cmd_output('git', 'write-tree')[1].strip()\n retcode, diff_stdout_binary, _ = cmd_output(\n 'git', 'diff-index', '--ignore-submodules', '--binary',\n@@ -71,3 +84,12 @@\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n+\n+\[email protected]\n+def staged_files_only(patch_dir):\n+ \"\"\"Clear any unstaged changes from the git working directory inside this\n+ context.\n+ \"\"\"\n+ with _intent_to_add_cleared(), _unstaged_changes_cleared(patch_dir):\n+ yield\n", "issue": "\"git add -N\" prevents restoring stashed changes.\nTo reproduce, start with this simple `pre-commit-config.yaml` in an otherwise empty repo:\r\n\r\n```yaml\r\nrepos:\r\n- repo: https://github.com/pre-commit/pre-commit-hooks\r\n rev: master\r\n hooks: \r\n - id: end-of-file-fixer\r\n```\r\n\r\nThe hook used doesn't really matter. end-of-file-fixer is just an example. \r\n\r\nRun the following:\r\n```bash\r\necho \"new\" > newfile\r\necho \"\\n\\n\\n\" > needs-fixing\r\ngit add -N newfile\r\n# newfile is now staged as an empty file\r\ngit add needs-fixing\r\ngit commit -m \"fix\"\r\n```\r\n\r\nThe following output is generated: \r\n\r\n```\r\n[WARNING] Unstaged files detected.\r\n[INFO] Stashing unstaged files to /home/henniss/.cache/pre-commit/patch1544663784.\r\nFix End of Files.........................................................Failed\r\nhookid: end-of-file-fixer\r\n\r\nFiles were modified by this hook. Additional output:\r\n\r\nFixing needs-fixing\r\n\r\n[WARNING] Stashed changes conflicted with hook auto-fixes... 
Rolling back fixes...\r\nAn unexpected error has occurred: CalledProcessError: Command: ('/usr/lib/git-core/git', '-c', 'core.autocrlf=false', 'apply', '--whitespace=nowarn', '/home/henniss/.cache/pre-commit/patch1544663784')\r\nReturn code: 1\r\nExpected return code: 0\r\nOutput: (none)\r\nErrors:\r\n error: newfile: already exists in working directory\r\n\r\n\r\nCheck the log at /home/henniss/.cache/pre-commit/pre-commit.log\r\n```\r\n\r\n`cat newfile` now shows that it is empty. The unstaged changes aren't restored.\n", "before_files": [{"content": "from __future__ import unicode_literals\n\nimport logging\nimport os.path\nimport sys\n\nfrom pre_commit.util import cmd_output\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef zsplit(s):\n s = s.strip('\\0')\n if s:\n return s.split('\\0')\n else:\n return []\n\n\ndef no_git_env():\n # Too many bugs dealing with environment variables and GIT:\n # https://github.com/pre-commit/pre-commit/issues/300\n # In git 2.6.3 (maybe others), git exports GIT_WORK_TREE while running\n # pre-commit hooks\n # In git 1.9.1 (maybe others), git exports GIT_DIR and GIT_INDEX_FILE\n # while running pre-commit hooks in submodules.\n # GIT_DIR: Causes git clone to clone wrong thing\n # GIT_INDEX_FILE: Causes 'error invalid object ...' during commit\n return {\n k: v for k, v in os.environ.items()\n if not k.startswith('GIT_') or k in {'GIT_SSH'}\n }\n\n\ndef get_root():\n return cmd_output('git', 'rev-parse', '--show-toplevel')[1].strip()\n\n\ndef get_git_dir(git_root='.'):\n opts = ('--git-common-dir', '--git-dir')\n _, out, _ = cmd_output('git', 'rev-parse', *opts, cwd=git_root)\n for line, opt in zip(out.splitlines(), opts):\n if line != opt: # pragma: no branch (git < 2.5)\n return os.path.normpath(os.path.join(git_root, line))\n else:\n raise AssertionError('unreachable: no git dir')\n\n\ndef get_remote_url(git_root):\n ret = cmd_output('git', 'config', 'remote.origin.url', cwd=git_root)[1]\n return ret.strip()\n\n\ndef is_in_merge_conflict():\n git_dir = get_git_dir('.')\n return (\n os.path.exists(os.path.join(git_dir, 'MERGE_MSG')) and\n os.path.exists(os.path.join(git_dir, 'MERGE_HEAD'))\n )\n\n\ndef parse_merge_msg_for_conflicts(merge_msg):\n # Conflicted files start with tabs\n return [\n line.lstrip(b'#').strip().decode('UTF-8')\n for line in merge_msg.splitlines()\n # '#\\t' for git 2.4.1\n if line.startswith((b'\\t', b'#\\t'))\n ]\n\n\ndef get_conflicted_files():\n logger.info('Checking merge-conflict files only.')\n # Need to get the conflicted files from the MERGE_MSG because they could\n # have resolved the conflict by choosing one side or the other\n with open(os.path.join(get_git_dir('.'), 'MERGE_MSG'), 'rb') as f:\n merge_msg = f.read()\n merge_conflict_filenames = parse_merge_msg_for_conflicts(merge_msg)\n\n # This will get the rest of the changes made after the merge.\n # If they resolved the merge conflict by choosing a mesh of both sides\n # this will also include the conflicted files\n tree_hash = cmd_output('git', 'write-tree')[1].strip()\n merge_diff_filenames = zsplit(cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '-m', tree_hash, 'HEAD', 'MERGE_HEAD',\n )[1])\n return set(merge_conflict_filenames) | set(merge_diff_filenames)\n\n\ndef get_staged_files():\n return zsplit(cmd_output(\n 'git', 'diff', '--staged', '--name-only', '--no-ext-diff', '-z',\n # Everything except for D\n '--diff-filter=ACMRTUXB',\n )[1])\n\n\ndef get_all_files():\n return zsplit(cmd_output('git', 'ls-files', '-z')[1])\n\n\ndef 
get_changed_files(new, old):\n return zsplit(cmd_output(\n 'git', 'diff', '--name-only', '--no-ext-diff', '-z',\n '{}...{}'.format(old, new),\n )[1])\n\n\ndef head_rev(remote):\n _, out, _ = cmd_output('git', 'ls-remote', '--exit-code', remote, 'HEAD')\n return out.split()[0]\n\n\ndef has_diff(*args, **kwargs):\n repo = kwargs.pop('repo', '.')\n assert not kwargs, kwargs\n cmd = ('git', 'diff', '--quiet', '--no-ext-diff') + args\n return cmd_output(*cmd, cwd=repo, retcode=None)[0]\n\n\ndef commit(repo='.'):\n env = no_git_env()\n name, email = 'pre-commit', '[email protected]'\n env['GIT_AUTHOR_NAME'] = env['GIT_COMMITTER_NAME'] = name\n env['GIT_AUTHOR_EMAIL'] = env['GIT_COMMITTER_EMAIL'] = email\n cmd = ('git', 'commit', '--no-edit', '--no-gpg-sign', '-n', '-minit')\n cmd_output(*cmd, cwd=repo, env=env)\n\n\ndef git_path(name, repo='.'):\n _, out, _ = cmd_output('git', 'rev-parse', '--git-path', name, cwd=repo)\n return os.path.join(repo, out.strip())\n\n\ndef check_for_cygwin_mismatch():\n \"\"\"See https://github.com/pre-commit/pre-commit/issues/354\"\"\"\n if sys.platform in ('cygwin', 'win32'): # pragma: no cover (windows)\n is_cygwin_python = sys.platform == 'cygwin'\n toplevel = cmd_output('git', 'rev-parse', '--show-toplevel')[1]\n is_cygwin_git = toplevel.startswith('/')\n\n if is_cygwin_python ^ is_cygwin_git:\n exe_type = {True: '(cygwin)', False: '(windows)'}\n logger.warn(\n 'pre-commit has detected a mix of cygwin python / git\\n'\n 'This combination is not supported, it is likely you will '\n 'receive an error later in the program.\\n'\n 'Make sure to use cygwin git+python while using cygwin\\n'\n 'These can be installed through the cygwin installer.\\n'\n ' - python {}\\n'\n ' - git {}\\n'.format(\n exe_type[is_cygwin_python], exe_type[is_cygwin_git],\n ),\n )\n", "path": "pre_commit/git.py"}, {"content": "from __future__ import unicode_literals\n\nimport contextlib\nimport io\nimport logging\nimport os.path\nimport time\n\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import mkdirp\n\n\nlogger = logging.getLogger('pre_commit')\n\n\ndef _git_apply(patch):\n args = ('apply', '--whitespace=nowarn', patch)\n try:\n cmd_output('git', *args, encoding=None)\n except CalledProcessError:\n # Retry with autocrlf=false -- see #570\n cmd_output('git', '-c', 'core.autocrlf=false', *args, encoding=None)\n\n\[email protected]\ndef staged_files_only(patch_dir):\n \"\"\"Clear any unstaged changes from the git working directory inside this\n context.\n \"\"\"\n # Determine if there are unstaged files\n tree = cmd_output('git', 'write-tree')[1].strip()\n retcode, diff_stdout_binary, _ = cmd_output(\n 'git', 'diff-index', '--ignore-submodules', '--binary',\n '--exit-code', '--no-color', '--no-ext-diff', tree, '--',\n retcode=None,\n encoding=None,\n )\n if retcode and diff_stdout_binary.strip():\n patch_filename = 'patch{}'.format(int(time.time()))\n patch_filename = os.path.join(patch_dir, patch_filename)\n logger.warning('Unstaged files detected.')\n logger.info(\n 'Stashing unstaged files to {}.'.format(patch_filename),\n )\n # Save the current unstaged changes as a patch\n mkdirp(patch_dir)\n with io.open(patch_filename, 'wb') as patch_file:\n patch_file.write(diff_stdout_binary)\n\n # Clear the working directory of unstaged changes\n cmd_output('git', 'checkout', '--', '.')\n try:\n yield\n finally:\n # Try to apply the patch we saved\n try:\n _git_apply(patch_filename)\n except CalledProcessError:\n logger.warning(\n 
'Stashed changes conflicted with hook auto-fixes... '\n 'Rolling back fixes...',\n )\n # We failed to apply the patch, presumably due to fixes made\n # by hooks.\n # Roll back the changes made by hooks.\n cmd_output('git', 'checkout', '--', '.')\n _git_apply(patch_filename)\n logger.info('Restored changes from {}.'.format(patch_filename))\n else:\n # There weren't any staged files so we don't need to do anything\n # special\n yield\n", "path": "pre_commit/staged_files_only.py"}]}
| 3,394 | 645 |
gh_patches_debug_3648
|
rasdani/github-patches
|
git_diff
|
joke2k__faker-1132
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
invalid pt_BR cellphone numbers being generated
* Faker version: 4.0.2
* OS: Ubuntu 16.04.6 LTS
If I understand [MSISDN](https://en.wikipedia.org/wiki/MSISDN) correctly — and it is possible I got it wrong, since I know nothing about telecom — MSISDNs are meant only for cellphones, not landline phones. In Brazil, cellphones now have a 9 in front of their digits. This was implemented by @rodrigondec in 941e06693ff8771d715d2f9f37d79a7f1b8fa8f4, but he also added `5511########` to `msisdn_formats`.
If the mobile-versus-landline distinction is right, all the following lines generate invalid cellphone numbers:
```
'5511########',
'5521########',
'5531########',
'5541########',
'5551########',
'5561########',
'5571########',
'5581########',
'5584########',
```
### Steps to reproduce
1. Instantiate faker: `faker = Faker()`
2. Call `len(faker.msisdn())`
### Expected behavior
The length should always be 13 for pt_BR locales.
From ANATEL, the telecom national agency in Brazil: https://www.anatel.gov.br/Portal/exibirPortalPaginaEspecial.do;jsessionid=4CF5489B6943AFF3E2BDA192CC1B5220.site1?org.apache.struts.taglib.html.TOKEN=bbe01b15d1c58d2f938580db5547cb8e&acao=carregaPasta&codItemCanal=1722&pastaSelecionada=2831
> 1. Por que os números dos telefones celulares terão o nono dígito?
> Os números dos telefones celulares estão recebendo mais um dígito para atender à crescente demanda pelo serviço móvel no Brasil(....)
> 2. O nono dígito será adicionado aos números de todo o Brasil?
> O nono dígito será implementado em todo o País até o fim de 2016(...)
Translates to:
1. Why the cell phone numbers will have a 9th digit?
The cell phone numbers are receiving one more digit to address the demand growth of mobile service in Brazil...
2. The 9th digit will be added to all numbers in Brazil?
The 9th digit will be implemented in the whole country by the end of 2016...
### Actual behavior
The length is sometimes 12.
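
A quick check along these lines shows the problem (the locale selection and the `msisdn()` call are spelled out here; adjust to your setup):

```python
# Every pt_BR MSISDN should be 13 digits: 55 + two-digit area code + 9 + eight digits.
from faker import Faker

fake = Faker('pt_BR')
lengths = {len(fake.msisdn()) for _ in range(1000)}
print(lengths)  # expected {13}; with the 12-digit formats still present, 12 shows up as well
```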
</issue>
<code>
[start of faker/providers/phone_number/pt_BR/__init__.py]
1 from .. import Provider as PhoneNumberProvider
2
3
4 class Provider(PhoneNumberProvider):
5 formats = (
6 '+55 (011) #### ####',
7 '+55 (021) #### ####',
8 '+55 (031) #### ####',
9 '+55 (041) #### ####',
10 '+55 (051) #### ####',
11 '+55 (061) #### ####',
12 '+55 (071) #### ####',
13 '+55 (081) #### ####',
14 '+55 (084) #### ####',
15 '+55 11 #### ####',
16 '+55 21 #### ####',
17 '+55 31 #### ####',
18 '+55 41 #### ####',
19 '+55 51 ### ####',
20 '+55 61 #### ####',
21 '+55 71 #### ####',
22 '+55 81 #### ####',
23 '+55 84 #### ####',
24 '+55 (011) ####-####',
25 '+55 (021) ####-####',
26 '+55 (031) ####-####',
27 '+55 (041) ####-####',
28 '+55 (051) ####-####',
29 '+55 (061) ####-####',
30 '+55 (071) ####-####',
31 '+55 (081) ####-####',
32 '+55 (084) ####-####',
33 '+55 11 ####-####',
34 '+55 21 ####-####',
35 '+55 31 ####-####',
36 '+55 41 ####-####',
37 '+55 51 ### ####',
38 '+55 61 ####-####',
39 '+55 71 ####-####',
40 '+55 81 ####-####',
41 '+55 84 ####-####',
42 '(011) #### ####',
43 '(021) #### ####',
44 '(031) #### ####',
45 '(041) #### ####',
46 '(051) #### ####',
47 '(061) #### ####',
48 '(071) #### ####',
49 '(081) #### ####',
50 '(084) #### ####',
51 '11 #### ####',
52 '21 #### ####',
53 '31 #### ####',
54 '41 #### ####',
55 '51 ### ####',
56 '61 #### ####',
57 '71 #### ####',
58 '81 #### ####',
59 '84 #### ####',
60 '(011) ####-####',
61 '(021) ####-####',
62 '(031) ####-####',
63 '(041) ####-####',
64 '(051) ####-####',
65 '(061) ####-####',
66 '(071) ####-####',
67 '(081) ####-####',
68 '(084) ####-####',
69 '11 ####-####',
70 '21 ####-####',
71 '31 ####-####',
72 '41 ####-####',
73 '51 ### ####',
74 '61 ####-####',
75 '71 ####-####',
76 '81 ####-####',
77 '84 ####-####',
78 )
79
80 msisdn_formats = (
81 '5511########',
82 '5521########',
83 '5531########',
84 '5541########',
85 '5551########',
86 '5561########',
87 '5571########',
88 '5581########',
89 '5584########',
90 '55119########',
91 '55219########',
92 '55319########',
93 '55419########',
94 '55519########',
95 '55619########',
96 '55719########',
97 '55819########',
98 '55849########',
99 )
100
101 cellphone_formats = (
102 '+55 ## 9#### ####',
103 '+55 ## 9 #### ####',
104 '+55 (0##) 9#### ####',
105 '+55 (##) 9#### ####',
106 '+55 (##) 9 #### ####',
107 '+55 ## 9####-####',
108 '+55 ## 9 ####-####',
109 '+55 (0##) 9####-####',
110 '+55 (##) 9####-####',
111 '+55 (##) 9 ####-####',
112 )
113
114 def cellphone_number(self):
115 pattern = self.random_element(self.cellphone_formats)
116 return self.numerify(self.generator.parse(pattern))
117
[end of faker/providers/phone_number/pt_BR/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/faker/providers/phone_number/pt_BR/__init__.py b/faker/providers/phone_number/pt_BR/__init__.py
--- a/faker/providers/phone_number/pt_BR/__init__.py
+++ b/faker/providers/phone_number/pt_BR/__init__.py
@@ -78,15 +78,6 @@
)
msisdn_formats = (
- '5511########',
- '5521########',
- '5531########',
- '5541########',
- '5551########',
- '5561########',
- '5571########',
- '5581########',
- '5584########',
'55119########',
'55219########',
'55319########',
|
{"golden_diff": "diff --git a/faker/providers/phone_number/pt_BR/__init__.py b/faker/providers/phone_number/pt_BR/__init__.py\n--- a/faker/providers/phone_number/pt_BR/__init__.py\n+++ b/faker/providers/phone_number/pt_BR/__init__.py\n@@ -78,15 +78,6 @@\n )\n \n msisdn_formats = (\n- '5511########',\n- '5521########',\n- '5531########',\n- '5541########',\n- '5551########',\n- '5561########',\n- '5571########',\n- '5581########',\n- '5584########',\n '55119########',\n '55219########',\n '55319########',\n", "issue": "invalid pt_BR cellphone numbers being generated\n* Faker version: 4.0.2\r\n* OS: Ubuntu 16.04.6 LTS\r\n\r\nIf I got [MSISDN](https://en.wikipedia.org/wiki/MSISDN) right, and it is possible I did it wrong since I know nothing about telecom, they are just meant to cellphones and not landline phones. In Brazil cellphones started now to have a 9 in front of its digits. This was implemented by @rodrigondec on 941e06693ff8771d715d2f9f37d79a7f1b8fa8f4 but he added `5511########` on `msisdn_formats`.\r\n\r\nIf I got the mobile and not landline thing right all the following lines are generating invalid cellphone numbers:\r\n```\r\n'5511########',\r\n'5521########',\r\n'5531########',\r\n'5541########',\r\n'5551########',\r\n'5561########',\r\n'5571########',\r\n'5581########',\r\n'5584########',\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Instantiate faker: `faker = Faker()`\r\n2. call `len(faker.msisdn)`\r\n\r\n### Expected behavior\r\n\r\nThe length should always return 13 for pt_BR locales.\r\n\r\nFrom ANATEL, the telecom national agency in Brazil: https://www.anatel.gov.br/Portal/exibirPortalPaginaEspecial.do;jsessionid=4CF5489B6943AFF3E2BDA192CC1B5220.site1?org.apache.struts.taglib.html.TOKEN=bbe01b15d1c58d2f938580db5547cb8e&acao=carregaPasta&codItemCanal=1722&pastaSelecionada=2831\r\n> 1. Por que os n\u00fameros dos telefones celulares ter\u00e3o o nono d\u00edgito?\r\n> Os n\u00fameros dos telefones celulares est\u00e3o recebendo mais um d\u00edgito para atender \u00e0 crescente demanda pelo servi\u00e7o m\u00f3vel no Brasil(....)\r\n> 2. O nono d\u00edgito ser\u00e1 adicionado aos n\u00fameros de todo o Brasil?\r\n> O nono d\u00edgito ser\u00e1 implementado em todo o Pa\u00eds at\u00e9 o fim de 2016(...)\r\n\r\nTranslates to:\r\n1. Why the cell phone numbers will have a 9th digit?\r\nThe cell phone numbers are receiving one more digit to address the demand growth of mobile service in Brazil...\r\n2. The 9th digit will be added to all numbers in Brazil?\r\nThe 9th digit will be implemented in the whole country by the end of 2016...\r\n\r\n### Actual behavior\r\n\r\nthe length sometimes is 12\r\n\n", "before_files": [{"content": "from .. 
import Provider as PhoneNumberProvider\n\n\nclass Provider(PhoneNumberProvider):\n formats = (\n '+55 (011) #### ####',\n '+55 (021) #### ####',\n '+55 (031) #### ####',\n '+55 (041) #### ####',\n '+55 (051) #### ####',\n '+55 (061) #### ####',\n '+55 (071) #### ####',\n '+55 (081) #### ####',\n '+55 (084) #### ####',\n '+55 11 #### ####',\n '+55 21 #### ####',\n '+55 31 #### ####',\n '+55 41 #### ####',\n '+55 51 ### ####',\n '+55 61 #### ####',\n '+55 71 #### ####',\n '+55 81 #### ####',\n '+55 84 #### ####',\n '+55 (011) ####-####',\n '+55 (021) ####-####',\n '+55 (031) ####-####',\n '+55 (041) ####-####',\n '+55 (051) ####-####',\n '+55 (061) ####-####',\n '+55 (071) ####-####',\n '+55 (081) ####-####',\n '+55 (084) ####-####',\n '+55 11 ####-####',\n '+55 21 ####-####',\n '+55 31 ####-####',\n '+55 41 ####-####',\n '+55 51 ### ####',\n '+55 61 ####-####',\n '+55 71 ####-####',\n '+55 81 ####-####',\n '+55 84 ####-####',\n '(011) #### ####',\n '(021) #### ####',\n '(031) #### ####',\n '(041) #### ####',\n '(051) #### ####',\n '(061) #### ####',\n '(071) #### ####',\n '(081) #### ####',\n '(084) #### ####',\n '11 #### ####',\n '21 #### ####',\n '31 #### ####',\n '41 #### ####',\n '51 ### ####',\n '61 #### ####',\n '71 #### ####',\n '81 #### ####',\n '84 #### ####',\n '(011) ####-####',\n '(021) ####-####',\n '(031) ####-####',\n '(041) ####-####',\n '(051) ####-####',\n '(061) ####-####',\n '(071) ####-####',\n '(081) ####-####',\n '(084) ####-####',\n '11 ####-####',\n '21 ####-####',\n '31 ####-####',\n '41 ####-####',\n '51 ### ####',\n '61 ####-####',\n '71 ####-####',\n '81 ####-####',\n '84 ####-####',\n )\n\n msisdn_formats = (\n '5511########',\n '5521########',\n '5531########',\n '5541########',\n '5551########',\n '5561########',\n '5571########',\n '5581########',\n '5584########',\n '55119########',\n '55219########',\n '55319########',\n '55419########',\n '55519########',\n '55619########',\n '55719########',\n '55819########',\n '55849########',\n )\n\n cellphone_formats = (\n '+55 ## 9#### ####',\n '+55 ## 9 #### ####',\n '+55 (0##) 9#### ####',\n '+55 (##) 9#### ####',\n '+55 (##) 9 #### ####',\n '+55 ## 9####-####',\n '+55 ## 9 ####-####',\n '+55 (0##) 9####-####',\n '+55 (##) 9####-####',\n '+55 (##) 9 ####-####',\n )\n\n def cellphone_number(self):\n pattern = self.random_element(self.cellphone_formats)\n return self.numerify(self.generator.parse(pattern))\n", "path": "faker/providers/phone_number/pt_BR/__init__.py"}]}
| 2,473 | 195 |
gh_patches_debug_23607
|
rasdani/github-patches
|
git_diff
|
vaexio__vaex-217
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Pandas dependency
We now depend on Pandas:
https://github.com/vaexio/vaex/blob/255ccbc192d54c619a273de21a05f919da8ffadf/packages/vaex-core/vaex/formatting.py
Introduced in https://github.com/vaexio/vaex/pull/192
We should not depend on pandas; it is not a dependency of vaex-core and should not become one. We might also grow too large to run on AWS Lambda.
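
For reference, the same formatting can be done with NumPy and the standard library alone — a rough sketch of the kind of replacement meant here (the exact output strings are an assumption and need not match pandas byte for byte):

```python
# Sketch: pandas-free formatting of NumPy temporal scalars.
import datetime
import numpy as np


def format_datetime64(value):
    if np.isnat(value):
        return 'NaT'
    # '2009-01-01T12:00:00' -> '2009-01-01 12:00:00'
    return ' '.join(str(value).split('T'))


def format_timedelta64(value):
    if np.isnat(value):
        return 'NaT'
    seconds = value / np.timedelta64(1, 's')
    return str(datetime.timedelta(seconds=seconds))
```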
</issue>
<code>
[start of packages/vaex-core/vaex/formatting.py]
1 import numpy as np
2 import numbers
3 import six
4 import pandas as pd
5
6
7 MAX_LENGTH = 50
8
9
10 def _format_value(value):
11 if isinstance(value, six.string_types):
12 value = str(value)
13 elif isinstance(value, bytes):
14 value = repr(value)
15 elif isinstance(value, np.ma.core.MaskedConstant):
16 value = str(value)
17 if isinstance(value, np.datetime64):
18 value = str(pd.to_datetime(value))
19 if isinstance(value, np.timedelta64):
20 value = str(pd.to_timedelta(value))
21 elif not isinstance(value, numbers.Number):
22 value = str(value)
23 if isinstance(value, float):
24 value = repr(value)
25 if isinstance(value, (str, bytes)):
26 if len(value) > MAX_LENGTH:
27 value = repr(value[:MAX_LENGTH-3])[:-1] + '...'
28 return value
29
[end of packages/vaex-core/vaex/formatting.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/packages/vaex-core/vaex/formatting.py b/packages/vaex-core/vaex/formatting.py
--- a/packages/vaex-core/vaex/formatting.py
+++ b/packages/vaex-core/vaex/formatting.py
@@ -1,7 +1,7 @@
import numpy as np
import numbers
import six
-import pandas as pd
+import datetime
MAX_LENGTH = 50
@@ -15,9 +15,24 @@
elif isinstance(value, np.ma.core.MaskedConstant):
value = str(value)
if isinstance(value, np.datetime64):
- value = str(pd.to_datetime(value))
+ if np.isnat(value):
+ value = 'NaT'
+ else:
+ value = ' '.join(str(value).split('T'))
if isinstance(value, np.timedelta64):
- value = str(pd.to_timedelta(value))
+ if np.isnat(value):
+ value = 'NaT'
+ else:
+ tmp = datetime.timedelta(seconds=value / np.timedelta64(1, 's'))
+ ms = tmp.microseconds
+ s = np.mod(tmp.seconds, 60)
+ m = np.mod(tmp.seconds//60, 60)
+ h = tmp.seconds // 3600
+ d = tmp.days
+ if ms:
+ value = str('%i days %+02i:%02i:%02i.%i' % (d,h,m,s,ms))
+ else:
+ value = str('%i days %+02i:%02i:%02i' % (d,h,m,s))
elif not isinstance(value, numbers.Number):
value = str(value)
if isinstance(value, float):
|
{"golden_diff": "diff --git a/packages/vaex-core/vaex/formatting.py b/packages/vaex-core/vaex/formatting.py\n--- a/packages/vaex-core/vaex/formatting.py\n+++ b/packages/vaex-core/vaex/formatting.py\n@@ -1,7 +1,7 @@\n import numpy as np\n import numbers\n import six\n-import pandas as pd\n+import datetime\n \n \n MAX_LENGTH = 50\n@@ -15,9 +15,24 @@\n elif isinstance(value, np.ma.core.MaskedConstant):\n value = str(value)\n if isinstance(value, np.datetime64):\n- value = str(pd.to_datetime(value))\n+ if np.isnat(value):\n+ value = 'NaT'\n+ else:\n+ value = ' '.join(str(value).split('T'))\n if isinstance(value, np.timedelta64):\n- value = str(pd.to_timedelta(value))\n+ if np.isnat(value):\n+ value = 'NaT'\n+ else:\n+ tmp = datetime.timedelta(seconds=value / np.timedelta64(1, 's'))\n+ ms = tmp.microseconds\n+ s = np.mod(tmp.seconds, 60)\n+ m = np.mod(tmp.seconds//60, 60)\n+ h = tmp.seconds // 3600\n+ d = tmp.days\n+ if ms:\n+ value = str('%i days %+02i:%02i:%02i.%i' % (d,h,m,s,ms))\n+ else:\n+ value = str('%i days %+02i:%02i:%02i' % (d,h,m,s))\n elif not isinstance(value, numbers.Number):\n value = str(value)\n if isinstance(value, float):\n", "issue": "Pandas dependency\nWe now depends on Pandas:\r\nhttps://github.com/vaexio/vaex/blob/255ccbc192d54c619a273de21a05f919da8ffadf/packages/vaex-core/vaex/formatting.py\r\n\r\nIntroduced in https://github.com/vaexio/vaex/pull/192\r\n\r\nWe should not depend on pandas, it is not a dependency of vaex-core and should not become, we might also grow to large to run on AWS Lambda.\n", "before_files": [{"content": "import numpy as np\nimport numbers\nimport six\nimport pandas as pd\n\n\nMAX_LENGTH = 50\n\n\ndef _format_value(value):\n if isinstance(value, six.string_types):\n value = str(value)\n elif isinstance(value, bytes):\n value = repr(value)\n elif isinstance(value, np.ma.core.MaskedConstant):\n value = str(value)\n if isinstance(value, np.datetime64):\n value = str(pd.to_datetime(value))\n if isinstance(value, np.timedelta64):\n value = str(pd.to_timedelta(value))\n elif not isinstance(value, numbers.Number):\n value = str(value)\n if isinstance(value, float):\n value = repr(value)\n if isinstance(value, (str, bytes)):\n if len(value) > MAX_LENGTH:\n value = repr(value[:MAX_LENGTH-3])[:-1] + '...'\n return value\n", "path": "packages/vaex-core/vaex/formatting.py"}]}
| 904 | 387 |
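
A standalone sketch of the pandas-free formatting technique the golden diff above introduces: `np.datetime64` is rendered by splitting the ISO `T` separator, and `np.timedelta64` is decomposed via `datetime.timedelta`. The function names are illustrative, this is not the vaex module itself, and the fractional-microsecond branch from the patch is omitted for brevity:

```python
import datetime
import numpy as np


def format_datetime64(value):
    # NaT has no meaningful string form beyond the sentinel itself.
    if np.isnat(value):
        return "NaT"
    return " ".join(str(value).split("T"))


def format_timedelta64(value):
    if np.isnat(value):
        return "NaT"
    # Convert to seconds, then let datetime.timedelta do the day/second split.
    td = datetime.timedelta(seconds=value / np.timedelta64(1, "s"))
    s = np.mod(td.seconds, 60)
    m = np.mod(td.seconds // 60, 60)
    h = td.seconds // 3600
    return "%i days %+02i:%02i:%02i" % (td.days, h, m, s)


print(format_datetime64(np.datetime64("2019-05-01T12:30:00")))  # -> "2019-05-01 12:30:00"
print(format_timedelta64(np.timedelta64(90061, "s")))           # -> "1 days +1:01:01"
```
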
gh_patches_debug_176
|
rasdani/github-patches
|
git_diff
|
searxng__searxng-471
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[SIMPLE THEME]: Reddit search engine breaks Simple Theme "Image" tab Style.
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Powered by searxng - 1.0.0-999-e4025cd1
**How did you install SearXNG?**
SearXNG docker image with docker-compose.
**What happened?**
<!-- A clear and concise description of what the bug is. -->
If you turn on reddit search engine from settings.yml it gets enabled for several categories including "Images." However, things get a little funny with the images tab as far as the formatting goes. As you can see in the image below, the results don't encompass the entire canvas but only a portion like they do with "General" tab. I believe this might be due to reddit returning search results vs images when you're in the image tab (image 2 below). You'll see these search results if you keep scrolling down.
**How To Reproduce**
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
1. Make sure reddit search engine is turned on for images category in settings or globally via settings.yml.
2. Search for something and go to images tab.
3. Notice the behavior where images only take up the left-hand side of the canvas.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Images should use the entire canvas like they do when reddit search engine is turned off (image 3) and search should only include images or gifs etc.
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->



**Alternatives**
Remove Reddit search engine from images category by default so it doesn't get enabled from settings.yml.
[SIMPLE THEME]: Reddit search engine breaks Simple Theme "Image" tab Style.
<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->
**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**
Powered by searxng - 1.0.0-999-e4025cd1
**How did you install SearXNG?**
SearXNG docker image with docker-compose.
**What happened?**
<!-- A clear and concise description of what the bug is. -->
If you turn on reddit search engine from settings.yml it gets enabled for several categories including "Images." However, things get a little funny with the images tab as far as the formatting goes. As you can see in the image below, the results don't encompass the entire canvas but only a portion like they do with "General" tab. I believe this might be due to reddit returning search results vs images when you're in the image tab (image 2 below). You'll see these search results if you keep scrolling down.
**How To Reproduce**
<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->
1. Make sure reddit search engine is turned on for images category in settings or globally via settings.yml.
2. Search for something and go to images tab.
3. Notice the behavior where images only take up the left-hand side of the canvas.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
Images should use the entire canvas like they do when reddit search engine is turned off (image 3) and search should only include images or gifs etc.
**Screenshots & Logs**
<!-- If applicable, add screenshots, logs to help explain your problem. -->



**Alternatives**
Remove Reddit search engine from images category by default so it doesn't get enabled from settings.yml.
</issue>
<code>
[start of searx/engines/reddit.py]
1 # SPDX-License-Identifier: AGPL-3.0-or-later
2 """
3 Reddit
4 """
5
6 import json
7 from datetime import datetime
8 from urllib.parse import urlencode, urljoin, urlparse
9
10 # about
11 about = {
12 "website": 'https://www.reddit.com/',
13 "wikidata_id": 'Q1136',
14 "official_api_documentation": 'https://www.reddit.com/dev/api',
15 "use_official_api": True,
16 "require_api_key": False,
17 "results": 'JSON',
18 }
19
20 # engine dependent config
21 categories = ['general', 'images', 'news', 'social media']
22 page_size = 25
23
24 # search-url
25 base_url = 'https://www.reddit.com/'
26 search_url = base_url + 'search.json?{query}'
27
28
29 # do search-request
30 def request(query, params):
31 query = urlencode({'q': query, 'limit': page_size})
32 params['url'] = search_url.format(query=query)
33
34 return params
35
36
37 # get response from search-request
38 def response(resp):
39 img_results = []
40 text_results = []
41
42 search_results = json.loads(resp.text)
43
44 # return empty array if there are no results
45 if 'data' not in search_results:
46 return []
47
48 posts = search_results.get('data', {}).get('children', [])
49
50 # process results
51 for post in posts:
52 data = post['data']
53
54 # extract post information
55 params = {
56 'url': urljoin(base_url, data['permalink']),
57 'title': data['title']
58 }
59
60 # if thumbnail field contains a valid URL, we need to change template
61 thumbnail = data['thumbnail']
62 url_info = urlparse(thumbnail)
63 # netloc & path
64 if url_info[1] != '' and url_info[2] != '':
65 params['img_src'] = data['url']
66 params['thumbnail_src'] = thumbnail
67 params['template'] = 'images.html'
68 img_results.append(params)
69 else:
70 created = datetime.fromtimestamp(data['created_utc'])
71 content = data['selftext']
72 if len(content) > 500:
73 content = content[:500] + '...'
74 params['content'] = content
75 params['publishedDate'] = created
76 text_results.append(params)
77
78 # show images first and text results second
79 return img_results + text_results
80
[end of searx/engines/reddit.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/searx/engines/reddit.py b/searx/engines/reddit.py
--- a/searx/engines/reddit.py
+++ b/searx/engines/reddit.py
@@ -18,7 +18,7 @@
}
# engine dependent config
-categories = ['general', 'images', 'news', 'social media']
+categories = ['social media']
page_size = 25
# search-url
|
{"golden_diff": "diff --git a/searx/engines/reddit.py b/searx/engines/reddit.py\n--- a/searx/engines/reddit.py\n+++ b/searx/engines/reddit.py\n@@ -18,7 +18,7 @@\n }\n \n # engine dependent config\n-categories = ['general', 'images', 'news', 'social media']\n+categories = ['social media']\n page_size = 25\n \n # search-url\n", "issue": "[SIMPLE THEME]: Reddit search engine breaks Simple Theme \"Image\" tab Style.\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nPowered by searxng - 1.0.0-999-e4025cd1\r\n\r\n**How did you install SearXNG?**\r\nSearXNG docker image with docker-compose.\r\n\r\n**What happened?**\r\n<!-- A clear and concise description of what the bug is. -->\r\nIf you turn on reddit search engine from settings.yml it gets enabled for several categories including \"Images.\" However, things get a little funny with the images tab as far as the formatting goes. As you can see in the image below, the results don't encompass the entire canvas but only a portion like they do with \"General\" tab. I believe this might be due to reddit returning search results vs images when you're in the image tab (image 2 below). You'll see these search results if you keep scrolling down.\r\n\r\n**How To Reproduce**\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n1. Make sure reddit search engine is turned on for images category in settings or globally via settings.yml.\r\n2. Search for something and go to images tab.\r\n3. Notice the behavior where images only take up the left-hand side of the canvas.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nImages should use the entire canvas like they do when reddit search engine is turned off (image 3) and search should only include images or gifs etc.\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n\r\n\r\n\r\n**Alternatives**\r\nRemove Reddit search engine from images category by default so it doesn't get enabled from settings.yml.\n[SIMPLE THEME]: Reddit search engine breaks Simple Theme \"Image\" tab Style.\n<!-- PLEASE FILL THESE FIELDS, IT REALLY HELPS THE MAINTAINERS OF SearXNG -->\r\n\r\n**Version of SearXNG, commit number if you are using on master branch and stipulate if you forked SearXNG**\r\nPowered by searxng - 1.0.0-999-e4025cd1\r\n\r\n**How did you install SearXNG?**\r\nSearXNG docker image with docker-compose.\r\n\r\n**What happened?**\r\n<!-- A clear and concise description of what the bug is. -->\r\nIf you turn on reddit search engine from settings.yml it gets enabled for several categories including \"Images.\" However, things get a little funny with the images tab as far as the formatting goes. As you can see in the image below, the results don't encompass the entire canvas but only a portion like they do with \"General\" tab. I believe this might be due to reddit returning search results vs images when you're in the image tab (image 2 below). You'll see these search results if you keep scrolling down.\r\n\r\n**How To Reproduce**\r\n<!-- How can we reproduce this issue? (as minimally and as precisely as possible) -->\r\n1. Make sure reddit search engine is turned on for images category in settings or globally via settings.yml.\r\n2. Search for something and go to images tab.\r\n3. 
Notice the behavior where images only take up the left-hand side of the canvas.\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\nImages should use the entire canvas like they do when reddit search engine is turned off (image 3) and search should only include images or gifs etc.\r\n\r\n**Screenshots & Logs**\r\n<!-- If applicable, add screenshots, logs to help explain your problem. -->\r\n\r\n\r\n\r\n\r\n**Alternatives**\r\nRemove Reddit search engine from images category by default so it doesn't get enabled from settings.yml.\n", "before_files": [{"content": "# SPDX-License-Identifier: AGPL-3.0-or-later\n\"\"\"\n Reddit\n\"\"\"\n\nimport json\nfrom datetime import datetime\nfrom urllib.parse import urlencode, urljoin, urlparse\n\n# about\nabout = {\n \"website\": 'https://www.reddit.com/',\n \"wikidata_id\": 'Q1136',\n \"official_api_documentation\": 'https://www.reddit.com/dev/api',\n \"use_official_api\": True,\n \"require_api_key\": False,\n \"results\": 'JSON',\n}\n\n# engine dependent config\ncategories = ['general', 'images', 'news', 'social media']\npage_size = 25\n\n# search-url\nbase_url = 'https://www.reddit.com/'\nsearch_url = base_url + 'search.json?{query}'\n\n\n# do search-request\ndef request(query, params):\n query = urlencode({'q': query, 'limit': page_size})\n params['url'] = search_url.format(query=query)\n\n return params\n\n\n# get response from search-request\ndef response(resp):\n img_results = []\n text_results = []\n\n search_results = json.loads(resp.text)\n\n # return empty array if there are no results\n if 'data' not in search_results:\n return []\n\n posts = search_results.get('data', {}).get('children', [])\n\n # process results\n for post in posts:\n data = post['data']\n\n # extract post information\n params = {\n 'url': urljoin(base_url, data['permalink']),\n 'title': data['title']\n }\n\n # if thumbnail field contains a valid URL, we need to change template\n thumbnail = data['thumbnail']\n url_info = urlparse(thumbnail)\n # netloc & path\n if url_info[1] != '' and url_info[2] != '':\n params['img_src'] = data['url']\n params['thumbnail_src'] = thumbnail\n params['template'] = 'images.html'\n img_results.append(params)\n else:\n created = datetime.fromtimestamp(data['created_utc'])\n content = data['selftext']\n if len(content) > 500:\n content = content[:500] + '...'\n params['content'] = content\n params['publishedDate'] = created\n text_results.append(params)\n\n # show images first and text results second\n return img_results + text_results\n", "path": "searx/engines/reddit.py"}]}
| 2,389 | 100 |
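
For context on why the patch narrows `categories` to `['social media']`: the `response()` parser above only emits an image-template result when a post's thumbnail parses to a URL with both a netloc and a path, so an images-tab query still receives a mix of image and text results. A small standalone illustration of that check (not part of the plugin itself):

```python
from urllib.parse import urlparse


def is_image_post(thumbnail):
    # Mirrors the url_info[1] (netloc) and url_info[2] (path) test in response().
    url_info = urlparse(thumbnail)
    return url_info.netloc != "" and url_info.path != ""


print(is_image_post("https://b.thumbs.redditmedia.com/abc.jpg"))  # True  -> images.html template
print(is_image_post("self"))                                      # False -> plain text result
print(is_image_post("default"))                                   # False -> plain text result
```
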
gh_patches_debug_23234
|
rasdani/github-patches
|
git_diff
|
streamlink__streamlink-1394
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
July 17 Douyu.com error
0.7.0
streamlink https://www.douyu.com/17732 source -o "PATH & FILENAME"
[cli][info] Found matching plugin douyutv for URL https://www.douyu.com/17732
error: Unable to open URL: https://www.douyu.com/lapi/live/getPlay/17732 (500 Server Error: Internal Server Error for url: https://www.douyu.com/lapi/live/getPlay/17732)
@fozzysec @steven7851
</issue>
<code>
[start of src/streamlink/plugins/douyutv.py]
1 import re
2 import time
3 import hashlib
4
5 from requests.adapters import HTTPAdapter
6
7 from streamlink.plugin import Plugin
8 from streamlink.plugin.api import http, validate, useragents
9 from streamlink.stream import HTTPStream, HLSStream, RTMPStream
10
11 API_URL = "https://capi.douyucdn.cn/api/v1/{0}&auth={1}"
12 VAPI_URL = "https://vmobile.douyu.com/video/getInfo?vid={0}"
13 API_SECRET = "Y237pxTx2In5ayGz"
14 SHOW_STATUS_ONLINE = 1
15 SHOW_STATUS_OFFLINE = 2
16 STREAM_WEIGHTS = {
17 "low": 540,
18 "medium": 720,
19 "source": 1080
20 }
21
22 _url_re = re.compile(r"""
23 http(s)?://
24 (?:
25 (?P<subdomain>.+)
26 \.
27 )?
28 douyu.com/
29 (?:
30 show/(?P<vid>[^/&?]+)|
31 (?P<channel>[^/&?]+)
32 )
33 """, re.VERBOSE)
34
35 _room_id_re = re.compile(r'"room_id\\*"\s*:\s*(\d+),')
36 _room_id_alt_re = re.compile(r'data-onlineid=(\d+)')
37
38 _room_id_schema = validate.Schema(
39 validate.all(
40 validate.transform(_room_id_re.search),
41 validate.any(
42 None,
43 validate.all(
44 validate.get(1),
45 validate.transform(int)
46 )
47 )
48 )
49 )
50
51 _room_id_alt_schema = validate.Schema(
52 validate.all(
53 validate.transform(_room_id_alt_re.search),
54 validate.any(
55 None,
56 validate.all(
57 validate.get(1),
58 validate.transform(int)
59 )
60 )
61 )
62 )
63
64 _room_schema = validate.Schema(
65 {
66 "data": validate.any(None, {
67 "show_status": validate.all(
68 validate.text,
69 validate.transform(int)
70 ),
71 "rtmp_url": validate.text,
72 "rtmp_live": validate.text,
73 "hls_url": validate.text,
74 "rtmp_multi_bitrate": validate.all(
75 validate.any([], {
76 validate.text: validate.text
77 }),
78 validate.transform(dict)
79 )
80 })
81 },
82 validate.get("data")
83 )
84
85 _vapi_schema = validate.Schema(
86 {
87 "data": validate.any(None, {
88 "video_url": validate.text
89 })
90 },
91 validate.get("data")
92 )
93
94
95 class Douyutv(Plugin):
96 @classmethod
97 def can_handle_url(cls, url):
98 return _url_re.match(url)
99
100 @classmethod
101 def stream_weight(cls, stream):
102 if stream in STREAM_WEIGHTS:
103 return STREAM_WEIGHTS[stream], "douyutv"
104 return Plugin.stream_weight(stream)
105
106 def _get_streams(self):
107 match = _url_re.match(self.url)
108 subdomain = match.group("subdomain")
109
110 http.verify = False
111 http.mount('https://', HTTPAdapter(max_retries=99))
112
113 if subdomain == 'v':
114 vid = match.group("vid")
115 headers = {
116 "User-Agent": useragents.ANDROID,
117 "X-Requested-With": "XMLHttpRequest"
118 }
119 res = http.get(VAPI_URL.format(vid), headers=headers)
120 room = http.json(res, schema=_vapi_schema)
121 yield "source", HLSStream(self.session, room["video_url"])
122 return
123
124 channel = match.group("channel")
125 try:
126 channel = int(channel)
127 except ValueError:
128 channel = http.get(self.url, schema=_room_id_schema)
129 if channel is None:
130 channel = http.get(self.url, schema=_room_id_alt_schema)
131
132 http.headers.update({'User-Agent': useragents.ANDROID})
133 cdns = ["ws", "tct", "ws2", "dl"]
134 ts = int(time.time())
135 suffix = "room/{0}?aid=androidhd1&cdn={1}&client_sys=android&time={2}".format(channel, cdns[0], ts)
136 sign = hashlib.md5((suffix + API_SECRET).encode()).hexdigest()
137
138 res = http.get(API_URL.format(suffix, sign))
139 room = http.json(res, schema=_room_schema)
140 if not room:
141 self.logger.info("Not a valid room url.")
142 return
143
144 if room["show_status"] != SHOW_STATUS_ONLINE:
145 self.logger.info("Stream currently unavailable.")
146 return
147
148 url = room["hls_url"]
149 yield "source", HLSStream(self.session, url)
150
151 url = "{room[rtmp_url]}/{room[rtmp_live]}".format(room=room)
152 if 'rtmp:' in url:
153 stream = RTMPStream(self.session, {
154 "rtmp": url,
155 "live": True
156 })
157 yield "source", stream
158 else:
159 yield "source", HTTPStream(self.session, url)
160
161 multi_streams = {
162 "middle": "low",
163 "middle2": "medium"
164 }
165 for name, url in room["rtmp_multi_bitrate"].items():
166 url = "{room[rtmp_url]}/{url}".format(room=room, url=url)
167 name = multi_streams[name]
168 if 'rtmp:' in url:
169 stream = RTMPStream(self.session, {
170 "rtmp": url,
171 "live": True
172 })
173 yield name, stream
174 else:
175 yield name, HTTPStream(self.session, url)
176
177
178 __plugin__ = Douyutv
179
[end of src/streamlink/plugins/douyutv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/streamlink/plugins/douyutv.py b/src/streamlink/plugins/douyutv.py
--- a/src/streamlink/plugins/douyutv.py
+++ b/src/streamlink/plugins/douyutv.py
@@ -10,7 +10,7 @@
API_URL = "https://capi.douyucdn.cn/api/v1/{0}&auth={1}"
VAPI_URL = "https://vmobile.douyu.com/video/getInfo?vid={0}"
-API_SECRET = "Y237pxTx2In5ayGz"
+API_SECRET = "zNzMV1y4EMxOHS6I5WKm"
SHOW_STATUS_ONLINE = 1
SHOW_STATUS_OFFLINE = 2
STREAM_WEIGHTS = {
@@ -129,10 +129,10 @@
if channel is None:
channel = http.get(self.url, schema=_room_id_alt_schema)
- http.headers.update({'User-Agent': useragents.ANDROID})
+ http.headers.update({'User-Agent': useragents.WINDOWS_PHONE_8})
cdns = ["ws", "tct", "ws2", "dl"]
ts = int(time.time())
- suffix = "room/{0}?aid=androidhd1&cdn={1}&client_sys=android&time={2}".format(channel, cdns[0], ts)
+ suffix = "room/{0}?aid=wp&cdn={1}&client_sys=wp&time={2}".format(channel, cdns[0], ts)
sign = hashlib.md5((suffix + API_SECRET).encode()).hexdigest()
res = http.get(API_URL.format(suffix, sign))
|
{"golden_diff": "diff --git a/src/streamlink/plugins/douyutv.py b/src/streamlink/plugins/douyutv.py\n--- a/src/streamlink/plugins/douyutv.py\n+++ b/src/streamlink/plugins/douyutv.py\n@@ -10,7 +10,7 @@\n \n API_URL = \"https://capi.douyucdn.cn/api/v1/{0}&auth={1}\"\n VAPI_URL = \"https://vmobile.douyu.com/video/getInfo?vid={0}\"\n-API_SECRET = \"Y237pxTx2In5ayGz\"\n+API_SECRET = \"zNzMV1y4EMxOHS6I5WKm\"\n SHOW_STATUS_ONLINE = 1\n SHOW_STATUS_OFFLINE = 2\n STREAM_WEIGHTS = {\n@@ -129,10 +129,10 @@\n if channel is None:\n channel = http.get(self.url, schema=_room_id_alt_schema)\n \n- http.headers.update({'User-Agent': useragents.ANDROID})\n+ http.headers.update({'User-Agent': useragents.WINDOWS_PHONE_8})\n cdns = [\"ws\", \"tct\", \"ws2\", \"dl\"]\n ts = int(time.time())\n- suffix = \"room/{0}?aid=androidhd1&cdn={1}&client_sys=android&time={2}\".format(channel, cdns[0], ts)\n+ suffix = \"room/{0}?aid=wp&cdn={1}&client_sys=wp&time={2}\".format(channel, cdns[0], ts)\n sign = hashlib.md5((suffix + API_SECRET).encode()).hexdigest()\n \n res = http.get(API_URL.format(suffix, sign))\n", "issue": "July 17 Douyu.com error\n0.7.0\r\nstreamlink https://www.douyu.com/17732 source -o \"PATH & FILENAME\"\r\n[cli][info] Found matching plugin douyutv for URL https://www.douyu.com/17732\r\nerror: Unable to open URL: https://www.douyu.com/lapi/live/getPlay/17732 (500 Server Error: Internal Server Error for url: https://www.douyu.com/lapi/live/getPlay/17732)\r\n@fozzysec @steven7851\n", "before_files": [{"content": "import re\nimport time\nimport hashlib\n\nfrom requests.adapters import HTTPAdapter\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import http, validate, useragents\nfrom streamlink.stream import HTTPStream, HLSStream, RTMPStream\n\nAPI_URL = \"https://capi.douyucdn.cn/api/v1/{0}&auth={1}\"\nVAPI_URL = \"https://vmobile.douyu.com/video/getInfo?vid={0}\"\nAPI_SECRET = \"Y237pxTx2In5ayGz\"\nSHOW_STATUS_ONLINE = 1\nSHOW_STATUS_OFFLINE = 2\nSTREAM_WEIGHTS = {\n \"low\": 540,\n \"medium\": 720,\n \"source\": 1080\n }\n\n_url_re = re.compile(r\"\"\"\n http(s)?://\n (?:\n (?P<subdomain>.+)\n \\.\n )?\n douyu.com/\n (?:\n show/(?P<vid>[^/&?]+)|\n (?P<channel>[^/&?]+)\n )\n\"\"\", re.VERBOSE)\n\n_room_id_re = re.compile(r'\"room_id\\\\*\"\\s*:\\s*(\\d+),')\n_room_id_alt_re = re.compile(r'data-onlineid=(\\d+)')\n\n_room_id_schema = validate.Schema(\n validate.all(\n validate.transform(_room_id_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(int)\n )\n )\n )\n)\n\n_room_id_alt_schema = validate.Schema(\n validate.all(\n validate.transform(_room_id_alt_re.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.transform(int)\n )\n )\n )\n)\n\n_room_schema = validate.Schema(\n {\n \"data\": validate.any(None, {\n \"show_status\": validate.all(\n validate.text,\n validate.transform(int)\n ),\n \"rtmp_url\": validate.text,\n \"rtmp_live\": validate.text,\n \"hls_url\": validate.text,\n \"rtmp_multi_bitrate\": validate.all(\n validate.any([], {\n validate.text: validate.text\n }),\n validate.transform(dict)\n )\n })\n },\n validate.get(\"data\")\n)\n\n_vapi_schema = validate.Schema(\n {\n \"data\": validate.any(None, {\n \"video_url\": validate.text\n })\n },\n validate.get(\"data\")\n)\n\n\nclass Douyutv(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n @classmethod\n def stream_weight(cls, stream):\n if stream in STREAM_WEIGHTS:\n return STREAM_WEIGHTS[stream], \"douyutv\"\n return 
Plugin.stream_weight(stream)\n\n def _get_streams(self):\n match = _url_re.match(self.url)\n subdomain = match.group(\"subdomain\")\n\n http.verify = False\n http.mount('https://', HTTPAdapter(max_retries=99))\n\n if subdomain == 'v':\n vid = match.group(\"vid\")\n headers = {\n \"User-Agent\": useragents.ANDROID,\n \"X-Requested-With\": \"XMLHttpRequest\"\n }\n res = http.get(VAPI_URL.format(vid), headers=headers)\n room = http.json(res, schema=_vapi_schema)\n yield \"source\", HLSStream(self.session, room[\"video_url\"])\n return\n\n channel = match.group(\"channel\")\n try:\n channel = int(channel)\n except ValueError:\n channel = http.get(self.url, schema=_room_id_schema)\n if channel is None:\n channel = http.get(self.url, schema=_room_id_alt_schema)\n\n http.headers.update({'User-Agent': useragents.ANDROID})\n cdns = [\"ws\", \"tct\", \"ws2\", \"dl\"]\n ts = int(time.time())\n suffix = \"room/{0}?aid=androidhd1&cdn={1}&client_sys=android&time={2}\".format(channel, cdns[0], ts)\n sign = hashlib.md5((suffix + API_SECRET).encode()).hexdigest()\n\n res = http.get(API_URL.format(suffix, sign))\n room = http.json(res, schema=_room_schema)\n if not room:\n self.logger.info(\"Not a valid room url.\")\n return\n\n if room[\"show_status\"] != SHOW_STATUS_ONLINE:\n self.logger.info(\"Stream currently unavailable.\")\n return\n\n url = room[\"hls_url\"]\n yield \"source\", HLSStream(self.session, url)\n\n url = \"{room[rtmp_url]}/{room[rtmp_live]}\".format(room=room)\n if 'rtmp:' in url:\n stream = RTMPStream(self.session, {\n \"rtmp\": url,\n \"live\": True\n })\n yield \"source\", stream\n else:\n yield \"source\", HTTPStream(self.session, url)\n\n multi_streams = {\n \"middle\": \"low\",\n \"middle2\": \"medium\"\n }\n for name, url in room[\"rtmp_multi_bitrate\"].items():\n url = \"{room[rtmp_url]}/{url}\".format(room=room, url=url)\n name = multi_streams[name]\n if 'rtmp:' in url:\n stream = RTMPStream(self.session, {\n \"rtmp\": url,\n \"live\": True\n })\n yield name, stream\n else:\n yield name, HTTPStream(self.session, url)\n\n\n__plugin__ = Douyutv\n", "path": "src/streamlink/plugins/douyutv.py"}]}
| 2,314 | 369 |
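
A standalone sketch of the request-signing scheme the plugin and the fix above rely on: the query suffix is concatenated with an API secret and MD5-hashed, and the digest is appended as the `auth` parameter. The `aid=wp`/`client_sys=wp` suffix mirrors the golden diff; the secret below is a placeholder, not a working credential:

```python
import hashlib
import time

API_URL = "https://capi.douyucdn.cn/api/v1/{0}&auth={1}"
API_SECRET = "<secret from the client build>"  # placeholder, not a real value


def build_signed_url(channel, cdn="ws"):
    ts = int(time.time())
    # Same suffix shape as the patched plugin code.
    suffix = "room/{0}?aid=wp&cdn={1}&client_sys=wp&time={2}".format(channel, cdn, ts)
    # Sign by hashing suffix + secret and send the digest as `auth`.
    sign = hashlib.md5((suffix + API_SECRET).encode()).hexdigest()
    return API_URL.format(suffix, sign)


print(build_signed_url(17732))
```
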
gh_patches_debug_8623
|
rasdani/github-patches
|
git_diff
|
archlinux__archinstall-262
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Awesome profile installation failed with no such file or directory on xdg-mime


Resolve #261 and related issues
Closes #262.
🚨 PR Guidelines:
# New features *(v2.2.0)*
Merge new features in to `torxed-v2.2.0`.<br>
This branch is designated for potential breaking changes, added complexity and new functionality.
# Bug fixes *(v2.1.4)*
Merge against `master` for bug fixes and anything that improves stability and quality of life.<br>
This excludes:
* New functionality
* Added complexity
* Breaking changes
Any changes to `master` automatically gets pulled in to `torxed-v2.2.0` to avoid merge hell.
# Describe your PR
If the changes has been discussed in an Issue, please tag it so we can backtrace from the Issue later on.<br>
If the PR is larger than ~20 lines, please describe it here unless described in an issue.
# Testing
Any new feature or stability improvement should be tested if possible.
Please follow the test instructions at the bottom of the README.
*These PR guidelines will change after 2021-05-01, which is when `v2.1.4` gets onto the new ISO*
</issue>
<code>
[start of profiles/desktop.py]
1 # A desktop environment selector.
2
3 import archinstall, os
4
5 is_top_level_profile = True
6
7 # New way of defining packages for a profile, which is iterable and can be used out side
8 # of the profile to get a list of "what packages will be installed".
9 __packages__ = ['nano', 'vim', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools']
10
11 def _prep_function(*args, **kwargs):
12 """
13 Magic function called by the importing installer
14 before continuing any further. It also avoids executing any
15 other code in this stage. So it's a safe way to ask the user
16 for more input before any other installer steps start.
17 """
18
19 supported_desktops = ['gnome', 'kde', 'awesome', 'sway', 'cinnamon', 'xfce4', 'lxqt', 'i3', 'budgie']
20 desktop = archinstall.generic_select(supported_desktops, 'Select your desired desktop environment: ')
21
22 # Temporarily store the selected desktop profile
23 # in a session-safe location, since this module will get reloaded
24 # the next time it gets executed.
25 archinstall.storage['_desktop_profile'] = desktop
26
27 profile = archinstall.Profile(None, desktop)
28 # Loading the instructions with a custom namespace, ensures that a __name__ comparison is never triggered.
29 with profile.load_instructions(namespace=f"{desktop}.py") as imported:
30 if hasattr(imported, '_prep_function'):
31 return imported._prep_function()
32 else:
33 print(f"Deprecated (??): {desktop} profile has no _prep_function() anymore")
34
35 if __name__ == 'desktop':
36 """
37 This "profile" is a meta-profile.
38 There are no desktop-specific steps, it simply routes
39 the installer to whichever desktop environment/window manager was chosen.
40
41 Maybe in the future, a network manager or similar things *could* be added here.
42 We should honor that Arch Linux does not officially endorse a desktop-setup, nor is
43 it trying to be a turn-key desktop distribution.
44
45 There are plenty of desktop-turn-key-solutions based on Arch Linux,
46 this is therefore just a helper to get started
47 """
48
49 # Install common packages for all desktop environments
50 installation.add_additional_packages(__packages__)
51
52 # TODO: Remove magic variable 'installation' and place it
53 # in archinstall.storage or archinstall.session/archinstall.installation
54 installation.install_profile(archinstall.storage['_desktop_profile'])
55
56
[end of profiles/desktop.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/profiles/desktop.py b/profiles/desktop.py
--- a/profiles/desktop.py
+++ b/profiles/desktop.py
@@ -6,7 +6,7 @@
# New way of defining packages for a profile, which is iterable and can be used out side
# of the profile to get a list of "what packages will be installed".
-__packages__ = ['nano', 'vim', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools']
+__packages__ = ['nano', 'vim', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools', 'xdg-utils']
def _prep_function(*args, **kwargs):
"""
|
{"golden_diff": "diff --git a/profiles/desktop.py b/profiles/desktop.py\n--- a/profiles/desktop.py\n+++ b/profiles/desktop.py\n@@ -6,7 +6,7 @@\n \n # New way of defining packages for a profile, which is iterable and can be used out side\n # of the profile to get a list of \"what packages will be installed\".\n-__packages__ = ['nano', 'vim', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools']\n+__packages__ = ['nano', 'vim', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools', 'xdg-utils']\n \n def _prep_function(*args, **kwargs):\n \t\"\"\"\n", "issue": "Awesome profile installation failed with no such file or directory on xdg-mime\n\r\n\r\n\nResolve #261 and related issues\nCloses #262.\r\n\r\n\ud83d\udea8 PR Guidelines:\r\n\r\n# New features *(v2.2.0)*\r\n\r\nMerge new features in to `torxed-v2.2.0`.<br>\r\nThis branch is designated for potential breaking changes, added complexity and new functionality.\r\n\r\n# Bug fixes *(v2.1.4)*\r\n\r\nMerge against `master` for bug fixes and anything that improves stability and quality of life.<br>\r\nThis excludes:\r\n * New functionality\r\n * Added complexity\r\n * Breaking changes\r\n\r\nAny changes to `master` automatically gets pulled in to `torxed-v2.2.0` to avoid merge hell.\r\n\r\n# Describe your PR\r\n\r\nIf the changes has been discussed in an Issue, please tag it so we can backtrace from the Issue later on.<br>\r\nIf the PR is larger than ~20 lines, please describe it here unless described in an issue.\r\n\r\n# Testing\r\n\r\nAny new feature or stability improvement should be tested if possible.\r\nPlease follow the test instructions at the bottom of the README.\r\n\r\n*These PR guidelines will change after 2021-05-01, which is when `v2.1.4` gets onto the new ISO*\r\n\n", "before_files": [{"content": "# A desktop environment selector.\n\nimport archinstall, os\n\nis_top_level_profile = True\n\n# New way of defining packages for a profile, which is iterable and can be used out side\n# of the profile to get a list of \"what packages will be installed\".\n__packages__ = ['nano', 'vim', 'openssh', 'htop', 'wget', 'iwd', 'wireless_tools', 'wpa_supplicant', 'smartmontools']\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\tsupported_desktops = ['gnome', 'kde', 'awesome', 'sway', 'cinnamon', 'xfce4', 'lxqt', 'i3', 'budgie']\n\tdesktop = archinstall.generic_select(supported_desktops, 'Select your desired desktop environment: ')\n\t\n\t# Temporarily store the selected desktop profile\n\t# in a session-safe location, since this module will get reloaded\n\t# the next time it gets executed.\n\tarchinstall.storage['_desktop_profile'] = desktop\n\n\tprofile = archinstall.Profile(None, desktop)\n\t# Loading the instructions with a custom namespace, ensures that a __name__ comparison is never triggered.\n\twith profile.load_instructions(namespace=f\"{desktop}.py\") as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint(f\"Deprecated (??): {desktop} profile has no _prep_function() anymore\")\n\nif __name__ == 'desktop':\n\t\"\"\"\n\tThis \"profile\" is a meta-profile.\n\tThere are no desktop-specific steps, it simply routes\n\tthe installer to whichever desktop environment/window manager was chosen.\n\n\tMaybe in the future, a network manager or similar things *could* be added here.\n\tWe should honor that Arch Linux does not officially endorse a desktop-setup, nor is\n\tit trying to be a turn-key desktop distribution.\n\n\tThere are plenty of desktop-turn-key-solutions based on Arch Linux,\n\tthis is therefore just a helper to get started\n\t\"\"\"\n\t\n\t# Install common packages for all desktop environments\n\tinstallation.add_additional_packages(__packages__)\n\n\t# TODO: Remove magic variable 'installation' and place it\n\t# in archinstall.storage or archinstall.session/archinstall.installation\n\tinstallation.install_profile(archinstall.storage['_desktop_profile'])\n\n", "path": "profiles/desktop.py"}]}
| 1,641 | 178 |
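
The error in the issue comes from a call to `xdg-mime`, a tool shipped by the `xdg-utils` package, which a fresh install does not guarantee. The golden diff simply extends the shared desktop package list so every desktop choice gets it; a sketch of the corrected list (the trailing comment is editorial, not from the repository):

```python
# Packages installed for every desktop environment choice; 'xdg-utils'
# provides xdg-mime / xdg-open, which the awesome profile invokes during
# installation according to the issue above.
__packages__ = [
    'nano', 'vim', 'openssh', 'htop', 'wget', 'iwd',
    'wireless_tools', 'wpa_supplicant', 'smartmontools',
    'xdg-utils',
]
```
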
gh_patches_debug_64233
|
rasdani/github-patches
|
git_diff
|
optuna__optuna-1717
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
LightGBMTunerCV not working for regression objective
The script https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_cv.py runs just fine as it is. However, I get a `KeyError: 'mse-mean'` if I change the `objective` to `regression` and `metric` to `mse`. A similar error happens with other metrics as well when the `objective` is set to `regression`.
## Environment
- Optuna version: 2.0.0
- Python version: 3.7
- OS: MacOS Catalina
## Error messages, stack traces, or logs
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-11-7753103b8251> in <module>
15 )
16
---> 17 tuner.run()
18
19 print("Best score:", tuner.best_score)
/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in run(self)
461 self.sample_train_set()
462
--> 463 self.tune_feature_fraction()
464 self.tune_num_leaves()
465 self.tune_bagging()
/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in tune_feature_fraction(self, n_trials)
486
487 sampler = optuna.samplers.GridSampler({param_name: param_values})
--> 488 self._tune_params([param_name], len(param_values), sampler, "feature_fraction")
489
490 def tune_num_leaves(self, n_trials: int = 20) -> None:
/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _tune_params(self, target_param_names, n_trials, sampler, step_name)
567 timeout=_timeout,
568 catch=(),
--> 569 callbacks=self._optuna_callbacks,
570 )
571
/usr/local/lib/python3.7/site-packages/optuna/study.py in optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)
290 if n_jobs == 1:
291 self._optimize_sequential(
--> 292 func, n_trials, timeout, catch, callbacks, gc_after_trial, None
293 )
294 else:
/usr/local/lib/python3.7/site-packages/optuna/study.py in _optimize_sequential(self, func, n_trials, timeout, catch, callbacks, gc_after_trial, time_start)
652 break
653
--> 654 self._run_trial_and_callbacks(func, catch, callbacks, gc_after_trial)
655
656 self._progress_bar.update((datetime.datetime.now() - time_start).total_seconds())
/usr/local/lib/python3.7/site-packages/optuna/study.py in _run_trial_and_callbacks(self, func, catch, callbacks, gc_after_trial)
683 # type: (...) -> None
684
--> 685 trial = self._run_trial(func, catch, gc_after_trial)
686 if callbacks is not None:
687 frozen_trial = copy.deepcopy(self._storage.get_trial(trial._trial_id))
/usr/local/lib/python3.7/site-packages/optuna/study.py in _run_trial(self, func, catch, gc_after_trial)
707
708 try:
--> 709 result = func(trial)
710 except exceptions.TrialPruned as e:
711 message = "Trial {} pruned. {}".format(trial_number, str(e))
/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in __call__(self, trial)
302 cv_results = lgb.cv(self.lgbm_params, self.train_set, **self.lgbm_kwargs)
303
--> 304 val_scores = self._get_cv_scores(cv_results)
305 val_score = val_scores[-1]
306 elapsed_secs = time.time() - start_time
/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _get_cv_scores(self, cv_results)
292
293 metric = self._get_metric_for_objective()
--> 294 val_scores = cv_results["{}-mean".format(metric)]
295 return val_scores
296
KeyError: 'mse-mean'
```
## Steps to reproduce
1. Run [this script](https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_cv.py) with objective = regression and metric = mse.
</issue>
<code>
[start of optuna/integration/_lightgbm_tuner/alias.py]
1 from typing import Any
2 from typing import Dict
3 from typing import List # NOQA
4
5
6 _ALIAS_GROUP_LIST = [
7 {"param_name": "bagging_fraction", "alias_names": ["sub_row", "subsample", "bagging"]},
8 {"param_name": "learning_rate", "alias_names": ["shrinkage_rate", "eta"]},
9 {
10 "param_name": "min_data_in_leaf",
11 "alias_names": ["min_data_per_leaf", "min_data", "min_child_samples"],
12 },
13 {
14 "param_name": "min_sum_hessian_in_leaf",
15 "alias_names": [
16 "min_sum_hessian_per_leaf",
17 "min_sum_hessian",
18 "min_hessian",
19 "min_child_weight",
20 ],
21 },
22 {"param_name": "bagging_freq", "alias_names": ["subsample_freq"]},
23 {"param_name": "feature_fraction", "alias_names": ["sub_feature", "colsample_bytree"]},
24 {"param_name": "lambda_l1", "alias_names": ["reg_alpha"]},
25 {"param_name": "lambda_l2", "alias_names": ["reg_lambda", "lambda"]},
26 {"param_name": "min_gain_to_split", "alias_names": ["min_split_gain"]},
27 ] # type: List[Dict[str, Any]]
28
29
30 def _handling_alias_parameters(lgbm_params: Dict[str, Any]) -> None:
31 """Handling alias parameters."""
32
33 for alias_group in _ALIAS_GROUP_LIST:
34 param_name = alias_group["param_name"]
35 alias_names = alias_group["alias_names"]
36
37 for alias_name in alias_names:
38 if alias_name in lgbm_params:
39 lgbm_params[param_name] = lgbm_params[alias_name]
40 del lgbm_params[alias_name]
41
42
43 _ALIAS_METRIC_LIST = [
44 {
45 "metric_name": "ndcg",
46 "alias_names": [
47 "lambdarank",
48 "rank_xendcg",
49 "xendcg",
50 "xe_ndcg",
51 "xe_ndcg_mart",
52 "xendcg_mart",
53 ],
54 },
55 {"metric_name": "map", "alias_names": ["mean_average_precision"]},
56 ] # type: List[Dict[str, Any]]
57
58
59 def _handling_alias_metrics(lgbm_params: Dict[str, Any]) -> None:
60 """Handling alias metrics."""
61
62 if "metric" not in lgbm_params.keys():
63 return
64
65 for metric in _ALIAS_METRIC_LIST:
66 metric_name = metric["metric_name"]
67 alias_names = metric["alias_names"]
68
69 for alias_name in alias_names:
70 if lgbm_params["metric"] == alias_name:
71 lgbm_params["metric"] = metric_name
72 break
73
[end of optuna/integration/_lightgbm_tuner/alias.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/optuna/integration/_lightgbm_tuner/alias.py b/optuna/integration/_lightgbm_tuner/alias.py
--- a/optuna/integration/_lightgbm_tuner/alias.py
+++ b/optuna/integration/_lightgbm_tuner/alias.py
@@ -53,6 +53,10 @@
],
},
{"metric_name": "map", "alias_names": ["mean_average_precision"]},
+ {
+ "metric_name": "l2",
+ "alias_names": ["regression", "regression_l2", "l2", "mean_squared_error", "mse"],
+ },
] # type: List[Dict[str, Any]]
|
{"golden_diff": "diff --git a/optuna/integration/_lightgbm_tuner/alias.py b/optuna/integration/_lightgbm_tuner/alias.py\n--- a/optuna/integration/_lightgbm_tuner/alias.py\n+++ b/optuna/integration/_lightgbm_tuner/alias.py\n@@ -53,6 +53,10 @@\n ],\n },\n {\"metric_name\": \"map\", \"alias_names\": [\"mean_average_precision\"]},\n+ {\n+ \"metric_name\": \"l2\",\n+ \"alias_names\": [\"regression\", \"regression_l2\", \"l2\", \"mean_squared_error\", \"mse\"],\n+ },\n ] # type: List[Dict[str, Any]]\n", "issue": "LightGBMTunerCV not working for regression objective\nThe script https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_cv.py runs just fine as it is. However, I get a `KeyError: 'mse-mean'` if I change the `objective` to `regression` and `metric` to `mse`. Similar erro happens to other metrics as well when the `objective` is set to `regression`.\r\n\r\n## Environment\r\n\r\n- Optuna version: 2.0.0\r\n- Python version: 3.7\r\n- OS: MacOS Catalina\r\n\r\n## Error messages, stack traces, or logs\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nKeyError Traceback (most recent call last)\r\n<ipython-input-11-7753103b8251> in <module>\r\n 15 )\r\n 16 \r\n---> 17 tuner.run()\r\n 18 \r\n 19 print(\"Best score:\", tuner.best_score)\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in run(self)\r\n 461 self.sample_train_set()\r\n 462 \r\n--> 463 self.tune_feature_fraction()\r\n 464 self.tune_num_leaves()\r\n 465 self.tune_bagging()\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in tune_feature_fraction(self, n_trials)\r\n 486 \r\n 487 sampler = optuna.samplers.GridSampler({param_name: param_values})\r\n--> 488 self._tune_params([param_name], len(param_values), sampler, \"feature_fraction\")\r\n 489 \r\n 490 def tune_num_leaves(self, n_trials: int = 20) -> None:\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _tune_params(self, target_param_names, n_trials, sampler, step_name)\r\n 567 timeout=_timeout,\r\n 568 catch=(),\r\n--> 569 callbacks=self._optuna_callbacks,\r\n 570 )\r\n 571 \r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/study.py in optimize(self, func, n_trials, timeout, n_jobs, catch, callbacks, gc_after_trial, show_progress_bar)\r\n 290 if n_jobs == 1:\r\n 291 self._optimize_sequential(\r\n--> 292 func, n_trials, timeout, catch, callbacks, gc_after_trial, None\r\n 293 )\r\n 294 else:\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/study.py in _optimize_sequential(self, func, n_trials, timeout, catch, callbacks, gc_after_trial, time_start)\r\n 652 break\r\n 653 \r\n--> 654 self._run_trial_and_callbacks(func, catch, callbacks, gc_after_trial)\r\n 655 \r\n 656 self._progress_bar.update((datetime.datetime.now() - time_start).total_seconds())\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/study.py in _run_trial_and_callbacks(self, func, catch, callbacks, gc_after_trial)\r\n 683 # type: (...) -> None\r\n 684 \r\n--> 685 trial = self._run_trial(func, catch, gc_after_trial)\r\n 686 if callbacks is not None:\r\n 687 frozen_trial = copy.deepcopy(self._storage.get_trial(trial._trial_id))\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/study.py in _run_trial(self, func, catch, gc_after_trial)\r\n 707 \r\n 708 try:\r\n--> 709 result = func(trial)\r\n 710 except exceptions.TrialPruned as e:\r\n 711 message = \"Trial {} pruned. 
{}\".format(trial_number, str(e))\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in __call__(self, trial)\r\n 302 cv_results = lgb.cv(self.lgbm_params, self.train_set, **self.lgbm_kwargs)\r\n 303 \r\n--> 304 val_scores = self._get_cv_scores(cv_results)\r\n 305 val_score = val_scores[-1]\r\n 306 elapsed_secs = time.time() - start_time\r\n\r\n/usr/local/lib/python3.7/site-packages/optuna/integration/_lightgbm_tuner/optimize.py in _get_cv_scores(self, cv_results)\r\n 292 \r\n 293 metric = self._get_metric_for_objective()\r\n--> 294 val_scores = cv_results[\"{}-mean\".format(metric)]\r\n 295 return val_scores\r\n 296 \r\n\r\nKeyError: 'mse-mean'\r\n```\r\n\r\n## Steps to reproduce\r\n\r\n1. Run [this script](https://github.com/optuna/optuna/blob/master/examples/lightgbm_tuner_cv.py) with objective = regression and metric = mse. \r\n\n", "before_files": [{"content": "from typing import Any\nfrom typing import Dict\nfrom typing import List # NOQA\n\n\n_ALIAS_GROUP_LIST = [\n {\"param_name\": \"bagging_fraction\", \"alias_names\": [\"sub_row\", \"subsample\", \"bagging\"]},\n {\"param_name\": \"learning_rate\", \"alias_names\": [\"shrinkage_rate\", \"eta\"]},\n {\n \"param_name\": \"min_data_in_leaf\",\n \"alias_names\": [\"min_data_per_leaf\", \"min_data\", \"min_child_samples\"],\n },\n {\n \"param_name\": \"min_sum_hessian_in_leaf\",\n \"alias_names\": [\n \"min_sum_hessian_per_leaf\",\n \"min_sum_hessian\",\n \"min_hessian\",\n \"min_child_weight\",\n ],\n },\n {\"param_name\": \"bagging_freq\", \"alias_names\": [\"subsample_freq\"]},\n {\"param_name\": \"feature_fraction\", \"alias_names\": [\"sub_feature\", \"colsample_bytree\"]},\n {\"param_name\": \"lambda_l1\", \"alias_names\": [\"reg_alpha\"]},\n {\"param_name\": \"lambda_l2\", \"alias_names\": [\"reg_lambda\", \"lambda\"]},\n {\"param_name\": \"min_gain_to_split\", \"alias_names\": [\"min_split_gain\"]},\n] # type: List[Dict[str, Any]]\n\n\ndef _handling_alias_parameters(lgbm_params: Dict[str, Any]) -> None:\n \"\"\"Handling alias parameters.\"\"\"\n\n for alias_group in _ALIAS_GROUP_LIST:\n param_name = alias_group[\"param_name\"]\n alias_names = alias_group[\"alias_names\"]\n\n for alias_name in alias_names:\n if alias_name in lgbm_params:\n lgbm_params[param_name] = lgbm_params[alias_name]\n del lgbm_params[alias_name]\n\n\n_ALIAS_METRIC_LIST = [\n {\n \"metric_name\": \"ndcg\",\n \"alias_names\": [\n \"lambdarank\",\n \"rank_xendcg\",\n \"xendcg\",\n \"xe_ndcg\",\n \"xe_ndcg_mart\",\n \"xendcg_mart\",\n ],\n },\n {\"metric_name\": \"map\", \"alias_names\": [\"mean_average_precision\"]},\n] # type: List[Dict[str, Any]]\n\n\ndef _handling_alias_metrics(lgbm_params: Dict[str, Any]) -> None:\n \"\"\"Handling alias metrics.\"\"\"\n\n if \"metric\" not in lgbm_params.keys():\n return\n\n for metric in _ALIAS_METRIC_LIST:\n metric_name = metric[\"metric_name\"]\n alias_names = metric[\"alias_names\"]\n\n for alias_name in alias_names:\n if lgbm_params[\"metric\"] == alias_name:\n lgbm_params[\"metric\"] = metric_name\n break\n", "path": "optuna/integration/_lightgbm_tuner/alias.py"}]}
| 2,436 | 155 |
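
A standalone illustration of the alias handling that the patch above extends: with an `l2` alias group in place, a params dict using `metric="mse"` is rewritten to the canonical LightGBM name before `"<metric>-mean"` is looked up in the `cv()` results. This mirrors the patched `_ALIAS_METRIC_LIST`; the membership check is equivalent to the original equality loop:

```python
_ALIAS_METRIC_LIST = [
    {"metric_name": "ndcg", "alias_names": ["lambdarank", "rank_xendcg", "xendcg",
                                            "xe_ndcg", "xe_ndcg_mart", "xendcg_mart"]},
    {"metric_name": "map", "alias_names": ["mean_average_precision"]},
    {"metric_name": "l2", "alias_names": ["regression", "regression_l2", "l2",
                                          "mean_squared_error", "mse"]},
]


def handle_alias_metrics(lgbm_params):
    # Rewrite the user-supplied metric name to LightGBM's canonical spelling.
    if "metric" not in lgbm_params:
        return
    for metric in _ALIAS_METRIC_LIST:
        if lgbm_params["metric"] in metric["alias_names"]:
            lgbm_params["metric"] = metric["metric_name"]
            break


params = {"objective": "regression", "metric": "mse"}
handle_alias_metrics(params)
print(params["metric"])  # "l2", so CV scores are read from the "l2-mean" key
```
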
gh_patches_debug_9456
|
rasdani/github-patches
|
git_diff
|
pypa__pip-4661
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pip un-vendored support is broken
* Pip version: 9.0.1-465-g841f5dfb
* Python version: 2.7.13
* Operating system: Arch Linux x86_64
### What I've run:
```python
> ./.tox/py27-novendor/bin/pip search test
Traceback (most recent call last):
File "./.tox/py27-novendor/bin/pip", line 7, in <module>
from pip import main
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/__init__.py", line 46, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/vcs/mercurial.py", line 8, in <module>
from pip.download import path_to_url
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/download.py", line 28, in <module>
from pip._vendor.six.moves.urllib.parse import unquote as urllib_unquote
ImportError: No module named parse
```
and after fixing that one:
```python
Traceback (most recent call last):
File "./.tox/py27-novendor/bin/pip", line 7, in <module>
from pip import main
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/__init__.py", line 46, in <module>
from pip.vcs import git, mercurial, subversion, bazaar # noqa
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/vcs/subversion.py", line 9, in <module>
from pip.index import Link
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/index.py", line 39, in <module>
from pip.wheel import Wheel, wheel_ext
File ".tox/py27-novendor/lib/python2.7/site-packages/pip/wheel.py", line 21, in <module>
from pip._vendor import pkg_resources, pytoml
ImportError: cannot import name pytoml
```
</issue>
<code>
[start of src/pip/_vendor/__init__.py]
1 """
2 pip._vendor is for vendoring dependencies of pip to prevent needing pip to
3 depend on something external.
4
5 Files inside of pip._vendor should be considered immutable and should only be
6 updated to versions from upstream.
7 """
8 from __future__ import absolute_import
9
10 import glob
11 import os.path
12 import sys
13
14 # Downstream redistributors which have debundled our dependencies should also
15 # patch this value to be true. This will trigger the additional patching
16 # to cause things like "six" to be available as pip.
17 DEBUNDLED = False
18
19 # By default, look in this directory for a bunch of .whl files which we will
20 # add to the beginning of sys.path before attempting to import anything. This
21 # is done to support downstream re-distributors like Debian and Fedora who
22 # wish to create their own Wheels for our dependencies to aid in debundling.
23 WHEEL_DIR = os.path.abspath(os.path.dirname(__file__))
24
25
26 # Define a small helper function to alias our vendored modules to the real ones
27 # if the vendored ones do not exist. This idea of this was taken from
28 # https://github.com/kennethreitz/requests/pull/2567.
29 def vendored(modulename):
30 vendored_name = "{0}.{1}".format(__name__, modulename)
31
32 try:
33 __import__(vendored_name, globals(), locals(), level=0)
34 except ImportError:
35 try:
36 __import__(modulename, globals(), locals(), level=0)
37 except ImportError:
38 # We can just silently allow import failures to pass here. If we
39 # got to this point it means that ``import pip._vendor.whatever``
40 # failed and so did ``import whatever``. Since we're importing this
41 # upfront in an attempt to alias imports, not erroring here will
42 # just mean we get a regular import error whenever pip *actually*
43 # tries to import one of these modules to use it, which actually
44 # gives us a better error message than we would have otherwise
45 # gotten.
46 pass
47 else:
48 sys.modules[vendored_name] = sys.modules[modulename]
49 base, head = vendored_name.rsplit(".", 1)
50 setattr(sys.modules[base], head, sys.modules[modulename])
51
52
53 # If we're operating in a debundled setup, then we want to go ahead and trigger
54 # the aliasing of our vendored libraries as well as looking for wheels to add
55 # to our sys.path. This will cause all of this code to be a no-op typically
56 # however downstream redistributors can enable it in a consistent way across
57 # all platforms.
58 if DEBUNDLED:
59 # Actually look inside of WHEEL_DIR to find .whl files and add them to the
60 # front of our sys.path.
61 sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, "*.whl")) + sys.path
62
63 # Actually alias all of our vendored dependencies.
64 vendored("cachecontrol")
65 vendored("colorama")
66 vendored("distlib")
67 vendored("distro")
68 vendored("html5lib")
69 vendored("lockfile")
70 vendored("six")
71 vendored("six.moves")
72 vendored("six.moves.urllib")
73 vendored("packaging")
74 vendored("packaging.version")
75 vendored("packaging.specifiers")
76 vendored("pkg_resources")
77 vendored("progress")
78 vendored("retrying")
79 vendored("requests")
80 vendored("requests.packages")
81 vendored("requests.packages.urllib3")
82 vendored("requests.packages.urllib3._collections")
83 vendored("requests.packages.urllib3.connection")
84 vendored("requests.packages.urllib3.connectionpool")
85 vendored("requests.packages.urllib3.contrib")
86 vendored("requests.packages.urllib3.contrib.ntlmpool")
87 vendored("requests.packages.urllib3.contrib.pyopenssl")
88 vendored("requests.packages.urllib3.exceptions")
89 vendored("requests.packages.urllib3.fields")
90 vendored("requests.packages.urllib3.filepost")
91 vendored("requests.packages.urllib3.packages")
92 vendored("requests.packages.urllib3.packages.ordered_dict")
93 vendored("requests.packages.urllib3.packages.six")
94 vendored("requests.packages.urllib3.packages.ssl_match_hostname")
95 vendored("requests.packages.urllib3.packages.ssl_match_hostname."
96 "_implementation")
97 vendored("requests.packages.urllib3.poolmanager")
98 vendored("requests.packages.urllib3.request")
99 vendored("requests.packages.urllib3.response")
100 vendored("requests.packages.urllib3.util")
101 vendored("requests.packages.urllib3.util.connection")
102 vendored("requests.packages.urllib3.util.request")
103 vendored("requests.packages.urllib3.util.response")
104 vendored("requests.packages.urllib3.util.retry")
105 vendored("requests.packages.urllib3.util.ssl_")
106 vendored("requests.packages.urllib3.util.timeout")
107 vendored("requests.packages.urllib3.util.url")
108
[end of src/pip/_vendor/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pip/_vendor/__init__.py b/src/pip/_vendor/__init__.py
--- a/src/pip/_vendor/__init__.py
+++ b/src/pip/_vendor/__init__.py
@@ -70,11 +70,13 @@
vendored("six")
vendored("six.moves")
vendored("six.moves.urllib")
+ vendored("six.moves.urllib.parse")
vendored("packaging")
vendored("packaging.version")
vendored("packaging.specifiers")
vendored("pkg_resources")
vendored("progress")
+ vendored("pytoml")
vendored("retrying")
vendored("requests")
vendored("requests.packages")
|
{"golden_diff": "diff --git a/src/pip/_vendor/__init__.py b/src/pip/_vendor/__init__.py\n--- a/src/pip/_vendor/__init__.py\n+++ b/src/pip/_vendor/__init__.py\n@@ -70,11 +70,13 @@\n vendored(\"six\")\n vendored(\"six.moves\")\n vendored(\"six.moves.urllib\")\n+ vendored(\"six.moves.urllib.parse\")\n vendored(\"packaging\")\n vendored(\"packaging.version\")\n vendored(\"packaging.specifiers\")\n vendored(\"pkg_resources\")\n vendored(\"progress\")\n+ vendored(\"pytoml\")\n vendored(\"retrying\")\n vendored(\"requests\")\n vendored(\"requests.packages\")\n", "issue": "pip un-vendored support is broken\n* Pip version: 9.0.1-465-g841f5dfb\r\n* Python version: 2.7.13\r\n* Operating system: Arch Linux x86_64\r\n\r\n### What I've run:\r\n\r\n```python\r\n> ./.tox/py27-novendor/bin/pip search test\r\nTraceback (most recent call last):\r\n File \"./.tox/py27-novendor/bin/pip\", line 7, in <module>\r\n from pip import main\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/__init__.py\", line 46, in <module>\r\n from pip.vcs import git, mercurial, subversion, bazaar # noqa\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/vcs/mercurial.py\", line 8, in <module>\r\n from pip.download import path_to_url\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/download.py\", line 28, in <module>\r\n from pip._vendor.six.moves.urllib.parse import unquote as urllib_unquote\r\nImportError: No module named parse\r\n```\r\n\r\nand after fixing that one:\r\n\r\n```python\r\nTraceback (most recent call last):\r\n File \"./.tox/py27-novendor/bin/pip\", line 7, in <module>\r\n from pip import main\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/__init__.py\", line 46, in <module>\r\n from pip.vcs import git, mercurial, subversion, bazaar # noqa\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/vcs/subversion.py\", line 9, in <module>\r\n from pip.index import Link\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/index.py\", line 39, in <module>\r\n from pip.wheel import Wheel, wheel_ext\r\n File \".tox/py27-novendor/lib/python2.7/site-packages/pip/wheel.py\", line 21, in <module>\r\n from pip._vendor import pkg_resources, pytoml\r\nImportError: cannot import name pytoml\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\npip._vendor is for vendoring dependencies of pip to prevent needing pip to\ndepend on something external.\n\nFiles inside of pip._vendor should be considered immutable and should only be\nupdated to versions from upstream.\n\"\"\"\nfrom __future__ import absolute_import\n\nimport glob\nimport os.path\nimport sys\n\n# Downstream redistributors which have debundled our dependencies should also\n# patch this value to be true. This will trigger the additional patching\n# to cause things like \"six\" to be available as pip.\nDEBUNDLED = False\n\n# By default, look in this directory for a bunch of .whl files which we will\n# add to the beginning of sys.path before attempting to import anything. This\n# is done to support downstream re-distributors like Debian and Fedora who\n# wish to create their own Wheels for our dependencies to aid in debundling.\nWHEEL_DIR = os.path.abspath(os.path.dirname(__file__))\n\n\n# Define a small helper function to alias our vendored modules to the real ones\n# if the vendored ones do not exist. 
This idea of this was taken from\n# https://github.com/kennethreitz/requests/pull/2567.\ndef vendored(modulename):\n vendored_name = \"{0}.{1}\".format(__name__, modulename)\n\n try:\n __import__(vendored_name, globals(), locals(), level=0)\n except ImportError:\n try:\n __import__(modulename, globals(), locals(), level=0)\n except ImportError:\n # We can just silently allow import failures to pass here. If we\n # got to this point it means that ``import pip._vendor.whatever``\n # failed and so did ``import whatever``. Since we're importing this\n # upfront in an attempt to alias imports, not erroring here will\n # just mean we get a regular import error whenever pip *actually*\n # tries to import one of these modules to use it, which actually\n # gives us a better error message than we would have otherwise\n # gotten.\n pass\n else:\n sys.modules[vendored_name] = sys.modules[modulename]\n base, head = vendored_name.rsplit(\".\", 1)\n setattr(sys.modules[base], head, sys.modules[modulename])\n\n\n# If we're operating in a debundled setup, then we want to go ahead and trigger\n# the aliasing of our vendored libraries as well as looking for wheels to add\n# to our sys.path. This will cause all of this code to be a no-op typically\n# however downstream redistributors can enable it in a consistent way across\n# all platforms.\nif DEBUNDLED:\n # Actually look inside of WHEEL_DIR to find .whl files and add them to the\n # front of our sys.path.\n sys.path[:] = glob.glob(os.path.join(WHEEL_DIR, \"*.whl\")) + sys.path\n\n # Actually alias all of our vendored dependencies.\n vendored(\"cachecontrol\")\n vendored(\"colorama\")\n vendored(\"distlib\")\n vendored(\"distro\")\n vendored(\"html5lib\")\n vendored(\"lockfile\")\n vendored(\"six\")\n vendored(\"six.moves\")\n vendored(\"six.moves.urllib\")\n vendored(\"packaging\")\n vendored(\"packaging.version\")\n vendored(\"packaging.specifiers\")\n vendored(\"pkg_resources\")\n vendored(\"progress\")\n vendored(\"retrying\")\n vendored(\"requests\")\n vendored(\"requests.packages\")\n vendored(\"requests.packages.urllib3\")\n vendored(\"requests.packages.urllib3._collections\")\n vendored(\"requests.packages.urllib3.connection\")\n vendored(\"requests.packages.urllib3.connectionpool\")\n vendored(\"requests.packages.urllib3.contrib\")\n vendored(\"requests.packages.urllib3.contrib.ntlmpool\")\n vendored(\"requests.packages.urllib3.contrib.pyopenssl\")\n vendored(\"requests.packages.urllib3.exceptions\")\n vendored(\"requests.packages.urllib3.fields\")\n vendored(\"requests.packages.urllib3.filepost\")\n vendored(\"requests.packages.urllib3.packages\")\n vendored(\"requests.packages.urllib3.packages.ordered_dict\")\n vendored(\"requests.packages.urllib3.packages.six\")\n vendored(\"requests.packages.urllib3.packages.ssl_match_hostname\")\n vendored(\"requests.packages.urllib3.packages.ssl_match_hostname.\"\n \"_implementation\")\n vendored(\"requests.packages.urllib3.poolmanager\")\n vendored(\"requests.packages.urllib3.request\")\n vendored(\"requests.packages.urllib3.response\")\n vendored(\"requests.packages.urllib3.util\")\n vendored(\"requests.packages.urllib3.util.connection\")\n vendored(\"requests.packages.urllib3.util.request\")\n vendored(\"requests.packages.urllib3.util.response\")\n vendored(\"requests.packages.urllib3.util.retry\")\n vendored(\"requests.packages.urllib3.util.ssl_\")\n vendored(\"requests.packages.urllib3.util.timeout\")\n vendored(\"requests.packages.urllib3.util.url\")\n", "path": "src/pip/_vendor/__init__.py"}]}
num_tokens_prompt: 2,403 | num_tokens_diff: 162
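The failing imports in the record above (`six.moves.urllib.parse`, `pytoml`) only occur in the debundled configuration (`DEBUNDLED = True`), where every dotted name that pip imports from `pip._vendor` must be explicitly aliased to the system-installed module; the golden diff simply adds the two missing `vendored(...)` calls. The snippet below is a stand-alone illustration of that aliasing trick, not pip code; `fakepkg` and the use of the stdlib `json` module as the aliased target are invented for the example.

```python
# Register an already-importable module under a second, "vendored" dotted name
# so that later imports of that name resolve to the real module.
import importlib
import sys
import types

sys.modules.setdefault("fakepkg", types.ModuleType("fakepkg"))
sys.modules["fakepkg._vendor"] = types.ModuleType("fakepkg._vendor")


def alias(modulename):
    vendored_name = "fakepkg._vendor." + modulename
    real = importlib.import_module(modulename)   # e.g. the stdlib 'json'
    sys.modules[vendored_name] = real             # satisfies the import machinery
    base, _, head = vendored_name.rpartition(".")
    setattr(sys.modules[base], head, real)        # satisfies attribute access


alias("json")

from fakepkg._vendor import json as vendored_json  # noqa: E402

assert vendored_json is sys.modules["json"]
print(vendored_json.dumps({"aliased": True}))
```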
gh_patches_debug_31470
|
rasdani/github-patches
|
git_diff
|
cowrie__cowrie-1093
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
greynoise should catch timeout error
```2019-04-08T03:12:05.460833Z [twisted.internet.defer#critical] Unhandled error in Deferred:
2019-04-08T03:12:05.462257Z [twisted.internet.defer#critical]
Traceback (most recent call last):
--- <exception caught here> ---
File "/home/cowrie/cowrie/src/cowrie/output/greynoise.py", line 65, in scanip
headers=headers)
twisted.internet.error.TimeoutError: User timeout caused connection failure.
```
</issue>
<code>
[start of src/cowrie/output/greynoise.py]
1 """
2 Send attackers IP to GreyNoise
3 """
4
5 from __future__ import absolute_import, division
6
7 import treq
8
9 from twisted.internet import defer
10 from twisted.python import log
11
12 import cowrie.core.output
13 from cowrie.core.config import CONFIG
14
15 COWRIE_USER_AGENT = 'Cowrie Honeypot'
16 GNAPI_URL = 'http://api.greynoise.io:8888/v1/'
17
18
19 class Output(cowrie.core.output.Output):
20
21 def __init__(self):
22 self.apiKey = CONFIG.get('output_greynoise', 'api_key', fallback=None)
23 self.tags = CONFIG.get('output_greynoise', 'tags', fallback="all").split(",")
24 self.debug = CONFIG.getboolean('output_greynoise', 'debug', fallback=False)
25 cowrie.core.output.Output.__init__(self)
26
27 def start(self):
28 """
29 Start output plugin
30 """
31
32 def stop(self):
33 """
34 Stop output plugin
35 """
36 pass
37
38 def write(self, entry):
39 if entry['eventid'] == "cowrie.session.connect":
40 self.scanip(entry)
41
42 @defer.inlineCallbacks
43 def scanip(self, entry):
44 """
45 Scan IP againt Greynoise API
46 """
47 def message(query):
48 log.msg(
49 eventid='cowrie.greynoise.result',
50 format='greynoise: Scan for %(IP)s with %(tag)s have %(conf)s confidence'
51 ' along with the following %(meta)s metadata',
52 IP=entry['src_ip'],
53 tag=query['name'],
54 conf=query['confidence'],
55 meta=query['metadata']
56 )
57
58 gnUrl = '{0}query/ip'.format(GNAPI_URL).encode('utf8')
59 headers = ({'User-Agent': [COWRIE_USER_AGENT]})
60 fields = {'key': self.apiKey, 'ip': entry['src_ip']}
61
62 response = yield treq.post(
63 url=gnUrl,
64 data=fields,
65 headers=headers)
66
67 if response.code != 200:
68 rsp = yield response.text()
69 log.error("greynoise: got error {}".format(rsp))
70 return
71
72 j = yield response.json()
73 if self.debug:
74 log.msg("greynoise: debug: "+repr(j))
75 if j['status'] == "ok":
76 if "all" not in self.tags:
77 for query in j['records']:
78 if query['name'] in self.tags:
79 message(query)
80 else:
81 for query in j['records']:
82 message(query)
83 else:
84 log.msg("greynoise: no results for for IP {0}".format(entry['src_ip']))
85
[end of src/cowrie/output/greynoise.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/cowrie/output/greynoise.py b/src/cowrie/output/greynoise.py
--- a/src/cowrie/output/greynoise.py
+++ b/src/cowrie/output/greynoise.py
@@ -6,7 +6,7 @@
import treq
-from twisted.internet import defer
+from twisted.internet import defer, error
from twisted.python import log
import cowrie.core.output
@@ -59,10 +59,15 @@
headers = ({'User-Agent': [COWRIE_USER_AGENT]})
fields = {'key': self.apiKey, 'ip': entry['src_ip']}
- response = yield treq.post(
- url=gnUrl,
- data=fields,
- headers=headers)
+ try:
+ response = yield treq.post(
+ url=gnUrl,
+ data=fields,
+ headers=headers,
+ timeout=10)
+ except (defer.CancelledError, error.ConnectingCancelledError, error.DNSLookupError):
+ log.msg("GreyNoise requests timeout")
+ return
if response.code != 200:
rsp = yield response.text()
@@ -72,13 +77,14 @@
j = yield response.json()
if self.debug:
log.msg("greynoise: debug: "+repr(j))
- if j['status'] == "ok":
- if "all" not in self.tags:
- for query in j['records']:
- if query['name'] in self.tags:
- message(query)
- else:
- for query in j['records']:
+
+ if j['status'] == "ok":
+ if "all" not in self.tags:
+ for query in j['records']:
+ if query['name'] in self.tags:
message(query)
else:
- log.msg("greynoise: no results for for IP {0}".format(entry['src_ip']))
+ for query in j['records']:
+ message(query)
+ else:
+ log.msg("greynoise: no results for for IP {0}".format(entry['src_ip']))
|
{"golden_diff": "diff --git a/src/cowrie/output/greynoise.py b/src/cowrie/output/greynoise.py\n--- a/src/cowrie/output/greynoise.py\n+++ b/src/cowrie/output/greynoise.py\n@@ -6,7 +6,7 @@\n \n import treq\n \n-from twisted.internet import defer\n+from twisted.internet import defer, error\n from twisted.python import log\n \n import cowrie.core.output\n@@ -59,10 +59,15 @@\n headers = ({'User-Agent': [COWRIE_USER_AGENT]})\n fields = {'key': self.apiKey, 'ip': entry['src_ip']}\n \n- response = yield treq.post(\n- url=gnUrl,\n- data=fields,\n- headers=headers)\n+ try:\n+ response = yield treq.post(\n+ url=gnUrl,\n+ data=fields,\n+ headers=headers,\n+ timeout=10)\n+ except (defer.CancelledError, error.ConnectingCancelledError, error.DNSLookupError):\n+ log.msg(\"GreyNoise requests timeout\")\n+ return\n \n if response.code != 200:\n rsp = yield response.text()\n@@ -72,13 +77,14 @@\n j = yield response.json()\n if self.debug:\n log.msg(\"greynoise: debug: \"+repr(j))\n- if j['status'] == \"ok\":\n- if \"all\" not in self.tags:\n- for query in j['records']:\n- if query['name'] in self.tags:\n- message(query)\n- else:\n- for query in j['records']:\n+\n+ if j['status'] == \"ok\":\n+ if \"all\" not in self.tags:\n+ for query in j['records']:\n+ if query['name'] in self.tags:\n message(query)\n else:\n- log.msg(\"greynoise: no results for for IP {0}\".format(entry['src_ip']))\n+ for query in j['records']:\n+ message(query)\n+ else:\n+ log.msg(\"greynoise: no results for for IP {0}\".format(entry['src_ip']))\n", "issue": "greynoise should catch timeout error\n```2019-04-08T03:12:05.460833Z [twisted.internet.defer#critical] Unhandled error in Deferred:\r\n2019-04-08T03:12:05.462257Z [twisted.internet.defer#critical]\r\n Traceback (most recent call last):\r\n --- <exception caught here> ---\r\n File \"/home/cowrie/cowrie/src/cowrie/output/greynoise.py\", line 65, in scanip\r\n headers=headers)\r\n twisted.internet.error.TimeoutError: User timeout caused connection failure.\r\n```\r\n\n", "before_files": [{"content": "\"\"\"\nSend attackers IP to GreyNoise\n\"\"\"\n\nfrom __future__ import absolute_import, division\n\nimport treq\n\nfrom twisted.internet import defer\nfrom twisted.python import log\n\nimport cowrie.core.output\nfrom cowrie.core.config import CONFIG\n\nCOWRIE_USER_AGENT = 'Cowrie Honeypot'\nGNAPI_URL = 'http://api.greynoise.io:8888/v1/'\n\n\nclass Output(cowrie.core.output.Output):\n\n def __init__(self):\n self.apiKey = CONFIG.get('output_greynoise', 'api_key', fallback=None)\n self.tags = CONFIG.get('output_greynoise', 'tags', fallback=\"all\").split(\",\")\n self.debug = CONFIG.getboolean('output_greynoise', 'debug', fallback=False)\n cowrie.core.output.Output.__init__(self)\n\n def start(self):\n \"\"\"\n Start output plugin\n \"\"\"\n\n def stop(self):\n \"\"\"\n Stop output plugin\n \"\"\"\n pass\n\n def write(self, entry):\n if entry['eventid'] == \"cowrie.session.connect\":\n self.scanip(entry)\n\n @defer.inlineCallbacks\n def scanip(self, entry):\n \"\"\"\n Scan IP againt Greynoise API\n \"\"\"\n def message(query):\n log.msg(\n eventid='cowrie.greynoise.result',\n format='greynoise: Scan for %(IP)s with %(tag)s have %(conf)s confidence'\n ' along with the following %(meta)s metadata',\n IP=entry['src_ip'],\n tag=query['name'],\n conf=query['confidence'],\n meta=query['metadata']\n )\n\n gnUrl = '{0}query/ip'.format(GNAPI_URL).encode('utf8')\n headers = ({'User-Agent': [COWRIE_USER_AGENT]})\n fields = {'key': self.apiKey, 'ip': entry['src_ip']}\n\n response = yield treq.post(\n 
url=gnUrl,\n data=fields,\n headers=headers)\n\n if response.code != 200:\n rsp = yield response.text()\n log.error(\"greynoise: got error {}\".format(rsp))\n return\n\n j = yield response.json()\n if self.debug:\n log.msg(\"greynoise: debug: \"+repr(j))\n if j['status'] == \"ok\":\n if \"all\" not in self.tags:\n for query in j['records']:\n if query['name'] in self.tags:\n message(query)\n else:\n for query in j['records']:\n message(query)\n else:\n log.msg(\"greynoise: no results for for IP {0}\".format(entry['src_ip']))\n", "path": "src/cowrie/output/greynoise.py"}]}
num_tokens_prompt: 1,439 | num_tokens_diff: 474
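The patch in the record above follows a common Twisted/treq pattern: give the outbound request an explicit `timeout` and catch the cancellation and DNS errors it can raise, so a slow GreyNoise endpoint no longer surfaces as an unhandled error in a Deferred. A minimal, generic version of that pattern is sketched below (requires Twisted and treq; the httpbin URL and payload are placeholders, not part of Cowrie).

```python
import treq
from twisted.internet import defer, error, reactor
from twisted.python import log


@defer.inlineCallbacks
def post_with_timeout(url, fields, timeout=10):
    try:
        response = yield treq.post(url=url, data=fields, timeout=timeout)
    except (defer.CancelledError,
            error.ConnectingCancelledError,
            error.DNSLookupError):
        log.msg("request timed out or could not connect")
        return
    body = yield response.text()
    log.msg("HTTP %d, %d bytes" % (response.code, len(body)))


if __name__ == "__main__":
    d = post_with_timeout("https://httpbin.org/post", {"key": "value"})
    d.addBoth(lambda _: reactor.stop())
    reactor.run()
```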
gh_patches_debug_3627
|
rasdani/github-patches
|
git_diff
|
ethereum__web3.py-912
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade to or add support for websockets v5
### What was wrong?
We are currently using the `websockets` library's v4 line. The v5 line is out.
### How can it be fixed?
Look into adding support for both v4 and v5.
If this is too cumbersome, we can simply upgrade to requiring `>=v5`
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from setuptools import (
4 find_packages,
5 setup,
6 )
7
8
9 setup(
10 name='web3',
11 # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.
12 version='4.3.0',
13 description="""Web3.py""",
14 long_description_markdown_filename='README.md',
15 author='Piper Merriam',
16 author_email='[email protected]',
17 url='https://github.com/ethereum/web3.py',
18 include_package_data=True,
19 install_requires=[
20 "toolz>=0.9.0,<1.0.0;implementation_name=='pypy'",
21 "cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'",
22 "eth-abi>=1.1.1,<2",
23 "eth-account>=0.2.1,<0.3.0",
24 "eth-utils>=1.0.1,<2.0.0",
25 "hexbytes>=0.1.0,<1.0.0",
26 "lru-dict>=1.1.6,<2.0.0",
27 "eth-hash[pycryptodome]",
28 "requests>=2.16.0,<3.0.0",
29 "websockets>=4.0.1,<5.0.0",
30 "pypiwin32>=223;platform_system=='Windows'",
31 ],
32 setup_requires=['setuptools-markdown'],
33 python_requires='>=3.5, <4',
34 extras_require={
35 'tester': [
36 "eth-tester[py-evm]==0.1.0-beta.26",
37 "py-geth>=2.0.1,<3.0.0",
38 ],
39 'testrpc': ["eth-testrpc>=1.3.3,<2.0.0"],
40 'linter': [
41 "flake8==3.4.1",
42 "isort>=4.2.15,<5",
43 ],
44 },
45 py_modules=['web3', 'ens'],
46 license="MIT",
47 zip_safe=False,
48 keywords='ethereum',
49 packages=find_packages(exclude=["tests", "tests.*"]),
50 classifiers=[
51 'Development Status :: 5 - Production/Stable',
52 'Intended Audience :: Developers',
53 'License :: OSI Approved :: MIT License',
54 'Natural Language :: English',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.5',
57 'Programming Language :: Python :: 3.6',
58 ],
59 )
60
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
"lru-dict>=1.1.6,<2.0.0",
"eth-hash[pycryptodome]",
"requests>=2.16.0,<3.0.0",
- "websockets>=4.0.1,<5.0.0",
+ "websockets>=5.0.1,<6.0.0",
"pypiwin32>=223;platform_system=='Windows'",
],
setup_requires=['setuptools-markdown'],
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n- \"websockets>=4.0.1,<5.0.0\",\n+ \"websockets>=5.0.1,<6.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n", "issue": "Upgrade to or add support for websockets v5\n### What was wrong?\r\n\r\nWe are currently using the `websockets` library's v4 line. The v5 line is out.\r\n\r\n### How can it be fixed?\r\n\r\nLook into adding support for both v4 and v5.\r\n\r\nIf this is too cumbersome, we can simply upgrade to requiring `>=v5`\r\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom setuptools import (\n find_packages,\n setup,\n)\n\n\nsetup(\n name='web3',\n # *IMPORTANT*: Don't manually change the version here. Use the 'bumpversion' utility.\n version='4.3.0',\n description=\"\"\"Web3.py\"\"\",\n long_description_markdown_filename='README.md',\n author='Piper Merriam',\n author_email='[email protected]',\n url='https://github.com/ethereum/web3.py',\n include_package_data=True,\n install_requires=[\n \"toolz>=0.9.0,<1.0.0;implementation_name=='pypy'\",\n \"cytoolz>=0.9.0,<1.0.0;implementation_name=='cpython'\",\n \"eth-abi>=1.1.1,<2\",\n \"eth-account>=0.2.1,<0.3.0\",\n \"eth-utils>=1.0.1,<2.0.0\",\n \"hexbytes>=0.1.0,<1.0.0\",\n \"lru-dict>=1.1.6,<2.0.0\",\n \"eth-hash[pycryptodome]\",\n \"requests>=2.16.0,<3.0.0\",\n \"websockets>=4.0.1,<5.0.0\",\n \"pypiwin32>=223;platform_system=='Windows'\",\n ],\n setup_requires=['setuptools-markdown'],\n python_requires='>=3.5, <4',\n extras_require={\n 'tester': [\n \"eth-tester[py-evm]==0.1.0-beta.26\",\n \"py-geth>=2.0.1,<3.0.0\",\n ],\n 'testrpc': [\"eth-testrpc>=1.3.3,<2.0.0\"],\n 'linter': [\n \"flake8==3.4.1\",\n \"isort>=4.2.15,<5\",\n ],\n },\n py_modules=['web3', 'ens'],\n license=\"MIT\",\n zip_safe=False,\n keywords='ethereum',\n packages=find_packages(exclude=[\"tests\", \"tests.*\"]),\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: MIT License',\n 'Natural Language :: English',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n ],\n)\n", "path": "setup.py"}]}
num_tokens_prompt: 1,284 | num_tokens_diff: 142
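The change above is just a dependency-pin bump (`websockets>=4.0.1,<5.0.0` to `>=5.0.1,<6.0.0`). The issue also floated supporting both major lines at once; a hedged sketch of what that alternative could look like is below. The wider range specifier and the runtime version check are illustrative only, not something web3.py actually ships.

```python
import pkg_resources

# Hypothetical dual-support variant: accept either major line in setup.py ...
install_requires = ["websockets>=4.0.1,<6.0.0"]

# ... and branch on the installed version wherever the v4/v5 APIs differ.
ws_major = int(pkg_resources.get_distribution("websockets").version.split(".")[0])
if ws_major < 5:
    pass  # v4-era call signatures / keyword arguments would go here
else:
    pass  # v5 behaviour
```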
gh_patches_debug_16308
|
rasdani/github-patches
|
git_diff
|
projectmesa__mesa-755
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make opening the browser when launching the server optional
**What's the problem this feature will solve?**
When we call `server.launch()` on a `ModularServer` instance the browser always opens another tab. This is not always desired behavior.
**Describe the solution you'd like**
We should be able to make this optional. To maintain backwards compatibility we can keep the current behavior as default but over ride it with something like `server.launch(open_browser=False)`
**Additional context**
I will make a PR with this simple change.
</issue>
<code>
[start of mesa/visualization/ModularVisualization.py]
1 # -*- coding: utf-8 -*-
2 """
3 ModularServer
4 =============
5
6 A visualization server which renders a model via one or more elements.
7
8 The concept for the modular visualization server as follows:
9 A visualization is composed of VisualizationElements, each of which defines how
10 to generate some visualization from a model instance and render it on the
11 client. VisualizationElements may be anything from a simple text display to
12 a multilayered HTML5 canvas.
13
14 The actual server is launched with one or more VisualizationElements;
15 it runs the model object through each of them, generating data to be sent to
16 the client. The client page is also generated based on the JavaScript code
17 provided by each element.
18
19 This file consists of the following classes:
20
21 VisualizationElement: Parent class for all other visualization elements, with
22 the minimal necessary options.
23 PageHandler: The handler for the visualization page, generated from a template
24 and built from the various visualization elements.
25 SocketHandler: Handles the websocket connection between the client page and
26 the server.
27 ModularServer: The overall visualization application class which stores and
28 controls the model and visualization instance.
29
30
31 ModularServer should *not* need to be subclassed on a model-by-model basis; it
32 should be primarily a pass-through for VisualizationElement subclasses, which
33 define the actual visualization specifics.
34
35 For example, suppose we have created two visualization elements for our model,
36 called canvasvis and graphvis; we would launch a server with:
37
38 server = ModularServer(MyModel, [canvasvis, graphvis], name="My Model")
39 server.launch()
40
41 The client keeps track of what step it is showing. Clicking the Step button in
42 the browser sends a message requesting the viz_state corresponding to the next
43 step position, which is then sent back to the client via the websocket.
44
45 The websocket protocol is as follows:
46 Each message is a JSON object, with a "type" property which defines the rest of
47 the structure.
48
49 Server -> Client:
50 Send over the model state to visualize.
51 Model state is a list, with each element corresponding to a div; each div
52 is expected to have a render function associated with it, which knows how
53 to render that particular data. The example below includes two elements:
54 the first is data for a CanvasGrid, the second for a raw text display.
55
56 {
57 "type": "viz_state",
58 "data": [{0:[ {"Shape": "circle", "x": 0, "y": 0, "r": 0.5,
59 "Color": "#AAAAAA", "Filled": "true", "Layer": 0,
60 "text": 'A', "text_color": "white" }]},
61 "Shape Count: 1"]
62 }
63
64 Informs the client that the model is over.
65 {"type": "end"}
66
67 Informs the client of the current model's parameters
68 {
69 "type": "model_params",
70 "params": 'dict' of model params, (i.e. {arg_1: val_1, ...})
71 }
72
73 Client -> Server:
74 Reset the model.
75 TODO: Allow this to come with parameters
76 {
77 "type": "reset"
78 }
79
80 Get a given state.
81 {
82 "type": "get_step",
83 "step:" index of the step to get.
84 }
85
86 Submit model parameter updates
87 {
88 "type": "submit_params",
89 "param": name of model parameter
90 "value": new value for 'param'
91 }
92
93 Get the model's parameters
94 {
95 "type": "get_params"
96 }
97
98 """
99 import os
100 import tornado.autoreload
101 import tornado.ioloop
102 import tornado.web
103 import tornado.websocket
104 import tornado.escape
105 import tornado.gen
106 import webbrowser
107
108 from mesa.visualization.UserParam import UserSettableParameter
109
110 # Suppress several pylint warnings for this file.
111 # Attributes being defined outside of init is a Tornado feature.
112 # pylint: disable=attribute-defined-outside-init
113
114
115 class VisualizationElement:
116 """
117 Defines an element of the visualization.
118
119 Attributes:
120 package_includes: A list of external JavaScript files to include that
121 are part of the Mesa packages.
122 local_includes: A list of JavaScript files that are local to the
123 directory that the server is being run in.
124 js_code: A JavaScript code string to instantiate the element.
125
126 Methods:
127 render: Takes a model object, and produces JSON data which can be sent
128 to the client.
129
130 """
131
132 package_includes = []
133 local_includes = []
134 js_code = ''
135 render_args = {}
136
137 def __init__(self):
138 pass
139
140 def render(self, model):
141 """ Build visualization data from a model object.
142
143 Args:
144 model: A model object
145
146 Returns:
147 A JSON-ready object.
148
149 """
150 return "<b>VisualizationElement goes here</b>."
151
152 # =============================================================================
153 # Actual Tornado code starts here:
154
155
156 class PageHandler(tornado.web.RequestHandler):
157 """ Handler for the HTML template which holds the visualization. """
158
159 def get(self):
160 elements = self.application.visualization_elements
161 for i, element in enumerate(elements):
162 element.index = i
163 self.render("modular_template.html", port=self.application.port,
164 model_name=self.application.model_name,
165 description=self.application.description,
166 package_includes=self.application.package_includes,
167 local_includes=self.application.local_includes,
168 scripts=self.application.js_code)
169
170
171 class SocketHandler(tornado.websocket.WebSocketHandler):
172 """ Handler for websocket. """
173 def open(self):
174 if self.application.verbose:
175 print("Socket opened!")
176 self.write_message({
177 "type": "model_params",
178 "params": self.application.user_params
179 })
180
181 def check_origin(self, origin):
182 return True
183
184 @property
185 def viz_state_message(self):
186 return {
187 "type": "viz_state",
188 "data": self.application.render_model()
189 }
190
191 def on_message(self, message):
192 """ Receiving a message from the websocket, parse, and act accordingly.
193
194 """
195 if self.application.verbose:
196 print(message)
197 msg = tornado.escape.json_decode(message)
198
199 if msg["type"] == "get_step":
200 if not self.application.model.running:
201 self.write_message({"type": "end"})
202 else:
203 self.application.model.step()
204 self.write_message(self.viz_state_message)
205
206 elif msg["type"] == "reset":
207 self.application.reset_model()
208 self.write_message(self.viz_state_message)
209
210 elif msg["type"] == "submit_params":
211 param = msg["param"]
212 value = msg["value"]
213
214 # Is the param editable?
215 if param in self.application.user_params:
216 if isinstance(self.application.model_kwargs[param], UserSettableParameter):
217 self.application.model_kwargs[param].value = value
218 else:
219 self.application.model_kwargs[param] = value
220
221 else:
222 if self.application.verbose:
223 print("Unexpected message!")
224
225
226 class ModularServer(tornado.web.Application):
227 """ Main visualization application. """
228 verbose = True
229
230 port = 8521 # Default port to listen on
231 max_steps = 100000
232
233 # Handlers and other globals:
234 page_handler = (r'/', PageHandler)
235 socket_handler = (r'/ws', SocketHandler)
236 static_handler = (r'/static/(.*)', tornado.web.StaticFileHandler,
237 {"path": os.path.dirname(__file__) + "/templates"})
238 local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,
239 {"path": ''})
240
241 handlers = [page_handler, socket_handler, static_handler, local_handler]
242
243 settings = {"debug": True,
244 "autoreload": False,
245 "template_path": os.path.dirname(__file__) + "/templates"}
246
247 EXCLUDE_LIST = ('width', 'height',)
248
249 def __init__(self, model_cls, visualization_elements, name="Mesa Model",
250 model_params={}):
251 """ Create a new visualization server with the given elements. """
252 # Prep visualization elements:
253 self.visualization_elements = visualization_elements
254 self.package_includes = set()
255 self.local_includes = set()
256 self.js_code = []
257 for element in self.visualization_elements:
258 for include_file in element.package_includes:
259 self.package_includes.add(include_file)
260 for include_file in element.local_includes:
261 self.local_includes.add(include_file)
262 self.js_code.append(element.js_code)
263
264 # Initializing the model
265 self.model_name = name
266 self.model_cls = model_cls
267 self.description = 'No description available'
268 if hasattr(model_cls, 'description'):
269 self.description = model_cls.description
270 elif model_cls.__doc__ is not None:
271 self.description = model_cls.__doc__
272
273 self.model_kwargs = model_params
274 self.reset_model()
275
276 # Initializing the application itself:
277 super().__init__(self.handlers, **self.settings)
278
279 @property
280 def user_params(self):
281 result = {}
282 for param, val in self.model_kwargs.items():
283 if isinstance(val, UserSettableParameter):
284 result[param] = val.json
285
286 return result
287
288 def reset_model(self):
289 """ Reinstantiate the model object, using the current parameters. """
290
291 model_params = {}
292 for key, val in self.model_kwargs.items():
293 if isinstance(val, UserSettableParameter):
294 if val.param_type == 'static_text': # static_text is never used for setting params
295 continue
296 model_params[key] = val.value
297 else:
298 model_params[key] = val
299
300 self.model = self.model_cls(**model_params)
301
302 def render_model(self):
303 """ Turn the current state of the model into a dictionary of
304 visualizations
305
306 """
307 visualization_state = []
308 for element in self.visualization_elements:
309 element_state = element.render(self.model)
310 visualization_state.append(element_state)
311 return visualization_state
312
313 def launch(self, port=None):
314 """ Run the app. """
315 if port is not None:
316 self.port = port
317 url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
318 print('Interface starting at {url}'.format(url=url))
319 self.listen(self.port)
320 webbrowser.open(url)
321 tornado.autoreload.start()
322 tornado.ioloop.IOLoop.current().start()
323
[end of mesa/visualization/ModularVisualization.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/mesa/visualization/ModularVisualization.py b/mesa/visualization/ModularVisualization.py
--- a/mesa/visualization/ModularVisualization.py
+++ b/mesa/visualization/ModularVisualization.py
@@ -310,13 +310,14 @@
visualization_state.append(element_state)
return visualization_state
- def launch(self, port=None):
+ def launch(self, port=None, open_browser=True):
""" Run the app. """
if port is not None:
self.port = port
url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)
print('Interface starting at {url}'.format(url=url))
self.listen(self.port)
- webbrowser.open(url)
+ if open_browser:
+ webbrowser.open(url)
tornado.autoreload.start()
tornado.ioloop.IOLoop.current().start()
|
{"golden_diff": "diff --git a/mesa/visualization/ModularVisualization.py b/mesa/visualization/ModularVisualization.py\n--- a/mesa/visualization/ModularVisualization.py\n+++ b/mesa/visualization/ModularVisualization.py\n@@ -310,13 +310,14 @@\n visualization_state.append(element_state)\n return visualization_state\n \n- def launch(self, port=None):\n+ def launch(self, port=None, open_browser=True):\n \"\"\" Run the app. \"\"\"\n if port is not None:\n self.port = port\n url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)\n print('Interface starting at {url}'.format(url=url))\n self.listen(self.port)\n- webbrowser.open(url)\n+ if open_browser:\n+ webbrowser.open(url)\n tornado.autoreload.start()\n tornado.ioloop.IOLoop.current().start()\n", "issue": "Make opening the browser when launching the server optional\n**What's the problem this feature will solve?**\r\nWhen we call `server.launch()` on a `ModularServer` instance the browser always opens another tab. This is not always desired behavior. \r\n\r\n**Describe the solution you'd like**\r\nWe should be able to make this optional. To maintain backwards compatibility we can keep the current behavior as default but over ride it with something like `server.launch(open_browser=False)`\r\n\r\n**Additional context**\r\nI will make a PR with this simple change.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nModularServer\n=============\n\nA visualization server which renders a model via one or more elements.\n\nThe concept for the modular visualization server as follows:\nA visualization is composed of VisualizationElements, each of which defines how\nto generate some visualization from a model instance and render it on the\nclient. VisualizationElements may be anything from a simple text display to\na multilayered HTML5 canvas.\n\nThe actual server is launched with one or more VisualizationElements;\nit runs the model object through each of them, generating data to be sent to\nthe client. The client page is also generated based on the JavaScript code\nprovided by each element.\n\nThis file consists of the following classes:\n\nVisualizationElement: Parent class for all other visualization elements, with\n the minimal necessary options.\nPageHandler: The handler for the visualization page, generated from a template\n and built from the various visualization elements.\nSocketHandler: Handles the websocket connection between the client page and\n the server.\nModularServer: The overall visualization application class which stores and\n controls the model and visualization instance.\n\n\nModularServer should *not* need to be subclassed on a model-by-model basis; it\nshould be primarily a pass-through for VisualizationElement subclasses, which\ndefine the actual visualization specifics.\n\nFor example, suppose we have created two visualization elements for our model,\ncalled canvasvis and graphvis; we would launch a server with:\n\n server = ModularServer(MyModel, [canvasvis, graphvis], name=\"My Model\")\n server.launch()\n\nThe client keeps track of what step it is showing. 
Clicking the Step button in\nthe browser sends a message requesting the viz_state corresponding to the next\nstep position, which is then sent back to the client via the websocket.\n\nThe websocket protocol is as follows:\nEach message is a JSON object, with a \"type\" property which defines the rest of\nthe structure.\n\nServer -> Client:\n Send over the model state to visualize.\n Model state is a list, with each element corresponding to a div; each div\n is expected to have a render function associated with it, which knows how\n to render that particular data. The example below includes two elements:\n the first is data for a CanvasGrid, the second for a raw text display.\n\n {\n \"type\": \"viz_state\",\n \"data\": [{0:[ {\"Shape\": \"circle\", \"x\": 0, \"y\": 0, \"r\": 0.5,\n \"Color\": \"#AAAAAA\", \"Filled\": \"true\", \"Layer\": 0,\n \"text\": 'A', \"text_color\": \"white\" }]},\n \"Shape Count: 1\"]\n }\n\n Informs the client that the model is over.\n {\"type\": \"end\"}\n\n Informs the client of the current model's parameters\n {\n \"type\": \"model_params\",\n \"params\": 'dict' of model params, (i.e. {arg_1: val_1, ...})\n }\n\nClient -> Server:\n Reset the model.\n TODO: Allow this to come with parameters\n {\n \"type\": \"reset\"\n }\n\n Get a given state.\n {\n \"type\": \"get_step\",\n \"step:\" index of the step to get.\n }\n\n Submit model parameter updates\n {\n \"type\": \"submit_params\",\n \"param\": name of model parameter\n \"value\": new value for 'param'\n }\n\n Get the model's parameters\n {\n \"type\": \"get_params\"\n }\n\n\"\"\"\nimport os\nimport tornado.autoreload\nimport tornado.ioloop\nimport tornado.web\nimport tornado.websocket\nimport tornado.escape\nimport tornado.gen\nimport webbrowser\n\nfrom mesa.visualization.UserParam import UserSettableParameter\n\n# Suppress several pylint warnings for this file.\n# Attributes being defined outside of init is a Tornado feature.\n# pylint: disable=attribute-defined-outside-init\n\n\nclass VisualizationElement:\n \"\"\"\n Defines an element of the visualization.\n\n Attributes:\n package_includes: A list of external JavaScript files to include that\n are part of the Mesa packages.\n local_includes: A list of JavaScript files that are local to the\n directory that the server is being run in.\n js_code: A JavaScript code string to instantiate the element.\n\n Methods:\n render: Takes a model object, and produces JSON data which can be sent\n to the client.\n\n \"\"\"\n\n package_includes = []\n local_includes = []\n js_code = ''\n render_args = {}\n\n def __init__(self):\n pass\n\n def render(self, model):\n \"\"\" Build visualization data from a model object.\n\n Args:\n model: A model object\n\n Returns:\n A JSON-ready object.\n\n \"\"\"\n return \"<b>VisualizationElement goes here</b>.\"\n\n# =============================================================================\n# Actual Tornado code starts here:\n\n\nclass PageHandler(tornado.web.RequestHandler):\n \"\"\" Handler for the HTML template which holds the visualization. 
\"\"\"\n\n def get(self):\n elements = self.application.visualization_elements\n for i, element in enumerate(elements):\n element.index = i\n self.render(\"modular_template.html\", port=self.application.port,\n model_name=self.application.model_name,\n description=self.application.description,\n package_includes=self.application.package_includes,\n local_includes=self.application.local_includes,\n scripts=self.application.js_code)\n\n\nclass SocketHandler(tornado.websocket.WebSocketHandler):\n \"\"\" Handler for websocket. \"\"\"\n def open(self):\n if self.application.verbose:\n print(\"Socket opened!\")\n self.write_message({\n \"type\": \"model_params\",\n \"params\": self.application.user_params\n })\n\n def check_origin(self, origin):\n return True\n\n @property\n def viz_state_message(self):\n return {\n \"type\": \"viz_state\",\n \"data\": self.application.render_model()\n }\n\n def on_message(self, message):\n \"\"\" Receiving a message from the websocket, parse, and act accordingly.\n\n \"\"\"\n if self.application.verbose:\n print(message)\n msg = tornado.escape.json_decode(message)\n\n if msg[\"type\"] == \"get_step\":\n if not self.application.model.running:\n self.write_message({\"type\": \"end\"})\n else:\n self.application.model.step()\n self.write_message(self.viz_state_message)\n\n elif msg[\"type\"] == \"reset\":\n self.application.reset_model()\n self.write_message(self.viz_state_message)\n\n elif msg[\"type\"] == \"submit_params\":\n param = msg[\"param\"]\n value = msg[\"value\"]\n\n # Is the param editable?\n if param in self.application.user_params:\n if isinstance(self.application.model_kwargs[param], UserSettableParameter):\n self.application.model_kwargs[param].value = value\n else:\n self.application.model_kwargs[param] = value\n\n else:\n if self.application.verbose:\n print(\"Unexpected message!\")\n\n\nclass ModularServer(tornado.web.Application):\n \"\"\" Main visualization application. \"\"\"\n verbose = True\n\n port = 8521 # Default port to listen on\n max_steps = 100000\n\n # Handlers and other globals:\n page_handler = (r'/', PageHandler)\n socket_handler = (r'/ws', SocketHandler)\n static_handler = (r'/static/(.*)', tornado.web.StaticFileHandler,\n {\"path\": os.path.dirname(__file__) + \"/templates\"})\n local_handler = (r'/local/(.*)', tornado.web.StaticFileHandler,\n {\"path\": ''})\n\n handlers = [page_handler, socket_handler, static_handler, local_handler]\n\n settings = {\"debug\": True,\n \"autoreload\": False,\n \"template_path\": os.path.dirname(__file__) + \"/templates\"}\n\n EXCLUDE_LIST = ('width', 'height',)\n\n def __init__(self, model_cls, visualization_elements, name=\"Mesa Model\",\n model_params={}):\n \"\"\" Create a new visualization server with the given elements. 
\"\"\"\n # Prep visualization elements:\n self.visualization_elements = visualization_elements\n self.package_includes = set()\n self.local_includes = set()\n self.js_code = []\n for element in self.visualization_elements:\n for include_file in element.package_includes:\n self.package_includes.add(include_file)\n for include_file in element.local_includes:\n self.local_includes.add(include_file)\n self.js_code.append(element.js_code)\n\n # Initializing the model\n self.model_name = name\n self.model_cls = model_cls\n self.description = 'No description available'\n if hasattr(model_cls, 'description'):\n self.description = model_cls.description\n elif model_cls.__doc__ is not None:\n self.description = model_cls.__doc__\n\n self.model_kwargs = model_params\n self.reset_model()\n\n # Initializing the application itself:\n super().__init__(self.handlers, **self.settings)\n\n @property\n def user_params(self):\n result = {}\n for param, val in self.model_kwargs.items():\n if isinstance(val, UserSettableParameter):\n result[param] = val.json\n\n return result\n\n def reset_model(self):\n \"\"\" Reinstantiate the model object, using the current parameters. \"\"\"\n\n model_params = {}\n for key, val in self.model_kwargs.items():\n if isinstance(val, UserSettableParameter):\n if val.param_type == 'static_text': # static_text is never used for setting params\n continue\n model_params[key] = val.value\n else:\n model_params[key] = val\n\n self.model = self.model_cls(**model_params)\n\n def render_model(self):\n \"\"\" Turn the current state of the model into a dictionary of\n visualizations\n\n \"\"\"\n visualization_state = []\n for element in self.visualization_elements:\n element_state = element.render(self.model)\n visualization_state.append(element_state)\n return visualization_state\n\n def launch(self, port=None):\n \"\"\" Run the app. \"\"\"\n if port is not None:\n self.port = port\n url = 'http://127.0.0.1:{PORT}'.format(PORT=self.port)\n print('Interface starting at {url}'.format(url=url))\n self.listen(self.port)\n webbrowser.open(url)\n tornado.autoreload.start()\n tornado.ioloop.IOLoop.current().start()\n", "path": "mesa/visualization/ModularVisualization.py"}]}
num_tokens_prompt: 3,741 | num_tokens_diff: 199
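With the patch above applied, `ModularServer.launch()` keeps its old default (open a browser tab) but gains an opt-out. The sketch below shows the intended call; `DummyModel` and the bare `VisualizationElement` are placeholders rather than a meaningful simulation, and it assumes a Mesa install that already includes the patch.

```python
from mesa.visualization.ModularVisualization import ModularServer, VisualizationElement


class DummyModel:
    """Placeholder model: one step, then stop."""

    def __init__(self):
        self.running = True

    def step(self):
        self.running = False


server = ModularServer(DummyModel, [VisualizationElement()], name="Dummy")

# server.launch()                              # default: opens a browser tab
server.launch(port=8521, open_browser=False)   # new opt-out added by the patch
```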
gh_patches_debug_30860
|
rasdani/github-patches
|
git_diff
|
TheAlgorithms__Python-2032
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mergesort Update Variable Names
I was looking over the mergesort.py file in the divide_and_conquer directory when I saw that all of the variable names are a single letter and there is not much documentation. Does anyone know enough about this file to improve the variable names and make the code more understandable?
</issue>
<code>
[start of divide_and_conquer/mergesort.py]
1 def merge(a, b, m, e):
2 l = a[b : m + 1] # noqa: E741
3 r = a[m + 1 : e + 1]
4 k = b
5 i = 0
6 j = 0
7 while i < len(l) and j < len(r):
8 # change sign for Descending order
9 if l[i] < r[j]:
10 a[k] = l[i]
11 i += 1
12 else:
13 a[k] = r[j]
14 j += 1
15 k += 1
16 while i < len(l):
17 a[k] = l[i]
18 i += 1
19 k += 1
20 while j < len(r):
21 a[k] = r[j]
22 j += 1
23 k += 1
24 return a
25
26
27 def mergesort(a, b, e):
28 """
29 >>> mergesort([3,2,1],0,2)
30 [1, 2, 3]
31 >>> mergesort([3,2,1,0,1,2,3,5,4],0,8)
32 [0, 1, 1, 2, 2, 3, 3, 4, 5]
33 """
34 if b < e:
35 m = (b + e) // 2
36 # print("ms1",a,b,m)
37 mergesort(a, b, m)
38 # print("ms2",a,m+1,e)
39 mergesort(a, m + 1, e)
40 # print("m",a,b,m,e)
41 merge(a, b, m, e)
42 return a
43
44
45 if __name__ == "__main__":
46 import doctest
47
48 doctest.testmod()
49
[end of divide_and_conquer/mergesort.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/divide_and_conquer/mergesort.py b/divide_and_conquer/mergesort.py
--- a/divide_and_conquer/mergesort.py
+++ b/divide_and_conquer/mergesort.py
@@ -1,45 +1,48 @@
-def merge(a, b, m, e):
- l = a[b : m + 1] # noqa: E741
- r = a[m + 1 : e + 1]
- k = b
+def merge(arr, left, mid, right):
+ # overall array will divided into 2 array
+ # left_arr contains the left portion of array from left to mid
+ # right_arr contains the right portion of array from mid + 1 to right
+ left_arr = arr[left : mid + 1]
+ right_arr = arr[mid + 1 : right + 1]
+ k = left
i = 0
j = 0
- while i < len(l) and j < len(r):
+ while i < len(left_arr) and j < len(right_arr):
# change sign for Descending order
- if l[i] < r[j]:
- a[k] = l[i]
+ if left_arr[i] < right_arr[j]:
+ arr[k] = left_arr[i]
i += 1
else:
- a[k] = r[j]
+ arr[k] = right_arr[j]
j += 1
k += 1
- while i < len(l):
- a[k] = l[i]
+ while i < len(left_arr):
+ arr[k] = left_arr[i]
i += 1
k += 1
- while j < len(r):
- a[k] = r[j]
+ while j < len(right_arr):
+ arr[k] = right_arr[j]
j += 1
k += 1
- return a
+ return arr
-def mergesort(a, b, e):
+def mergesort(arr, left, right):
"""
- >>> mergesort([3,2,1],0,2)
+ >>> mergesort([3, 2, 1], 0, 2)
[1, 2, 3]
- >>> mergesort([3,2,1,0,1,2,3,5,4],0,8)
+ >>> mergesort([3, 2, 1, 0, 1, 2, 3, 5, 4], 0, 8)
[0, 1, 1, 2, 2, 3, 3, 4, 5]
"""
- if b < e:
- m = (b + e) // 2
+ if left < right:
+ mid = (left + right) // 2
# print("ms1",a,b,m)
- mergesort(a, b, m)
+ mergesort(arr, left, mid)
# print("ms2",a,m+1,e)
- mergesort(a, m + 1, e)
+ mergesort(arr, mid + 1, right)
# print("m",a,b,m,e)
- merge(a, b, m, e)
- return a
+ merge(arr, left, mid, right)
+ return arr
if __name__ == "__main__":
|
{"golden_diff": "diff --git a/divide_and_conquer/mergesort.py b/divide_and_conquer/mergesort.py\n--- a/divide_and_conquer/mergesort.py\n+++ b/divide_and_conquer/mergesort.py\n@@ -1,45 +1,48 @@\n-def merge(a, b, m, e):\n- l = a[b : m + 1] # noqa: E741\n- r = a[m + 1 : e + 1]\n- k = b\n+def merge(arr, left, mid, right):\n+ # overall array will divided into 2 array\n+ # left_arr contains the left portion of array from left to mid\n+ # right_arr contains the right portion of array from mid + 1 to right\n+ left_arr = arr[left : mid + 1]\n+ right_arr = arr[mid + 1 : right + 1]\n+ k = left\n i = 0\n j = 0\n- while i < len(l) and j < len(r):\n+ while i < len(left_arr) and j < len(right_arr):\n # change sign for Descending order\n- if l[i] < r[j]:\n- a[k] = l[i]\n+ if left_arr[i] < right_arr[j]:\n+ arr[k] = left_arr[i]\n i += 1\n else:\n- a[k] = r[j]\n+ arr[k] = right_arr[j]\n j += 1\n k += 1\n- while i < len(l):\n- a[k] = l[i]\n+ while i < len(left_arr):\n+ arr[k] = left_arr[i]\n i += 1\n k += 1\n- while j < len(r):\n- a[k] = r[j]\n+ while j < len(right_arr):\n+ arr[k] = right_arr[j]\n j += 1\n k += 1\n- return a\n+ return arr\n \n \n-def mergesort(a, b, e):\n+def mergesort(arr, left, right):\n \"\"\"\n- >>> mergesort([3,2,1],0,2)\n+ >>> mergesort([3, 2, 1], 0, 2)\n [1, 2, 3]\n- >>> mergesort([3,2,1,0,1,2,3,5,4],0,8)\n+ >>> mergesort([3, 2, 1, 0, 1, 2, 3, 5, 4], 0, 8)\n [0, 1, 1, 2, 2, 3, 3, 4, 5]\n \"\"\"\n- if b < e:\n- m = (b + e) // 2\n+ if left < right:\n+ mid = (left + right) // 2\n # print(\"ms1\",a,b,m)\n- mergesort(a, b, m)\n+ mergesort(arr, left, mid)\n # print(\"ms2\",a,m+1,e)\n- mergesort(a, m + 1, e)\n+ mergesort(arr, mid + 1, right)\n # print(\"m\",a,b,m,e)\n- merge(a, b, m, e)\n- return a\n+ merge(arr, left, mid, right)\n+ return arr\n \n \n if __name__ == \"__main__\":\n", "issue": "Mergesort Update Variable Names\nI was looking over the mergesort.py file in the divide_and_conquer directory when I saw that all of the variable names are a single letter and there is not much documentation. Does anyone know enough about this file to improve the variable names and make the code more understandable?\n", "before_files": [{"content": "def merge(a, b, m, e):\n l = a[b : m + 1] # noqa: E741\n r = a[m + 1 : e + 1]\n k = b\n i = 0\n j = 0\n while i < len(l) and j < len(r):\n # change sign for Descending order\n if l[i] < r[j]:\n a[k] = l[i]\n i += 1\n else:\n a[k] = r[j]\n j += 1\n k += 1\n while i < len(l):\n a[k] = l[i]\n i += 1\n k += 1\n while j < len(r):\n a[k] = r[j]\n j += 1\n k += 1\n return a\n\n\ndef mergesort(a, b, e):\n \"\"\"\n >>> mergesort([3,2,1],0,2)\n [1, 2, 3]\n >>> mergesort([3,2,1,0,1,2,3,5,4],0,8)\n [0, 1, 1, 2, 2, 3, 3, 4, 5]\n \"\"\"\n if b < e:\n m = (b + e) // 2\n # print(\"ms1\",a,b,m)\n mergesort(a, b, m)\n # print(\"ms2\",a,m+1,e)\n mergesort(a, m + 1, e)\n # print(\"m\",a,b,m,e)\n merge(a, b, m, e)\n return a\n\n\nif __name__ == \"__main__\":\n import doctest\n\n doctest.testmod()\n", "path": "divide_and_conquer/mergesort.py"}]}
| 1,087 | 773 |
gh_patches_debug_34494
|
rasdani/github-patches
|
git_diff
|
techmatters__terraso-backend-238
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cannot delete users who have uploaded shared files
## Description
Attempting to delete a user who has uploaded files will give an error like so
```
Cannot delete user
Deleting the selected user would require deleting the following protected related objects:
Data entry: acBie9x4 WieezMsPbKL4P2
Data entry: KoBo question set
Data entry: myfile
Data entry: plus+sign+cool
Data entry: acBie9x4WieezMsPbKL4P2
```
</issue>
<code>
[start of terraso_backend/apps/core/models/users.py]
1 import uuid
2
3 from django.contrib.auth.models import AbstractUser, BaseUserManager
4 from django.db import models
5 from safedelete.models import SOFT_DELETE_CASCADE, SafeDeleteManager, SafeDeleteModel
6
7
8 class UserManager(SafeDeleteManager, BaseUserManager):
9 use_in_migrations = True
10
11 def _create_user(self, email, password, **extra_fields):
12 """Create and save a User with the given email and password."""
13 if not email:
14 raise ValueError("The given email must be set")
15
16 email = self.normalize_email(email)
17 user = self.model(email=email, **extra_fields)
18 user.set_password(password)
19 user.save(using=self._db)
20
21 return user
22
23 def create_user(self, email, password=None, **extra_fields):
24 """Create and save a regular User with the given email and password."""
25 extra_fields.setdefault("is_staff", False)
26 extra_fields.setdefault("is_superuser", False)
27 return self._create_user(email, password, **extra_fields)
28
29 def create_superuser(self, email, password, **extra_fields):
30 """Create and save a SuperUser with the given email and password."""
31 extra_fields.setdefault("is_staff", True)
32 extra_fields.setdefault("is_superuser", True)
33
34 if extra_fields.get("is_staff") is not True:
35 raise ValueError("Superuser must have is_staff=True.")
36 if extra_fields.get("is_superuser") is not True:
37 raise ValueError("Superuser must have is_superuser=True.")
38
39 return self._create_user(email, password, **extra_fields)
40
41
42 class User(SafeDeleteModel, AbstractUser):
43 """This model represents a User on Terraso platform."""
44
45 fields_to_trim = ["first_name", "last_name"]
46
47 _safedelete_policy = SOFT_DELETE_CASCADE
48
49 id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
50 created_at = models.DateTimeField(auto_now_add=True)
51 updated_at = models.DateTimeField(auto_now=True)
52
53 username = None
54 email = models.EmailField()
55 profile_image = models.URLField(blank=True, default="")
56
57 USERNAME_FIELD = "email"
58 REQUIRED_FIELDS = []
59
60 objects = UserManager()
61
62 class Meta:
63 get_latest_by = "created_at"
64 ordering = ["-created_at"]
65 constraints = (
66 models.UniqueConstraint(
67 fields=("email",),
68 condition=models.Q(deleted_at__isnull=True),
69 name="unique_active_email",
70 ),
71 )
72
73 def save(self, *args, **kwargs):
74 for field in self.fields_to_trim:
75 setattr(self, field, getattr(self, field).strip())
76 return super().save(*args, **kwargs)
77
78 def is_landscape_manager(self, landscape_id):
79 return (
80 self.memberships.managers_only()
81 .filter(
82 group__associated_landscapes__is_default_landscape_group=True,
83 group__associated_landscapes__landscape__pk=landscape_id,
84 )
85 .exists()
86 )
87
88 def is_group_manager(self, group_id):
89 return self.memberships.managers_only().filter(group__pk=group_id).exists()
90
91 def __str__(self):
92 return self.email
93
94
95 class UserPreference(models.Model):
96 id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
97 created_at = models.DateTimeField(auto_now_add=True)
98 updated_at = models.DateTimeField(auto_now=True)
99 key = models.CharField(max_length=128)
100 value = models.CharField(max_length=512, blank=True, default="")
101
102 user = models.ForeignKey(User, on_delete=models.CASCADE, related_name="preferences")
103
104 class Meta:
105 constraints = (
106 models.UniqueConstraint(
107 fields=("key", "user"),
108 name="unique_user_preference",
109 ),
110 )
111
[end of terraso_backend/apps/core/models/users.py]
[start of terraso_backend/apps/shared_data/models/data_entries.py]
1 from django.db import models
2 from django.utils import timezone
3 from django.utils.translation import gettext_lazy as _
4
5 from apps.core.models import BaseModel, Group, User
6 from apps.shared_data import permission_rules as perm_rules
7 from apps.shared_data.services import DataEntryFileStorage
8
9
10 class DataEntry(BaseModel):
11 """
12 Data Entry stores information about resources (usually files) that contain
13 different kind of data used by Landscape managers. Common resource types are
14 csv, xls and JSON files.
15
16 A Data Entry can point to internal or external resources. An internal
17 resource is stored on Terraso's infrastructure and an external resource is
18 stored out of the Terraso's infrastructure. In both cases, the Data Entry
19 only has the URL for that resource as a link to it.
20
21 Attributes
22 ----------
23 name: str
24 any user given name for that resource
25 description: str
26 a longer description explaining the resource
27 resource_type: str
28 the 'technical' type of the resource, usually the mime type
29 url: str
30 the URL where the resource can be accessed
31
32 groups: ManyToManyField(Group)
33 Groups where the resource is linked to (shared)
34 created_by: User
35 User who created the resource
36 """
37
38 name = models.CharField(max_length=128)
39 description = models.TextField(blank=True, default="")
40
41 ENTRY_TYPE_FILE = "file"
42 ENTRY_TYPE_LINK = "link"
43 MEMBERSHIP_TYPES = (
44 (ENTRY_TYPE_FILE, _("File")),
45 (ENTRY_TYPE_LINK, _("Link")),
46 )
47 entry_type = models.CharField(
48 max_length=32,
49 choices=MEMBERSHIP_TYPES,
50 )
51
52 resource_type = models.CharField(max_length=255, blank=True, default="")
53 url = models.URLField()
54 size = models.PositiveBigIntegerField(null=True, blank=True)
55
56 groups = models.ManyToManyField(Group, related_name="data_entries")
57 created_by = models.ForeignKey(User, on_delete=models.PROTECT)
58 file_removed_at = models.DateTimeField(blank=True, null=True)
59
60 class Meta(BaseModel.Meta):
61 verbose_name_plural = "Data Entries"
62 rules_permissions = {
63 "change": perm_rules.allowed_to_change_data_entry,
64 "delete": perm_rules.allowed_to_delete_data_entry,
65 "view": perm_rules.allowed_to_view_data_entry,
66 }
67
68 @property
69 def s3_object_name(self):
70 object_name = "/".join(self.url.split("/")[-2:]) if self.url else ""
71
72 # We want to put back the space character so the sign url works properly
73 object_name = object_name.replace("%20", " ")
74 return object_name
75
76 @property
77 def signed_url(self):
78 storage = DataEntryFileStorage(custom_domain=None)
79 return storage.url(self.s3_object_name)
80
81 def delete_file_on_storage(self):
82 if not self.deleted_at:
83 raise RuntimeError(
84 f"Storage object cannot be deleted if its DataEntry ({self.id}) is not deleted."
85 )
86
87 if self.file_removed_at:
88 return
89
90 storage = DataEntryFileStorage(custom_domain=None)
91 storage.delete(self.s3_object_name)
92 self.file_removed_at = timezone.now()
93 self.save(keep_deleted=True)
94
95 def to_dict(self):
96 return dict(
97 id=str(self.id),
98 name=self.name,
99 entry_type=self.entry_type,
100 description=self.description,
101 url=self.signed_url,
102 resource_type=self.resource_type,
103 size=self.size,
104 created_by=str(self.created_by.id),
105 groups=[str(group.id) for group in self.groups.all()],
106 )
107
108 def __str__(self):
109 return self.name
110
111 @classmethod
112 def get_entry_type_from_text(cls, entry_type):
113 if entry_type and entry_type.lower() == cls.ENTRY_TYPE_FILE:
114 return cls.ENTRY_TYPE_FILE
115 return cls.ENTRY_TYPE_LINK
116
[end of terraso_backend/apps/shared_data/models/data_entries.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/terraso_backend/apps/core/models/users.py b/terraso_backend/apps/core/models/users.py
--- a/terraso_backend/apps/core/models/users.py
+++ b/terraso_backend/apps/core/models/users.py
@@ -85,6 +85,17 @@
.exists()
)
+ def soft_delete_policy_action(self, **kwargs):
+ """Relink files to deleted user. The default policy is to set the `created_by` field to
+ null if the user is deleted. However, for a soft deletion we want to keep this link. That
+ way if the user is restored, the created_by is still pointing to the same place."""
+ linked_dataentries = self.dataentry_set.all()
+ delete_response = super().soft_delete_policy_action()
+ for entry in linked_dataentries:
+ entry.created_by = self
+ entry.save()
+ return delete_response
+
def is_group_manager(self, group_id):
return self.memberships.managers_only().filter(group__pk=group_id).exists()
diff --git a/terraso_backend/apps/shared_data/models/data_entries.py b/terraso_backend/apps/shared_data/models/data_entries.py
--- a/terraso_backend/apps/shared_data/models/data_entries.py
+++ b/terraso_backend/apps/shared_data/models/data_entries.py
@@ -1,6 +1,7 @@
from django.db import models
from django.utils import timezone
from django.utils.translation import gettext_lazy as _
+from safedelete.models import SOFT_DELETE
from apps.core.models import BaseModel, Group, User
from apps.shared_data import permission_rules as perm_rules
@@ -35,6 +36,9 @@
User who created the resource
"""
+ # file will not be deleted in cascade
+ _safedelete_policy = SOFT_DELETE
+
name = models.CharField(max_length=128)
description = models.TextField(blank=True, default="")
@@ -54,7 +58,7 @@
size = models.PositiveBigIntegerField(null=True, blank=True)
groups = models.ManyToManyField(Group, related_name="data_entries")
- created_by = models.ForeignKey(User, on_delete=models.PROTECT)
+ created_by = models.ForeignKey(User, null=True, on_delete=models.DO_NOTHING)
file_removed_at = models.DateTimeField(blank=True, null=True)
class Meta(BaseModel.Meta):
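For context on the patch above: `on_delete=models.PROTECT` is what produced the admin error quoted in the issue (Django raises `ProtectedError` when it tries to collect the user's data entries). The patch makes the link nullable with `DO_NOTHING` and relinks the entries in a soft-delete hook instead. A minimal sketch of the difference, using hypothetical models rather than the Terraso code base:

```python
# Hypothetical minimal models -- not part of the Terraso code base.
from django.db import models


class Author(models.Model):
    name = models.CharField(max_length=64)


class ProtectedEntry(models.Model):
    # Deleting an Author that still has ProtectedEntry rows raises
    # django.db.models.ProtectedError, the behaviour reported in the issue.
    created_by = models.ForeignKey(Author, on_delete=models.PROTECT)


class RelaxedEntry(models.Model):
    # Nullable FK with DO_NOTHING: the entry row is left untouched when the
    # author is (soft-)deleted, and application code keeps the link consistent,
    # which is what the new soft_delete_policy_action() hook does in the patch.
    created_by = models.ForeignKey(Author, null=True, on_delete=models.DO_NOTHING)
```

Note that the `null=True` change implies a schema migration, which is not shown in the diff.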
|
{"golden_diff": "diff --git a/terraso_backend/apps/core/models/users.py b/terraso_backend/apps/core/models/users.py\n--- a/terraso_backend/apps/core/models/users.py\n+++ b/terraso_backend/apps/core/models/users.py\n@@ -85,6 +85,17 @@\n .exists()\n )\n \n+ def soft_delete_policy_action(self, **kwargs):\n+ \"\"\"Relink files to deleted user. The default policy is to set the `created_by` field to\n+ null if the user is deleted. However, for a soft deletion we want to keep this link. That\n+ way if the user is restored, the created_by is still pointing to the same place.\"\"\"\n+ linked_dataentries = self.dataentry_set.all()\n+ delete_response = super().soft_delete_policy_action()\n+ for entry in linked_dataentries:\n+ entry.created_by = self\n+ entry.save()\n+ return delete_response\n+\n def is_group_manager(self, group_id):\n return self.memberships.managers_only().filter(group__pk=group_id).exists()\n \ndiff --git a/terraso_backend/apps/shared_data/models/data_entries.py b/terraso_backend/apps/shared_data/models/data_entries.py\n--- a/terraso_backend/apps/shared_data/models/data_entries.py\n+++ b/terraso_backend/apps/shared_data/models/data_entries.py\n@@ -1,6 +1,7 @@\n from django.db import models\n from django.utils import timezone\n from django.utils.translation import gettext_lazy as _\n+from safedelete.models import SOFT_DELETE\n \n from apps.core.models import BaseModel, Group, User\n from apps.shared_data import permission_rules as perm_rules\n@@ -35,6 +36,9 @@\n User who created the resource\n \"\"\"\n \n+ # file will not be deleted in cascade\n+ _safedelete_policy = SOFT_DELETE\n+\n name = models.CharField(max_length=128)\n description = models.TextField(blank=True, default=\"\")\n \n@@ -54,7 +58,7 @@\n size = models.PositiveBigIntegerField(null=True, blank=True)\n \n groups = models.ManyToManyField(Group, related_name=\"data_entries\")\n- created_by = models.ForeignKey(User, on_delete=models.PROTECT)\n+ created_by = models.ForeignKey(User, null=True, on_delete=models.DO_NOTHING)\n file_removed_at = models.DateTimeField(blank=True, null=True)\n \n class Meta(BaseModel.Meta):\n", "issue": "Cannot delete users who have uploaded shared files\n## Description\r\nAttempting to delete a user who has uploaded files will give an error like so\r\n\r\n```\r\nCannot delete user\r\nDeleting the selected user would require deleting the following protected related objects:\r\n\r\nData entry: acBie9x4 WieezMsPbKL4P2\r\nData entry: KoBo question set\r\nData entry: myfile\r\nData entry: plus+sign+cool\r\nData entry: acBie9x4WieezMsPbKL4P2\r\n\r\n```\r\n\n", "before_files": [{"content": "import uuid\n\nfrom django.contrib.auth.models import AbstractUser, BaseUserManager\nfrom django.db import models\nfrom safedelete.models import SOFT_DELETE_CASCADE, SafeDeleteManager, SafeDeleteModel\n\n\nclass UserManager(SafeDeleteManager, BaseUserManager):\n use_in_migrations = True\n\n def _create_user(self, email, password, **extra_fields):\n \"\"\"Create and save a User with the given email and password.\"\"\"\n if not email:\n raise ValueError(\"The given email must be set\")\n\n email = self.normalize_email(email)\n user = self.model(email=email, **extra_fields)\n user.set_password(password)\n user.save(using=self._db)\n\n return user\n\n def create_user(self, email, password=None, **extra_fields):\n \"\"\"Create and save a regular User with the given email and password.\"\"\"\n extra_fields.setdefault(\"is_staff\", False)\n extra_fields.setdefault(\"is_superuser\", False)\n return self._create_user(email, 
password, **extra_fields)\n\n def create_superuser(self, email, password, **extra_fields):\n \"\"\"Create and save a SuperUser with the given email and password.\"\"\"\n extra_fields.setdefault(\"is_staff\", True)\n extra_fields.setdefault(\"is_superuser\", True)\n\n if extra_fields.get(\"is_staff\") is not True:\n raise ValueError(\"Superuser must have is_staff=True.\")\n if extra_fields.get(\"is_superuser\") is not True:\n raise ValueError(\"Superuser must have is_superuser=True.\")\n\n return self._create_user(email, password, **extra_fields)\n\n\nclass User(SafeDeleteModel, AbstractUser):\n \"\"\"This model represents a User on Terraso platform.\"\"\"\n\n fields_to_trim = [\"first_name\", \"last_name\"]\n\n _safedelete_policy = SOFT_DELETE_CASCADE\n\n id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n\n username = None\n email = models.EmailField()\n profile_image = models.URLField(blank=True, default=\"\")\n\n USERNAME_FIELD = \"email\"\n REQUIRED_FIELDS = []\n\n objects = UserManager()\n\n class Meta:\n get_latest_by = \"created_at\"\n ordering = [\"-created_at\"]\n constraints = (\n models.UniqueConstraint(\n fields=(\"email\",),\n condition=models.Q(deleted_at__isnull=True),\n name=\"unique_active_email\",\n ),\n )\n\n def save(self, *args, **kwargs):\n for field in self.fields_to_trim:\n setattr(self, field, getattr(self, field).strip())\n return super().save(*args, **kwargs)\n\n def is_landscape_manager(self, landscape_id):\n return (\n self.memberships.managers_only()\n .filter(\n group__associated_landscapes__is_default_landscape_group=True,\n group__associated_landscapes__landscape__pk=landscape_id,\n )\n .exists()\n )\n\n def is_group_manager(self, group_id):\n return self.memberships.managers_only().filter(group__pk=group_id).exists()\n\n def __str__(self):\n return self.email\n\n\nclass UserPreference(models.Model):\n id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)\n created_at = models.DateTimeField(auto_now_add=True)\n updated_at = models.DateTimeField(auto_now=True)\n key = models.CharField(max_length=128)\n value = models.CharField(max_length=512, blank=True, default=\"\")\n\n user = models.ForeignKey(User, on_delete=models.CASCADE, related_name=\"preferences\")\n\n class Meta:\n constraints = (\n models.UniqueConstraint(\n fields=(\"key\", \"user\"),\n name=\"unique_user_preference\",\n ),\n )\n", "path": "terraso_backend/apps/core/models/users.py"}, {"content": "from django.db import models\nfrom django.utils import timezone\nfrom django.utils.translation import gettext_lazy as _\n\nfrom apps.core.models import BaseModel, Group, User\nfrom apps.shared_data import permission_rules as perm_rules\nfrom apps.shared_data.services import DataEntryFileStorage\n\n\nclass DataEntry(BaseModel):\n \"\"\"\n Data Entry stores information about resources (usually files) that contain\n different kind of data used by Landscape managers. Common resource types are\n csv, xls and JSON files.\n\n A Data Entry can point to internal or external resources. An internal\n resource is stored on Terraso's infrastructure and an external resource is\n stored out of the Terraso's infrastructure. 
In both cases, the Data Entry\n only has the URL for that resource as a link to it.\n\n Attributes\n ----------\n name: str\n any user given name for that resource\n description: str\n a longer description explaining the resource\n resource_type: str\n the 'technical' type of the resource, usually the mime type\n url: str\n the URL where the resource can be accessed\n\n groups: ManyToManyField(Group)\n Groups where the resource is linked to (shared)\n created_by: User\n User who created the resource\n \"\"\"\n\n name = models.CharField(max_length=128)\n description = models.TextField(blank=True, default=\"\")\n\n ENTRY_TYPE_FILE = \"file\"\n ENTRY_TYPE_LINK = \"link\"\n MEMBERSHIP_TYPES = (\n (ENTRY_TYPE_FILE, _(\"File\")),\n (ENTRY_TYPE_LINK, _(\"Link\")),\n )\n entry_type = models.CharField(\n max_length=32,\n choices=MEMBERSHIP_TYPES,\n )\n\n resource_type = models.CharField(max_length=255, blank=True, default=\"\")\n url = models.URLField()\n size = models.PositiveBigIntegerField(null=True, blank=True)\n\n groups = models.ManyToManyField(Group, related_name=\"data_entries\")\n created_by = models.ForeignKey(User, on_delete=models.PROTECT)\n file_removed_at = models.DateTimeField(blank=True, null=True)\n\n class Meta(BaseModel.Meta):\n verbose_name_plural = \"Data Entries\"\n rules_permissions = {\n \"change\": perm_rules.allowed_to_change_data_entry,\n \"delete\": perm_rules.allowed_to_delete_data_entry,\n \"view\": perm_rules.allowed_to_view_data_entry,\n }\n\n @property\n def s3_object_name(self):\n object_name = \"/\".join(self.url.split(\"/\")[-2:]) if self.url else \"\"\n\n # We want to put back the space character so the sign url works properly\n object_name = object_name.replace(\"%20\", \" \")\n return object_name\n\n @property\n def signed_url(self):\n storage = DataEntryFileStorage(custom_domain=None)\n return storage.url(self.s3_object_name)\n\n def delete_file_on_storage(self):\n if not self.deleted_at:\n raise RuntimeError(\n f\"Storage object cannot be deleted if its DataEntry ({self.id}) is not deleted.\"\n )\n\n if self.file_removed_at:\n return\n\n storage = DataEntryFileStorage(custom_domain=None)\n storage.delete(self.s3_object_name)\n self.file_removed_at = timezone.now()\n self.save(keep_deleted=True)\n\n def to_dict(self):\n return dict(\n id=str(self.id),\n name=self.name,\n entry_type=self.entry_type,\n description=self.description,\n url=self.signed_url,\n resource_type=self.resource_type,\n size=self.size,\n created_by=str(self.created_by.id),\n groups=[str(group.id) for group in self.groups.all()],\n )\n\n def __str__(self):\n return self.name\n\n @classmethod\n def get_entry_type_from_text(cls, entry_type):\n if entry_type and entry_type.lower() == cls.ENTRY_TYPE_FILE:\n return cls.ENTRY_TYPE_FILE\n return cls.ENTRY_TYPE_LINK\n", "path": "terraso_backend/apps/shared_data/models/data_entries.py"}]}
| 2,784 | 523 |
gh_patches_debug_13420
|
rasdani/github-patches
|
git_diff
|
comic__grand-challenge.org-786
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Partial update not working correctly because of custom validation
The validation rule in `app/grandchallenge/annotations/serializers.py:31` is breaking the partial update functionality.
If you try to do a partial update PATCH request to the endpoint, the serializer will try to find the `annotation_set` attribute in the request data. If it is not present, it will throw a `KeyError`.
This should be fixed by first checking if the key exists in the request data and only then running the validation check. The validation check is not needed if the key does not exist because it will then either not change (for partial update request) or throw a `field is required` validation error (for every other type of request).
I will fix this and add a test for it.
</issue>
<code>
[start of app/grandchallenge/annotations/serializers.py]
1 from rest_framework import serializers
2
3 from .models import (
4 ETDRSGridAnnotation,
5 MeasurementAnnotation,
6 BooleanClassificationAnnotation,
7 PolygonAnnotationSet,
8 SinglePolygonAnnotation,
9 LandmarkAnnotationSet,
10 SingleLandmarkAnnotation,
11 )
12 from .validators import validate_grader_is_current_retina_user
13
14
15 class AbstractAnnotationSerializer(serializers.ModelSerializer):
16 def validate_grader(self, value):
17 """
18 Validate that grader is the user creating the object for retina_graders group
19 """
20 validate_grader_is_current_retina_user(value, self.context)
21 return value
22
23 class Meta:
24 abstract = True
25
26
27 class AbstractSingleAnnotationSerializer(serializers.ModelSerializer):
28 def validate(self, data):
29 """
30 Validate that the user that is creating this object equals the annotation_set.grader for retina_graders
31 """
32 validate_grader_is_current_retina_user(
33 data["annotation_set"].grader, self.context
34 )
35 return data
36
37 class Meta:
38 abstract = True
39
40
41 class ETDRSGridAnnotationSerializer(AbstractAnnotationSerializer):
42 class Meta:
43 model = ETDRSGridAnnotation
44 fields = ("grader", "created", "image", "fovea", "optic_disk")
45
46
47 class MeasurementAnnotationSerializer(AbstractAnnotationSerializer):
48 class Meta:
49 model = MeasurementAnnotation
50 fields = ("image", "grader", "created", "start_voxel", "end_voxel")
51
52
53 class BooleanClassificationAnnotationSerializer(AbstractAnnotationSerializer):
54 class Meta:
55 model = BooleanClassificationAnnotation
56 fields = ("image", "grader", "created", "name", "value")
57
58
59 class SinglePolygonAnnotationSerializer(AbstractSingleAnnotationSerializer):
60 annotation_set = serializers.PrimaryKeyRelatedField(
61 queryset=PolygonAnnotationSet.objects.all()
62 )
63
64 class Meta:
65 model = SinglePolygonAnnotation
66 fields = ("id", "value", "annotation_set")
67
68
69 class PolygonAnnotationSetSerializer(AbstractAnnotationSerializer):
70 singlepolygonannotation_set = SinglePolygonAnnotationSerializer(
71 many=True, read_only=True
72 )
73
74 class Meta:
75 model = PolygonAnnotationSet
76 fields = (
77 "id",
78 "image",
79 "grader",
80 "created",
81 "name",
82 "singlepolygonannotation_set",
83 )
84
85
86 class LandmarkAnnotationSetSerializer(AbstractAnnotationSerializer):
87 class Meta:
88 model = LandmarkAnnotationSet
89 fields = ("grader", "created")
90
91
92 class SingleLandmarkAnnotationSerializer(AbstractSingleAnnotationSerializer):
93 class Meta:
94 model = SingleLandmarkAnnotation
95 fields = ("image", "annotation_set", "landmarks")
96
[end of app/grandchallenge/annotations/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/app/grandchallenge/annotations/serializers.py b/app/grandchallenge/annotations/serializers.py
--- a/app/grandchallenge/annotations/serializers.py
+++ b/app/grandchallenge/annotations/serializers.py
@@ -27,11 +27,14 @@
class AbstractSingleAnnotationSerializer(serializers.ModelSerializer):
def validate(self, data):
"""
- Validate that the user that is creating this object equals the annotation_set.grader for retina_graders
+ Validate that the user that is creating this object equals the
+ annotation_set.grader for retina_graders
"""
- validate_grader_is_current_retina_user(
- data["annotation_set"].grader, self.context
- )
+ if data.get("annotation_set") is None:
+ return data
+
+ grader = data["annotation_set"].grader
+ validate_grader_is_current_retina_user(grader, self.context)
return data
class Meta:
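One quick way to see the effect of the guard added above: with DRF's `partial=True`, a PATCH payload that omits `annotation_set` now passes through `validate()` instead of raising a `KeyError`. A hedged sketch (the instance, request, and context layout are assumed, not taken from the repository):

```python
# Illustrative only: `existing_annotation` and `request` are assumed to exist.
serializer = SinglePolygonAnnotationSerializer(
    instance=existing_annotation,    # an already saved SinglePolygonAnnotation
    data={"value": []},              # note: no "annotation_set" key in the payload
    partial=True,                    # partial update, as in a PATCH request
    context={"request": request},    # request authenticated as the grader
)
assert serializer.is_valid(), serializer.errors  # pre-patch this raised KeyError
```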
|
{"golden_diff": "diff --git a/app/grandchallenge/annotations/serializers.py b/app/grandchallenge/annotations/serializers.py\n--- a/app/grandchallenge/annotations/serializers.py\n+++ b/app/grandchallenge/annotations/serializers.py\n@@ -27,11 +27,14 @@\n class AbstractSingleAnnotationSerializer(serializers.ModelSerializer):\n def validate(self, data):\n \"\"\"\n- Validate that the user that is creating this object equals the annotation_set.grader for retina_graders\n+ Validate that the user that is creating this object equals the\n+ annotation_set.grader for retina_graders\n \"\"\"\n- validate_grader_is_current_retina_user(\n- data[\"annotation_set\"].grader, self.context\n- )\n+ if data.get(\"annotation_set\") is None:\n+ return data\n+\n+ grader = data[\"annotation_set\"].grader\n+ validate_grader_is_current_retina_user(grader, self.context)\n return data\n \n class Meta:\n", "issue": "Partial update not working correctly because of custom validation\nThe validation rule in `app/grandchallenge/annotations/serializers.py:31` is breaking the partial update functionality.\r\nIf you try to do a partial update PATCH request to the endpoint, it will try to find the `annotation_set` attribute in the request data. If this is not present it will throw a KeyError. \r\n\r\nThis should be fixed by first checking if the key exists in the request data and only then running the validation check. The validation check is not needed if the key does not exist because it will then either not change (for partial update request) or throw a `field is required` validation error (for every other type of request).\r\n\r\nI will fix this and add a test for it.\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom .models import (\n ETDRSGridAnnotation,\n MeasurementAnnotation,\n BooleanClassificationAnnotation,\n PolygonAnnotationSet,\n SinglePolygonAnnotation,\n LandmarkAnnotationSet,\n SingleLandmarkAnnotation,\n)\nfrom .validators import validate_grader_is_current_retina_user\n\n\nclass AbstractAnnotationSerializer(serializers.ModelSerializer):\n def validate_grader(self, value):\n \"\"\"\n Validate that grader is the user creating the object for retina_graders group\n \"\"\"\n validate_grader_is_current_retina_user(value, self.context)\n return value\n\n class Meta:\n abstract = True\n\n\nclass AbstractSingleAnnotationSerializer(serializers.ModelSerializer):\n def validate(self, data):\n \"\"\"\n Validate that the user that is creating this object equals the annotation_set.grader for retina_graders\n \"\"\"\n validate_grader_is_current_retina_user(\n data[\"annotation_set\"].grader, self.context\n )\n return data\n\n class Meta:\n abstract = True\n\n\nclass ETDRSGridAnnotationSerializer(AbstractAnnotationSerializer):\n class Meta:\n model = ETDRSGridAnnotation\n fields = (\"grader\", \"created\", \"image\", \"fovea\", \"optic_disk\")\n\n\nclass MeasurementAnnotationSerializer(AbstractAnnotationSerializer):\n class Meta:\n model = MeasurementAnnotation\n fields = (\"image\", \"grader\", \"created\", \"start_voxel\", \"end_voxel\")\n\n\nclass BooleanClassificationAnnotationSerializer(AbstractAnnotationSerializer):\n class Meta:\n model = BooleanClassificationAnnotation\n fields = (\"image\", \"grader\", \"created\", \"name\", \"value\")\n\n\nclass SinglePolygonAnnotationSerializer(AbstractSingleAnnotationSerializer):\n annotation_set = serializers.PrimaryKeyRelatedField(\n queryset=PolygonAnnotationSet.objects.all()\n )\n\n class Meta:\n model = SinglePolygonAnnotation\n fields = 
(\"id\", \"value\", \"annotation_set\")\n\n\nclass PolygonAnnotationSetSerializer(AbstractAnnotationSerializer):\n singlepolygonannotation_set = SinglePolygonAnnotationSerializer(\n many=True, read_only=True\n )\n\n class Meta:\n model = PolygonAnnotationSet\n fields = (\n \"id\",\n \"image\",\n \"grader\",\n \"created\",\n \"name\",\n \"singlepolygonannotation_set\",\n )\n\n\nclass LandmarkAnnotationSetSerializer(AbstractAnnotationSerializer):\n class Meta:\n model = LandmarkAnnotationSet\n fields = (\"grader\", \"created\")\n\n\nclass SingleLandmarkAnnotationSerializer(AbstractSingleAnnotationSerializer):\n class Meta:\n model = SingleLandmarkAnnotation\n fields = (\"image\", \"annotation_set\", \"landmarks\")\n", "path": "app/grandchallenge/annotations/serializers.py"}]}
| 1,435 | 214 |
gh_patches_debug_21085
|
rasdani/github-patches
|
git_diff
|
google__flax-541
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PPO example does not terminate properly
### Configuration
Running the PPO example for a small number of frames in order to reproduce as quickly as possible on a cloud VM with a V100 GPU. Configuration: Python 3.7, flax 0.2.2, jax 0.2.1, jaxlib 0.1.55.
Command run:
`python ppo_main.py --config.game=Qbert --config.total_frames=4000`
### Problem you have encountered:
The program does not exit. One can `print('Done')` after `ppo_lib.train` in `ppo_main`, but there is an open thread and the program can't exit (even after adding `raise SystemExit`).
### Extra comments
Added an extra line in `main`, `tf.config.experimental.set_visible_devices([], 'GPU')`, in order for the program to run properly with `tensorflow-gpu`; this is common in other `flax/examples`.
</issue>
<code>
[start of examples/ppo/ppo_main.py]
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import os
16 from absl import flags
17 from absl import app
18 import jax
19 import jax.random
20 from ml_collections import config_flags
21
22 import ppo_lib
23 import models
24 import env_utils
25
26 FLAGS = flags.FLAGS
27
28 flags.DEFINE_string(
29 'logdir', default='/tmp/ppo_training',
30 help=('Directory to save checkpoints and logging info.'))
31
32 config_flags.DEFINE_config_file(
33 'config', os.path.join(os.path.dirname(__file__), 'default_config.py'),
34 'File path to the default configuration file.')
35
36 def main(argv):
37 config = FLAGS.config
38 game = config.game + 'NoFrameskip-v4'
39 num_actions = env_utils.get_num_actions(game)
40 print(f'Playing {game} with {num_actions} actions')
41 key = jax.random.PRNGKey(0)
42 key, subkey = jax.random.split(key)
43 model = models.create_model(subkey, num_outputs=num_actions)
44 optimizer = models.create_optimizer(model, learning_rate=config.learning_rate)
45 del model
46 optimizer = ppo_lib.train(optimizer, config, FLAGS.logdir)
47
48 if __name__ == '__main__':
49 app.run(main)
50
[end of examples/ppo/ppo_main.py]
[start of examples/ppo/agent.py]
1 # Copyright 2020 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """Agent utilities, incl. choosing the move and running in separate process."""
16
17 import multiprocessing
18 import collections
19 import jax
20 import numpy as onp
21
22 import env_utils
23
24 @jax.jit
25 def policy_action(model, state):
26 """Forward pass of the network."""
27 out = model(state)
28 return out
29
30
31 ExpTuple = collections.namedtuple(
32 'ExpTuple', ['state', 'action', 'reward', 'value', 'log_prob', 'done'])
33
34
35 class RemoteSimulator:
36 """Wrap functionality for an agent emulating Atari in a separate process.
37
38 An object of this class is created for every agent.
39 """
40
41 def __init__(self, game: str):
42 """Start the remote process and create Pipe() to communicate with it."""
43 parent_conn, child_conn = multiprocessing.Pipe()
44 self.proc = multiprocessing.Process(
45 target=rcv_action_send_exp, args=(child_conn, game))
46 self.conn = parent_conn
47 self.proc.start()
48
49
50 def rcv_action_send_exp(conn, game: str):
51 """Run the remote agents.
52
53 Receive action from the main learner, perform one step of simulation and
54 send back collected experience.
55 """
56 env = env_utils.create_env(game, clip_rewards=True)
57 while True:
58 obs = env.reset()
59 done = False
60 # Observations fetched from Atari env need additional batch dimension.
61 state = obs[None, ...]
62 while not done:
63 conn.send(state)
64 action = conn.recv()
65 obs, reward, done, _ = env.step(action)
66 next_state = obs[None, ...] if not done else None
67 experience = (state, action, reward, done)
68 conn.send(experience)
69 if done:
70 break
71 state = next_state
72
[end of examples/ppo/agent.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/examples/ppo/agent.py b/examples/ppo/agent.py
--- a/examples/ppo/agent.py
+++ b/examples/ppo/agent.py
@@ -43,6 +43,7 @@
parent_conn, child_conn = multiprocessing.Pipe()
self.proc = multiprocessing.Process(
target=rcv_action_send_exp, args=(child_conn, game))
+ self.proc.daemon = True
self.conn = parent_conn
self.proc.start()
diff --git a/examples/ppo/ppo_main.py b/examples/ppo/ppo_main.py
--- a/examples/ppo/ppo_main.py
+++ b/examples/ppo/ppo_main.py
@@ -19,6 +19,8 @@
import jax.random
from ml_collections import config_flags
+import tensorflow as tf
+
import ppo_lib
import models
import env_utils
@@ -34,6 +36,9 @@
'File path to the default configuration file.')
def main(argv):
+ # Make sure tf does not allocate gpu memory.
+ tf.config.experimental.set_visible_devices([], 'GPU')
+
config = FLAGS.config
game = config.game + 'NoFrameskip-v4'
num_actions = env_utils.get_num_actions(game)
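The `self.proc.daemon = True` line is the part of the patch that fixes the hang: non-daemonic `multiprocessing` children are joined at interpreter shutdown, so the `while True` loop in `rcv_action_send_exp` kept the parent alive forever. A standalone illustration of that behaviour (not Flax code):

```python
# Standalone illustration of daemon vs. non-daemon children; not from the Flax example.
import multiprocessing
import time


def worker():
    while True:          # stands in for the infinite loop in rcv_action_send_exp
        time.sleep(1)


if __name__ == "__main__":
    proc = multiprocessing.Process(target=worker)
    proc.daemon = True   # remove this line and the interpreter never exits
    proc.start()
    print("main finished")  # with only daemon children left, the process can exit
```

The `tf.config.experimental.set_visible_devices([], 'GPU')` hunk is the separate workaround the reporter already described for keeping TensorFlow from allocating GPU memory.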
|
{"golden_diff": "diff --git a/examples/ppo/agent.py b/examples/ppo/agent.py\n--- a/examples/ppo/agent.py\n+++ b/examples/ppo/agent.py\n@@ -43,6 +43,7 @@\n parent_conn, child_conn = multiprocessing.Pipe()\n self.proc = multiprocessing.Process(\n target=rcv_action_send_exp, args=(child_conn, game))\n+ self.proc.daemon = True\n self.conn = parent_conn\n self.proc.start()\n \ndiff --git a/examples/ppo/ppo_main.py b/examples/ppo/ppo_main.py\n--- a/examples/ppo/ppo_main.py\n+++ b/examples/ppo/ppo_main.py\n@@ -19,6 +19,8 @@\n import jax.random\n from ml_collections import config_flags\n \n+import tensorflow as tf\n+\n import ppo_lib\n import models\n import env_utils\n@@ -34,6 +36,9 @@\n 'File path to the default configuration file.')\n \n def main(argv):\n+ # Make sure tf does not allocate gpu memory.\n+ tf.config.experimental.set_visible_devices([], 'GPU')\n+\n config = FLAGS.config\n game = config.game + 'NoFrameskip-v4'\n num_actions = env_utils.get_num_actions(game)\n", "issue": "PPO example does not terminate properly\n### Configuration\r\n\r\nRunning the PPO example for a short number of frames in order to reproduce as fast as possible on a cloud VM with a V100 GPU. Config python3.7, flax 0.2.2, jax 0.2.1, jaxlib 0.1.55 .\r\n\r\nCommand run:\r\n`python ppo_main.py --config.game=Qbert --config.total_frames=4000`\r\n\r\n### Problem you have encountered:\r\n\r\nProgram does not exit. One can `print('Done')` after `ppo_lib.train` in `ppo_main` but there is an open thread and program can't exit (even after adding `raise SystemExit`).\r\n\r\n### Extra comments\r\n\r\nAdded extra line in `main` ` tf.config.experimental.set_visible_devices([],'GPU')` in order for the program to run properly with `tensorflow-gpu`, this is common in other `flax/examples`. \n", "before_files": [{"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\nfrom absl import flags\nfrom absl import app\nimport jax\nimport jax.random\nfrom ml_collections import config_flags\n\nimport ppo_lib\nimport models\nimport env_utils\n\nFLAGS = flags.FLAGS\n\nflags.DEFINE_string(\n 'logdir', default='/tmp/ppo_training',\n help=('Directory to save checkpoints and logging info.'))\n\nconfig_flags.DEFINE_config_file(\n 'config', os.path.join(os.path.dirname(__file__), 'default_config.py'),\n 'File path to the default configuration file.')\n\ndef main(argv):\n config = FLAGS.config\n game = config.game + 'NoFrameskip-v4'\n num_actions = env_utils.get_num_actions(game)\n print(f'Playing {game} with {num_actions} actions')\n key = jax.random.PRNGKey(0)\n key, subkey = jax.random.split(key)\n model = models.create_model(subkey, num_outputs=num_actions)\n optimizer = models.create_optimizer(model, learning_rate=config.learning_rate)\n del model\n optimizer = ppo_lib.train(optimizer, config, FLAGS.logdir)\n\nif __name__ == '__main__':\n app.run(main)\n", "path": "examples/ppo/ppo_main.py"}, {"content": "# Copyright 2020 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 
(the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"Agent utilities, incl. choosing the move and running in separate process.\"\"\"\n\nimport multiprocessing\nimport collections\nimport jax\nimport numpy as onp\n\nimport env_utils\n\[email protected]\ndef policy_action(model, state):\n \"\"\"Forward pass of the network.\"\"\"\n out = model(state)\n return out\n\n\nExpTuple = collections.namedtuple(\n 'ExpTuple', ['state', 'action', 'reward', 'value', 'log_prob', 'done'])\n\n\nclass RemoteSimulator:\n \"\"\"Wrap functionality for an agent emulating Atari in a separate process.\n\n An object of this class is created for every agent.\n \"\"\"\n\n def __init__(self, game: str):\n \"\"\"Start the remote process and create Pipe() to communicate with it.\"\"\"\n parent_conn, child_conn = multiprocessing.Pipe()\n self.proc = multiprocessing.Process(\n target=rcv_action_send_exp, args=(child_conn, game))\n self.conn = parent_conn\n self.proc.start()\n\n\ndef rcv_action_send_exp(conn, game: str):\n \"\"\"Run the remote agents.\n\n Receive action from the main learner, perform one step of simulation and\n send back collected experience.\n \"\"\"\n env = env_utils.create_env(game, clip_rewards=True)\n while True:\n obs = env.reset()\n done = False\n # Observations fetched from Atari env need additional batch dimension.\n state = obs[None, ...]\n while not done:\n conn.send(state)\n action = conn.recv()\n obs, reward, done, _ = env.step(action)\n next_state = obs[None, ...] if not done else None\n experience = (state, action, reward, done)\n conn.send(experience)\n if done:\n break\n state = next_state\n", "path": "examples/ppo/agent.py"}]}
| 1,885 | 269 |
gh_patches_debug_619
|
rasdani/github-patches
|
git_diff
|
liqd__a4-meinberlin-4706
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
#6460 Previous/Next Button Poll Request Results no background color
**URL:** https://meinberlin-dev.liqd.net/projekte/test-poll-merge-running-poll-with-user-content/
**user:** any
**expected behaviour:** Previous/Next button on the poll request results has a pink background.
**behaviour:** Button has no background. Only the outlines turn pink when the button is clicked.
**important screensize:**
**device & browser:**
**Comment/Question:**
Screenshot?
dev:
<img width="286" alt="Bildschirmfoto 2022-11-09 um 05 38 05" src="https://user-images.githubusercontent.com/113356258/200740386-60d26bc2-f169-40e4-9730-79d6d8724dad.png">
<img width="220" alt="Bildschirmfoto 2022-11-09 um 05 40 30" src="https://user-images.githubusercontent.com/113356258/200740411-e40f6bf6-83ba-468f-a941-93bbfe045993.png">
stage:
<img width="189" alt="Bildschirmfoto 2022-11-09 um 05 44 21" src="https://user-images.githubusercontent.com/113356258/200740726-f116d498-cb19-4074-bd57-541f7d5d8d2a.png">
</issue>
<code>
[start of meinberlin/apps/ideas/views.py]
1 from django.contrib import messages
2 from django.db import transaction
3 from django.urls import reverse
4 from django.utils.translation import gettext_lazy as _
5 from django.views import generic
6
7 from adhocracy4.categories import filters as category_filters
8 from adhocracy4.exports.views import DashboardExportView
9 from adhocracy4.filters import filters as a4_filters
10 from adhocracy4.filters import views as filter_views
11 from adhocracy4.filters import widgets as filters_widgets
12 from adhocracy4.filters.filters import FreeTextFilter
13 from adhocracy4.labels import filters as label_filters
14 from adhocracy4.projects.mixins import DisplayProjectOrModuleMixin
15 from adhocracy4.projects.mixins import ProjectMixin
16 from adhocracy4.rules import mixins as rules_mixins
17 from meinberlin.apps.contrib import forms as contrib_forms
18 from meinberlin.apps.contrib.views import CanonicalURLDetailView
19 from meinberlin.apps.moderatorfeedback.forms import ModeratorStatementForm
20 from meinberlin.apps.moderatorfeedback.models import ModeratorStatement
21 from meinberlin.apps.notifications.emails import \
22 NotifyContactOnModeratorFeedback
23 from meinberlin.apps.notifications.emails import \
24 NotifyCreatorOnModeratorFeedback
25
26 from . import forms
27 from . import models
28
29
30 class FreeTextFilterWidget(filters_widgets.FreeTextFilterWidget):
31 label = _('Search')
32
33
34 def get_ordering_choices(view):
35 choices = (('-created', _('Most recent')),)
36 if view.module.has_feature('rate', models.Idea):
37 choices += ('-positive_rating_count', _('Most popular')),
38 choices += ('-comment_count', _('Most commented')),
39 return choices
40
41
42 class IdeaFilterSet(a4_filters.DefaultsFilterSet):
43 defaults = {
44 'ordering': '-created'
45 }
46 category = category_filters.CategoryFilter()
47 labels = label_filters.LabelFilter()
48 ordering = a4_filters.DynamicChoicesOrderingFilter(
49 choices=get_ordering_choices
50 )
51 search = FreeTextFilter(
52 widget=FreeTextFilterWidget,
53 fields=['name']
54 )
55
56 class Meta:
57 model = models.Idea
58 fields = ['search', 'labels', 'category']
59
60
61 class AbstractIdeaListView(ProjectMixin,
62 filter_views.FilteredListView):
63 paginate_by = 15
64
65
66 class IdeaListView(AbstractIdeaListView,
67 DisplayProjectOrModuleMixin
68 ):
69 model = models.Idea
70 filter_set = IdeaFilterSet
71
72 def get_queryset(self):
73 return super().get_queryset()\
74 .filter(module=self.module)
75
76
77 class AbstractIdeaDetailView(ProjectMixin,
78 rules_mixins.PermissionRequiredMixin,
79 CanonicalURLDetailView):
80 get_context_from_object = True
81
82
83 class IdeaDetailView(AbstractIdeaDetailView):
84 model = models.Idea
85 queryset = models.Idea.objects.annotate_positive_rating_count()\
86 .annotate_negative_rating_count()
87 permission_required = 'meinberlin_ideas.view_idea'
88
89
90 class AbstractIdeaCreateView(ProjectMixin,
91 rules_mixins.PermissionRequiredMixin,
92 generic.CreateView):
93 """Create an idea in the context of a module."""
94
95 def get_permission_object(self, *args, **kwargs):
96 return self.module
97
98 def form_valid(self, form):
99 form.instance.creator = self.request.user
100 form.instance.module = self.module
101 return super().form_valid(form)
102
103 def get_form_kwargs(self):
104 kwargs = super().get_form_kwargs()
105 kwargs['module'] = self.module
106 if self.module.settings_instance:
107 kwargs['settings_instance'] = self.module.settings_instance
108 return kwargs
109
110
111 class IdeaCreateView(AbstractIdeaCreateView):
112 model = models.Idea
113 form_class = forms.IdeaForm
114 permission_required = 'meinberlin_ideas.add_idea'
115 template_name = 'meinberlin_ideas/idea_create_form.html'
116
117
118 class AbstractIdeaUpdateView(ProjectMixin,
119 rules_mixins.PermissionRequiredMixin,
120 generic.UpdateView):
121 get_context_from_object = True
122
123 def get_form_kwargs(self):
124 kwargs = super().get_form_kwargs()
125 instance = kwargs.get('instance')
126 kwargs['module'] = instance.module
127 if instance.module.settings_instance:
128 kwargs['settings_instance'] = \
129 instance.module.settings_instance
130 return kwargs
131
132
133 class IdeaUpdateView(AbstractIdeaUpdateView):
134 model = models.Idea
135 form_class = forms.IdeaForm
136 permission_required = 'meinberlin_ideas.change_idea'
137 template_name = 'meinberlin_ideas/idea_update_form.html'
138
139
140 class AbstractIdeaDeleteView(ProjectMixin,
141 rules_mixins.PermissionRequiredMixin,
142 generic.DeleteView):
143 get_context_from_object = True
144
145 def get_success_url(self):
146 return reverse(
147 'project-detail', kwargs={'slug': self.project.slug})
148
149 def delete(self, request, *args, **kwargs):
150 messages.success(self.request, self.success_message)
151 return super(AbstractIdeaDeleteView, self)\
152 .delete(request, *args, **kwargs)
153
154
155 class IdeaDeleteView(AbstractIdeaDeleteView):
156 model = models.Idea
157 success_message = _('Your Idea has been deleted')
158 permission_required = 'meinberlin_ideas.change_idea'
159 template_name = 'meinberlin_ideas/idea_confirm_delete.html'
160
161
162 class AbstractIdeaModerateView(
163 ProjectMixin,
164 rules_mixins.PermissionRequiredMixin,
165 generic.detail.SingleObjectMixin,
166 generic.detail.SingleObjectTemplateResponseMixin,
167 contrib_forms.BaseMultiModelFormView):
168
169 get_context_from_object = True
170
171 def __init__(self):
172 self.forms = {
173 'moderateable': {
174 'model': self.model,
175 'form_class': self.moderateable_form_class
176 },
177 'statement': {
178 'model': ModeratorStatement,
179 'form_class': ModeratorStatementForm
180 }
181 }
182
183 def dispatch(self, *args, **kwargs):
184 self.object = self.get_object()
185 return super().dispatch(*args, **kwargs)
186
187 def get_success_url(self):
188 return self.object.get_absolute_url()
189
190 def forms_save(self, forms, commit=True):
191 objects = super().forms_save(forms, commit=False)
192 moderateable = objects['moderateable']
193 statement = objects['statement']
194
195 if not statement.pk:
196 statement.creator = self.request.user
197
198 with transaction.atomic():
199 statement.save()
200 moderateable.moderator_statement = statement
201 moderateable.save()
202 if hasattr(self.object, 'contact_email'):
203 NotifyContactOnModeratorFeedback.send(self.object)
204 else:
205 NotifyCreatorOnModeratorFeedback.send(self.object)
206 return objects
207
208 def get_instance(self, name):
209 if name == 'moderateable':
210 return self.object
211 elif name == 'statement':
212 return self.object.moderator_statement
213
214
215 class IdeaModerateView(AbstractIdeaModerateView):
216 model = models.Idea
217 permission_required = 'meinberlin_ideas.moderate_idea'
218 template_name = 'meinberlin_ideas/idea_moderate_form.html'
219 moderateable_form_class = forms.IdeaModerateForm
220
221
222 class IdeaDashboardExportView(DashboardExportView):
223 template_name = 'a4exports/export_dashboard.html'
224
225 def get_context_data(self, **kwargs):
226 context = super().get_context_data(**kwargs)
227 context['export'] = reverse(
228 'a4dashboard:idea-export',
229 kwargs={'module_slug': self.module.slug})
230 context['comment_export'] = reverse(
231 'a4dashboard:idea-comment-export',
232 kwargs={'module_slug': self.module.slug})
233 return context
234
[end of meinberlin/apps/ideas/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/meinberlin/apps/ideas/views.py b/meinberlin/apps/ideas/views.py
--- a/meinberlin/apps/ideas/views.py
+++ b/meinberlin/apps/ideas/views.py
@@ -55,7 +55,7 @@
class Meta:
model = models.Idea
- fields = ['search', 'labels', 'category']
+ fields = ['search', 'category', 'labels']
class AbstractIdeaListView(ProjectMixin,
|
{"golden_diff": "diff --git a/meinberlin/apps/ideas/views.py b/meinberlin/apps/ideas/views.py\n--- a/meinberlin/apps/ideas/views.py\n+++ b/meinberlin/apps/ideas/views.py\n@@ -55,7 +55,7 @@\n \n class Meta:\n model = models.Idea\n- fields = ['search', 'labels', 'category']\n+ fields = ['search', 'category', 'labels']\n \n \n class AbstractIdeaListView(ProjectMixin,\n", "issue": "#6460 Previous/Next Button Poll Request Results no backround color\n**URL:** https://meinberlin-dev.liqd.net/projekte/test-poll-merge-running-poll-with-user-content/\r\n**user:** any\r\n**expected behaviour:** Previous/Next button on the poll request results has a pink background.\r\n**behaviour:** Button has no background. Only the outlines turn pink when the button is clicked\r\n**important screensize:**\r\n**device & browser:** \r\n**Comment/Question:** \r\n\r\nScreenshot?\r\n\r\ndev:\r\n<img width=\"286\" alt=\"Bildschirmfoto 2022-11-09 um 05 38 05\" src=\"https://user-images.githubusercontent.com/113356258/200740386-60d26bc2-f169-40e4-9730-79d6d8724dad.png\">\r\n<img width=\"220\" alt=\"Bildschirmfoto 2022-11-09 um 05 40 30\" src=\"https://user-images.githubusercontent.com/113356258/200740411-e40f6bf6-83ba-468f-a941-93bbfe045993.png\">\r\n\r\nstage:\r\n\r\n<img width=\"189\" alt=\"Bildschirmfoto 2022-11-09 um 05 44 21\" src=\"https://user-images.githubusercontent.com/113356258/200740726-f116d498-cb19-4074-bd57-541f7d5d8d2a.png\">\r\n\n", "before_files": [{"content": "from django.contrib import messages\nfrom django.db import transaction\nfrom django.urls import reverse\nfrom django.utils.translation import gettext_lazy as _\nfrom django.views import generic\n\nfrom adhocracy4.categories import filters as category_filters\nfrom adhocracy4.exports.views import DashboardExportView\nfrom adhocracy4.filters import filters as a4_filters\nfrom adhocracy4.filters import views as filter_views\nfrom adhocracy4.filters import widgets as filters_widgets\nfrom adhocracy4.filters.filters import FreeTextFilter\nfrom adhocracy4.labels import filters as label_filters\nfrom adhocracy4.projects.mixins import DisplayProjectOrModuleMixin\nfrom adhocracy4.projects.mixins import ProjectMixin\nfrom adhocracy4.rules import mixins as rules_mixins\nfrom meinberlin.apps.contrib import forms as contrib_forms\nfrom meinberlin.apps.contrib.views import CanonicalURLDetailView\nfrom meinberlin.apps.moderatorfeedback.forms import ModeratorStatementForm\nfrom meinberlin.apps.moderatorfeedback.models import ModeratorStatement\nfrom meinberlin.apps.notifications.emails import \\\n NotifyContactOnModeratorFeedback\nfrom meinberlin.apps.notifications.emails import \\\n NotifyCreatorOnModeratorFeedback\n\nfrom . import forms\nfrom . 
import models\n\n\nclass FreeTextFilterWidget(filters_widgets.FreeTextFilterWidget):\n label = _('Search')\n\n\ndef get_ordering_choices(view):\n choices = (('-created', _('Most recent')),)\n if view.module.has_feature('rate', models.Idea):\n choices += ('-positive_rating_count', _('Most popular')),\n choices += ('-comment_count', _('Most commented')),\n return choices\n\n\nclass IdeaFilterSet(a4_filters.DefaultsFilterSet):\n defaults = {\n 'ordering': '-created'\n }\n category = category_filters.CategoryFilter()\n labels = label_filters.LabelFilter()\n ordering = a4_filters.DynamicChoicesOrderingFilter(\n choices=get_ordering_choices\n )\n search = FreeTextFilter(\n widget=FreeTextFilterWidget,\n fields=['name']\n )\n\n class Meta:\n model = models.Idea\n fields = ['search', 'labels', 'category']\n\n\nclass AbstractIdeaListView(ProjectMixin,\n filter_views.FilteredListView):\n paginate_by = 15\n\n\nclass IdeaListView(AbstractIdeaListView,\n DisplayProjectOrModuleMixin\n ):\n model = models.Idea\n filter_set = IdeaFilterSet\n\n def get_queryset(self):\n return super().get_queryset()\\\n .filter(module=self.module)\n\n\nclass AbstractIdeaDetailView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n CanonicalURLDetailView):\n get_context_from_object = True\n\n\nclass IdeaDetailView(AbstractIdeaDetailView):\n model = models.Idea\n queryset = models.Idea.objects.annotate_positive_rating_count()\\\n .annotate_negative_rating_count()\n permission_required = 'meinberlin_ideas.view_idea'\n\n\nclass AbstractIdeaCreateView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.CreateView):\n \"\"\"Create an idea in the context of a module.\"\"\"\n\n def get_permission_object(self, *args, **kwargs):\n return self.module\n\n def form_valid(self, form):\n form.instance.creator = self.request.user\n form.instance.module = self.module\n return super().form_valid(form)\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n kwargs['module'] = self.module\n if self.module.settings_instance:\n kwargs['settings_instance'] = self.module.settings_instance\n return kwargs\n\n\nclass IdeaCreateView(AbstractIdeaCreateView):\n model = models.Idea\n form_class = forms.IdeaForm\n permission_required = 'meinberlin_ideas.add_idea'\n template_name = 'meinberlin_ideas/idea_create_form.html'\n\n\nclass AbstractIdeaUpdateView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.UpdateView):\n get_context_from_object = True\n\n def get_form_kwargs(self):\n kwargs = super().get_form_kwargs()\n instance = kwargs.get('instance')\n kwargs['module'] = instance.module\n if instance.module.settings_instance:\n kwargs['settings_instance'] = \\\n instance.module.settings_instance\n return kwargs\n\n\nclass IdeaUpdateView(AbstractIdeaUpdateView):\n model = models.Idea\n form_class = forms.IdeaForm\n permission_required = 'meinberlin_ideas.change_idea'\n template_name = 'meinberlin_ideas/idea_update_form.html'\n\n\nclass AbstractIdeaDeleteView(ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.DeleteView):\n get_context_from_object = True\n\n def get_success_url(self):\n return reverse(\n 'project-detail', kwargs={'slug': self.project.slug})\n\n def delete(self, request, *args, **kwargs):\n messages.success(self.request, self.success_message)\n return super(AbstractIdeaDeleteView, self)\\\n .delete(request, *args, **kwargs)\n\n\nclass IdeaDeleteView(AbstractIdeaDeleteView):\n model = models.Idea\n success_message = _('Your Idea has been deleted')\n permission_required = 
'meinberlin_ideas.change_idea'\n template_name = 'meinberlin_ideas/idea_confirm_delete.html'\n\n\nclass AbstractIdeaModerateView(\n ProjectMixin,\n rules_mixins.PermissionRequiredMixin,\n generic.detail.SingleObjectMixin,\n generic.detail.SingleObjectTemplateResponseMixin,\n contrib_forms.BaseMultiModelFormView):\n\n get_context_from_object = True\n\n def __init__(self):\n self.forms = {\n 'moderateable': {\n 'model': self.model,\n 'form_class': self.moderateable_form_class\n },\n 'statement': {\n 'model': ModeratorStatement,\n 'form_class': ModeratorStatementForm\n }\n }\n\n def dispatch(self, *args, **kwargs):\n self.object = self.get_object()\n return super().dispatch(*args, **kwargs)\n\n def get_success_url(self):\n return self.object.get_absolute_url()\n\n def forms_save(self, forms, commit=True):\n objects = super().forms_save(forms, commit=False)\n moderateable = objects['moderateable']\n statement = objects['statement']\n\n if not statement.pk:\n statement.creator = self.request.user\n\n with transaction.atomic():\n statement.save()\n moderateable.moderator_statement = statement\n moderateable.save()\n if hasattr(self.object, 'contact_email'):\n NotifyContactOnModeratorFeedback.send(self.object)\n else:\n NotifyCreatorOnModeratorFeedback.send(self.object)\n return objects\n\n def get_instance(self, name):\n if name == 'moderateable':\n return self.object\n elif name == 'statement':\n return self.object.moderator_statement\n\n\nclass IdeaModerateView(AbstractIdeaModerateView):\n model = models.Idea\n permission_required = 'meinberlin_ideas.moderate_idea'\n template_name = 'meinberlin_ideas/idea_moderate_form.html'\n moderateable_form_class = forms.IdeaModerateForm\n\n\nclass IdeaDashboardExportView(DashboardExportView):\n template_name = 'a4exports/export_dashboard.html'\n\n def get_context_data(self, **kwargs):\n context = super().get_context_data(**kwargs)\n context['export'] = reverse(\n 'a4dashboard:idea-export',\n kwargs={'module_slug': self.module.slug})\n context['comment_export'] = reverse(\n 'a4dashboard:idea-comment-export',\n kwargs={'module_slug': self.module.slug})\n return context\n", "path": "meinberlin/apps/ideas/views.py"}]}
| num_tokens_prompt: 3,176 | num_tokens_diff: 107 |
gh_patches_debug_7204 | rasdani/github-patches | git_diff | modin-project__modin-1072
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Documentation for 0.7 says that it depends on pandas==0.23.4, but when I import modin it says it requires pandas==0.25
In the [modin documentation for 0.7]( https://modin.readthedocs.io/en/latest/installation.html#dependencies), it says that it depends on `pandas==0.23.4`, but when I install `modin==0.7` and try to import it, the following import error is thrown:
ImportError: The pandas version installed does not match the required pandas version in Modin. Please install pandas 0.25.3 to use Modin.
Is this an error in the documentation? Is there any way I can use `modin==0.7` with `pandas==0.23.4`? I am using Dataiku DSS v6.0, which requires `pandas==0.23.4` and cannot be upgraded.
</issue>
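For illustration, the error comes from a strict equality check at the top of `modin/pandas/__init__.py` (shown in the listing below). A minimal sketch of a more forgiving check — downgrading the hard failure to a warning, which is essentially what the accompanying diff does — could look like this; the message wording is illustrative, not Modin's actual text:

```python
import warnings

import pandas

__pandas_version__ = "0.25.3"

# Warn instead of raising, so Modin can still be imported (at the user's own
# risk) when the installed pandas does not exactly match the pinned version.
if pandas.__version__ != __pandas_version__:
    warnings.warn(
        "The installed pandas version ({}) does not match the pandas version "
        "required by Modin ({}). This may cause undesired side effects!".format(
            pandas.__version__, __pandas_version__
        )
    )
```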
<code>
[start of modin/pandas/__init__.py]
1 import pandas
2
3 __pandas_version__ = "0.25.3"
4
5 if pandas.__version__ != __pandas_version__:
6 raise ImportError(
7 "The pandas version installed does not match the required pandas "
8 "version in Modin. Please install pandas {} to use "
9 "Modin.".format(__pandas_version__)
10 )
11
12 from pandas import (
13 eval,
14 unique,
15 value_counts,
16 cut,
17 to_numeric,
18 factorize,
19 test,
20 qcut,
21 date_range,
22 period_range,
23 Index,
24 MultiIndex,
25 CategoricalIndex,
26 bdate_range,
27 DatetimeIndex,
28 Timedelta,
29 Timestamp,
30 to_timedelta,
31 set_eng_float_format,
32 options,
33 set_option,
34 NaT,
35 PeriodIndex,
36 Categorical,
37 Interval,
38 UInt8Dtype,
39 UInt16Dtype,
40 UInt32Dtype,
41 UInt64Dtype,
42 SparseDtype,
43 Int8Dtype,
44 Int16Dtype,
45 Int32Dtype,
46 Int64Dtype,
47 CategoricalDtype,
48 DatetimeTZDtype,
49 IntervalDtype,
50 PeriodDtype,
51 RangeIndex,
52 Int64Index,
53 UInt64Index,
54 Float64Index,
55 TimedeltaIndex,
56 IntervalIndex,
57 IndexSlice,
58 Grouper,
59 array,
60 Period,
61 show_versions,
62 DateOffset,
63 timedelta_range,
64 infer_freq,
65 interval_range,
66 ExcelWriter,
67 SparseArray,
68 SparseSeries,
69 SparseDataFrame,
70 datetime,
71 NamedAgg,
72 )
73 import threading
74 import os
75 import types
76 import sys
77
78 from .. import __version__
79 from .concat import concat
80 from .dataframe import DataFrame
81 from .datetimes import to_datetime
82 from .io import (
83 read_csv,
84 read_parquet,
85 read_json,
86 read_html,
87 read_clipboard,
88 read_excel,
89 read_hdf,
90 read_feather,
91 read_msgpack,
92 read_stata,
93 read_sas,
94 read_pickle,
95 read_sql,
96 read_gbq,
97 read_table,
98 read_fwf,
99 read_sql_table,
100 read_sql_query,
101 read_spss,
102 ExcelFile,
103 to_pickle,
104 HDFStore,
105 )
106 from .reshape import get_dummies, melt, crosstab, lreshape, wide_to_long
107 from .series import Series
108 from .general import (
109 isna,
110 isnull,
111 merge,
112 merge_asof,
113 merge_ordered,
114 pivot_table,
115 notnull,
116 notna,
117 pivot,
118 )
119 from .plotting import Plotting as plotting
120 from .. import __execution_engine__ as execution_engine
121
122 # Set this so that Pandas doesn't try to multithread by itself
123 os.environ["OMP_NUM_THREADS"] = "1"
124 num_cpus = 1
125
126
127 def initialize_ray():
128 import ray
129
130 """Initializes ray based on environment variables and internal defaults."""
131 if threading.current_thread().name == "MainThread":
132 plasma_directory = None
133 cluster = os.environ.get("MODIN_RAY_CLUSTER", None)
134 redis_address = os.environ.get("MODIN_REDIS_ADDRESS", None)
135 if cluster == "True" and redis_address is not None:
136 # We only start ray in a cluster setting for the head node.
137 ray.init(
138 include_webui=False,
139 ignore_reinit_error=True,
140 redis_address=redis_address,
141 logging_level=100,
142 )
143 elif cluster is None:
144 object_store_memory = os.environ.get("MODIN_MEMORY", None)
145 if os.environ.get("MODIN_OUT_OF_CORE", "False").title() == "True":
146 from tempfile import gettempdir
147
148 plasma_directory = gettempdir()
149 # We may have already set the memory from the environment variable, we don't
150 # want to overwrite that value if we have.
151 if object_store_memory is None:
152 # Round down to the nearest Gigabyte.
153 mem_bytes = ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9
154 # Default to 8x memory for out of core
155 object_store_memory = 8 * mem_bytes
156 # In case anything failed above, we can still improve the memory for Modin.
157 if object_store_memory is None:
158 # Round down to the nearest Gigabyte.
159 object_store_memory = int(
160 0.6 * ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9
161 )
162 # If the memory pool is smaller than 2GB, just use the default in ray.
163 if object_store_memory == 0:
164 object_store_memory = None
165 else:
166 object_store_memory = int(object_store_memory)
167 ray.init(
168 include_webui=False,
169 ignore_reinit_error=True,
170 plasma_directory=plasma_directory,
171 object_store_memory=object_store_memory,
172 redis_address=redis_address,
173 logging_level=100,
174 memory=object_store_memory,
175 )
176 # Register custom serializer for method objects to avoid warning message.
177 # We serialize `MethodType` objects when we use AxisPartition operations.
178 ray.register_custom_serializer(types.MethodType, use_pickle=True)
179
180 # Register a fix import function to run on all_workers including the driver.
181 # This is a hack solution to fix #647, #746
182 def move_stdlib_ahead_of_site_packages(*args):
183 site_packages_path = None
184 site_packages_path_index = -1
185 for i, path in enumerate(sys.path):
186 if sys.exec_prefix in path and path.endswith("site-packages"):
187 site_packages_path = path
188 site_packages_path_index = i
189 # break on first found
190 break
191
192 if site_packages_path is not None:
193 # stdlib packages layout as follows:
194 # - python3.x
195 # - typing.py
196 # - site-packages/
197 # - pandas
198 # So extracting the dirname of the site_packages can point us
199 # to the directory containing standard libraries.
200 sys.path.insert(
201 site_packages_path_index, os.path.dirname(site_packages_path)
202 )
203
204 move_stdlib_ahead_of_site_packages()
205 ray.worker.global_worker.run_function_on_all_workers(
206 move_stdlib_ahead_of_site_packages
207 )
208
209
210 if execution_engine == "Ray":
211 import ray
212
213 initialize_ray()
214 num_cpus = ray.cluster_resources()["CPU"]
215 elif execution_engine == "Dask": # pragma: no cover
216 from distributed.client import _get_global_client
217 import warnings
218
219 warnings.warn("The Dask Engine for Modin is experimental.")
220
221 if threading.current_thread().name == "MainThread":
222 # initialize the dask client
223 client = _get_global_client()
224 if client is None:
225 from distributed import Client
226 import multiprocessing
227
228 num_cpus = multiprocessing.cpu_count()
229 client = Client(n_workers=num_cpus)
230 elif execution_engine != "Python":
231 raise ImportError("Unrecognized execution engine: {}.".format(execution_engine))
232
233 DEFAULT_NPARTITIONS = max(4, int(num_cpus))
234
235 __all__ = [
236 "DataFrame",
237 "Series",
238 "read_csv",
239 "read_parquet",
240 "read_json",
241 "read_html",
242 "read_clipboard",
243 "read_excel",
244 "read_hdf",
245 "read_feather",
246 "read_msgpack",
247 "read_stata",
248 "read_sas",
249 "read_pickle",
250 "read_sql",
251 "read_gbq",
252 "read_table",
253 "read_spss",
254 "concat",
255 "eval",
256 "unique",
257 "value_counts",
258 "cut",
259 "to_numeric",
260 "factorize",
261 "test",
262 "qcut",
263 "to_datetime",
264 "get_dummies",
265 "isna",
266 "isnull",
267 "merge",
268 "pivot_table",
269 "date_range",
270 "Index",
271 "MultiIndex",
272 "Series",
273 "bdate_range",
274 "period_range",
275 "DatetimeIndex",
276 "to_timedelta",
277 "set_eng_float_format",
278 "options",
279 "set_option",
280 "CategoricalIndex",
281 "Timedelta",
282 "Timestamp",
283 "NaT",
284 "PeriodIndex",
285 "Categorical",
286 "__version__",
287 "melt",
288 "crosstab",
289 "plotting",
290 "Interval",
291 "UInt8Dtype",
292 "UInt16Dtype",
293 "UInt32Dtype",
294 "UInt64Dtype",
295 "SparseDtype",
296 "Int8Dtype",
297 "Int16Dtype",
298 "Int32Dtype",
299 "Int64Dtype",
300 "CategoricalDtype",
301 "DatetimeTZDtype",
302 "IntervalDtype",
303 "PeriodDtype",
304 "RangeIndex",
305 "Int64Index",
306 "UInt64Index",
307 "Float64Index",
308 "TimedeltaIndex",
309 "IntervalIndex",
310 "IndexSlice",
311 "Grouper",
312 "array",
313 "Period",
314 "show_versions",
315 "DateOffset",
316 "timedelta_range",
317 "infer_freq",
318 "interval_range",
319 "ExcelWriter",
320 "read_fwf",
321 "read_sql_table",
322 "read_sql_query",
323 "ExcelFile",
324 "to_pickle",
325 "HDFStore",
326 "lreshape",
327 "wide_to_long",
328 "merge_asof",
329 "merge_ordered",
330 "notnull",
331 "notna",
332 "pivot",
333 "SparseArray",
334 "SparseSeries",
335 "SparseDataFrame",
336 "datetime",
337 "NamedAgg",
338 "DEFAULT_NPARTITIONS",
339 ]
340
341 del pandas
342
[end of modin/pandas/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/modin/pandas/__init__.py b/modin/pandas/__init__.py
--- a/modin/pandas/__init__.py
+++ b/modin/pandas/__init__.py
@@ -3,10 +3,11 @@
__pandas_version__ = "0.25.3"
if pandas.__version__ != __pandas_version__:
- raise ImportError(
- "The pandas version installed does not match the required pandas "
- "version in Modin. Please install pandas {} to use "
- "Modin.".format(__pandas_version__)
+ import warnings
+
+ warnings.warn(
+ "The pandas version installed does not match the required pandas version in "
+ "Modin. This may cause undesired side effects!".format(__pandas_version__)
)
from pandas import (
|
{"golden_diff": "diff --git a/modin/pandas/__init__.py b/modin/pandas/__init__.py\n--- a/modin/pandas/__init__.py\n+++ b/modin/pandas/__init__.py\n@@ -3,10 +3,11 @@\n __pandas_version__ = \"0.25.3\"\n \n if pandas.__version__ != __pandas_version__:\n- raise ImportError(\n- \"The pandas version installed does not match the required pandas \"\n- \"version in Modin. Please install pandas {} to use \"\n- \"Modin.\".format(__pandas_version__)\n+ import warnings\n+\n+ warnings.warn(\n+ \"The pandas version installed does not match the required pandas version in \"\n+ \"Modin. This may cause undesired side effects!\".format(__pandas_version__)\n )\n \n from pandas import (\n", "issue": "Documentation for 0.7 says that it depends on pandas==0.23.4, but when I import modin it says it requires pandas==0.25\nIn the [modin documentation for 0.7]( https://modin.readthedocs.io/en/latest/installation.html#dependencies), it says that it depends on `pandas==0.23.4`, but when I install `modin==0.7` and try to import it, the following import error is thrown:\r\n\r\n ImportError: The pandas version installed does not match the required pandas version in Modin. Please install pandas 0.25.3 to use Modin.\r\n\r\nIs this an error in the documentation? Is there anyway I can use `modin==0.7` with `pandas==0.23.4` as I am using Dataiku DSS v6.0, which requires `pandas==0.23.4` and cannot be upgraded.\n", "before_files": [{"content": "import pandas\n\n__pandas_version__ = \"0.25.3\"\n\nif pandas.__version__ != __pandas_version__:\n raise ImportError(\n \"The pandas version installed does not match the required pandas \"\n \"version in Modin. Please install pandas {} to use \"\n \"Modin.\".format(__pandas_version__)\n )\n\nfrom pandas import (\n eval,\n unique,\n value_counts,\n cut,\n to_numeric,\n factorize,\n test,\n qcut,\n date_range,\n period_range,\n Index,\n MultiIndex,\n CategoricalIndex,\n bdate_range,\n DatetimeIndex,\n Timedelta,\n Timestamp,\n to_timedelta,\n set_eng_float_format,\n options,\n set_option,\n NaT,\n PeriodIndex,\n Categorical,\n Interval,\n UInt8Dtype,\n UInt16Dtype,\n UInt32Dtype,\n UInt64Dtype,\n SparseDtype,\n Int8Dtype,\n Int16Dtype,\n Int32Dtype,\n Int64Dtype,\n CategoricalDtype,\n DatetimeTZDtype,\n IntervalDtype,\n PeriodDtype,\n RangeIndex,\n Int64Index,\n UInt64Index,\n Float64Index,\n TimedeltaIndex,\n IntervalIndex,\n IndexSlice,\n Grouper,\n array,\n Period,\n show_versions,\n DateOffset,\n timedelta_range,\n infer_freq,\n interval_range,\n ExcelWriter,\n SparseArray,\n SparseSeries,\n SparseDataFrame,\n datetime,\n NamedAgg,\n)\nimport threading\nimport os\nimport types\nimport sys\n\nfrom .. import __version__\nfrom .concat import concat\nfrom .dataframe import DataFrame\nfrom .datetimes import to_datetime\nfrom .io import (\n read_csv,\n read_parquet,\n read_json,\n read_html,\n read_clipboard,\n read_excel,\n read_hdf,\n read_feather,\n read_msgpack,\n read_stata,\n read_sas,\n read_pickle,\n read_sql,\n read_gbq,\n read_table,\n read_fwf,\n read_sql_table,\n read_sql_query,\n read_spss,\n ExcelFile,\n to_pickle,\n HDFStore,\n)\nfrom .reshape import get_dummies, melt, crosstab, lreshape, wide_to_long\nfrom .series import Series\nfrom .general import (\n isna,\n isnull,\n merge,\n merge_asof,\n merge_ordered,\n pivot_table,\n notnull,\n notna,\n pivot,\n)\nfrom .plotting import Plotting as plotting\nfrom .. 
import __execution_engine__ as execution_engine\n\n# Set this so that Pandas doesn't try to multithread by itself\nos.environ[\"OMP_NUM_THREADS\"] = \"1\"\nnum_cpus = 1\n\n\ndef initialize_ray():\n import ray\n\n \"\"\"Initializes ray based on environment variables and internal defaults.\"\"\"\n if threading.current_thread().name == \"MainThread\":\n plasma_directory = None\n cluster = os.environ.get(\"MODIN_RAY_CLUSTER\", None)\n redis_address = os.environ.get(\"MODIN_REDIS_ADDRESS\", None)\n if cluster == \"True\" and redis_address is not None:\n # We only start ray in a cluster setting for the head node.\n ray.init(\n include_webui=False,\n ignore_reinit_error=True,\n redis_address=redis_address,\n logging_level=100,\n )\n elif cluster is None:\n object_store_memory = os.environ.get(\"MODIN_MEMORY\", None)\n if os.environ.get(\"MODIN_OUT_OF_CORE\", \"False\").title() == \"True\":\n from tempfile import gettempdir\n\n plasma_directory = gettempdir()\n # We may have already set the memory from the environment variable, we don't\n # want to overwrite that value if we have.\n if object_store_memory is None:\n # Round down to the nearest Gigabyte.\n mem_bytes = ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9\n # Default to 8x memory for out of core\n object_store_memory = 8 * mem_bytes\n # In case anything failed above, we can still improve the memory for Modin.\n if object_store_memory is None:\n # Round down to the nearest Gigabyte.\n object_store_memory = int(\n 0.6 * ray.utils.get_system_memory() // 10 ** 9 * 10 ** 9\n )\n # If the memory pool is smaller than 2GB, just use the default in ray.\n if object_store_memory == 0:\n object_store_memory = None\n else:\n object_store_memory = int(object_store_memory)\n ray.init(\n include_webui=False,\n ignore_reinit_error=True,\n plasma_directory=plasma_directory,\n object_store_memory=object_store_memory,\n redis_address=redis_address,\n logging_level=100,\n memory=object_store_memory,\n )\n # Register custom serializer for method objects to avoid warning message.\n # We serialize `MethodType` objects when we use AxisPartition operations.\n ray.register_custom_serializer(types.MethodType, use_pickle=True)\n\n # Register a fix import function to run on all_workers including the driver.\n # This is a hack solution to fix #647, #746\n def move_stdlib_ahead_of_site_packages(*args):\n site_packages_path = None\n site_packages_path_index = -1\n for i, path in enumerate(sys.path):\n if sys.exec_prefix in path and path.endswith(\"site-packages\"):\n site_packages_path = path\n site_packages_path_index = i\n # break on first found\n break\n\n if site_packages_path is not None:\n # stdlib packages layout as follows:\n # - python3.x\n # - typing.py\n # - site-packages/\n # - pandas\n # So extracting the dirname of the site_packages can point us\n # to the directory containing standard libraries.\n sys.path.insert(\n site_packages_path_index, os.path.dirname(site_packages_path)\n )\n\n move_stdlib_ahead_of_site_packages()\n ray.worker.global_worker.run_function_on_all_workers(\n move_stdlib_ahead_of_site_packages\n )\n\n\nif execution_engine == \"Ray\":\n import ray\n\n initialize_ray()\n num_cpus = ray.cluster_resources()[\"CPU\"]\nelif execution_engine == \"Dask\": # pragma: no cover\n from distributed.client import _get_global_client\n import warnings\n\n warnings.warn(\"The Dask Engine for Modin is experimental.\")\n\n if threading.current_thread().name == \"MainThread\":\n # initialize the dask client\n client = _get_global_client()\n if client is 
None:\n from distributed import Client\n import multiprocessing\n\n num_cpus = multiprocessing.cpu_count()\n client = Client(n_workers=num_cpus)\nelif execution_engine != \"Python\":\n raise ImportError(\"Unrecognized execution engine: {}.\".format(execution_engine))\n\nDEFAULT_NPARTITIONS = max(4, int(num_cpus))\n\n__all__ = [\n \"DataFrame\",\n \"Series\",\n \"read_csv\",\n \"read_parquet\",\n \"read_json\",\n \"read_html\",\n \"read_clipboard\",\n \"read_excel\",\n \"read_hdf\",\n \"read_feather\",\n \"read_msgpack\",\n \"read_stata\",\n \"read_sas\",\n \"read_pickle\",\n \"read_sql\",\n \"read_gbq\",\n \"read_table\",\n \"read_spss\",\n \"concat\",\n \"eval\",\n \"unique\",\n \"value_counts\",\n \"cut\",\n \"to_numeric\",\n \"factorize\",\n \"test\",\n \"qcut\",\n \"to_datetime\",\n \"get_dummies\",\n \"isna\",\n \"isnull\",\n \"merge\",\n \"pivot_table\",\n \"date_range\",\n \"Index\",\n \"MultiIndex\",\n \"Series\",\n \"bdate_range\",\n \"period_range\",\n \"DatetimeIndex\",\n \"to_timedelta\",\n \"set_eng_float_format\",\n \"options\",\n \"set_option\",\n \"CategoricalIndex\",\n \"Timedelta\",\n \"Timestamp\",\n \"NaT\",\n \"PeriodIndex\",\n \"Categorical\",\n \"__version__\",\n \"melt\",\n \"crosstab\",\n \"plotting\",\n \"Interval\",\n \"UInt8Dtype\",\n \"UInt16Dtype\",\n \"UInt32Dtype\",\n \"UInt64Dtype\",\n \"SparseDtype\",\n \"Int8Dtype\",\n \"Int16Dtype\",\n \"Int32Dtype\",\n \"Int64Dtype\",\n \"CategoricalDtype\",\n \"DatetimeTZDtype\",\n \"IntervalDtype\",\n \"PeriodDtype\",\n \"RangeIndex\",\n \"Int64Index\",\n \"UInt64Index\",\n \"Float64Index\",\n \"TimedeltaIndex\",\n \"IntervalIndex\",\n \"IndexSlice\",\n \"Grouper\",\n \"array\",\n \"Period\",\n \"show_versions\",\n \"DateOffset\",\n \"timedelta_range\",\n \"infer_freq\",\n \"interval_range\",\n \"ExcelWriter\",\n \"read_fwf\",\n \"read_sql_table\",\n \"read_sql_query\",\n \"ExcelFile\",\n \"to_pickle\",\n \"HDFStore\",\n \"lreshape\",\n \"wide_to_long\",\n \"merge_asof\",\n \"merge_ordered\",\n \"notnull\",\n \"notna\",\n \"pivot\",\n \"SparseArray\",\n \"SparseSeries\",\n \"SparseDataFrame\",\n \"datetime\",\n \"NamedAgg\",\n \"DEFAULT_NPARTITIONS\",\n]\n\ndel pandas\n", "path": "modin/pandas/__init__.py"}]}
| num_tokens_prompt: 3,776 | num_tokens_diff: 183 |
gh_patches_debug_22186 | rasdani/github-patches | git_diff | conan-io__conan-center-index-870
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[libsodium] libsodium/1.0.18: Recipe broken on Python < 3.6
The use of f-strings in the Python recipe causes the package to fail to install with Python versions earlier than 3.6. This is a serious problem for using conan with older distributions, such as Ubuntu Xenial or CentOS. Instead of f-strings, the `.format(...)` or `%` interpolation methods should be used.
### Package and Environment Details
* Package Name/Version: **libsodium/1.0.18**
### Conan output
```
ERROR: Error loading conanfile at '~/.conan/data/libsodium/1.0.18/_/_/export/conanfile.py': Unable to load conanfile in ~/.conan/data/libsodium/1.0.18/_/_/export/conanfile.py
File "/usr/lib/python3.5/imp.py", line 172, in load_source
module = _load(spec)
File "<frozen importlib._bootstrap>", line 693, in _load
File "<frozen importlib._bootstrap>", line 673, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 661, in exec_module
File "<frozen importlib._bootstrap_external>", line 767, in get_code
File "<frozen importlib._bootstrap_external>", line 727, in source_to_code
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
File "~/.conan/data/libsodium/1.0.18/_/_/export/conanfile.py", line 126
raise ConanInvalidConfiguration(f"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}")
^
SyntaxError: invalid syntax
```
### Locations in Recipe
```
libsodium/1.0.18/conanfile.py
126: raise ConanInvalidConfiguration(f"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}")
148: raise ConanInvalidConfiguration(f"Unsupported os for libsodium: {self.settings.os}")
```
This is as far as I can tell the only package in this repository that uses f-strings.
</issue>
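To make the needed change concrete, here is a minimal sketch of the f-string vs. `str.format()` equivalence for one of the flagged messages; the variable values are made up for illustration, and only the f-string line requires Python >= 3.6 to parse:

```python
settings_os, settings_arch = "Neutrino", "armv7"  # illustrative values

# f-string form: a SyntaxError on any interpreter older than Python 3.6
fstring_msg = f"Unsupported arch or Neutrino version for libsodium: {settings_os} {settings_arch}"

# str.format() form: parses and runs on older interpreters as well
format_msg = "Unsupported arch or Neutrino version for libsodium: {} {}".format(
    settings_os, settings_arch
)

assert fstring_msg == format_msg
```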
<code>
[start of recipes/libsodium/1.0.18/conanfile.py]
1 from conans import ConanFile, AutoToolsBuildEnvironment, tools, MSBuild
2 from conans.errors import ConanInvalidConfiguration
3 import os
4
5
6 class LibsodiumConan(ConanFile):
7 name = "libsodium"
8 description = "A modern and easy-to-use crypto library."
9 license = "ISC"
10 url = "https://github.com/conan-io/conan-center-index"
11 homepage = "https://download.libsodium.org/doc/"
12 exports_sources = ["patches/**"]
13 settings = "os", "compiler", "arch", "build_type"
14 topics = ("sodium", "libsodium", "encryption", "signature", "hashing")
15 generators = "cmake"
16 _source_subfolder = "source_subfolder"
17
18 options = {
19 "shared" : [True, False],
20 "fPIC": [True, False],
21 "use_soname" : [True, False],
22 "PIE" : [True, False],
23 }
24
25 default_options = {
26 "shared": False,
27 "fPIC": True,
28 "use_soname": True,
29 "PIE": False,
30 }
31
32 @property
33 def _android_id_str(self):
34 return "androideabi" if str(self.settings.arch) in ["armv6", "armv7"] else "android"
35
36 @property
37 def _is_mingw(self):
38 return self.settings.os == "Windows" and self.settings.compiler == "gcc"
39
40 @property
41 def _vs_configuration(self):
42 configuration = ""
43 if self.options.shared:
44 configuration += "Dyn"
45 else:
46 configuration += "Static"
47 build_type = "Debug" if self.settings.build_type == "Debug" else "Release"
48 configuration += build_type
49 return configuration
50
51 @property
52 def _vs_sln_folder(self):
53 folder = {"14": "vs2015",
54 "15": "vs2017",
55 "16": "vs2019"}.get(str(self.settings.compiler.version), None)
56 if not folder:
57 raise ConanInvalidConfiguration("Unsupported msvc version: {}".format(self.settings.compiler.version))
58 return folder
59
60 def configure(self):
61 del self.settings.compiler.libcxx
62 del self.settings.compiler.cppstd
63
64 def config_options(self):
65 if self.settings.os == "Windows":
66 del self.options.fPIC
67
68 def build_requirements(self):
69 # There are several unix tools used (bash scripts for Emscripten, autoreconf on MinGW, etc...)
70 if self.settings.compiler != "Visual Studio" and tools.os_info.is_windows and \
71 not "CONAN_BASH_PATH" in os.environ and tools.os_info.detect_windows_subsystem() != "Windows":
72 self.build_requires("msys2/20190524")
73
74 def source(self):
75 tools.get(**self.conan_data["sources"][self.version])
76 extracted_dir = self.name + "-" + self.version
77 os.rename(extracted_dir, self._source_subfolder)
78
79 def _build_visual(self):
80 sln_path = os.path.join(self.build_folder, self._source_subfolder, "builds", "msvc", self._vs_sln_folder, "libsodium.sln")
81
82 msbuild = MSBuild(self)
83 msbuild.build(sln_path, upgrade_project=False, platforms={"x86": "Win32"}, build_type=self._vs_configuration)
84
85 def _build_autotools_impl(self, configure_args):
86 win_bash = False
87 if self._is_mingw:
88 win_bash = True
89
90 autotools = AutoToolsBuildEnvironment(self, win_bash=win_bash)
91 if self._is_mingw:
92 self.run("autoreconf -i", cwd=self._source_subfolder, win_bash=win_bash)
93 autotools.configure(args=configure_args, configure_dir=self._source_subfolder, host=False)
94 autotools.make(args=["-j%s" % str(tools.cpu_count())])
95 autotools.install()
96
97 def _build_autotools_linux(self, configure_args):
98 self._build_autotools_impl(configure_args)
99
100 def _build_autotools_emscripten(self, configure_args):
101 self.run("./dist-build/emscripten.sh --standard", cwd=self._source_subfolder)
102
103 def _build_autotools_android(self, configure_args):
104 host_arch = "%s-linux-%s" % (tools.to_android_abi(self.settings.arch), self._android_id_str)
105 configure_args.append("--host=%s" % host_arch)
106 self._build_autotools_impl(configure_args)
107
108 def _build_autotools_mingw(self, configure_args):
109 arch = "i686" if self.settings.arch == "x86" else self.settings.arch
110 host_arch = "%s-w64-mingw32" % arch
111 configure_args.append("--host=%s" % host_arch)
112 self._build_autotools_impl(configure_args)
113
114 def _build_autotools_darwin(self, configure_args):
115 os = "ios" if self.settings.os == "iOS" else "darwin"
116 host_arch = "%s-apple-%s" % (self.settings.arch, os)
117 configure_args.append("--host=%s" % host_arch)
118 self._build_autotools_impl(configure_args)
119
120 def _build_autotools_neutrino(self, configure_args):
121 neutrino_archs = {"x86_64":"x86_64-pc", "x86":"i586-pc", "armv7":"arm-unknown", "armv8": "aarch64-unknown"}
122 if self.settings.os.version == "7.0" and str(self.settings.arch) in neutrino_archs:
123 host_arch = "%s-nto-qnx7.0.0" % neutrino_archs[str(self.settings.arch)]
124 if self.settings.arch == "armv7":
125 host_arch += "eabi"
126 else:
127 raise ConanInvalidConfiguration(f"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}")
128 configure_args.append("--host=%s" % host_arch)
129 self._build_autotools_impl(configure_args)
130
131 def _build_autotools(self):
132 absolute_install_dir = os.path.abspath(os.path.join(".", "install"))
133 absolute_install_dir = absolute_install_dir.replace("\\", "/")
134 configure_args = self._get_configure_args(absolute_install_dir)
135
136 if self.settings.os == "Linux":
137 self._build_autotools_linux(configure_args)
138 elif self.settings.os == "Emscripten":
139 self._build_autotools_emscripten(configure_args)
140 elif self.settings.os == "Android":
141 self._build_autotools_android(configure_args)
142 elif tools.is_apple_os(self.settings.os):
143 self._build_autotools_darwin(configure_args)
144 elif self._is_mingw:
145 self._build_autotools_mingw(configure_args)
146 elif self.settings.os == "Neutrino":
147 self._build_autotools_neutrino(configure_args)
148 else:
149 raise ConanInvalidConfiguration(f"Unsupported os for libsodium: {self.settings.os}")
150
151 def build(self):
152 for patch in self.conan_data["patches"][self.version]:
153 tools.patch(**patch)
154 if self.settings.os == "Macos":
155 tools.replace_in_file(os.path.join(self._source_subfolder, "configure"), r"-install_name \$rpath/", "-install_name ")
156 if self.settings.compiler != "Visual Studio":
157 self._build_autotools()
158 else:
159 self._build_visual()
160
161 def package(self):
162 self.copy("*LICENSE", dst="licenses", keep_path=False)
163 if self.settings.compiler == "Visual Studio":
164 self._package_visual()
165 else:
166 self._package_autotools()
167
168 def package_info(self):
169 if self.settings.compiler == "Visual Studio":
170 if not self.options.shared:
171 self.cpp_info.defines = ["SODIUM_STATIC=1"]
172 self.cpp_info.libs = tools.collect_libs(self)
173 if self.settings.os == "Linux":
174 self.cpp_info.system_libs = ["pthread"]
175
176 def _package_autotools(self):
177 if self.settings.os == "Emscripten":
178 prefix = "%s/libsodium-js" % self._source_subfolder
179 else:
180 prefix = "install"
181 lib_folder = os.path.join(prefix, "lib")
182 self.copy("*.h", dst="include", src=os.path.join(prefix, "include"))
183 self.copy("*.a", dst="lib", src=lib_folder)
184 self.copy("*.so*", dst="lib", src=lib_folder, symlinks=True)
185 self.copy("*.dylib", dst="lib", src=lib_folder, symlinks=True)
186
187 def _package_visual(self):
188 self.copy("*.lib", dst="lib", keep_path=False)
189 self.copy("*.dll", dst="bin", keep_path=False)
190 inc_src = os.path.join(self._source_subfolder, "src", self.name, "include")
191 self.copy("*.h", src=inc_src, dst="include", keep_path=True, excludes=("*/private/*"))
192
193 def _autotools_bool_arg(self, arg_base_name, value):
194 prefix = "--enable-" if value else "--disable-"
195 return prefix + arg_base_name
196
197 def _get_configure_args(self, absolute_install_dir):
198 args = [
199 "--prefix=%s" % absolute_install_dir,
200 self._autotools_bool_arg("shared", self.options.shared),
201 self._autotools_bool_arg("static", not self.options.shared),
202 self._autotools_bool_arg("soname-versions", self.options.use_soname),
203 self._autotools_bool_arg("pie", self.options.PIE)
204 ]
205 if self.options.get_safe("fPIC"):
206 args.append("--with-pic")
207 return args
208
[end of recipes/libsodium/1.0.18/conanfile.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/recipes/libsodium/1.0.18/conanfile.py b/recipes/libsodium/1.0.18/conanfile.py
--- a/recipes/libsodium/1.0.18/conanfile.py
+++ b/recipes/libsodium/1.0.18/conanfile.py
@@ -124,7 +124,7 @@
if self.settings.arch == "armv7":
host_arch += "eabi"
else:
- raise ConanInvalidConfiguration(f"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}")
+ raise ConanInvalidConfiguration("Unsupported arch or Neutrino version for libsodium: {} {}".format(self.settings.os, self.settings.arch))
configure_args.append("--host=%s" % host_arch)
self._build_autotools_impl(configure_args)
@@ -146,7 +146,7 @@
elif self.settings.os == "Neutrino":
self._build_autotools_neutrino(configure_args)
else:
- raise ConanInvalidConfiguration(f"Unsupported os for libsodium: {self.settings.os}")
+ raise ConanInvalidConfiguration("Unsupported os for libsodium: {}".format(self.settings.os))
def build(self):
for patch in self.conan_data["patches"][self.version]:
|
{"golden_diff": "diff --git a/recipes/libsodium/1.0.18/conanfile.py b/recipes/libsodium/1.0.18/conanfile.py\n--- a/recipes/libsodium/1.0.18/conanfile.py\n+++ b/recipes/libsodium/1.0.18/conanfile.py\n@@ -124,7 +124,7 @@\n if self.settings.arch == \"armv7\":\n host_arch += \"eabi\"\n else:\n- raise ConanInvalidConfiguration(f\"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}\")\n+ raise ConanInvalidConfiguration(\"Unsupported arch or Neutrino version for libsodium: {} {}\".format(self.settings.os, self.settings.arch))\n configure_args.append(\"--host=%s\" % host_arch)\n self._build_autotools_impl(configure_args)\n \n@@ -146,7 +146,7 @@\n elif self.settings.os == \"Neutrino\":\n self._build_autotools_neutrino(configure_args)\n else:\n- raise ConanInvalidConfiguration(f\"Unsupported os for libsodium: {self.settings.os}\")\n+ raise ConanInvalidConfiguration(\"Unsupported os for libsodium: {}\".format(self.settings.os))\n \n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n", "issue": "[libsodium] libsodium/1.0.18: Recipe broken on Python < 3.6\nThe use of f-strings in the Python recipe causes the package to not be installed with a Python version that is less than 3.6. This is a serious problem for using conan with older distributions, such as Ubuntu Xenial or CentOS. Instead of f-strings, the `.format(...)` or `%` interpolation methods should be used.\r\n\r\n### Package and Environment Details\r\n * Package Name/Version: **libsodium/1.0.18**\r\n\r\n### Conan output\r\n```\r\nERROR: Error loading conanfile at '~/.conan/data/libsodium/1.0.18/_/_/export/conanfile.py': Unable to load conanfile in ~/.conan/data/libsodium/1.0.18/_/_/export/conanfile.py\r\n File \"/usr/lib/python3.5/imp.py\", line 172, in load_source\r\n module = _load(spec)\r\n File \"<frozen importlib._bootstrap>\", line 693, in _load\r\n File \"<frozen importlib._bootstrap>\", line 673, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 661, in exec_module\r\n File \"<frozen importlib._bootstrap_external>\", line 767, in get_code\r\n File \"<frozen importlib._bootstrap_external>\", line 727, in source_to_code\r\n File \"<frozen importlib._bootstrap>\", line 222, in _call_with_frames_removed\r\n File \"~/.conan/data/libsodium/1.0.18/_/_/export/conanfile.py\", line 126\r\n raise ConanInvalidConfiguration(f\"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}\")\r\n ^\r\nSyntaxError: invalid syntax\r\n```\r\n\r\n### Locations in Recipe\r\n```\r\nlibsodium/1.0.18/conanfile.py\r\n126: raise ConanInvalidConfiguration(f\"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}\")\r\n148: raise ConanInvalidConfiguration(f\"Unsupported os for libsodium: {self.settings.os}\")\r\n```\r\n\r\nThis is as far as I can tell the only package in this repository that uses f-strings.\n", "before_files": [{"content": "from conans import ConanFile, AutoToolsBuildEnvironment, tools, MSBuild\nfrom conans.errors import ConanInvalidConfiguration\nimport os\n\n\nclass LibsodiumConan(ConanFile):\n name = \"libsodium\"\n description = \"A modern and easy-to-use crypto library.\"\n license = \"ISC\"\n url = \"https://github.com/conan-io/conan-center-index\"\n homepage = \"https://download.libsodium.org/doc/\"\n exports_sources = [\"patches/**\"]\n settings = \"os\", \"compiler\", \"arch\", \"build_type\"\n topics = (\"sodium\", \"libsodium\", \"encryption\", \"signature\", \"hashing\")\n 
generators = \"cmake\"\n _source_subfolder = \"source_subfolder\"\n\n options = {\n \"shared\" : [True, False],\n \"fPIC\": [True, False],\n \"use_soname\" : [True, False],\n \"PIE\" : [True, False],\n }\n\n default_options = {\n \"shared\": False,\n \"fPIC\": True,\n \"use_soname\": True,\n \"PIE\": False,\n }\n\n @property\n def _android_id_str(self):\n return \"androideabi\" if str(self.settings.arch) in [\"armv6\", \"armv7\"] else \"android\"\n\n @property\n def _is_mingw(self):\n return self.settings.os == \"Windows\" and self.settings.compiler == \"gcc\"\n\n @property\n def _vs_configuration(self):\n configuration = \"\"\n if self.options.shared:\n configuration += \"Dyn\"\n else:\n configuration += \"Static\"\n build_type = \"Debug\" if self.settings.build_type == \"Debug\" else \"Release\"\n configuration += build_type\n return configuration\n\n @property\n def _vs_sln_folder(self):\n folder = {\"14\": \"vs2015\",\n \"15\": \"vs2017\",\n \"16\": \"vs2019\"}.get(str(self.settings.compiler.version), None)\n if not folder:\n raise ConanInvalidConfiguration(\"Unsupported msvc version: {}\".format(self.settings.compiler.version))\n return folder\n\n def configure(self):\n del self.settings.compiler.libcxx\n del self.settings.compiler.cppstd\n\n def config_options(self):\n if self.settings.os == \"Windows\":\n del self.options.fPIC\n\n def build_requirements(self):\n # There are several unix tools used (bash scripts for Emscripten, autoreconf on MinGW, etc...)\n if self.settings.compiler != \"Visual Studio\" and tools.os_info.is_windows and \\\n not \"CONAN_BASH_PATH\" in os.environ and tools.os_info.detect_windows_subsystem() != \"Windows\":\n self.build_requires(\"msys2/20190524\")\n\n def source(self):\n tools.get(**self.conan_data[\"sources\"][self.version])\n extracted_dir = self.name + \"-\" + self.version\n os.rename(extracted_dir, self._source_subfolder)\n\n def _build_visual(self):\n sln_path = os.path.join(self.build_folder, self._source_subfolder, \"builds\", \"msvc\", self._vs_sln_folder, \"libsodium.sln\")\n\n msbuild = MSBuild(self)\n msbuild.build(sln_path, upgrade_project=False, platforms={\"x86\": \"Win32\"}, build_type=self._vs_configuration)\n\n def _build_autotools_impl(self, configure_args):\n win_bash = False\n if self._is_mingw:\n win_bash = True\n\n autotools = AutoToolsBuildEnvironment(self, win_bash=win_bash)\n if self._is_mingw:\n self.run(\"autoreconf -i\", cwd=self._source_subfolder, win_bash=win_bash)\n autotools.configure(args=configure_args, configure_dir=self._source_subfolder, host=False)\n autotools.make(args=[\"-j%s\" % str(tools.cpu_count())])\n autotools.install()\n\n def _build_autotools_linux(self, configure_args):\n self._build_autotools_impl(configure_args)\n\n def _build_autotools_emscripten(self, configure_args):\n self.run(\"./dist-build/emscripten.sh --standard\", cwd=self._source_subfolder)\n\n def _build_autotools_android(self, configure_args):\n host_arch = \"%s-linux-%s\" % (tools.to_android_abi(self.settings.arch), self._android_id_str)\n configure_args.append(\"--host=%s\" % host_arch)\n self._build_autotools_impl(configure_args)\n\n def _build_autotools_mingw(self, configure_args):\n arch = \"i686\" if self.settings.arch == \"x86\" else self.settings.arch\n host_arch = \"%s-w64-mingw32\" % arch\n configure_args.append(\"--host=%s\" % host_arch)\n self._build_autotools_impl(configure_args)\n\n def _build_autotools_darwin(self, configure_args):\n os = \"ios\" if self.settings.os == \"iOS\" else \"darwin\"\n host_arch = \"%s-apple-%s\" % 
(self.settings.arch, os)\n configure_args.append(\"--host=%s\" % host_arch)\n self._build_autotools_impl(configure_args)\n\n def _build_autotools_neutrino(self, configure_args):\n neutrino_archs = {\"x86_64\":\"x86_64-pc\", \"x86\":\"i586-pc\", \"armv7\":\"arm-unknown\", \"armv8\": \"aarch64-unknown\"}\n if self.settings.os.version == \"7.0\" and str(self.settings.arch) in neutrino_archs:\n host_arch = \"%s-nto-qnx7.0.0\" % neutrino_archs[str(self.settings.arch)]\n if self.settings.arch == \"armv7\":\n host_arch += \"eabi\"\n else:\n raise ConanInvalidConfiguration(f\"Unsupported arch or Neutrino version for libsodium: {self.settings.os} {self.settings.arch}\")\n configure_args.append(\"--host=%s\" % host_arch)\n self._build_autotools_impl(configure_args)\n\n def _build_autotools(self):\n absolute_install_dir = os.path.abspath(os.path.join(\".\", \"install\"))\n absolute_install_dir = absolute_install_dir.replace(\"\\\\\", \"/\")\n configure_args = self._get_configure_args(absolute_install_dir)\n\n if self.settings.os == \"Linux\":\n self._build_autotools_linux(configure_args)\n elif self.settings.os == \"Emscripten\":\n self._build_autotools_emscripten(configure_args)\n elif self.settings.os == \"Android\":\n self._build_autotools_android(configure_args)\n elif tools.is_apple_os(self.settings.os):\n self._build_autotools_darwin(configure_args)\n elif self._is_mingw:\n self._build_autotools_mingw(configure_args)\n elif self.settings.os == \"Neutrino\":\n self._build_autotools_neutrino(configure_args)\n else:\n raise ConanInvalidConfiguration(f\"Unsupported os for libsodium: {self.settings.os}\")\n\n def build(self):\n for patch in self.conan_data[\"patches\"][self.version]:\n tools.patch(**patch)\n if self.settings.os == \"Macos\":\n tools.replace_in_file(os.path.join(self._source_subfolder, \"configure\"), r\"-install_name \\$rpath/\", \"-install_name \")\n if self.settings.compiler != \"Visual Studio\":\n self._build_autotools()\n else:\n self._build_visual()\n\n def package(self):\n self.copy(\"*LICENSE\", dst=\"licenses\", keep_path=False)\n if self.settings.compiler == \"Visual Studio\":\n self._package_visual()\n else:\n self._package_autotools()\n\n def package_info(self):\n if self.settings.compiler == \"Visual Studio\":\n if not self.options.shared:\n self.cpp_info.defines = [\"SODIUM_STATIC=1\"]\n self.cpp_info.libs = tools.collect_libs(self)\n if self.settings.os == \"Linux\":\n self.cpp_info.system_libs = [\"pthread\"]\n\n def _package_autotools(self):\n if self.settings.os == \"Emscripten\":\n prefix = \"%s/libsodium-js\" % self._source_subfolder\n else:\n prefix = \"install\"\n lib_folder = os.path.join(prefix, \"lib\")\n self.copy(\"*.h\", dst=\"include\", src=os.path.join(prefix, \"include\"))\n self.copy(\"*.a\", dst=\"lib\", src=lib_folder)\n self.copy(\"*.so*\", dst=\"lib\", src=lib_folder, symlinks=True)\n self.copy(\"*.dylib\", dst=\"lib\", src=lib_folder, symlinks=True)\n\n def _package_visual(self):\n self.copy(\"*.lib\", dst=\"lib\", keep_path=False)\n self.copy(\"*.dll\", dst=\"bin\", keep_path=False)\n inc_src = os.path.join(self._source_subfolder, \"src\", self.name, \"include\")\n self.copy(\"*.h\", src=inc_src, dst=\"include\", keep_path=True, excludes=(\"*/private/*\"))\n\n def _autotools_bool_arg(self, arg_base_name, value):\n prefix = \"--enable-\" if value else \"--disable-\"\n return prefix + arg_base_name\n\n def _get_configure_args(self, absolute_install_dir):\n args = [\n \"--prefix=%s\" % absolute_install_dir,\n self._autotools_bool_arg(\"shared\", 
self.options.shared),\n self._autotools_bool_arg(\"static\", not self.options.shared),\n self._autotools_bool_arg(\"soname-versions\", self.options.use_soname),\n self._autotools_bool_arg(\"pie\", self.options.PIE)\n ]\n if self.options.get_safe(\"fPIC\"):\n args.append(\"--with-pic\")\n return args\n", "path": "recipes/libsodium/1.0.18/conanfile.py"}]}
| num_tokens_prompt: 3,765 | num_tokens_diff: 292 |
gh_patches_debug_921 | rasdani/github-patches | git_diff | tensorflow__addons-1941
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Usage with tf.keras API
https://github.com/tensorflow/addons/blob/5f618fdb92d9737da059de2a33fa606e97505398/tensorflow_addons/losses/focal_loss.py#L52-L53
The usage example for the `tf.keras` API is incorrect. It should be replaced with:
```python
model = tf.keras.Model(inputs, outputs)
model.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())
```
</issue>
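As a sanity check on the corrected example, the first loss value quoted in the docstring can be reproduced by hand from the formula the function implements (`alpha_factor * (1 - p_t)**gamma * binary_crossentropy`), using the default `alpha=0.25` and `gamma=2.0`; this is plain-Python arithmetic, not a TensorFlow call:

```python
import math

alpha, gamma = 0.25, 2.0
y_true, y_pred = 1.0, 0.97

ce = -math.log(y_pred)                                      # ~0.0304592
p_t = y_true * y_pred + (1 - y_true) * (1 - y_pred)         # 0.97
alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)  # 0.25
modulating = (1.0 - p_t) ** gamma                           # 0.0009

loss = alpha_factor * modulating * ce
print(loss)  # ~6.85e-06, matching the 6.8532745e-06 shown in the docstring
```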
<code>
[start of tensorflow_addons/losses/focal_loss.py]
1 # Copyright 2019 The TensorFlow Authors. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Implements Focal loss."""
16
17 import tensorflow as tf
18 import tensorflow.keras.backend as K
19
20 from tensorflow_addons.utils.keras_utils import LossFunctionWrapper
21 from tensorflow_addons.utils.types import FloatTensorLike, TensorLike
22 from typeguard import typechecked
23
24
25 @tf.keras.utils.register_keras_serializable(package="Addons")
26 class SigmoidFocalCrossEntropy(LossFunctionWrapper):
27 """Implements the focal loss function.
28
29 Focal loss was first introduced in the RetinaNet paper
30 (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for
31 classification when you have highly imbalanced classes. It down-weights
32 well-classified examples and focuses on hard examples. The loss value is
33 much high for a sample which is misclassified by the classifier as compared
34 to the loss value corresponding to a well-classified example. One of the
35 best use-cases of focal loss is its usage in object detection where the
36 imbalance between the background class and other classes is extremely high.
37
38 Usage:
39
40 ```python
41 fl = tfa.losses.SigmoidFocalCrossEntropy()
42 loss = fl(
43 y_true = [[1.0], [1.0], [0.0]],
44 y_pred = [[0.97], [0.91], [0.03]])
45 print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,
46 1.9097870e-04,
47 2.0559824e-05]
48 ```
49 Usage with tf.keras API:
50
51 ```python
52 model = tf.keras.Model(inputs, outputs)
53 model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())
54 ```
55
56 Args
57 alpha: balancing factor, default value is 0.25
58 gamma: modulating factor, default value is 2.0
59
60 Returns:
61 Weighted loss float `Tensor`. If `reduction` is `NONE`, this has the same
62 shape as `y_true`; otherwise, it is scalar.
63
64 Raises:
65 ValueError: If the shape of `sample_weight` is invalid or value of
66 `gamma` is less than zero
67 """
68
69 @typechecked
70 def __init__(
71 self,
72 from_logits: bool = False,
73 alpha: FloatTensorLike = 0.25,
74 gamma: FloatTensorLike = 2.0,
75 reduction: str = tf.keras.losses.Reduction.NONE,
76 name: str = "sigmoid_focal_crossentropy",
77 ):
78 super().__init__(
79 sigmoid_focal_crossentropy,
80 name=name,
81 reduction=reduction,
82 from_logits=from_logits,
83 alpha=alpha,
84 gamma=gamma,
85 )
86
87
88 @tf.keras.utils.register_keras_serializable(package="Addons")
89 @tf.function
90 def sigmoid_focal_crossentropy(
91 y_true: TensorLike,
92 y_pred: TensorLike,
93 alpha: FloatTensorLike = 0.25,
94 gamma: FloatTensorLike = 2.0,
95 from_logits: bool = False,
96 ) -> tf.Tensor:
97 """
98 Args
99 y_true: true targets tensor.
100 y_pred: predictions tensor.
101 alpha: balancing factor.
102 gamma: modulating factor.
103
104 Returns:
105 Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the
106 same shape as `y_true`; otherwise, it is scalar.
107 """
108 if gamma and gamma < 0:
109 raise ValueError("Value of gamma should be greater than or equal to zero")
110
111 y_pred = tf.convert_to_tensor(y_pred)
112 y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)
113
114 # Get the cross_entropy for each entry
115 ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)
116
117 # If logits are provided then convert the predictions into probabilities
118 if from_logits:
119 pred_prob = tf.sigmoid(y_pred)
120 else:
121 pred_prob = y_pred
122
123 p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))
124 alpha_factor = 1.0
125 modulating_factor = 1.0
126
127 if alpha:
128 alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())
129 alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)
130
131 if gamma:
132 gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())
133 modulating_factor = tf.pow((1.0 - p_t), gamma)
134
135 # compute the final loss and return
136 return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)
137
[end of tensorflow_addons/losses/focal_loss.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py
--- a/tensorflow_addons/losses/focal_loss.py
+++ b/tensorflow_addons/losses/focal_loss.py
@@ -50,7 +50,7 @@
```python
model = tf.keras.Model(inputs, outputs)
- model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())
+ model.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())
```
Args
|
{"golden_diff": "diff --git a/tensorflow_addons/losses/focal_loss.py b/tensorflow_addons/losses/focal_loss.py\n--- a/tensorflow_addons/losses/focal_loss.py\n+++ b/tensorflow_addons/losses/focal_loss.py\n@@ -50,7 +50,7 @@\n \n ```python\n model = tf.keras.Model(inputs, outputs)\n- model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())\n+ model.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())\n ```\n \n Args\n", "issue": "Usage with tf.keras API\nhttps://github.com/tensorflow/addons/blob/5f618fdb92d9737da059de2a33fa606e97505398/tensorflow_addons/losses/focal_loss.py#L52-L53\r\n\r\nThe usage in `tf.keras` API example is incorrect. It should be replaced with:\r\n\r\n```python\r\nmodel = tf.keras.Model(inputs, outputs)\r\nmodel.compile('sgd', loss=tfa.losses.SigmoidFocalCrossEntropy())\r\n```\n", "before_files": [{"content": "# Copyright 2019 The TensorFlow Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Implements Focal loss.\"\"\"\n\nimport tensorflow as tf\nimport tensorflow.keras.backend as K\n\nfrom tensorflow_addons.utils.keras_utils import LossFunctionWrapper\nfrom tensorflow_addons.utils.types import FloatTensorLike, TensorLike\nfrom typeguard import typechecked\n\n\[email protected]_keras_serializable(package=\"Addons\")\nclass SigmoidFocalCrossEntropy(LossFunctionWrapper):\n \"\"\"Implements the focal loss function.\n\n Focal loss was first introduced in the RetinaNet paper\n (https://arxiv.org/pdf/1708.02002.pdf). Focal loss is extremely useful for\n classification when you have highly imbalanced classes. It down-weights\n well-classified examples and focuses on hard examples. The loss value is\n much high for a sample which is misclassified by the classifier as compared\n to the loss value corresponding to a well-classified example. One of the\n best use-cases of focal loss is its usage in object detection where the\n imbalance between the background class and other classes is extremely high.\n\n Usage:\n\n ```python\n fl = tfa.losses.SigmoidFocalCrossEntropy()\n loss = fl(\n y_true = [[1.0], [1.0], [0.0]],\n y_pred = [[0.97], [0.91], [0.03]])\n print('Loss: ', loss.numpy()) # Loss: [6.8532745e-06,\n 1.9097870e-04,\n 2.0559824e-05]\n ```\n Usage with tf.keras API:\n\n ```python\n model = tf.keras.Model(inputs, outputs)\n model.compile('sgd', loss=tf.keras.losses.SigmoidFocalCrossEntropy())\n ```\n\n Args\n alpha: balancing factor, default value is 0.25\n gamma: modulating factor, default value is 2.0\n\n Returns:\n Weighted loss float `Tensor`. 
If `reduction` is `NONE`, this has the same\n shape as `y_true`; otherwise, it is scalar.\n\n Raises:\n ValueError: If the shape of `sample_weight` is invalid or value of\n `gamma` is less than zero\n \"\"\"\n\n @typechecked\n def __init__(\n self,\n from_logits: bool = False,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n reduction: str = tf.keras.losses.Reduction.NONE,\n name: str = \"sigmoid_focal_crossentropy\",\n ):\n super().__init__(\n sigmoid_focal_crossentropy,\n name=name,\n reduction=reduction,\n from_logits=from_logits,\n alpha=alpha,\n gamma=gamma,\n )\n\n\[email protected]_keras_serializable(package=\"Addons\")\[email protected]\ndef sigmoid_focal_crossentropy(\n y_true: TensorLike,\n y_pred: TensorLike,\n alpha: FloatTensorLike = 0.25,\n gamma: FloatTensorLike = 2.0,\n from_logits: bool = False,\n) -> tf.Tensor:\n \"\"\"\n Args\n y_true: true targets tensor.\n y_pred: predictions tensor.\n alpha: balancing factor.\n gamma: modulating factor.\n\n Returns:\n Weighted loss float `Tensor`. If `reduction` is `NONE`,this has the\n same shape as `y_true`; otherwise, it is scalar.\n \"\"\"\n if gamma and gamma < 0:\n raise ValueError(\"Value of gamma should be greater than or equal to zero\")\n\n y_pred = tf.convert_to_tensor(y_pred)\n y_true = tf.convert_to_tensor(y_true, dtype=y_pred.dtype)\n\n # Get the cross_entropy for each entry\n ce = K.binary_crossentropy(y_true, y_pred, from_logits=from_logits)\n\n # If logits are provided then convert the predictions into probabilities\n if from_logits:\n pred_prob = tf.sigmoid(y_pred)\n else:\n pred_prob = y_pred\n\n p_t = (y_true * pred_prob) + ((1 - y_true) * (1 - pred_prob))\n alpha_factor = 1.0\n modulating_factor = 1.0\n\n if alpha:\n alpha = tf.convert_to_tensor(alpha, dtype=K.floatx())\n alpha_factor = y_true * alpha + (1 - y_true) * (1 - alpha)\n\n if gamma:\n gamma = tf.convert_to_tensor(gamma, dtype=K.floatx())\n modulating_factor = tf.pow((1.0 - p_t), gamma)\n\n # compute the final loss and return\n return tf.reduce_sum(alpha_factor * modulating_factor * ce, axis=-1)\n", "path": "tensorflow_addons/losses/focal_loss.py"}]}
| 2,152 | 133 |
| gh_patches_debug_30444 | rasdani/github-patches | git_diff | dask__dask-618 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add Base.to_graphviz
Add function to return `graphviz` instance created from dask graph for below reasons:
- When using IPython, `.visualize` outputs unnecessary image file
- Sometimes we want to modify graphviz instance directly
</issue>
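A rough sketch of the usage the issue is asking for, based on the `to_graphviz` helper already present in `dask/dot.py` below (a sketch only; assumes the `graphviz` Python package and its `dot` binary are installed, and the tiny graph is made up for illustration):

```python
from operator import add
from dask.dot import to_graphviz

dsk = {'x': 1, 'y': (add, 'x', 10)}   # minimal dask graph, purely illustrative

g = to_graphviz(dsk)                  # a graphviz.Digraph -- nothing written to disk
g.attr('node', fontsize='10')         # modify the instance directly ...
png_bytes = g.pipe(format='png')      # ... and only render (in memory) when needed
```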
<code>
[start of dask/dot.py]
1 from __future__ import absolute_import, division, print_function
2
3 import re
4 from subprocess import check_call, CalledProcessError
5
6 from graphviz import Digraph
7
8 from .core import istask, get_dependencies, ishashable
9
10
11 def task_label(task):
12 """Label for a task on a dot graph.
13
14 Examples
15 --------
16 >>> from operator import add
17 >>> task_label((add, 1, 2))
18 'add'
19 >>> task_label((add, (add, 1, 2), 3))
20 'add(...)'
21 """
22 func = task[0]
23 if hasattr(func, 'funcs'):
24 if len(func.funcs) > 1:
25 return '{0}(...)'.format(funcname(func.funcs[0]))
26 else:
27 head = funcname(func.funcs[0])
28 else:
29 head = funcname(task[0])
30 if any(has_sub_tasks(i) for i in task[1:]):
31 return '{0}(...)'.format(head)
32 else:
33 return head
34
35
36 def has_sub_tasks(task):
37 """Returns True if the task has sub tasks"""
38 if istask(task):
39 return True
40 elif isinstance(task, list):
41 return any(has_sub_tasks(i) for i in task)
42 else:
43 return False
44
45
46 def funcname(func):
47 """Get the name of a function."""
48 while hasattr(func, 'func'):
49 func = func.func
50 return func.__name__
51
52
53 def name(x):
54 try:
55 return str(hash(x))
56 except TypeError:
57 return str(hash(str(x)))
58
59
60 _HASHPAT = re.compile('([0-9a-z]{32})')
61
62
63 def label(x, cache=None):
64 """
65
66 >>> label('x')
67 'x'
68
69 >>> label(('x', 1))
70 "('x', 1)"
71
72 >>> from hashlib import md5
73 >>> x = 'x-%s-hello' % md5(b'1234').hexdigest()
74 >>> x
75 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'
76
77 >>> label(x)
78 'x-#-hello'
79 """
80 s = str(x)
81 m = re.search(_HASHPAT, s)
82 if m is not None:
83 for h in m.groups():
84 if cache is not None:
85 n = cache.get(h, len(cache))
86 label = '#{0}'.format(n)
87 # cache will be overwritten destructively
88 cache[h] = n
89 else:
90 label = '#'
91 s = s.replace(h, label)
92 return s
93
94
95 def to_graphviz(dsk, data_attributes=None, function_attributes=None):
96 if data_attributes is None:
97 data_attributes = {}
98 if function_attributes is None:
99 function_attributes = {}
100
101 g = Digraph(graph_attr={'rankdir': 'BT'})
102
103 seen = set()
104 cache = {}
105
106 for k, v in dsk.items():
107 k_name = name(k)
108 if k_name not in seen:
109 seen.add(k_name)
110 g.node(k_name, label=label(k, cache=cache), shape='box',
111 **data_attributes.get(k, {}))
112
113 if istask(v):
114 func_name = name((k, 'function'))
115 if func_name not in seen:
116 seen.add(func_name)
117 g.node(func_name, label=task_label(v), shape='circle',
118 **function_attributes.get(k, {}))
119 g.edge(func_name, k_name)
120
121 for dep in get_dependencies(dsk, k):
122 dep_name = name(dep)
123 if dep_name not in seen:
124 seen.add(dep_name)
125 g.node(dep_name, label=label(dep, cache=cache), shape='box',
126 **data_attributes.get(dep, {}))
127 g.edge(dep_name, func_name)
128 elif ishashable(v) and v in dsk:
129 g.edge(name(v), k_name)
130 return g
131
132
133 def dot_graph(dsk, filename='mydask', **kwargs):
134 g = to_graphviz(dsk, **kwargs)
135 g.save(filename + '.dot')
136
137 try:
138 check_call('dot -Tpdf {0}.dot -o {0}.pdf'.format(filename), shell=True)
139 check_call('dot -Tpng {0}.dot -o {0}.png'.format(filename), shell=True)
140 except CalledProcessError:
141 raise RuntimeError(
142 "Please install The `dot` utility from graphviz:\n"
143 " Debian: sudo apt-get install graphviz\n"
144 " Mac OSX: brew install graphviz\n"
145 " Windows: http://www.graphviz.org/Download..php") # pragma: no cover
146 try:
147 from IPython.display import Image
148 return Image(filename + '.png')
149 except ImportError:
150 pass
151
[end of dask/dot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/dask/dot.py b/dask/dot.py
--- a/dask/dot.py
+++ b/dask/dot.py
@@ -6,6 +6,7 @@
from graphviz import Digraph
from .core import istask, get_dependencies, ishashable
+from .compatibility import BytesIO
def task_label(task):
@@ -132,19 +133,35 @@
def dot_graph(dsk, filename='mydask', **kwargs):
g = to_graphviz(dsk, **kwargs)
- g.save(filename + '.dot')
- try:
- check_call('dot -Tpdf {0}.dot -o {0}.pdf'.format(filename), shell=True)
- check_call('dot -Tpng {0}.dot -o {0}.png'.format(filename), shell=True)
- except CalledProcessError:
- raise RuntimeError(
- "Please install The `dot` utility from graphviz:\n"
- " Debian: sudo apt-get install graphviz\n"
- " Mac OSX: brew install graphviz\n"
- " Windows: http://www.graphviz.org/Download..php") # pragma: no cover
- try:
- from IPython.display import Image
- return Image(filename + '.png')
- except ImportError:
- pass
+ if filename is not None:
+ g.save(filename + '.dot')
+
+ try:
+ check_call('dot -Tpdf {0}.dot -o {0}.pdf'.format(filename),
+ shell=True)
+ check_call('dot -Tpng {0}.dot -o {0}.png'.format(filename),
+ shell=True)
+
+ except CalledProcessError:
+ msg = ("Please install The `dot` utility from graphviz:\n"
+ " Debian: sudo apt-get install graphviz\n"
+ " Mac OSX: brew install graphviz\n"
+ " Windows: http://www.graphviz.org/Download..php")
+ raise RuntimeError(msg) # pragma: no cover
+
+ try:
+ from IPython.display import Image
+ return Image(filename + '.png')
+ except ImportError:
+ pass
+
+ else:
+ try:
+ from IPython.display import Image
+ s = BytesIO()
+ s.write(g.pipe(format='png'))
+ s.seek(0)
+ return Image(s.read())
+ except ImportError:
+ pass
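With that change, the in-memory branch can be exercised from a notebook roughly like this (a sketch; assumes IPython and graphviz are importable and reuses the illustrative graph from above):

```python
from operator import add
from dask.dot import dot_graph

dsk = {'x': 1, 'y': (add, 'x', 10)}
img = dot_graph(dsk, filename=None)   # patched path: g.pipe(format='png'), no .dot/.png/.pdf files
img                                   # an IPython.display.Image, rendered inline
```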
|
{"golden_diff": "diff --git a/dask/dot.py b/dask/dot.py\n--- a/dask/dot.py\n+++ b/dask/dot.py\n@@ -6,6 +6,7 @@\n from graphviz import Digraph\n \n from .core import istask, get_dependencies, ishashable\n+from .compatibility import BytesIO\n \n \n def task_label(task):\n@@ -132,19 +133,35 @@\n \n def dot_graph(dsk, filename='mydask', **kwargs):\n g = to_graphviz(dsk, **kwargs)\n- g.save(filename + '.dot')\n \n- try:\n- check_call('dot -Tpdf {0}.dot -o {0}.pdf'.format(filename), shell=True)\n- check_call('dot -Tpng {0}.dot -o {0}.png'.format(filename), shell=True)\n- except CalledProcessError:\n- raise RuntimeError(\n- \"Please install The `dot` utility from graphviz:\\n\"\n- \" Debian: sudo apt-get install graphviz\\n\"\n- \" Mac OSX: brew install graphviz\\n\"\n- \" Windows: http://www.graphviz.org/Download..php\") # pragma: no cover\n- try:\n- from IPython.display import Image\n- return Image(filename + '.png')\n- except ImportError:\n- pass\n+ if filename is not None:\n+ g.save(filename + '.dot')\n+\n+ try:\n+ check_call('dot -Tpdf {0}.dot -o {0}.pdf'.format(filename),\n+ shell=True)\n+ check_call('dot -Tpng {0}.dot -o {0}.png'.format(filename),\n+ shell=True)\n+\n+ except CalledProcessError:\n+ msg = (\"Please install The `dot` utility from graphviz:\\n\"\n+ \" Debian: sudo apt-get install graphviz\\n\"\n+ \" Mac OSX: brew install graphviz\\n\"\n+ \" Windows: http://www.graphviz.org/Download..php\")\n+ raise RuntimeError(msg) # pragma: no cover\n+\n+ try:\n+ from IPython.display import Image\n+ return Image(filename + '.png')\n+ except ImportError:\n+ pass\n+\n+ else:\n+ try:\n+ from IPython.display import Image\n+ s = BytesIO()\n+ s.write(g.pipe(format='png'))\n+ s.seek(0)\n+ return Image(s.read())\n+ except ImportError:\n+ pass\n", "issue": "Add Base.to_graphviz\nAdd function to return `graphviz` instance created from dask graph for below reasons:\n- When using IPython, `.visualize` outputs unnecessary image file\n- Sometimes we want to modify graphviz instance directly\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function\n\nimport re\nfrom subprocess import check_call, CalledProcessError\n\nfrom graphviz import Digraph\n\nfrom .core import istask, get_dependencies, ishashable\n\n\ndef task_label(task):\n \"\"\"Label for a task on a dot graph.\n\n Examples\n --------\n >>> from operator import add\n >>> task_label((add, 1, 2))\n 'add'\n >>> task_label((add, (add, 1, 2), 3))\n 'add(...)'\n \"\"\"\n func = task[0]\n if hasattr(func, 'funcs'):\n if len(func.funcs) > 1:\n return '{0}(...)'.format(funcname(func.funcs[0]))\n else:\n head = funcname(func.funcs[0])\n else:\n head = funcname(task[0])\n if any(has_sub_tasks(i) for i in task[1:]):\n return '{0}(...)'.format(head)\n else:\n return head\n\n\ndef has_sub_tasks(task):\n \"\"\"Returns True if the task has sub tasks\"\"\"\n if istask(task):\n return True\n elif isinstance(task, list):\n return any(has_sub_tasks(i) for i in task)\n else:\n return False\n\n\ndef funcname(func):\n \"\"\"Get the name of a function.\"\"\"\n while hasattr(func, 'func'):\n func = func.func\n return func.__name__\n\n\ndef name(x):\n try:\n return str(hash(x))\n except TypeError:\n return str(hash(str(x)))\n\n\n_HASHPAT = re.compile('([0-9a-z]{32})')\n\n\ndef label(x, cache=None):\n \"\"\"\n\n >>> label('x')\n 'x'\n\n >>> label(('x', 1))\n \"('x', 1)\"\n\n >>> from hashlib import md5\n >>> x = 'x-%s-hello' % md5(b'1234').hexdigest()\n >>> x\n 'x-81dc9bdb52d04dc20036dbd8313ed055-hello'\n\n >>> label(x)\n 
'x-#-hello'\n \"\"\"\n s = str(x)\n m = re.search(_HASHPAT, s)\n if m is not None:\n for h in m.groups():\n if cache is not None:\n n = cache.get(h, len(cache))\n label = '#{0}'.format(n)\n # cache will be overwritten destructively\n cache[h] = n\n else:\n label = '#'\n s = s.replace(h, label)\n return s\n\n\ndef to_graphviz(dsk, data_attributes=None, function_attributes=None):\n if data_attributes is None:\n data_attributes = {}\n if function_attributes is None:\n function_attributes = {}\n\n g = Digraph(graph_attr={'rankdir': 'BT'})\n\n seen = set()\n cache = {}\n\n for k, v in dsk.items():\n k_name = name(k)\n if k_name not in seen:\n seen.add(k_name)\n g.node(k_name, label=label(k, cache=cache), shape='box',\n **data_attributes.get(k, {}))\n\n if istask(v):\n func_name = name((k, 'function'))\n if func_name not in seen:\n seen.add(func_name)\n g.node(func_name, label=task_label(v), shape='circle',\n **function_attributes.get(k, {}))\n g.edge(func_name, k_name)\n\n for dep in get_dependencies(dsk, k):\n dep_name = name(dep)\n if dep_name not in seen:\n seen.add(dep_name)\n g.node(dep_name, label=label(dep, cache=cache), shape='box',\n **data_attributes.get(dep, {}))\n g.edge(dep_name, func_name)\n elif ishashable(v) and v in dsk:\n g.edge(name(v), k_name)\n return g\n\n\ndef dot_graph(dsk, filename='mydask', **kwargs):\n g = to_graphviz(dsk, **kwargs)\n g.save(filename + '.dot')\n\n try:\n check_call('dot -Tpdf {0}.dot -o {0}.pdf'.format(filename), shell=True)\n check_call('dot -Tpng {0}.dot -o {0}.png'.format(filename), shell=True)\n except CalledProcessError:\n raise RuntimeError(\n \"Please install The `dot` utility from graphviz:\\n\"\n \" Debian: sudo apt-get install graphviz\\n\"\n \" Mac OSX: brew install graphviz\\n\"\n \" Windows: http://www.graphviz.org/Download..php\") # pragma: no cover\n try:\n from IPython.display import Image\n return Image(filename + '.png')\n except ImportError:\n pass\n", "path": "dask/dot.py"}]}
| 1,990 | 551 |
| gh_patches_debug_5288 | rasdani/github-patches | git_diff | great-expectations__great_expectations-1576 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
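For context, a minimal sketch of the two idioms the issue contrasts (Python 2 semantics; under Python 3 the future import is a no-op):

```python
# Python 2, no future import: integer division truncates
#   7 / 2        == 3
# workaround the issue wants to retire:
#   1. * 7 / 2   == 3.5

from __future__ import division   # module-level switch to true division
assert 7 / 2 == 3.5
```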
<code>
[start of great_expectations/data_context/util.py]
1 import copy
2 import importlib
3 import inspect
4 import logging
5 import os
6 import re
7 from collections import OrderedDict
8
9 from great_expectations.data_context.types.base import (
10 DataContextConfig,
11 DataContextConfigSchema,
12 )
13 from great_expectations.exceptions import (
14 MissingConfigVariableError,
15 PluginClassNotFoundError,
16 PluginModuleNotFoundError,
17 )
18 from great_expectations.util import verify_dynamic_loading_support
19
20 logger = logging.getLogger(__name__)
21
22
23 def load_class(class_name, module_name):
24 """Dynamically load a class from strings or raise a helpful error."""
25 try:
26 loaded_module = importlib.import_module(module_name)
27 class_ = getattr(loaded_module, class_name)
28 except ModuleNotFoundError:
29 raise PluginModuleNotFoundError(module_name)
30 except AttributeError:
31 raise PluginClassNotFoundError(module_name=module_name, class_name=class_name)
32 return class_
33
34
35 # TODO: Rename config to constructor_kwargs and config_defaults -> constructor_kwarg_default
36 # TODO: Improve error messages in this method. Since so much of our workflow is config-driven, this will be a *super* important part of DX.
37 def instantiate_class_from_config(config, runtime_environment, config_defaults=None):
38 """Build a GE class from configuration dictionaries."""
39
40 if config_defaults is None:
41 config_defaults = {}
42
43 config = copy.deepcopy(config)
44
45 module_name = config.pop("module_name", None)
46 if module_name is None:
47 try:
48 module_name = config_defaults.pop("module_name")
49 except KeyError:
50 raise KeyError(
51 "Neither config : {} nor config_defaults : {} contains a module_name key.".format(
52 config, config_defaults,
53 )
54 )
55 else:
56 # Pop the value without using it, to avoid sending an unwanted value to the config_class
57 config_defaults.pop("module_name", None)
58
59 verify_dynamic_loading_support(module_name=module_name)
60
61 class_name = config.pop("class_name", None)
62 if class_name is None:
63 logger.warning(
64 "Instantiating class from config without an explicit class_name is dangerous. Consider adding "
65 "an explicit class_name for %s" % config.get("name")
66 )
67 try:
68 class_name = config_defaults.pop("class_name")
69 except KeyError:
70 raise KeyError(
71 "Neither config : {} nor config_defaults : {} contains a class_name key.".format(
72 config, config_defaults,
73 )
74 )
75 else:
76 # Pop the value without using it, to avoid sending an unwanted value to the config_class
77 config_defaults.pop("class_name", None)
78
79 class_ = load_class(class_name=class_name, module_name=module_name)
80
81 config_with_defaults = copy.deepcopy(config_defaults)
82 config_with_defaults.update(config)
83 if runtime_environment is not None:
84 # If there are additional kwargs available in the runtime_environment requested by a
85 # class to be instantiated, provide them
86 argspec = inspect.getfullargspec(class_.__init__)[0][1:]
87
88 missing_args = set(argspec) - set(config_with_defaults.keys())
89 config_with_defaults.update(
90 {
91 missing_arg: runtime_environment[missing_arg]
92 for missing_arg in missing_args
93 if missing_arg in runtime_environment
94 }
95 )
96 # Add the entire runtime_environment as well if it's requested
97 if "runtime_environment" in missing_args:
98 config_with_defaults.update({"runtime_environment": runtime_environment})
99
100 try:
101 class_instance = class_(**config_with_defaults)
102 except TypeError as e:
103 raise TypeError(
104 "Couldn't instantiate class : {} with config : \n\t{}\n \n".format(
105 class_name, format_dict_for_error_message(config_with_defaults)
106 )
107 + str(e)
108 )
109
110 return class_instance
111
112
113 def format_dict_for_error_message(dict_):
114 # TODO : Tidy this up a bit. Indentation isn't fully consistent.
115
116 return "\n\t".join("\t\t".join((str(key), str(dict_[key]))) for key in dict_)
117
118
119 def substitute_config_variable(template_str, config_variables_dict):
120 """
121 This method takes a string, and if it contains a pattern ${SOME_VARIABLE} or $SOME_VARIABLE,
122 returns a string where the pattern is replaced with the value of SOME_VARIABLE,
123 otherwise returns the string unchanged.
124
125 If the environment variable SOME_VARIABLE is set, the method uses its value for substitution.
126 If it is not set, the value of SOME_VARIABLE is looked up in the config variables store (file).
127 If it is not found there, the input string is returned as is.
128
129 :param template_str: a string that might or might not be of the form ${SOME_VARIABLE}
130 or $SOME_VARIABLE
131 :param config_variables_dict: a dictionary of config variables. It is loaded from the
132 config variables store (by default, "uncommitted/config_variables.yml file)
133 :return:
134 """
135 if template_str is None:
136 return template_str
137
138 try:
139 match = re.search(r"\$\{(.*?)\}", template_str) or re.search(
140 r"\$([_a-z][_a-z0-9]*)", template_str
141 )
142 except TypeError:
143 # If the value is not a string (e.g., a boolean), we should return it as is
144 return template_str
145
146 if match:
147 config_variable_value = config_variables_dict.get(match.group(1))
148
149 if config_variable_value:
150 if match.start() == 0 and match.end() == len(template_str):
151 return config_variable_value
152 else:
153 return (
154 template_str[: match.start()]
155 + config_variable_value
156 + template_str[match.end() :]
157 )
158
159 raise MissingConfigVariableError(
160 f"""\n\nUnable to find a match for config substitution variable: `{match.group(1)}`.
161 Please add this missing variable to your `uncommitted/config_variables.yml` file or your environment variables.
162 See https://great-expectations.readthedocs.io/en/latest/reference/data_context_reference.html#managing-environment-and-secrets""",
163 missing_config_variable=match.group(1),
164 )
165
166 return template_str
167
168
169 def substitute_all_config_variables(data, replace_variables_dict):
170 """
171 Substitute all config variables of the form ${SOME_VARIABLE} in a dictionary-like
172 config object for their values.
173
174 The method traverses the dictionary recursively.
175
176 :param data:
177 :param replace_variables_dict:
178 :return: a dictionary with all the variables replaced with their values
179 """
180 if isinstance(data, DataContextConfig):
181 data = DataContextConfigSchema().dump(data)
182
183 if isinstance(data, dict) or isinstance(data, OrderedDict):
184 return {
185 k: substitute_all_config_variables(v, replace_variables_dict)
186 for k, v in data.items()
187 }
188 elif isinstance(data, list):
189 return [
190 substitute_all_config_variables(v, replace_variables_dict) for v in data
191 ]
192 return substitute_config_variable(data, replace_variables_dict)
193
194
195 def file_relative_path(dunderfile, relative_path):
196 """
197 This function is useful when one needs to load a file that is
198 relative to the position of the current file. (Such as when
199 you encode a configuration file path in source file and want
200 in runnable in any current working directory)
201
202 It is meant to be used like the following:
203 file_relative_path(__file__, 'path/relative/to/file')
204
205 H/T https://github.com/dagster-io/dagster/blob/8a250e9619a49e8bff8e9aa7435df89c2d2ea039/python_modules/dagster/dagster/utils/__init__.py#L34
206 """
207 return os.path.join(os.path.dirname(dunderfile), relative_path)
208
[end of great_expectations/data_context/util.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/great_expectations/data_context/util.py b/great_expectations/data_context/util.py
--- a/great_expectations/data_context/util.py
+++ b/great_expectations/data_context/util.py
@@ -146,7 +146,7 @@
if match:
config_variable_value = config_variables_dict.get(match.group(1))
- if config_variable_value:
+ if config_variable_value is not None:
if match.start() == 0 and match.end() == len(template_str):
return config_variable_value
else:
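The behavioural difference behind that one-line change, in isolation (a sketch with a made-up variable name):

```python
config_variables = {"password": ""}      # present, but falsy
value = config_variables.get("password")

bool(value)          # False -> the old truthiness check would skip the substitution
value is not None    # True  -> the new check only falls through for a genuinely missing key
```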
|
{"golden_diff": "diff --git a/great_expectations/data_context/util.py b/great_expectations/data_context/util.py\n--- a/great_expectations/data_context/util.py\n+++ b/great_expectations/data_context/util.py\n@@ -146,7 +146,7 @@\n if match:\n config_variable_value = config_variables_dict.get(match.group(1))\n \n- if config_variable_value:\n+ if config_variable_value is not None:\n if match.start() == 0 and match.end() == len(template_str):\n return config_variable_value\n else:\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "import copy\nimport importlib\nimport inspect\nimport logging\nimport os\nimport re\nfrom collections import OrderedDict\n\nfrom great_expectations.data_context.types.base import (\n DataContextConfig,\n DataContextConfigSchema,\n)\nfrom great_expectations.exceptions import (\n MissingConfigVariableError,\n PluginClassNotFoundError,\n PluginModuleNotFoundError,\n)\nfrom great_expectations.util import verify_dynamic_loading_support\n\nlogger = logging.getLogger(__name__)\n\n\ndef load_class(class_name, module_name):\n \"\"\"Dynamically load a class from strings or raise a helpful error.\"\"\"\n try:\n loaded_module = importlib.import_module(module_name)\n class_ = getattr(loaded_module, class_name)\n except ModuleNotFoundError:\n raise PluginModuleNotFoundError(module_name)\n except AttributeError:\n raise PluginClassNotFoundError(module_name=module_name, class_name=class_name)\n return class_\n\n\n# TODO: Rename config to constructor_kwargs and config_defaults -> constructor_kwarg_default\n# TODO: Improve error messages in this method. Since so much of our workflow is config-driven, this will be a *super* important part of DX.\ndef instantiate_class_from_config(config, runtime_environment, config_defaults=None):\n \"\"\"Build a GE class from configuration dictionaries.\"\"\"\n\n if config_defaults is None:\n config_defaults = {}\n\n config = copy.deepcopy(config)\n\n module_name = config.pop(\"module_name\", None)\n if module_name is None:\n try:\n module_name = config_defaults.pop(\"module_name\")\n except KeyError:\n raise KeyError(\n \"Neither config : {} nor config_defaults : {} contains a module_name key.\".format(\n config, config_defaults,\n )\n )\n else:\n # Pop the value without using it, to avoid sending an unwanted value to the config_class\n config_defaults.pop(\"module_name\", None)\n\n verify_dynamic_loading_support(module_name=module_name)\n\n class_name = config.pop(\"class_name\", None)\n if class_name is None:\n logger.warning(\n \"Instantiating class from config without an explicit class_name is dangerous. 
Consider adding \"\n \"an explicit class_name for %s\" % config.get(\"name\")\n )\n try:\n class_name = config_defaults.pop(\"class_name\")\n except KeyError:\n raise KeyError(\n \"Neither config : {} nor config_defaults : {} contains a class_name key.\".format(\n config, config_defaults,\n )\n )\n else:\n # Pop the value without using it, to avoid sending an unwanted value to the config_class\n config_defaults.pop(\"class_name\", None)\n\n class_ = load_class(class_name=class_name, module_name=module_name)\n\n config_with_defaults = copy.deepcopy(config_defaults)\n config_with_defaults.update(config)\n if runtime_environment is not None:\n # If there are additional kwargs available in the runtime_environment requested by a\n # class to be instantiated, provide them\n argspec = inspect.getfullargspec(class_.__init__)[0][1:]\n\n missing_args = set(argspec) - set(config_with_defaults.keys())\n config_with_defaults.update(\n {\n missing_arg: runtime_environment[missing_arg]\n for missing_arg in missing_args\n if missing_arg in runtime_environment\n }\n )\n # Add the entire runtime_environment as well if it's requested\n if \"runtime_environment\" in missing_args:\n config_with_defaults.update({\"runtime_environment\": runtime_environment})\n\n try:\n class_instance = class_(**config_with_defaults)\n except TypeError as e:\n raise TypeError(\n \"Couldn't instantiate class : {} with config : \\n\\t{}\\n \\n\".format(\n class_name, format_dict_for_error_message(config_with_defaults)\n )\n + str(e)\n )\n\n return class_instance\n\n\ndef format_dict_for_error_message(dict_):\n # TODO : Tidy this up a bit. Indentation isn't fully consistent.\n\n return \"\\n\\t\".join(\"\\t\\t\".join((str(key), str(dict_[key]))) for key in dict_)\n\n\ndef substitute_config_variable(template_str, config_variables_dict):\n \"\"\"\n This method takes a string, and if it contains a pattern ${SOME_VARIABLE} or $SOME_VARIABLE,\n returns a string where the pattern is replaced with the value of SOME_VARIABLE,\n otherwise returns the string unchanged.\n\n If the environment variable SOME_VARIABLE is set, the method uses its value for substitution.\n If it is not set, the value of SOME_VARIABLE is looked up in the config variables store (file).\n If it is not found there, the input string is returned as is.\n\n :param template_str: a string that might or might not be of the form ${SOME_VARIABLE}\n or $SOME_VARIABLE\n :param config_variables_dict: a dictionary of config variables. 
It is loaded from the\n config variables store (by default, \"uncommitted/config_variables.yml file)\n :return:\n \"\"\"\n if template_str is None:\n return template_str\n\n try:\n match = re.search(r\"\\$\\{(.*?)\\}\", template_str) or re.search(\n r\"\\$([_a-z][_a-z0-9]*)\", template_str\n )\n except TypeError:\n # If the value is not a string (e.g., a boolean), we should return it as is\n return template_str\n\n if match:\n config_variable_value = config_variables_dict.get(match.group(1))\n\n if config_variable_value:\n if match.start() == 0 and match.end() == len(template_str):\n return config_variable_value\n else:\n return (\n template_str[: match.start()]\n + config_variable_value\n + template_str[match.end() :]\n )\n\n raise MissingConfigVariableError(\n f\"\"\"\\n\\nUnable to find a match for config substitution variable: `{match.group(1)}`.\nPlease add this missing variable to your `uncommitted/config_variables.yml` file or your environment variables.\nSee https://great-expectations.readthedocs.io/en/latest/reference/data_context_reference.html#managing-environment-and-secrets\"\"\",\n missing_config_variable=match.group(1),\n )\n\n return template_str\n\n\ndef substitute_all_config_variables(data, replace_variables_dict):\n \"\"\"\n Substitute all config variables of the form ${SOME_VARIABLE} in a dictionary-like\n config object for their values.\n\n The method traverses the dictionary recursively.\n\n :param data:\n :param replace_variables_dict:\n :return: a dictionary with all the variables replaced with their values\n \"\"\"\n if isinstance(data, DataContextConfig):\n data = DataContextConfigSchema().dump(data)\n\n if isinstance(data, dict) or isinstance(data, OrderedDict):\n return {\n k: substitute_all_config_variables(v, replace_variables_dict)\n for k, v in data.items()\n }\n elif isinstance(data, list):\n return [\n substitute_all_config_variables(v, replace_variables_dict) for v in data\n ]\n return substitute_config_variable(data, replace_variables_dict)\n\n\ndef file_relative_path(dunderfile, relative_path):\n \"\"\"\n This function is useful when one needs to load a file that is\n relative to the position of the current file. (Such as when\n you encode a configuration file path in source file and want\n in runnable in any current working directory)\n\n It is meant to be used like the following:\n file_relative_path(__file__, 'path/relative/to/file')\n\n H/T https://github.com/dagster-io/dagster/blob/8a250e9619a49e8bff8e9aa7435df89c2d2ea039/python_modules/dagster/dagster/utils/__init__.py#L34\n \"\"\"\n return os.path.join(os.path.dirname(dunderfile), relative_path)\n", "path": "great_expectations/data_context/util.py"}]}
| 2,747 | 120 |
| gh_patches_debug_5012 | rasdani/github-patches | git_diff | pretalx__pretalx-338 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to add questions
The following link doesn't do anything: `https://[Domain]/orga/event/[CFP_name]/cfp/questions/new`
## Expected Behavior
Being able to create a question
## Current Behavior
Unable to create a question
## Possible Solution
Fix the edit button.
## Steps to Reproduce (for bugs)
1. Create a new CFP
2. Go on `https://[Domain]/orga/event/[CFP_name]/cfp/questions/new`
3. Click on the edit button (the pen)
## Context
Creating a CFP
## Your Environment
* Version used: 0.4
* Environment name and version: Firefox 58.0.1 & Chromium 64.0.3282.119
* Operating System and version: Ubuntu 17.10 / Server: Ubuntu 16.04 with python 3.6
</issue>
<code>
[start of src/pretalx/orga/views/cfp.py]
1 from csp.decorators import csp_update
2 from django.contrib import messages
3 from django.db import transaction
4 from django.db.models.deletion import ProtectedError
5 from django.forms.models import inlineformset_factory
6 from django.shortcuts import redirect
7 from django.utils.decorators import method_decorator
8 from django.utils.functional import cached_property
9 from django.utils.translation import ugettext_lazy as _
10 from django.views.generic import ListView, TemplateView, UpdateView, View
11
12 from pretalx.common.forms import I18nFormSet
13 from pretalx.common.mixins.views import ActionFromUrl, PermissionRequired
14 from pretalx.common.views import CreateOrUpdateView
15 from pretalx.orga.forms import CfPForm, QuestionForm, SubmissionTypeForm
16 from pretalx.orga.forms.cfp import AnswerOptionForm, CfPSettingsForm
17 from pretalx.submission.models import (
18 AnswerOption, CfP, Question, SubmissionType,
19 )
20
21
22 class CfPTextDetail(PermissionRequired, ActionFromUrl, UpdateView):
23 form_class = CfPForm
24 model = CfP
25 template_name = 'orga/cfp/text.html'
26 permission_required = 'orga.edit_cfp'
27 write_permission_required = 'orga.edit_cfp'
28
29 def get_context_data(self, *args, **kwargs):
30 ctx = super().get_context_data(*args, **kwargs)
31 ctx['sform'] = self.sform
32 return ctx
33
34 @cached_property
35 def sform(self):
36 return CfPSettingsForm(
37 read_only=(self._action == 'view'),
38 locales=self.request.event.locales,
39 obj=self.request.event,
40 attribute_name='settings',
41 data=self.request.POST if self.request.method == "POST" else None,
42 prefix='settings'
43 )
44
45 def get_object(self):
46 return self.request.event.cfp
47
48 def get_success_url(self) -> str:
49 return self.get_object().urls.text
50
51 def form_valid(self, form):
52 if not self.sform.is_valid():
53 messages.error(self.request, _('We had trouble saving your input.'))
54 return self.form_invalid(form)
55 messages.success(self.request, 'The CfP update has been saved.')
56 form.instance.event = self.request.event
57 ret = super().form_valid(form)
58 if form.has_changed():
59 form.instance.log_action('pretalx.cfp.update', person=self.request.user, orga=True)
60 self.sform.save()
61 return ret
62
63
64 class CfPQuestionList(PermissionRequired, TemplateView):
65 template_name = 'orga/cfp/question_view.html'
66 permission_required = 'orga.view_question'
67
68 def get_permission_object(self):
69 return self.request.event
70
71 def get_context_data(self, *args, **kwargs):
72 ctx = super().get_context_data(*args, **kwargs)
73 ctx['speaker_questions'] = Question.all_objects.filter(event=self.request.event, target='speaker')
74 ctx['submission_questions'] = Question.all_objects.filter(event=self.request.event, target='submission')
75 return ctx
76
77
78 @method_decorator(csp_update(SCRIPT_SRC="'self' 'unsafe-inline'"), name='dispatch')
79 class CfPQuestionDetail(PermissionRequired, ActionFromUrl, CreateOrUpdateView):
80 model = Question
81 form_class = QuestionForm
82 permission_required = 'orga.edit_question'
83 write_permission_required = 'orga.edit_question'
84
85 def get_template_names(self):
86 if self.request.path.lstrip('/').endswith('edit'):
87 return 'orga/cfp/question_form.html'
88 return 'orga/cfp/question_detail.html'
89
90 def get_permission_object(self):
91 return self.get_object() or self.request.event
92
93 def get_object(self) -> Question:
94 return Question.all_objects.filter(event=self.request.event, pk=self.kwargs.get('pk')).first()
95
96 @cached_property
97 def formset(self):
98 formset_class = inlineformset_factory(
99 Question, AnswerOption, form=AnswerOptionForm, formset=I18nFormSet,
100 can_delete=True, extra=0,
101 )
102 return formset_class(
103 self.request.POST if self.request.method == 'POST' else None,
104 queryset=AnswerOption.objects.filter(question=self.get_object()) if self.get_object() else AnswerOption.objects.none(),
105 event=self.request.event
106 )
107
108 def save_formset(self, obj):
109 if self.formset.is_valid():
110 for form in self.formset.initial_forms:
111 if form in self.formset.deleted_forms:
112 if not form.instance.pk:
113 continue
114 obj.log_action(
115 'pretalx.question.option.delete', person=self.request.user, orga=True, data={
116 'id': form.instance.pk
117 }
118 )
119 form.instance.delete()
120 form.instance.pk = None
121 elif form.has_changed():
122 form.instance.question = obj
123 form.save()
124 change_data = {k: form.cleaned_data.get(k) for k in form.changed_data}
125 change_data['id'] = form.instance.pk
126 obj.log_action(
127 'pretalx.question.option.update',
128 person=self.request.user, orga=True, data=change_data
129 )
130
131 for form in self.formset.extra_forms:
132 if not form.has_changed():
133 continue
134 if self.formset._should_delete_form(form):
135 continue
136 form.instance.question = obj
137 form.save()
138 change_data = {k: form.cleaned_data.get(k) for k in form.changed_data}
139 change_data['id'] = form.instance.pk
140 obj.log_action(
141 'pretalx.question.option.create',
142 person=self.request.user, orga=True, data=change_data
143 )
144
145 return True
146 return False
147
148 def get_context_data(self, *args, **kwargs):
149 ctx = super().get_context_data(*args, **kwargs)
150 ctx['formset'] = self.formset
151 ctx['question'] = self.get_object()
152 return ctx
153
154 def get_form_kwargs(self, *args, **kwargs):
155 kwargs = super().get_form_kwargs(*args, **kwargs)
156 if not self.get_object():
157 initial = kwargs['initial'] or dict()
158 initial['target'] = self.request.GET.get('type')
159 kwargs['initial'] = initial
160 return kwargs
161
162 def get_success_url(self) -> str:
163 obj = self.get_object() or self.instance
164 return obj.urls.base
165
166 @transaction.atomic
167 def form_valid(self, form):
168 form.instance.event = self.request.event
169 self.instance = form.instance
170 ret = super().form_valid(form)
171 if form.cleaned_data.get('variant') in ('choices', 'multiple_choice'):
172 result = self.save_formset(self.instance)
173 if not result:
174 return self.get(self.request, *self.args, **self.kwargs)
175 if form.has_changed():
176 action = 'pretalx.question.' + ('update' if self.object else 'create')
177 form.instance.log_action(action, person=self.request.user, orga=True)
178 messages.success(self.request, 'The question has been saved.')
179 return ret
180
181
182 class CfPQuestionDelete(PermissionRequired, View):
183 permission_required = 'orga.remove_question'
184
185 def get_object(self) -> Question:
186 return Question.all_objects.get(event=self.request.event, pk=self.kwargs.get('pk'))
187
188 def dispatch(self, request, *args, **kwargs):
189 super().dispatch(request, *args, **kwargs)
190 question = self.get_object()
191
192 try:
193 with transaction.atomic():
194 question.options.all().delete()
195 question.delete()
196 question.log_action('pretalx.question.delete', person=self.request.user, orga=True)
197 messages.success(request, _('The question has been deleted.'))
198 except ProtectedError:
199 question.active = False
200 question.save()
201 messages.error(request, _('You cannot delete a question that has already been answered. We have deactivated the question instead.'))
202 return redirect(self.request.event.cfp.urls.questions)
203
204
205 class CfPQuestionToggle(PermissionRequired, View):
206 permission_required = 'orga.edit_question'
207
208 def get_object(self) -> Question:
209 return Question.all_objects.filter(event=self.request.event, pk=self.kwargs.get('pk')).first()
210
211 def dispatch(self, request, *args, **kwargs):
212 super().dispatch(request, *args, **kwargs)
213 question = self.get_object()
214
215 question.active = not question.active
216 question.save(update_fields=['active'])
217 return redirect(question.urls.base)
218
219
220 class SubmissionTypeList(PermissionRequired, ListView):
221 template_name = 'orga/cfp/submission_type_view.html'
222 context_object_name = 'types'
223 permission_required = 'orga.view_submission_type'
224
225 def get_permission_object(self):
226 return self.request.event
227
228 def get_queryset(self):
229 return self.request.event.submission_types.all()
230
231
232 class SubmissionTypeDetail(PermissionRequired, ActionFromUrl, CreateOrUpdateView):
233 model = SubmissionType
234 form_class = SubmissionTypeForm
235 template_name = 'orga/cfp/submission_type_form.html'
236 permission_required = 'orga.edit_submission_type'
237 write_permission_required = 'orga.edit_submission_type'
238
239 def get_success_url(self) -> str:
240 return self.request.event.cfp.urls.types
241
242 def get_object(self):
243 return self.request.event.submission_types.filter(pk=self.kwargs.get('pk')).first()
244
245 def get_permission_object(self):
246 return self.get_object() or self.request.event
247
248 def form_valid(self, form):
249 messages.success(self.request, 'The Submission Type has been saved.')
250 form.instance.event = self.request.event
251 ret = super().form_valid(form)
252 if form.has_changed():
253 action = 'pretalx.submission_type.' + ('update' if self.object else 'create')
254 form.instance.log_action(action, person=self.request.user, orga=True)
255 return ret
256
257
258 class SubmissionTypeDefault(PermissionRequired, View):
259 permission_required = 'orga.edit_submission_type'
260
261 def get_object(self):
262 return self.request.event.submission_types.get(pk=self.kwargs.get('pk'))
263
264 def dispatch(self, request, *args, **kwargs):
265 super().dispatch(request, *args, **kwargs)
266 submission_type = self.get_object()
267 self.request.event.cfp.default_type = submission_type
268 self.request.event.cfp.save(update_fields=['default_type'])
269 submission_type.log_action('pretalx.submission_type.make_default', person=self.request.user, orga=True)
270 messages.success(request, _('The Submission Type has been made default.'))
271 return redirect(self.request.event.cfp.urls.types)
272
273
274 class SubmissionTypeDelete(PermissionRequired, View):
275 permission_required = 'orga.remove_submission_type'
276
277 def get_object(self):
278 return self.request.event.submission_types.get(pk=self.kwargs.get('pk'))
279
280 def dispatch(self, request, *args, **kwargs):
281 super().dispatch(request, *args, **kwargs)
282 submission_type = self.get_object()
283
284 if request.event.submission_types.count() == 1:
285 messages.error(request, _('You cannot delete the only submission type. Try creating another one first!'))
286 elif request.event.cfp.default_type == submission_type:
287 messages.error(request, _('You cannot delete the default submission type. Make another type default first!'))
288 else:
289 try:
290 submission_type.delete()
291 request.event.log_action('pretalx.submission_type.delete', person=self.request.user, orga=True)
292 messages.success(request, _('The Submission Type has been deleted.'))
293 except ProtectedError: # TODO: show which/how many submissions are concerned
294 messages.error(request, _('This Submission Type is in use in a submission and cannot be deleted.'))
295 return redirect(self.request.event.cfp.urls.types)
296
[end of src/pretalx/orga/views/cfp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/pretalx/orga/views/cfp.py b/src/pretalx/orga/views/cfp.py
--- a/src/pretalx/orga/views/cfp.py
+++ b/src/pretalx/orga/views/cfp.py
@@ -83,7 +83,8 @@
write_permission_required = 'orga.edit_question'
def get_template_names(self):
- if self.request.path.lstrip('/').endswith('edit'):
+ action = self.request.path.lstrip('/').rpartition('/')[2]
+ if action in ('edit', 'new'):
return 'orga/cfp/question_form.html'
return 'orga/cfp/question_detail.html'
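What the replacement line does to the request path, shown standalone (a sketch; the example paths are made up but follow the URLs in the issue):

```python
for path in ('/orga/event/demo/cfp/questions/new',
             '/orga/event/demo/cfp/questions/12/edit',
             '/orga/event/demo/cfp/questions/12'):
    action = path.lstrip('/').rpartition('/')[2]
    print(action, action in ('edit', 'new'))
# new True / edit True / 12 False -> both "new" and "edit" now get the form template
```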
|
{"golden_diff": "diff --git a/src/pretalx/orga/views/cfp.py b/src/pretalx/orga/views/cfp.py\n--- a/src/pretalx/orga/views/cfp.py\n+++ b/src/pretalx/orga/views/cfp.py\n@@ -83,7 +83,8 @@\n write_permission_required = 'orga.edit_question'\n \n def get_template_names(self):\n- if self.request.path.lstrip('/').endswith('edit'):\n+ action = self.request.path.lstrip('/').rpartition('/')[2]\n+ if action in ('edit', 'new'):\n return 'orga/cfp/question_form.html'\n return 'orga/cfp/question_detail.html'\n", "issue": "Unable to add questions\nThe following link doesn't do anything: `https://[Domain]/orga/event/[CFP_name]/cfp/questions/new`\r\n\r\n## Expected Behavior\r\n\r\nBeing be to create a question\r\n\r\n## Current Behavior\r\n\r\nUnable to create a question\r\n\r\n## Possible Solution\r\n\r\nFix the edit button.\r\n\r\n## Steps to Reproduce (for bugs)\r\n\r\n1. Create a new CFP\r\n2. Go on `https://[Domain]/orga/event/[CFP_name]/cfp/questions/new`\r\n3. Click on the edit button (the pen)\r\n\r\n## Context\r\n\r\nCreating a CFP\r\n\r\n## Your Environment\r\n* Version used: 0.4\r\n* Environment name and version: Firefox 58.0.1 & Chromium 64.0.3282.119\r\n* Operating System and version: Ubuntu 17.10 / Server: Ubuntu 16.04 with python 3.6\n", "before_files": [{"content": "from csp.decorators import csp_update\nfrom django.contrib import messages\nfrom django.db import transaction\nfrom django.db.models.deletion import ProtectedError\nfrom django.forms.models import inlineformset_factory\nfrom django.shortcuts import redirect\nfrom django.utils.decorators import method_decorator\nfrom django.utils.functional import cached_property\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.views.generic import ListView, TemplateView, UpdateView, View\n\nfrom pretalx.common.forms import I18nFormSet\nfrom pretalx.common.mixins.views import ActionFromUrl, PermissionRequired\nfrom pretalx.common.views import CreateOrUpdateView\nfrom pretalx.orga.forms import CfPForm, QuestionForm, SubmissionTypeForm\nfrom pretalx.orga.forms.cfp import AnswerOptionForm, CfPSettingsForm\nfrom pretalx.submission.models import (\n AnswerOption, CfP, Question, SubmissionType,\n)\n\n\nclass CfPTextDetail(PermissionRequired, ActionFromUrl, UpdateView):\n form_class = CfPForm\n model = CfP\n template_name = 'orga/cfp/text.html'\n permission_required = 'orga.edit_cfp'\n write_permission_required = 'orga.edit_cfp'\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n ctx['sform'] = self.sform\n return ctx\n\n @cached_property\n def sform(self):\n return CfPSettingsForm(\n read_only=(self._action == 'view'),\n locales=self.request.event.locales,\n obj=self.request.event,\n attribute_name='settings',\n data=self.request.POST if self.request.method == \"POST\" else None,\n prefix='settings'\n )\n\n def get_object(self):\n return self.request.event.cfp\n\n def get_success_url(self) -> str:\n return self.get_object().urls.text\n\n def form_valid(self, form):\n if not self.sform.is_valid():\n messages.error(self.request, _('We had trouble saving your input.'))\n return self.form_invalid(form)\n messages.success(self.request, 'The CfP update has been saved.')\n form.instance.event = self.request.event\n ret = super().form_valid(form)\n if form.has_changed():\n form.instance.log_action('pretalx.cfp.update', person=self.request.user, orga=True)\n self.sform.save()\n return ret\n\n\nclass CfPQuestionList(PermissionRequired, TemplateView):\n template_name = 
'orga/cfp/question_view.html'\n permission_required = 'orga.view_question'\n\n def get_permission_object(self):\n return self.request.event\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n ctx['speaker_questions'] = Question.all_objects.filter(event=self.request.event, target='speaker')\n ctx['submission_questions'] = Question.all_objects.filter(event=self.request.event, target='submission')\n return ctx\n\n\n@method_decorator(csp_update(SCRIPT_SRC=\"'self' 'unsafe-inline'\"), name='dispatch')\nclass CfPQuestionDetail(PermissionRequired, ActionFromUrl, CreateOrUpdateView):\n model = Question\n form_class = QuestionForm\n permission_required = 'orga.edit_question'\n write_permission_required = 'orga.edit_question'\n\n def get_template_names(self):\n if self.request.path.lstrip('/').endswith('edit'):\n return 'orga/cfp/question_form.html'\n return 'orga/cfp/question_detail.html'\n\n def get_permission_object(self):\n return self.get_object() or self.request.event\n\n def get_object(self) -> Question:\n return Question.all_objects.filter(event=self.request.event, pk=self.kwargs.get('pk')).first()\n\n @cached_property\n def formset(self):\n formset_class = inlineformset_factory(\n Question, AnswerOption, form=AnswerOptionForm, formset=I18nFormSet,\n can_delete=True, extra=0,\n )\n return formset_class(\n self.request.POST if self.request.method == 'POST' else None,\n queryset=AnswerOption.objects.filter(question=self.get_object()) if self.get_object() else AnswerOption.objects.none(),\n event=self.request.event\n )\n\n def save_formset(self, obj):\n if self.formset.is_valid():\n for form in self.formset.initial_forms:\n if form in self.formset.deleted_forms:\n if not form.instance.pk:\n continue\n obj.log_action(\n 'pretalx.question.option.delete', person=self.request.user, orga=True, data={\n 'id': form.instance.pk\n }\n )\n form.instance.delete()\n form.instance.pk = None\n elif form.has_changed():\n form.instance.question = obj\n form.save()\n change_data = {k: form.cleaned_data.get(k) for k in form.changed_data}\n change_data['id'] = form.instance.pk\n obj.log_action(\n 'pretalx.question.option.update',\n person=self.request.user, orga=True, data=change_data\n )\n\n for form in self.formset.extra_forms:\n if not form.has_changed():\n continue\n if self.formset._should_delete_form(form):\n continue\n form.instance.question = obj\n form.save()\n change_data = {k: form.cleaned_data.get(k) for k in form.changed_data}\n change_data['id'] = form.instance.pk\n obj.log_action(\n 'pretalx.question.option.create',\n person=self.request.user, orga=True, data=change_data\n )\n\n return True\n return False\n\n def get_context_data(self, *args, **kwargs):\n ctx = super().get_context_data(*args, **kwargs)\n ctx['formset'] = self.formset\n ctx['question'] = self.get_object()\n return ctx\n\n def get_form_kwargs(self, *args, **kwargs):\n kwargs = super().get_form_kwargs(*args, **kwargs)\n if not self.get_object():\n initial = kwargs['initial'] or dict()\n initial['target'] = self.request.GET.get('type')\n kwargs['initial'] = initial\n return kwargs\n\n def get_success_url(self) -> str:\n obj = self.get_object() or self.instance\n return obj.urls.base\n\n @transaction.atomic\n def form_valid(self, form):\n form.instance.event = self.request.event\n self.instance = form.instance\n ret = super().form_valid(form)\n if form.cleaned_data.get('variant') in ('choices', 'multiple_choice'):\n result = self.save_formset(self.instance)\n if not result:\n return 
self.get(self.request, *self.args, **self.kwargs)\n if form.has_changed():\n action = 'pretalx.question.' + ('update' if self.object else 'create')\n form.instance.log_action(action, person=self.request.user, orga=True)\n messages.success(self.request, 'The question has been saved.')\n return ret\n\n\nclass CfPQuestionDelete(PermissionRequired, View):\n permission_required = 'orga.remove_question'\n\n def get_object(self) -> Question:\n return Question.all_objects.get(event=self.request.event, pk=self.kwargs.get('pk'))\n\n def dispatch(self, request, *args, **kwargs):\n super().dispatch(request, *args, **kwargs)\n question = self.get_object()\n\n try:\n with transaction.atomic():\n question.options.all().delete()\n question.delete()\n question.log_action('pretalx.question.delete', person=self.request.user, orga=True)\n messages.success(request, _('The question has been deleted.'))\n except ProtectedError:\n question.active = False\n question.save()\n messages.error(request, _('You cannot delete a question that has already been answered. We have deactivated the question instead.'))\n return redirect(self.request.event.cfp.urls.questions)\n\n\nclass CfPQuestionToggle(PermissionRequired, View):\n permission_required = 'orga.edit_question'\n\n def get_object(self) -> Question:\n return Question.all_objects.filter(event=self.request.event, pk=self.kwargs.get('pk')).first()\n\n def dispatch(self, request, *args, **kwargs):\n super().dispatch(request, *args, **kwargs)\n question = self.get_object()\n\n question.active = not question.active\n question.save(update_fields=['active'])\n return redirect(question.urls.base)\n\n\nclass SubmissionTypeList(PermissionRequired, ListView):\n template_name = 'orga/cfp/submission_type_view.html'\n context_object_name = 'types'\n permission_required = 'orga.view_submission_type'\n\n def get_permission_object(self):\n return self.request.event\n\n def get_queryset(self):\n return self.request.event.submission_types.all()\n\n\nclass SubmissionTypeDetail(PermissionRequired, ActionFromUrl, CreateOrUpdateView):\n model = SubmissionType\n form_class = SubmissionTypeForm\n template_name = 'orga/cfp/submission_type_form.html'\n permission_required = 'orga.edit_submission_type'\n write_permission_required = 'orga.edit_submission_type'\n\n def get_success_url(self) -> str:\n return self.request.event.cfp.urls.types\n\n def get_object(self):\n return self.request.event.submission_types.filter(pk=self.kwargs.get('pk')).first()\n\n def get_permission_object(self):\n return self.get_object() or self.request.event\n\n def form_valid(self, form):\n messages.success(self.request, 'The Submission Type has been saved.')\n form.instance.event = self.request.event\n ret = super().form_valid(form)\n if form.has_changed():\n action = 'pretalx.submission_type.' 
+ ('update' if self.object else 'create')\n form.instance.log_action(action, person=self.request.user, orga=True)\n return ret\n\n\nclass SubmissionTypeDefault(PermissionRequired, View):\n permission_required = 'orga.edit_submission_type'\n\n def get_object(self):\n return self.request.event.submission_types.get(pk=self.kwargs.get('pk'))\n\n def dispatch(self, request, *args, **kwargs):\n super().dispatch(request, *args, **kwargs)\n submission_type = self.get_object()\n self.request.event.cfp.default_type = submission_type\n self.request.event.cfp.save(update_fields=['default_type'])\n submission_type.log_action('pretalx.submission_type.make_default', person=self.request.user, orga=True)\n messages.success(request, _('The Submission Type has been made default.'))\n return redirect(self.request.event.cfp.urls.types)\n\n\nclass SubmissionTypeDelete(PermissionRequired, View):\n permission_required = 'orga.remove_submission_type'\n\n def get_object(self):\n return self.request.event.submission_types.get(pk=self.kwargs.get('pk'))\n\n def dispatch(self, request, *args, **kwargs):\n super().dispatch(request, *args, **kwargs)\n submission_type = self.get_object()\n\n if request.event.submission_types.count() == 1:\n messages.error(request, _('You cannot delete the only submission type. Try creating another one first!'))\n elif request.event.cfp.default_type == submission_type:\n messages.error(request, _('You cannot delete the default submission type. Make another type default first!'))\n else:\n try:\n submission_type.delete()\n request.event.log_action('pretalx.submission_type.delete', person=self.request.user, orga=True)\n messages.success(request, _('The Submission Type has been deleted.'))\n except ProtectedError: # TODO: show which/how many submissions are concerned\n messages.error(request, _('This Submission Type is in use in a submission and cannot be deleted.'))\n return redirect(self.request.event.cfp.urls.types)\n", "path": "src/pretalx/orga/views/cfp.py"}]}
| 4,026 | 151 |
| gh_patches_debug_2462 | rasdani/github-patches | git_diff | ansible-collections__community.aws-1206 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ec2_customer_gateway: bgp_asn is not required
### Summary
The ec2_customer_gateway module has incorrect documentation for the bgp_asn parameter.
It says the ASN must be passed when state=present, but the code defaults to 25000 if the parameter is absent. See the ensure_cgw_present() method:
```
def ensure_cgw_present(self, bgp_asn, ip_address):
if not bgp_asn:
bgp_asn = 65000
response = self.ec2.create_customer_gateway(
DryRun=False,
Type='ipsec.1',
PublicIp=ip_address,
BgpAsn=bgp_asn,
)
    return response
```
### Issue Type
Documentation Report
### Component Name
ec2_customer_gateway
### Ansible Version
```console (paste below)
$ ansible --version
ansible [core 2.12.4]
config file = None
configured module search path = ['/home/neil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/lib/python3.10/site-packages/ansible
ansible collection location = /home/neil/.ansible/collections:/usr/share/ansible/collections
executable location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/bin/ansible
python version = 3.10.1 (main, Jan 10 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
jinja version = 3.1.1
libyaml = True
```
### Collection Versions
```console (paste below)
$ ansible-galaxy collection list
```
### Configuration
```console (paste below)
$ ansible-config dump --only-changed
```
### OS / Environment
main branch, as of 2022-04-18.
### Additional Information
Suggested rewording:
```
options:
bgp_asn:
description:
      - Border Gateway Protocol (BGP) Autonomous System Number (ASN), defaults to 65000.
type: int
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
</issue>
<code>
[start of plugins/modules/ec2_customer_gateway.py]
1 #!/usr/bin/python
2 #
3 # GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
4
5 from __future__ import absolute_import, division, print_function
6 __metaclass__ = type
7
8
9 DOCUMENTATION = '''
10 ---
11 module: ec2_customer_gateway
12 version_added: 1.0.0
13 short_description: Manage an AWS customer gateway
14 description:
15 - Manage an AWS customer gateway.
16 author: Michael Baydoun (@MichaelBaydoun)
17 notes:
18 - You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the
19 first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent
20 requests do not create new customer gateway resources.
21 - Return values contain customer_gateway and customer_gateways keys which are identical dicts. You should use
22 customer_gateway. See U(https://github.com/ansible/ansible-modules-extras/issues/2773) for details.
23 options:
24 bgp_asn:
25 description:
26 - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).
27 type: int
28 ip_address:
29 description:
30 - Internet-routable IP address for customers gateway, must be a static address.
31 required: true
32 type: str
33 name:
34 description:
35 - Name of the customer gateway.
36 required: true
37 type: str
38 routing:
39 description:
40 - The type of routing.
41 choices: ['static', 'dynamic']
42 default: dynamic
43 type: str
44 state:
45 description:
46 - Create or terminate the Customer Gateway.
47 default: present
48 choices: [ 'present', 'absent' ]
49 type: str
50 extends_documentation_fragment:
51 - amazon.aws.aws
52 - amazon.aws.ec2
53
54 '''
55
56 EXAMPLES = '''
57 - name: Create Customer Gateway
58 community.aws.ec2_customer_gateway:
59 bgp_asn: 12345
60 ip_address: 1.2.3.4
61 name: IndianapolisOffice
62 region: us-east-1
63 register: cgw
64
65 - name: Delete Customer Gateway
66 community.aws.ec2_customer_gateway:
67 ip_address: 1.2.3.4
68 name: IndianapolisOffice
69 state: absent
70 region: us-east-1
71 register: cgw
72 '''
73
74 RETURN = '''
75 gateway.customer_gateways:
76 description: details about the gateway that was created.
77 returned: success
78 type: complex
79 contains:
80 bgp_asn:
81 description: The Border Gateway Autonomous System Number.
82 returned: when exists and gateway is available.
83 sample: 65123
84 type: str
85 customer_gateway_id:
86 description: gateway id assigned by amazon.
87 returned: when exists and gateway is available.
88 sample: cgw-cb6386a2
89 type: str
90 ip_address:
91 description: ip address of your gateway device.
92 returned: when exists and gateway is available.
93 sample: 1.2.3.4
94 type: str
95 state:
96 description: state of gateway.
97 returned: when gateway exists and is available.
98 sample: available
99 type: str
100 tags:
101 description: Any tags on the gateway.
102 returned: when gateway exists and is available, and when tags exist.
103 type: list
104 type:
105 description: encryption type.
106 returned: when gateway exists and is available.
107 sample: ipsec.1
108 type: str
109 '''
110
111 try:
112 import botocore
113 except ImportError:
114 pass # Handled by AnsibleAWSModule
115
116 from ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict
117
118 from ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule
119 from ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry
120
121
122 class Ec2CustomerGatewayManager:
123
124 def __init__(self, module):
125 self.module = module
126
127 try:
128 self.ec2 = module.client('ec2')
129 except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:
130 module.fail_json_aws(e, msg='Failed to connect to AWS')
131
132 @AWSRetry.jittered_backoff(delay=2, max_delay=30, retries=6, catch_extra_error_codes=['IncorrectState'])
133 def ensure_cgw_absent(self, gw_id):
134 response = self.ec2.delete_customer_gateway(
135 DryRun=False,
136 CustomerGatewayId=gw_id
137 )
138 return response
139
140 def ensure_cgw_present(self, bgp_asn, ip_address):
141 if not bgp_asn:
142 bgp_asn = 65000
143 response = self.ec2.create_customer_gateway(
144 DryRun=False,
145 Type='ipsec.1',
146 PublicIp=ip_address,
147 BgpAsn=bgp_asn,
148 )
149 return response
150
151 def tag_cgw_name(self, gw_id, name):
152 response = self.ec2.create_tags(
153 DryRun=False,
154 Resources=[
155 gw_id,
156 ],
157 Tags=[
158 {
159 'Key': 'Name',
160 'Value': name
161 },
162 ]
163 )
164 return response
165
166 def describe_gateways(self, ip_address):
167 response = self.ec2.describe_customer_gateways(
168 DryRun=False,
169 Filters=[
170 {
171 'Name': 'state',
172 'Values': [
173 'available',
174 ]
175 },
176 {
177 'Name': 'ip-address',
178 'Values': [
179 ip_address,
180 ]
181 }
182 ]
183 )
184 return response
185
186
187 def main():
188 argument_spec = dict(
189 bgp_asn=dict(required=False, type='int'),
190 ip_address=dict(required=True),
191 name=dict(required=True),
192 routing=dict(default='dynamic', choices=['dynamic', 'static']),
193 state=dict(default='present', choices=['present', 'absent']),
194 )
195
196 module = AnsibleAWSModule(
197 argument_spec=argument_spec,
198 supports_check_mode=True,
199 required_if=[
200 ('routing', 'dynamic', ['bgp_asn'])
201 ]
202 )
203
204 gw_mgr = Ec2CustomerGatewayManager(module)
205
206 name = module.params.get('name')
207
208 existing = gw_mgr.describe_gateways(module.params['ip_address'])
209
210 results = dict(changed=False)
211 if module.params['state'] == 'present':
212 if existing['CustomerGateways']:
213 existing['CustomerGateway'] = existing['CustomerGateways'][0]
214 results['gateway'] = existing
215 if existing['CustomerGateway']['Tags']:
216 tag_array = existing['CustomerGateway']['Tags']
217 for key, value in enumerate(tag_array):
218 if value['Key'] == 'Name':
219 current_name = value['Value']
220 if current_name != name:
221 results['name'] = gw_mgr.tag_cgw_name(
222 results['gateway']['CustomerGateway']['CustomerGatewayId'],
223 module.params['name'],
224 )
225 results['changed'] = True
226 else:
227 if not module.check_mode:
228 results['gateway'] = gw_mgr.ensure_cgw_present(
229 module.params['bgp_asn'],
230 module.params['ip_address'],
231 )
232 results['name'] = gw_mgr.tag_cgw_name(
233 results['gateway']['CustomerGateway']['CustomerGatewayId'],
234 module.params['name'],
235 )
236 results['changed'] = True
237
238 elif module.params['state'] == 'absent':
239 if existing['CustomerGateways']:
240 existing['CustomerGateway'] = existing['CustomerGateways'][0]
241 results['gateway'] = existing
242 if not module.check_mode:
243 results['gateway'] = gw_mgr.ensure_cgw_absent(
244 existing['CustomerGateway']['CustomerGatewayId']
245 )
246 results['changed'] = True
247
248 pretty_results = camel_dict_to_snake_dict(results)
249 module.exit_json(**pretty_results)
250
251
252 if __name__ == '__main__':
253 main()
254
[end of plugins/modules/ec2_customer_gateway.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/plugins/modules/ec2_customer_gateway.py b/plugins/modules/ec2_customer_gateway.py
--- a/plugins/modules/ec2_customer_gateway.py
+++ b/plugins/modules/ec2_customer_gateway.py
@@ -23,7 +23,8 @@
options:
bgp_asn:
description:
- - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).
+ - Border Gateway Protocol (BGP) Autonomous System Number (ASN).
+ - Defaults to C(65000) if not specified when I(state=present).
type: int
ip_address:
description:
|
{"golden_diff": "diff --git a/plugins/modules/ec2_customer_gateway.py b/plugins/modules/ec2_customer_gateway.py\n--- a/plugins/modules/ec2_customer_gateway.py\n+++ b/plugins/modules/ec2_customer_gateway.py\n@@ -23,7 +23,8 @@\n options:\n bgp_asn:\n description:\n- - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).\n+ - Border Gateway Protocol (BGP) Autonomous System Number (ASN).\n+ - Defaults to C(65000) if not specified when I(state=present).\n type: int\n ip_address:\n description:\n", "issue": "ec2_customer_gateway: bgp_asn is not required\n### Summary\n\nThe ec2_customer_gateway module has incorrect documentation for the bgp_asn parameter.\r\n\r\nIt says the ASN must be passed when state=present, but the code defaults to 25000 if the parameter is absent. See the ensure_cgw_present() method:\r\n\r\n```\r\n def ensure_cgw_present(self, bgp_asn, ip_address):\r\n if not bgp_asn:\r\n bgp_asn = 65000\r\n response = self.ec2.create_customer_gateway(\r\n DryRun=False,\r\n Type='ipsec.1',\r\n PublicIp=ip_address,\r\n BgpAsn=bgp_asn,\r\n )\r\n return response\n\n### Issue Type\n\nDocumentation Report\n\n### Component Name\n\nec2_customer_gateway\n\n### Ansible Version\n\n```console (paste below)\r\n$ ansible --version\r\nansible [core 2.12.4]\r\n config file = None\r\n configured module search path = ['/home/neil/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/lib/python3.10/site-packages/ansible\r\n ansible collection location = /home/neil/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /home/neil/.local/share/virtualenvs/community.aws-uRL047Ho/bin/ansible\r\n python version = 3.10.1 (main, Jan 10 2022, 00:00:00) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]\r\n jinja version = 3.1.1\r\n libyaml = True\r\n```\r\n\n\n### Collection Versions\n\n```console (paste below)\r\n$ ansible-galaxy collection list\r\n```\r\n\n\n### Configuration\n\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\n\n### OS / Environment\n\nmain branch, as of 2022-04-18.\n\n### Additional Information\n\nSuggested rewording:\r\n\r\n```\r\noptions:\r\n bgp_asn:\r\n description:\r\n - Border Gateway Protocol (BGP) Autonomous System Number (ASN), defaults to 25000.\r\n type: int\r\n```\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct\n", "before_files": [{"content": "#!/usr/bin/python\n#\n# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)\n\nfrom __future__ import absolute_import, division, print_function\n__metaclass__ = type\n\n\nDOCUMENTATION = '''\n---\nmodule: ec2_customer_gateway\nversion_added: 1.0.0\nshort_description: Manage an AWS customer gateway\ndescription:\n - Manage an AWS customer gateway.\nauthor: Michael Baydoun (@MichaelBaydoun)\nnotes:\n - You cannot create more than one customer gateway with the same IP address. If you run an identical request more than one time, the\n first request creates the customer gateway, and subsequent requests return information about the existing customer gateway. The subsequent\n requests do not create new customer gateway resources.\n - Return values contain customer_gateway and customer_gateways keys which are identical dicts. You should use\n customer_gateway. 
See U(https://github.com/ansible/ansible-modules-extras/issues/2773) for details.\noptions:\n bgp_asn:\n description:\n - Border Gateway Protocol (BGP) Autonomous System Number (ASN), required when I(state=present).\n type: int\n ip_address:\n description:\n - Internet-routable IP address for customers gateway, must be a static address.\n required: true\n type: str\n name:\n description:\n - Name of the customer gateway.\n required: true\n type: str\n routing:\n description:\n - The type of routing.\n choices: ['static', 'dynamic']\n default: dynamic\n type: str\n state:\n description:\n - Create or terminate the Customer Gateway.\n default: present\n choices: [ 'present', 'absent' ]\n type: str\nextends_documentation_fragment:\n- amazon.aws.aws\n- amazon.aws.ec2\n\n'''\n\nEXAMPLES = '''\n- name: Create Customer Gateway\n community.aws.ec2_customer_gateway:\n bgp_asn: 12345\n ip_address: 1.2.3.4\n name: IndianapolisOffice\n region: us-east-1\n register: cgw\n\n- name: Delete Customer Gateway\n community.aws.ec2_customer_gateway:\n ip_address: 1.2.3.4\n name: IndianapolisOffice\n state: absent\n region: us-east-1\n register: cgw\n'''\n\nRETURN = '''\ngateway.customer_gateways:\n description: details about the gateway that was created.\n returned: success\n type: complex\n contains:\n bgp_asn:\n description: The Border Gateway Autonomous System Number.\n returned: when exists and gateway is available.\n sample: 65123\n type: str\n customer_gateway_id:\n description: gateway id assigned by amazon.\n returned: when exists and gateway is available.\n sample: cgw-cb6386a2\n type: str\n ip_address:\n description: ip address of your gateway device.\n returned: when exists and gateway is available.\n sample: 1.2.3.4\n type: str\n state:\n description: state of gateway.\n returned: when gateway exists and is available.\n sample: available\n type: str\n tags:\n description: Any tags on the gateway.\n returned: when gateway exists and is available, and when tags exist.\n type: list\n type:\n description: encryption type.\n returned: when gateway exists and is available.\n sample: ipsec.1\n type: str\n'''\n\ntry:\n import botocore\nexcept ImportError:\n pass # Handled by AnsibleAWSModule\n\nfrom ansible.module_utils.common.dict_transformations import camel_dict_to_snake_dict\n\nfrom ansible_collections.amazon.aws.plugins.module_utils.core import AnsibleAWSModule\nfrom ansible_collections.amazon.aws.plugins.module_utils.ec2 import AWSRetry\n\n\nclass Ec2CustomerGatewayManager:\n\n def __init__(self, module):\n self.module = module\n\n try:\n self.ec2 = module.client('ec2')\n except (botocore.exceptions.ClientError, botocore.exceptions.BotoCoreError) as e:\n module.fail_json_aws(e, msg='Failed to connect to AWS')\n\n @AWSRetry.jittered_backoff(delay=2, max_delay=30, retries=6, catch_extra_error_codes=['IncorrectState'])\n def ensure_cgw_absent(self, gw_id):\n response = self.ec2.delete_customer_gateway(\n DryRun=False,\n CustomerGatewayId=gw_id\n )\n return response\n\n def ensure_cgw_present(self, bgp_asn, ip_address):\n if not bgp_asn:\n bgp_asn = 65000\n response = self.ec2.create_customer_gateway(\n DryRun=False,\n Type='ipsec.1',\n PublicIp=ip_address,\n BgpAsn=bgp_asn,\n )\n return response\n\n def tag_cgw_name(self, gw_id, name):\n response = self.ec2.create_tags(\n DryRun=False,\n Resources=[\n gw_id,\n ],\n Tags=[\n {\n 'Key': 'Name',\n 'Value': name\n },\n ]\n )\n return response\n\n def describe_gateways(self, ip_address):\n response = self.ec2.describe_customer_gateways(\n DryRun=False,\n 
Filters=[\n {\n 'Name': 'state',\n 'Values': [\n 'available',\n ]\n },\n {\n 'Name': 'ip-address',\n 'Values': [\n ip_address,\n ]\n }\n ]\n )\n return response\n\n\ndef main():\n argument_spec = dict(\n bgp_asn=dict(required=False, type='int'),\n ip_address=dict(required=True),\n name=dict(required=True),\n routing=dict(default='dynamic', choices=['dynamic', 'static']),\n state=dict(default='present', choices=['present', 'absent']),\n )\n\n module = AnsibleAWSModule(\n argument_spec=argument_spec,\n supports_check_mode=True,\n required_if=[\n ('routing', 'dynamic', ['bgp_asn'])\n ]\n )\n\n gw_mgr = Ec2CustomerGatewayManager(module)\n\n name = module.params.get('name')\n\n existing = gw_mgr.describe_gateways(module.params['ip_address'])\n\n results = dict(changed=False)\n if module.params['state'] == 'present':\n if existing['CustomerGateways']:\n existing['CustomerGateway'] = existing['CustomerGateways'][0]\n results['gateway'] = existing\n if existing['CustomerGateway']['Tags']:\n tag_array = existing['CustomerGateway']['Tags']\n for key, value in enumerate(tag_array):\n if value['Key'] == 'Name':\n current_name = value['Value']\n if current_name != name:\n results['name'] = gw_mgr.tag_cgw_name(\n results['gateway']['CustomerGateway']['CustomerGatewayId'],\n module.params['name'],\n )\n results['changed'] = True\n else:\n if not module.check_mode:\n results['gateway'] = gw_mgr.ensure_cgw_present(\n module.params['bgp_asn'],\n module.params['ip_address'],\n )\n results['name'] = gw_mgr.tag_cgw_name(\n results['gateway']['CustomerGateway']['CustomerGatewayId'],\n module.params['name'],\n )\n results['changed'] = True\n\n elif module.params['state'] == 'absent':\n if existing['CustomerGateways']:\n existing['CustomerGateway'] = existing['CustomerGateways'][0]\n results['gateway'] = existing\n if not module.check_mode:\n results['gateway'] = gw_mgr.ensure_cgw_absent(\n existing['CustomerGateway']['CustomerGatewayId']\n )\n results['changed'] = True\n\n pretty_results = camel_dict_to_snake_dict(results)\n module.exit_json(**pretty_results)\n\n\nif __name__ == '__main__':\n main()\n", "path": "plugins/modules/ec2_customer_gateway.py"}]}
| 3,462 | 136 |
gh_patches_debug_18332
|
rasdani/github-patches
|
git_diff
|
pyjanitor-devs__pyjanitor-569
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[INF] Make requirements.txt smaller
Follow-up from #257
The idea is to have feature-specific requirements files; for example, the biology feature specifically requires biopython.
That way we can install the package per feature as needed, such as with the biology extra: it goes `pip install "pyjanitor[biology]"`
The example of such implementations in `setup.py` is available at this link: https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies
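A minimal sketch of that `extras_require` pattern (the extras and package names below are illustrative, not a final list):
```python
from setuptools import setup

extras = {
    "biology": ["biopython"],
    "spark": ["pyspark"],
}
extras["all"] = sorted({dep for deps in extras.values() for dep in deps})

setup(
    name="pyjanitor",
    install_requires=["pandas"],   # keep the core requirements small
    extras_require=extras,         # enables: pip install "pyjanitor[biology]"
)
```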
</issue>
<code>
[start of setup.py]
1 import re
2 from pathlib import Path
3
4 from setuptools import setup
5
6
7 def requirements():
8 with open("requirements.txt", "r+") as f:
9 return f.read()
10
11
12 def generate_long_description() -> str:
13 """
14 Extra chunks from README for PyPI description.
15
16 Target chunks must be contained within `.. pypi-doc` pair comments,
17 so there must be an even number of comments in README.
18
19 :returns: Extracted description from README
20
21 """
22 # Read the contents of README file
23 this_directory = Path(__file__).parent
24 with open(this_directory / "README.rst", encoding="utf-8") as f:
25 readme = f.read()
26
27 # Find pypi-doc comments in README
28 indices = [m.start() for m in re.finditer(".. pypi-doc", readme)]
29 if len(indices) % 2 != 0:
30 raise Exception("Odd number of `.. pypi-doc` comments in README")
31
32 # Loop through pairs of comments and save text between pairs
33 long_description = ""
34 for i in range(0, len(indices), 2):
35 start_index = indices[i] + 11
36 end_index = indices[i + 1]
37 long_description += readme[start_index:end_index]
38 return long_description
39
40
41 setup(
42 name="pyjanitor",
43 version="0.18.2",
44 description="Tools for cleaning pandas DataFrames",
45 author="Eric J. Ma",
46 author_email="[email protected]",
47 url="https://github.com/ericmjl/pyjanitor",
48 packages=["janitor"],
49 install_requires=requirements(),
50 python_requires=">=3.6",
51 long_description=generate_long_description(),
52 long_description_content_type="text/x-rst",
53 )
54
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -38,6 +38,12 @@
return long_description
+extra_spark = ["pyspark"]
+extra_biology = ["biopython"]
+extra_chemistry = ["rdkit"]
+extra_engineering = ["unyt"]
+extra_all = extra_biology + extra_engineering + extra_spark
+
setup(
name="pyjanitor",
version="0.18.2",
@@ -47,6 +53,14 @@
url="https://github.com/ericmjl/pyjanitor",
packages=["janitor"],
install_requires=requirements(),
+ extras_require={
+ "all": extra_all,
+ "biology": extra_biology,
+ # "chemistry": extra_chemistry, should be inserted once rdkit
+ # fixes https://github.com/rdkit/rdkit/issues/1812
+ "engineering": extra_engineering,
+ "spark": extra_spark,
+ },
python_requires=">=3.6",
long_description=generate_long_description(),
long_description_content_type="text/x-rst",
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -38,6 +38,12 @@\n return long_description\n \n \n+extra_spark = [\"pyspark\"]\n+extra_biology = [\"biopython\"]\n+extra_chemistry = [\"rdkit\"]\n+extra_engineering = [\"unyt\"]\n+extra_all = extra_biology + extra_engineering + extra_spark\n+\n setup(\n name=\"pyjanitor\",\n version=\"0.18.2\",\n@@ -47,6 +53,14 @@\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n+ extras_require={\n+ \"all\": extra_all,\n+ \"biology\": extra_biology,\n+ # \"chemistry\": extra_chemistry, should be inserted once rdkit\n+ # fixes https://github.com/rdkit/rdkit/issues/1812\n+ \"engineering\": extra_engineering,\n+ \"spark\": extra_spark,\n+ },\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n", "issue": "[INF] Make requirements.txt smaller\nFollow-up from #257 \r\n\r\nThe idea is to have feature-specific requirements.txt. Such as for biology, specifically requires biopython.\r\n\r\nso we can install the package per feature as needed, such as with extra biology. It goes `pip install \"pyjanitor[biology]\"`\r\n\r\nThe example of such implementations in `setup.py` is available at this link: https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies\n[INF] Make requirements.txt smaller\nFollow-up from #257 \r\n\r\nThe idea is to have feature-specific requirements.txt. Such as for biology, specifically requires biopython.\r\n\r\nso we can install the package per feature as needed, such as with extra biology. It goes `pip install \"pyjanitor[biology]\"`\r\n\r\nThe example of such implementations in `setup.py` is available at this link: https://setuptools.readthedocs.io/en/latest/setuptools.html#declaring-extras-optional-features-with-their-own-dependencies\n", "before_files": [{"content": "import re\nfrom pathlib import Path\n\nfrom setuptools import setup\n\n\ndef requirements():\n with open(\"requirements.txt\", \"r+\") as f:\n return f.read()\n\n\ndef generate_long_description() -> str:\n \"\"\"\n Extra chunks from README for PyPI description.\n\n Target chunks must be contained within `.. pypi-doc` pair comments,\n so there must be an even number of comments in README.\n\n :returns: Extracted description from README\n\n \"\"\"\n # Read the contents of README file\n this_directory = Path(__file__).parent\n with open(this_directory / \"README.rst\", encoding=\"utf-8\") as f:\n readme = f.read()\n\n # Find pypi-doc comments in README\n indices = [m.start() for m in re.finditer(\".. pypi-doc\", readme)]\n if len(indices) % 2 != 0:\n raise Exception(\"Odd number of `.. pypi-doc` comments in README\")\n\n # Loop through pairs of comments and save text between pairs\n long_description = \"\"\n for i in range(0, len(indices), 2):\n start_index = indices[i] + 11\n end_index = indices[i + 1]\n long_description += readme[start_index:end_index]\n return long_description\n\n\nsetup(\n name=\"pyjanitor\",\n version=\"0.18.2\",\n description=\"Tools for cleaning pandas DataFrames\",\n author=\"Eric J. Ma\",\n author_email=\"[email protected]\",\n url=\"https://github.com/ericmjl/pyjanitor\",\n packages=[\"janitor\"],\n install_requires=requirements(),\n python_requires=\">=3.6\",\n long_description=generate_long_description(),\n long_description_content_type=\"text/x-rst\",\n)\n", "path": "setup.py"}]}
| 1,238 | 263 |
gh_patches_debug_20555
|
rasdani/github-patches
|
git_diff
|
pytorch__vision-8124
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`VisionDataset` abstract class forces to set 'root' parameter, even if it is unused
### 🐛 Describe the bug
`TypeError: __init__() missing 1 required positional argument: 'root'`
when initializing VisionDataset without `root` param.
```python
from torchvision.transforms import ToTensor
from torchvision.datasets import VisionDataset
class ExtendedVisionDataset(VisionDataset):
def __init__(self, **kwargs):
super().__init__(**kwargs)
transforms = ToTensor()
dataset =ExtendedVisionDataset(transforms =transforms) # I dont really need root param
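# Illustration (not in the original report): until `root` is made optional, a
# workaround is to pass a placeholder explicitly -- VisionDataset only expands
# string roots, so None is accepted and stored as-is.
dataset_with_placeholder = ExtendedVisionDataset(root=None, transforms=transforms)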
```
### Versions
```
PyTorch version: 2.0.0
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A
OS: Ubuntu 18.04.3 LTS (x86_64)
GCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Clang version: Could not collect
CMake version: version 3.10.2
Libc version: glibc-2.27
Python version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.27
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla V100-DGXS-32GB
GPU 1: Tesla V100-DGXS-32GB
GPU 2: Tesla V100-DGXS-32GB
GPU 3: Tesla V100-DGXS-32GB
Nvidia driver version: 515.105.01
cuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 40
On-line CPU(s) list: 0-39
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2468.528
CPU max MHz: 3600.0000
CPU min MHz: 1200.0000
BogoMIPS: 4397.69
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 51200K
NUMA node0 CPU(s): 0-39
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d
Versions of relevant libraries:
[pip3] numpy==1.24.4
[pip3] torch==2.0.0
[pip3] torchmetrics==0.10.3
[pip3] torchvision==0.15.0
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-service 2.4.0 py39h5eee18b_1
[conda] mkl_fft 1.3.8 py39h5eee18b_0
[conda] mkl_random 1.2.4 py39hdb19cb5_0
[conda] numpy 1.24.4 pypi_0 pypi
[conda] pytorch 2.0.0 py3.9_cuda11.7_cudnn8.5.0_0 pytorch
[conda] pytorch-cuda 11.7 h778d358_5 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchmetrics 0.10.3 pyhd8ed1ab_0 conda-forge
[conda] torchtriton 2.0.0 py39 pytorch
[conda] torchvision              0.15.0               py39_cu117 pytorch
```
cc @pmeier
</issue>
<code>
[start of torchvision/datasets/vision.py]
1 import os
2 from typing import Any, Callable, List, Optional, Tuple
3
4 import torch.utils.data as data
5
6 from ..utils import _log_api_usage_once
7
8
9 class VisionDataset(data.Dataset):
10 """
11 Base Class For making datasets which are compatible with torchvision.
12 It is necessary to override the ``__getitem__`` and ``__len__`` method.
13
14 Args:
15 root (string): Root directory of dataset.
16 transforms (callable, optional): A function/transforms that takes in
17 an image and a label and returns the transformed versions of both.
18 transform (callable, optional): A function/transform that takes in an PIL image
19 and returns a transformed version. E.g, ``transforms.RandomCrop``
20 target_transform (callable, optional): A function/transform that takes in the
21 target and transforms it.
22
23 .. note::
24
25 :attr:`transforms` and the combination of :attr:`transform` and :attr:`target_transform` are mutually exclusive.
26 """
27
28 _repr_indent = 4
29
30 def __init__(
31 self,
32 root: str,
33 transforms: Optional[Callable] = None,
34 transform: Optional[Callable] = None,
35 target_transform: Optional[Callable] = None,
36 ) -> None:
37 _log_api_usage_once(self)
38 if isinstance(root, str):
39 root = os.path.expanduser(root)
40 self.root = root
41
42 has_transforms = transforms is not None
43 has_separate_transform = transform is not None or target_transform is not None
44 if has_transforms and has_separate_transform:
45 raise ValueError("Only transforms or transform/target_transform can be passed as argument")
46
47 # for backwards-compatibility
48 self.transform = transform
49 self.target_transform = target_transform
50
51 if has_separate_transform:
52 transforms = StandardTransform(transform, target_transform)
53 self.transforms = transforms
54
55 def __getitem__(self, index: int) -> Any:
56 """
57 Args:
58 index (int): Index
59
60 Returns:
61 (Any): Sample and meta data, optionally transformed by the respective transforms.
62 """
63 raise NotImplementedError
64
65 def __len__(self) -> int:
66 raise NotImplementedError
67
68 def __repr__(self) -> str:
69 head = "Dataset " + self.__class__.__name__
70 body = [f"Number of datapoints: {self.__len__()}"]
71 if self.root is not None:
72 body.append(f"Root location: {self.root}")
73 body += self.extra_repr().splitlines()
74 if hasattr(self, "transforms") and self.transforms is not None:
75 body += [repr(self.transforms)]
76 lines = [head] + [" " * self._repr_indent + line for line in body]
77 return "\n".join(lines)
78
79 def _format_transform_repr(self, transform: Callable, head: str) -> List[str]:
80 lines = transform.__repr__().splitlines()
81 return [f"{head}{lines[0]}"] + ["{}{}".format(" " * len(head), line) for line in lines[1:]]
82
83 def extra_repr(self) -> str:
84 return ""
85
86
87 class StandardTransform:
88 def __init__(self, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None) -> None:
89 self.transform = transform
90 self.target_transform = target_transform
91
92 def __call__(self, input: Any, target: Any) -> Tuple[Any, Any]:
93 if self.transform is not None:
94 input = self.transform(input)
95 if self.target_transform is not None:
96 target = self.target_transform(target)
97 return input, target
98
99 def _format_transform_repr(self, transform: Callable, head: str) -> List[str]:
100 lines = transform.__repr__().splitlines()
101 return [f"{head}{lines[0]}"] + ["{}{}".format(" " * len(head), line) for line in lines[1:]]
102
103 def __repr__(self) -> str:
104 body = [self.__class__.__name__]
105 if self.transform is not None:
106 body += self._format_transform_repr(self.transform, "Transform: ")
107 if self.target_transform is not None:
108 body += self._format_transform_repr(self.target_transform, "Target transform: ")
109
110 return "\n".join(body)
111
[end of torchvision/datasets/vision.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/torchvision/datasets/vision.py b/torchvision/datasets/vision.py
--- a/torchvision/datasets/vision.py
+++ b/torchvision/datasets/vision.py
@@ -12,7 +12,7 @@
It is necessary to override the ``__getitem__`` and ``__len__`` method.
Args:
- root (string): Root directory of dataset.
+ root (string, optional): Root directory of dataset. Only used for `__repr__`.
transforms (callable, optional): A function/transforms that takes in
an image and a label and returns the transformed versions of both.
transform (callable, optional): A function/transform that takes in an PIL image
@@ -29,7 +29,7 @@
def __init__(
self,
- root: str,
+ root: Optional[str] = None,
transforms: Optional[Callable] = None,
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
|
{"golden_diff": "diff --git a/torchvision/datasets/vision.py b/torchvision/datasets/vision.py\n--- a/torchvision/datasets/vision.py\n+++ b/torchvision/datasets/vision.py\n@@ -12,7 +12,7 @@\n It is necessary to override the ``__getitem__`` and ``__len__`` method.\n \n Args:\n- root (string): Root directory of dataset.\n+ root (string, optional): Root directory of dataset. Only used for `__repr__`.\n transforms (callable, optional): A function/transforms that takes in\n an image and a label and returns the transformed versions of both.\n transform (callable, optional): A function/transform that takes in an PIL image\n@@ -29,7 +29,7 @@\n \n def __init__(\n self,\n- root: str,\n+ root: Optional[str] = None,\n transforms: Optional[Callable] = None,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n", "issue": "`VisionDataset` abstract class forces to set 'root' parameter, even if it is unused\n### \ud83d\udc1b Describe the bug\n\n`TypeError: __init__() missing 1 required positional argument: 'root'`\r\n\r\nwhen initializing VisionDataset without `root` param.\r\n\r\n```python\r\n\r\nfrom torchvision.transforms import ToTensor\r\nfrom torchvision.datasets import VisionDataset\r\n\r\nclass ExtendedVisionDataset(VisionDataset):\r\n def __init__(self, **kwargs):\r\n super().__init__(**kwargs) \r\n\r\ntransforms = ToTensor()\r\ndataset =ExtendedVisionDataset(transforms =transforms) # I dont really need root param\r\n```\n\n### Versions\n\n```\r\nPyTorch version: 2.0.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: 11.7\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 18.04.3 LTS (x86_64)\r\nGCC version: (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0\r\nClang version: Could not collect\r\nCMake version: version 3.10.2\r\nLibc version: glibc-2.27\r\n\r\nPython version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-88-generic-x86_64-with-glibc2.27\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nCUDA_MODULE_LOADING set to: LAZY\r\nGPU models and configuration: \r\nGPU 0: Tesla V100-DGXS-32GB\r\nGPU 1: Tesla V100-DGXS-32GB\r\nGPU 2: Tesla V100-DGXS-32GB\r\nGPU 3: Tesla V100-DGXS-32GB\r\n\r\nNvidia driver version: 515.105.01\r\ncuDNN version: /usr/lib/x86_64-linux-gnu/libcudnn.so.7.6.3\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: x86_64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 40\r\nOn-line CPU(s) list: 0-39\r\nThread(s) per core: 2\r\nCore(s) per socket: 20\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: GenuineIntel\r\nCPU family: 6\r\nModel: 79\r\nModel name: Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz\r\nStepping: 1\r\nCPU MHz: 2468.528\r\nCPU max MHz: 3600.0000\r\nCPU min MHz: 1200.0000\r\nBogoMIPS: 4397.69\r\nL1d cache: 32K\r\nL1i cache: 32K\r\nL2 cache: 256K\r\nL3 cache: 51200K\r\nNUMA node0 CPU(s): 0-39\r\nFlags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a 
rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts md_clear flush_l1d\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.4\r\n[pip3] torch==2.0.0\r\n[pip3] torchmetrics==0.10.3\r\n[pip3] torchvision==0.15.0\r\n[pip3] triton==2.0.0\r\n[conda] blas 1.0 mkl \r\n[conda] mkl 2023.1.0 h213fc3f_46344 \r\n[conda] mkl-service 2.4.0 py39h5eee18b_1 \r\n[conda] mkl_fft 1.3.8 py39h5eee18b_0 \r\n[conda] mkl_random 1.2.4 py39hdb19cb5_0 \r\n[conda] numpy 1.24.4 pypi_0 pypi\r\n[conda] pytorch 2.0.0 py3.9_cuda11.7_cudnn8.5.0_0 pytorch\r\n[conda] pytorch-cuda 11.7 h778d358_5 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] torchmetrics 0.10.3 pyhd8ed1ab_0 conda-forge\r\n[conda] torchtriton 2.0.0 py39 pytorch\r\n[conda] torchvision 0.15.0 py39_cu117 pytorch```\n\ncc @pmeier\n", "before_files": [{"content": "import os\nfrom typing import Any, Callable, List, Optional, Tuple\n\nimport torch.utils.data as data\n\nfrom ..utils import _log_api_usage_once\n\n\nclass VisionDataset(data.Dataset):\n \"\"\"\n Base Class For making datasets which are compatible with torchvision.\n It is necessary to override the ``__getitem__`` and ``__len__`` method.\n\n Args:\n root (string): Root directory of dataset.\n transforms (callable, optional): A function/transforms that takes in\n an image and a label and returns the transformed versions of both.\n transform (callable, optional): A function/transform that takes in an PIL image\n and returns a transformed version. E.g, ``transforms.RandomCrop``\n target_transform (callable, optional): A function/transform that takes in the\n target and transforms it.\n\n .. note::\n\n :attr:`transforms` and the combination of :attr:`transform` and :attr:`target_transform` are mutually exclusive.\n \"\"\"\n\n _repr_indent = 4\n\n def __init__(\n self,\n root: str,\n transforms: Optional[Callable] = None,\n transform: Optional[Callable] = None,\n target_transform: Optional[Callable] = None,\n ) -> None:\n _log_api_usage_once(self)\n if isinstance(root, str):\n root = os.path.expanduser(root)\n self.root = root\n\n has_transforms = transforms is not None\n has_separate_transform = transform is not None or target_transform is not None\n if has_transforms and has_separate_transform:\n raise ValueError(\"Only transforms or transform/target_transform can be passed as argument\")\n\n # for backwards-compatibility\n self.transform = transform\n self.target_transform = target_transform\n\n if has_separate_transform:\n transforms = StandardTransform(transform, target_transform)\n self.transforms = transforms\n\n def __getitem__(self, index: int) -> Any:\n \"\"\"\n Args:\n index (int): Index\n\n Returns:\n (Any): Sample and meta data, optionally transformed by the respective transforms.\n \"\"\"\n raise NotImplementedError\n\n def __len__(self) -> int:\n raise NotImplementedError\n\n def __repr__(self) -> str:\n head = \"Dataset \" + self.__class__.__name__\n body = [f\"Number of datapoints: {self.__len__()}\"]\n if self.root is not None:\n body.append(f\"Root location: {self.root}\")\n body += self.extra_repr().splitlines()\n if hasattr(self, \"transforms\") and self.transforms is not None:\n body += [repr(self.transforms)]\n lines = [head] + [\" \" * self._repr_indent + line for line in body]\n return \"\\n\".join(lines)\n\n def _format_transform_repr(self, transform: Callable, head: str) -> List[str]:\n lines = transform.__repr__().splitlines()\n return [f\"{head}{lines[0]}\"] + [\"{}{}\".format(\" \" * len(head), line) for 
line in lines[1:]]\n\n def extra_repr(self) -> str:\n return \"\"\n\n\nclass StandardTransform:\n def __init__(self, transform: Optional[Callable] = None, target_transform: Optional[Callable] = None) -> None:\n self.transform = transform\n self.target_transform = target_transform\n\n def __call__(self, input: Any, target: Any) -> Tuple[Any, Any]:\n if self.transform is not None:\n input = self.transform(input)\n if self.target_transform is not None:\n target = self.target_transform(target)\n return input, target\n\n def _format_transform_repr(self, transform: Callable, head: str) -> List[str]:\n lines = transform.__repr__().splitlines()\n return [f\"{head}{lines[0]}\"] + [\"{}{}\".format(\" \" * len(head), line) for line in lines[1:]]\n\n def __repr__(self) -> str:\n body = [self.__class__.__name__]\n if self.transform is not None:\n body += self._format_transform_repr(self.transform, \"Transform: \")\n if self.target_transform is not None:\n body += self._format_transform_repr(self.target_transform, \"Target transform: \")\n\n return \"\\n\".join(body)\n", "path": "torchvision/datasets/vision.py"}]}
| 3,120 | 228 |
gh_patches_debug_30083
|
rasdani/github-patches
|
git_diff
|
mathesar-foundation__mathesar-1188
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
The type inference algorithm should use `TEXT` rather than `VARCHAR`
## Reproduce
1. "New Table" > "Import Data" > "Copy and Paste Text"
1. Paste the following data and proceed to create and view the table.
```txt
first_name
Marge
Homer
Lisa
Bart
Maggie
```
1. From the `columns` API, expect the response for the `first_name` column to have `"type": "TEXT"`
1. Observe instead that the column is `VARCHAR` without a length set.
## Rationale
- I spoke with @kgodey about the Mathesar Text type today and she said that Mathesar should only be configuring either `TEXT` columns or `VARCHAR` columns with a length specified. She may be able to elaborate on the thinking that went into this decision.
## Additional context
- In #1118, we are doing some work to bring the front end into alignment with the above expectations when the user manually configures the DB settings for the Text type.
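As an illustration (not part of the original report), the inferred type can also be confirmed directly in PostgreSQL with SQLAlchemy; the connection string and table name below are hypothetical:
```python
from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql://mathesar@localhost/mathesar_db")   # hypothetical DSN
columns = inspect(engine).get_columns("first_names", schema="public")   # hypothetical table
first_name = next(col for col in columns if col["name"] == "first_name")
print(first_name["type"])  # expected: TEXT rather than VARCHAR once inference is fixed
```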
</issue>
<code>
[start of db/columns/operations/infer_types.py]
1 import logging
2
3 from sqlalchemy import VARCHAR, TEXT, Text
4 from sqlalchemy.exc import DatabaseError
5
6 from db.columns.exceptions import DagCycleError
7 from db.columns.operations.alter import alter_column_type
8 from db.tables.operations.select import get_oid_from_table, reflect_table
9 from db.types.operations.cast import get_supported_alter_column_types
10 from db.types import base
11
12
13 logger = logging.getLogger(__name__)
14
15 MAX_INFERENCE_DAG_DEPTH = 100
16
17 TYPE_INFERENCE_DAG = {
18 base.PostgresType.BOOLEAN.value: [],
19 base.MathesarCustomType.EMAIL.value: [],
20 base.PostgresType.INTERVAL.value: [],
21 base.PostgresType.NUMERIC.value: [
22 base.PostgresType.BOOLEAN.value,
23 ],
24 base.STRING: [
25 base.PostgresType.BOOLEAN.value,
26 base.PostgresType.DATE.value,
27 base.PostgresType.NUMERIC.value,
28 base.MathesarCustomType.MATHESAR_MONEY.value,
29 base.PostgresType.TIMESTAMP_WITHOUT_TIME_ZONE.value,
30 base.PostgresType.TIMESTAMP_WITH_TIME_ZONE.value,
31 # We only infer to TIME_WITHOUT_TIME_ZONE as time zones don't make much sense
32 # without additional date information. See postgres documentation for further
33 # details: https://www.postgresql.org/docs/13/datatype-datetime.html
34 base.PostgresType.TIME_WITHOUT_TIME_ZONE.value,
35 base.PostgresType.INTERVAL.value,
36 base.MathesarCustomType.EMAIL.value,
37 base.MathesarCustomType.URI.value,
38 ],
39 }
40
41
42 def _get_reverse_type_map(engine):
43 supported_types = get_supported_alter_column_types(engine)
44 reverse_type_map = {v: k for k, v in supported_types.items()}
45 reverse_type_map.update(
46 {
47 Text: base.STRING,
48 TEXT: base.STRING,
49 VARCHAR: base.STRING,
50 }
51 )
52 return reverse_type_map
53
54
55 def infer_column_type(schema, table_name, column_name, engine, depth=0, type_inference_dag=TYPE_INFERENCE_DAG):
56 if depth > MAX_INFERENCE_DAG_DEPTH:
57 raise DagCycleError("The type_inference_dag likely has a cycle")
58 reverse_type_map = _get_reverse_type_map(engine)
59
60 table = reflect_table(table_name, schema, engine)
61 column_type = table.columns[column_name].type.__class__
62 column_type_str = reverse_type_map.get(column_type)
63
64 logger.debug(f"column_type_str: {column_type_str}")
65 table_oid = get_oid_from_table(table_name, schema, engine)
66 for type_str in type_inference_dag.get(column_type_str, []):
67 try:
68 with engine.begin() as conn:
69 alter_column_type(table_oid, column_name, engine, conn, type_str)
70 logger.info(f"Column {column_name} altered to type {type_str}")
71 column_type = infer_column_type(
72 schema,
73 table_name,
74 column_name,
75 engine,
76 depth=depth + 1,
77 type_inference_dag=type_inference_dag,
78 )
79 break
80 # It's expected we catch this error when the test to see whether
81 # a type is appropriate for a column fails.
82 except DatabaseError:
83 logger.info(
84 f"Cannot alter column {column_name} to type {type_str}"
85 )
86 return column_type
87
[end of db/columns/operations/infer_types.py]
[start of db/tables/operations/create.py]
1 from sqlalchemy import Column, String, Table, MetaData
2 from sqlalchemy.ext import compiler
3 from sqlalchemy.schema import DDLElement
4
5 from db.columns.utils import init_mathesar_table_column_list_with_defaults
6 from db.schemas.operations.create import create_schema
7
8
9 def create_mathesar_table(name, schema, columns, engine, metadata=None):
10 """
11 This method creates a Postgres table in the specified schema using the
12 given name and column list. It adds internal mathesar columns to the
13 table.
14 """
15 columns = init_mathesar_table_column_list_with_defaults(columns)
16 create_schema(schema, engine)
17 # We need this so that we can create multiple mathesar tables in the
18 # same MetaData, enabling them to reference each other in the
19 # SQLAlchemy context (e.g., for creating a ForeignKey relationship)
20 if metadata is None:
21 metadata = MetaData(bind=engine, schema=schema)
22 table = Table(
23 name,
24 metadata,
25 *columns,
26 schema=schema
27 )
28 table.create(engine)
29 return table
30
31
32 def create_string_column_table(name, schema, column_names, engine):
33 """
34 This method creates a Postgres table in the specified schema, with all
35 columns being String type.
36 """
37 columns_ = [Column(name=column_name, type_=String) for column_name in column_names]
38 table = create_mathesar_table(name, schema, columns_, engine)
39 return table
40
41
42 class CreateTableAs(DDLElement):
43 def __init__(self, name, selectable):
44 self.name = name
45 self.selectable = selectable
46
47
48 @compiler.compiles(CreateTableAs)
49 def compile_create_table_as(element, compiler, **_):
50 return "CREATE TABLE %s AS (%s)" % (
51 element.name,
52 compiler.sql_compiler.process(element.selectable, literal_binds=True),
53 )
54
[end of db/tables/operations/create.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/db/columns/operations/infer_types.py b/db/columns/operations/infer_types.py
--- a/db/columns/operations/infer_types.py
+++ b/db/columns/operations/infer_types.py
@@ -21,7 +21,7 @@
base.PostgresType.NUMERIC.value: [
base.PostgresType.BOOLEAN.value,
],
- base.STRING: [
+ base.PostgresType.TEXT.value: [
base.PostgresType.BOOLEAN.value,
base.PostgresType.DATE.value,
base.PostgresType.NUMERIC.value,
@@ -44,9 +44,9 @@
reverse_type_map = {v: k for k, v in supported_types.items()}
reverse_type_map.update(
{
- Text: base.STRING,
- TEXT: base.STRING,
- VARCHAR: base.STRING,
+ Text: base.PostgresType.TEXT.value,
+ TEXT: base.PostgresType.TEXT.value,
+ VARCHAR: base.PostgresType.TEXT.value,
}
)
return reverse_type_map
diff --git a/db/tables/operations/create.py b/db/tables/operations/create.py
--- a/db/tables/operations/create.py
+++ b/db/tables/operations/create.py
@@ -1,4 +1,4 @@
-from sqlalchemy import Column, String, Table, MetaData
+from sqlalchemy import Column, TEXT, Table, MetaData
from sqlalchemy.ext import compiler
from sqlalchemy.schema import DDLElement
@@ -34,7 +34,7 @@
This method creates a Postgres table in the specified schema, with all
columns being String type.
"""
- columns_ = [Column(name=column_name, type_=String) for column_name in column_names]
+ columns_ = [Column(name=column_name, type_=TEXT) for column_name in column_names]
table = create_mathesar_table(name, schema, columns_, engine)
return table
|
{"golden_diff": "diff --git a/db/columns/operations/infer_types.py b/db/columns/operations/infer_types.py\n--- a/db/columns/operations/infer_types.py\n+++ b/db/columns/operations/infer_types.py\n@@ -21,7 +21,7 @@\n base.PostgresType.NUMERIC.value: [\n base.PostgresType.BOOLEAN.value,\n ],\n- base.STRING: [\n+ base.PostgresType.TEXT.value: [\n base.PostgresType.BOOLEAN.value,\n base.PostgresType.DATE.value,\n base.PostgresType.NUMERIC.value,\n@@ -44,9 +44,9 @@\n reverse_type_map = {v: k for k, v in supported_types.items()}\n reverse_type_map.update(\n {\n- Text: base.STRING,\n- TEXT: base.STRING,\n- VARCHAR: base.STRING,\n+ Text: base.PostgresType.TEXT.value,\n+ TEXT: base.PostgresType.TEXT.value,\n+ VARCHAR: base.PostgresType.TEXT.value,\n }\n )\n return reverse_type_map\ndiff --git a/db/tables/operations/create.py b/db/tables/operations/create.py\n--- a/db/tables/operations/create.py\n+++ b/db/tables/operations/create.py\n@@ -1,4 +1,4 @@\n-from sqlalchemy import Column, String, Table, MetaData\n+from sqlalchemy import Column, TEXT, Table, MetaData\n from sqlalchemy.ext import compiler\n from sqlalchemy.schema import DDLElement\n \n@@ -34,7 +34,7 @@\n This method creates a Postgres table in the specified schema, with all\n columns being String type.\n \"\"\"\n- columns_ = [Column(name=column_name, type_=String) for column_name in column_names]\n+ columns_ = [Column(name=column_name, type_=TEXT) for column_name in column_names]\n table = create_mathesar_table(name, schema, columns_, engine)\n return table\n", "issue": "The type inference algorithm should use `TEXT` rather than `VARCHAR`\n## Reproduce\r\n\r\n1. \"New Table\" > \"Import Data\" > \"Copy and Paste Text\"\r\n\r\n1. Paste the following data and proceed to create and view the table.\r\n\r\n ```txt\r\n first_name\r\n Marge\r\n Homer\r\n Lisa\r\n Bart\r\n Maggie\r\n ```\r\n\r\n1. From the `columns` API, expect the response for the `first_name` column to have `\"type\": \"TEXT\"`\r\n\r\n1. Observe instead that the column is `VARCHAR` without a length set.\r\n\r\n## Rationale\r\n\r\n- I spoke with @kgodey about the Mathesar Text type today and she say that Mathesar should only be configuring either: `TEXT` columns or `VARCHAR` columns with a length specified. 
She may be able to elaborate on the thinking that went into this decision.\r\n\r\n\r\n## Additional context\r\n\r\n- In #1118, we are doing some work to bring the front end into alignment with the above expectations when the user manually configures the DB settings for the Text type.\r\n\r\n\n", "before_files": [{"content": "import logging\n\nfrom sqlalchemy import VARCHAR, TEXT, Text\nfrom sqlalchemy.exc import DatabaseError\n\nfrom db.columns.exceptions import DagCycleError\nfrom db.columns.operations.alter import alter_column_type\nfrom db.tables.operations.select import get_oid_from_table, reflect_table\nfrom db.types.operations.cast import get_supported_alter_column_types\nfrom db.types import base\n\n\nlogger = logging.getLogger(__name__)\n\nMAX_INFERENCE_DAG_DEPTH = 100\n\nTYPE_INFERENCE_DAG = {\n base.PostgresType.BOOLEAN.value: [],\n base.MathesarCustomType.EMAIL.value: [],\n base.PostgresType.INTERVAL.value: [],\n base.PostgresType.NUMERIC.value: [\n base.PostgresType.BOOLEAN.value,\n ],\n base.STRING: [\n base.PostgresType.BOOLEAN.value,\n base.PostgresType.DATE.value,\n base.PostgresType.NUMERIC.value,\n base.MathesarCustomType.MATHESAR_MONEY.value,\n base.PostgresType.TIMESTAMP_WITHOUT_TIME_ZONE.value,\n base.PostgresType.TIMESTAMP_WITH_TIME_ZONE.value,\n # We only infer to TIME_WITHOUT_TIME_ZONE as time zones don't make much sense\n # without additional date information. See postgres documentation for further\n # details: https://www.postgresql.org/docs/13/datatype-datetime.html\n base.PostgresType.TIME_WITHOUT_TIME_ZONE.value,\n base.PostgresType.INTERVAL.value,\n base.MathesarCustomType.EMAIL.value,\n base.MathesarCustomType.URI.value,\n ],\n}\n\n\ndef _get_reverse_type_map(engine):\n supported_types = get_supported_alter_column_types(engine)\n reverse_type_map = {v: k for k, v in supported_types.items()}\n reverse_type_map.update(\n {\n Text: base.STRING,\n TEXT: base.STRING,\n VARCHAR: base.STRING,\n }\n )\n return reverse_type_map\n\n\ndef infer_column_type(schema, table_name, column_name, engine, depth=0, type_inference_dag=TYPE_INFERENCE_DAG):\n if depth > MAX_INFERENCE_DAG_DEPTH:\n raise DagCycleError(\"The type_inference_dag likely has a cycle\")\n reverse_type_map = _get_reverse_type_map(engine)\n\n table = reflect_table(table_name, schema, engine)\n column_type = table.columns[column_name].type.__class__\n column_type_str = reverse_type_map.get(column_type)\n\n logger.debug(f\"column_type_str: {column_type_str}\")\n table_oid = get_oid_from_table(table_name, schema, engine)\n for type_str in type_inference_dag.get(column_type_str, []):\n try:\n with engine.begin() as conn:\n alter_column_type(table_oid, column_name, engine, conn, type_str)\n logger.info(f\"Column {column_name} altered to type {type_str}\")\n column_type = infer_column_type(\n schema,\n table_name,\n column_name,\n engine,\n depth=depth + 1,\n type_inference_dag=type_inference_dag,\n )\n break\n # It's expected we catch this error when the test to see whether\n # a type is appropriate for a column fails.\n except DatabaseError:\n logger.info(\n f\"Cannot alter column {column_name} to type {type_str}\"\n )\n return column_type\n", "path": "db/columns/operations/infer_types.py"}, {"content": "from sqlalchemy import Column, String, Table, MetaData\nfrom sqlalchemy.ext import compiler\nfrom sqlalchemy.schema import DDLElement\n\nfrom db.columns.utils import init_mathesar_table_column_list_with_defaults\nfrom db.schemas.operations.create import create_schema\n\n\ndef create_mathesar_table(name, schema, 
columns, engine, metadata=None):\n \"\"\"\n This method creates a Postgres table in the specified schema using the\n given name and column list. It adds internal mathesar columns to the\n table.\n \"\"\"\n columns = init_mathesar_table_column_list_with_defaults(columns)\n create_schema(schema, engine)\n # We need this so that we can create multiple mathesar tables in the\n # same MetaData, enabling them to reference each other in the\n # SQLAlchemy context (e.g., for creating a ForeignKey relationship)\n if metadata is None:\n metadata = MetaData(bind=engine, schema=schema)\n table = Table(\n name,\n metadata,\n *columns,\n schema=schema\n )\n table.create(engine)\n return table\n\n\ndef create_string_column_table(name, schema, column_names, engine):\n \"\"\"\n This method creates a Postgres table in the specified schema, with all\n columns being String type.\n \"\"\"\n columns_ = [Column(name=column_name, type_=String) for column_name in column_names]\n table = create_mathesar_table(name, schema, columns_, engine)\n return table\n\n\nclass CreateTableAs(DDLElement):\n def __init__(self, name, selectable):\n self.name = name\n self.selectable = selectable\n\n\[email protected](CreateTableAs)\ndef compile_create_table_as(element, compiler, **_):\n return \"CREATE TABLE %s AS (%s)\" % (\n element.name,\n compiler.sql_compiler.process(element.selectable, literal_binds=True),\n )\n", "path": "db/tables/operations/create.py"}]}
| 2,157 | 415 |
gh_patches_debug_36545 | rasdani/github-patches | git_diff | translate__pootle-6680 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Try simpler language code as fallback before settings.LANGUAGE_CODE
In https://github.com/translate/pootle/blob/10913224/pootle/i18n/override.py#L87-L101, the language code `it-IT` (for example) is tried and eventually falls back to `settings.LANGUAGE_CODE`, but it makes sense to first try `it` (the simpler version of `it-IT`) before falling back to `settings.LANGUAGE_CODE`.
</issue>
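A minimal sketch (added for illustration, not from the original issue and not Pootle's actual code) of the fallback order being requested: the exact code first, then its simpler base form, and only then the configured default. The helper name `pick_language` and the `supported` dict are assumptions; the patch later in this record implements the same idea via a `get_language_supported()` helper.

```python
# Illustrative sketch of the requested fallback order.
def pick_language(requested, supported, default="en-us"):
    normalized = requested.lower().replace("_", "-")
    if normalized in supported:
        return normalized
    base = normalized.split("-")[0]       # "it-it" -> "it"
    if base in supported:
        return base
    return default                        # only now fall back to the default

print(pick_language("it-IT", {"it": "Italian", "fr": "French"}))  # -> it
```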
<code>
[start of pootle/i18n/override.py]
1 # -*- coding: utf-8 -*-
2 #
3 # Copyright (C) Pootle contributors.
4 #
5 # This file is a part of the Pootle project. It is distributed under the GPL3
6 # or later license. See the LICENSE file for a copy of the license and the
7 # AUTHORS file for copyright and authorship information.
8
9 """Overrides and support functions for arbitrary locale support."""
10
11 import os
12
13 from translate.lang import data
14
15 from django.utils import translation
16 from django.utils.translation import LANGUAGE_SESSION_KEY, trans_real
17
18 from pootle.i18n import gettext
19
20
21 def find_languages(locale_path):
22 """Generate supported languages list from the :param:`locale_path`
23 directory.
24 """
25 dirs = os.listdir(locale_path)
26 langs = []
27 for lang in dirs:
28 if (data.langcode_re.match(lang) and
29 os.path.isdir(os.path.join(locale_path, lang))):
30 langs.append((trans_real.to_language(lang),
31 data.languages.get(lang, (lang,))[0]))
32 return langs
33
34
35 def supported_langs():
36 """Returns a list of supported locales."""
37 from django.conf import settings
38 return settings.LANGUAGES
39
40
41 def get_lang_from_session(request, supported):
42 if hasattr(request, 'session'):
43 lang_code = request.session.get(LANGUAGE_SESSION_KEY, None)
44 if lang_code and lang_code in supported:
45 return lang_code
46
47 return None
48
49
50 def get_lang_from_cookie(request, supported):
51 """See if the user's browser sent a cookie with a preferred language."""
52 from django.conf import settings
53 lang_code = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)
54
55 if lang_code and lang_code in supported:
56 return lang_code
57
58 return None
59
60
61 def get_lang_from_http_header(request, supported):
62 """If the user's browser sends a list of preferred languages in the
63 HTTP_ACCEPT_LANGUAGE header, parse it into a list. Then walk through
64 the list, and for each entry, we check whether we have a matching
65 pootle translation project. If so, we return it.
66
67 If nothing is found, return None.
68 """
69 accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '')
70 for accept_lang, __ in trans_real.parse_accept_lang_header(accept):
71 if accept_lang == '*':
72 return None
73
74 normalized = data.normalize_code(data.simplify_to_common(accept_lang))
75 if normalized in ['en-us', 'en']:
76 return None
77 if normalized in supported:
78 return normalized
79
80 # FIXME: horribly slow way of dealing with languages with @ in them
81 for lang in supported.keys():
82 if normalized == data.normalize_code(lang):
83 return lang
84 return None
85
86
87 def get_language_from_request(request, check_path=False):
88 """Try to get the user's preferred language by first checking the
89 cookie and then by checking the HTTP language headers.
90
91 If all fails, try fall back to default language.
92 """
93 supported = dict(supported_langs())
94 for lang_getter in (get_lang_from_session,
95 get_lang_from_cookie,
96 get_lang_from_http_header):
97 lang = lang_getter(request, supported)
98 if lang is not None:
99 return lang
100 from django.conf import settings
101 return settings.LANGUAGE_CODE
102
103
104 def get_language_bidi():
105 """Override for Django's get_language_bidi that's aware of more RTL
106 languages.
107 """
108 return gettext.language_dir(translation.get_language()) == 'rtl'
109
110
111 def hijack_translation():
112 """Sabotage Django's fascist linguistical regime."""
113 # Override functions that check if language is known to Django
114 translation.check_for_language = lambda lang_code: True
115 trans_real.check_for_language = lambda lang_code: True
116 translation.get_language_from_request = get_language_from_request
117
118 # Override django's inadequate bidi detection
119 translation.get_language_bidi = get_language_bidi
120
[end of pootle/i18n/override.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pootle/i18n/override.py b/pootle/i18n/override.py
--- a/pootle/i18n/override.py
+++ b/pootle/i18n/override.py
@@ -38,24 +38,35 @@
return settings.LANGUAGES
-def get_lang_from_session(request, supported):
- if hasattr(request, 'session'):
- lang_code = request.session.get(LANGUAGE_SESSION_KEY, None)
- if lang_code and lang_code in supported:
- return lang_code
+def get_language_supported(lang_code, supported):
+ normalized = data.normalize_code(data.simplify_to_common(lang_code))
+ if normalized in supported:
+ return normalized
+
+ # FIXME: horribly slow way of dealing with languages with @ in them
+ for lang in supported.keys():
+ if normalized == data.normalize_code(lang):
+ return lang
return None
+def get_lang_from_session(request, supported):
+ if not hasattr(request, 'session'):
+ return None
+ lang_code = request.session.get(LANGUAGE_SESSION_KEY, None)
+ if not lang_code:
+ return None
+ return get_language_supported(lang_code, supported)
+
+
def get_lang_from_cookie(request, supported):
"""See if the user's browser sent a cookie with a preferred language."""
from django.conf import settings
lang_code = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)
-
- if lang_code and lang_code in supported:
- return lang_code
-
- return None
+ if not lang_code:
+ return None
+ return get_language_supported(lang_code, supported)
def get_lang_from_http_header(request, supported):
@@ -70,17 +81,9 @@
for accept_lang, __ in trans_real.parse_accept_lang_header(accept):
if accept_lang == '*':
return None
-
- normalized = data.normalize_code(data.simplify_to_common(accept_lang))
- if normalized in ['en-us', 'en']:
- return None
- if normalized in supported:
- return normalized
-
- # FIXME: horribly slow way of dealing with languages with @ in them
- for lang in supported.keys():
- if normalized == data.normalize_code(lang):
- return lang
+ supported_lang = get_language_supported(accept_lang, supported)
+ if supported_lang:
+ return supported_lang
return None
@@ -98,7 +101,9 @@
if lang is not None:
return lang
from django.conf import settings
- return settings.LANGUAGE_CODE
+ if settings.LANGUAGE_CODE in supported:
+ return settings.LANGUAGE_CODE
+ return 'en-us'
def get_language_bidi():
|
{"golden_diff": "diff --git a/pootle/i18n/override.py b/pootle/i18n/override.py\n--- a/pootle/i18n/override.py\n+++ b/pootle/i18n/override.py\n@@ -38,24 +38,35 @@\n return settings.LANGUAGES\n \n \n-def get_lang_from_session(request, supported):\n- if hasattr(request, 'session'):\n- lang_code = request.session.get(LANGUAGE_SESSION_KEY, None)\n- if lang_code and lang_code in supported:\n- return lang_code\n+def get_language_supported(lang_code, supported):\n+ normalized = data.normalize_code(data.simplify_to_common(lang_code))\n+ if normalized in supported:\n+ return normalized\n+\n+ # FIXME: horribly slow way of dealing with languages with @ in them\n+ for lang in supported.keys():\n+ if normalized == data.normalize_code(lang):\n+ return lang\n \n return None\n \n \n+def get_lang_from_session(request, supported):\n+ if not hasattr(request, 'session'):\n+ return None\n+ lang_code = request.session.get(LANGUAGE_SESSION_KEY, None)\n+ if not lang_code:\n+ return None\n+ return get_language_supported(lang_code, supported)\n+\n+\n def get_lang_from_cookie(request, supported):\n \"\"\"See if the user's browser sent a cookie with a preferred language.\"\"\"\n from django.conf import settings\n lang_code = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)\n-\n- if lang_code and lang_code in supported:\n- return lang_code\n-\n- return None\n+ if not lang_code:\n+ return None\n+ return get_language_supported(lang_code, supported)\n \n \n def get_lang_from_http_header(request, supported):\n@@ -70,17 +81,9 @@\n for accept_lang, __ in trans_real.parse_accept_lang_header(accept):\n if accept_lang == '*':\n return None\n-\n- normalized = data.normalize_code(data.simplify_to_common(accept_lang))\n- if normalized in ['en-us', 'en']:\n- return None\n- if normalized in supported:\n- return normalized\n-\n- # FIXME: horribly slow way of dealing with languages with @ in them\n- for lang in supported.keys():\n- if normalized == data.normalize_code(lang):\n- return lang\n+ supported_lang = get_language_supported(accept_lang, supported)\n+ if supported_lang:\n+ return supported_lang\n return None\n \n \n@@ -98,7 +101,9 @@\n if lang is not None:\n return lang\n from django.conf import settings\n- return settings.LANGUAGE_CODE\n+ if settings.LANGUAGE_CODE in supported:\n+ return settings.LANGUAGE_CODE\n+ return 'en-us'\n \n \n def get_language_bidi():\n", "issue": "Try simpler language code as fallback before settings.LANGUAGE_CODE\nIn https://github.com/translate/pootle/blob/10913224/pootle/i18n/override.py#L87-L101 if the language code `it-IT` (for example) is tried and eventually falls back to `settings.LANGUAGE_CODE`, but it makes sense to first try `it` (simpler version of `it-IT`) before falling back to `settings.LANGUAGE_CODE`.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. 
See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\n\"\"\"Overrides and support functions for arbitrary locale support.\"\"\"\n\nimport os\n\nfrom translate.lang import data\n\nfrom django.utils import translation\nfrom django.utils.translation import LANGUAGE_SESSION_KEY, trans_real\n\nfrom pootle.i18n import gettext\n\n\ndef find_languages(locale_path):\n \"\"\"Generate supported languages list from the :param:`locale_path`\n directory.\n \"\"\"\n dirs = os.listdir(locale_path)\n langs = []\n for lang in dirs:\n if (data.langcode_re.match(lang) and\n os.path.isdir(os.path.join(locale_path, lang))):\n langs.append((trans_real.to_language(lang),\n data.languages.get(lang, (lang,))[0]))\n return langs\n\n\ndef supported_langs():\n \"\"\"Returns a list of supported locales.\"\"\"\n from django.conf import settings\n return settings.LANGUAGES\n\n\ndef get_lang_from_session(request, supported):\n if hasattr(request, 'session'):\n lang_code = request.session.get(LANGUAGE_SESSION_KEY, None)\n if lang_code and lang_code in supported:\n return lang_code\n\n return None\n\n\ndef get_lang_from_cookie(request, supported):\n \"\"\"See if the user's browser sent a cookie with a preferred language.\"\"\"\n from django.conf import settings\n lang_code = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)\n\n if lang_code and lang_code in supported:\n return lang_code\n\n return None\n\n\ndef get_lang_from_http_header(request, supported):\n \"\"\"If the user's browser sends a list of preferred languages in the\n HTTP_ACCEPT_LANGUAGE header, parse it into a list. Then walk through\n the list, and for each entry, we check whether we have a matching\n pootle translation project. If so, we return it.\n\n If nothing is found, return None.\n \"\"\"\n accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '')\n for accept_lang, __ in trans_real.parse_accept_lang_header(accept):\n if accept_lang == '*':\n return None\n\n normalized = data.normalize_code(data.simplify_to_common(accept_lang))\n if normalized in ['en-us', 'en']:\n return None\n if normalized in supported:\n return normalized\n\n # FIXME: horribly slow way of dealing with languages with @ in them\n for lang in supported.keys():\n if normalized == data.normalize_code(lang):\n return lang\n return None\n\n\ndef get_language_from_request(request, check_path=False):\n \"\"\"Try to get the user's preferred language by first checking the\n cookie and then by checking the HTTP language headers.\n\n If all fails, try fall back to default language.\n \"\"\"\n supported = dict(supported_langs())\n for lang_getter in (get_lang_from_session,\n get_lang_from_cookie,\n get_lang_from_http_header):\n lang = lang_getter(request, supported)\n if lang is not None:\n return lang\n from django.conf import settings\n return settings.LANGUAGE_CODE\n\n\ndef get_language_bidi():\n \"\"\"Override for Django's get_language_bidi that's aware of more RTL\n languages.\n \"\"\"\n return gettext.language_dir(translation.get_language()) == 'rtl'\n\n\ndef hijack_translation():\n \"\"\"Sabotage Django's fascist linguistical regime.\"\"\"\n # Override functions that check if language is known to Django\n translation.check_for_language = lambda lang_code: True\n trans_real.check_for_language = lambda lang_code: True\n translation.get_language_from_request = get_language_from_request\n\n # Override django's inadequate bidi detection\n translation.get_language_bidi = get_language_bidi\n", "path": "pootle/i18n/override.py"}]}
| 1,742 | 607 |
gh_patches_debug_7348 | rasdani/github-patches | git_diff | Kinto__kinto-1131 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Memory backend sometimes show empty permissions
If you make multiple requests against the memory backend, the empty permissions cycle between showing and not showing. The same does not happen with postgres.
```json
gsurita-30820:kinto gsurita$ echo '{"permissions": {"read": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a
{
"data": {
"id": "b1",
"last_modified": 1485553456205
},
"permissions": {
"write": [
"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb"
]
}
}
gsurita-30820:kinto gsurita$ echo '{"permissions": {"read": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a
{
"data": {
"id": "b1",
"last_modified": 1485553470501
},
"permissions": {
"collection:create": [],
"group:create": [],
"read": [],
"write": [
"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb"
]
}
}
gsurita-30820:kinto gsurita$ echo '{"permissions": {"read": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a
{
"data": {
"id": "b1",
"last_modified": 1485553471419
},
"permissions": {
"write": [
"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb"
]
}
}
gsurita-30820:kinto gsurita$ echo '{"permissions": {"read": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a
{
"data": {
"id": "b1",
"last_modified": 1485553472203
},
"permissions": {
"collection:create": [],
"group:create": [],
"read": [],
"write": [
"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb"
]
}
}
</issue>
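The cycling can be traced to `replace_object_permissions()` in the memory backend shown below: a request carrying an empty principal list stores an empty set (so the empty permission shows up afterwards), and the next identical request deletes that entry again. A self-contained toggle reproduction follows; the plain dict and the simplified object id are assumptions made for illustration.

```python
# Mirrors the store/delete toggle in Permission.replace_object_permissions().
store = {}

def replace_object_permissions(object_id, permissions):
    for permission, principals in permissions.items():
        key = "permission:{}:{}".format(object_id, permission)
        if key in store and len(principals) == 0:
            del store[key]                 # 2nd, 4th, ... request: entry removed
        else:
            store[key] = set(principals)   # 1st, 3rd, ... request: empty set stored

for attempt in range(4):
    replace_object_permissions("/buckets/b1", {"read": []})
    print(attempt, sorted(store))
# 0 ['permission:/buckets/b1:read']
# 1 []
# 2 ['permission:/buckets/b1:read']
# 3 []
```

The patch further down in this record avoids the toggle by never storing the empty set in the first place (`elif principals:`).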
<code>
[start of kinto/core/permission/memory.py]
1 import re
2
3 from kinto.core.decorators import synchronized
4 from kinto.core.permission import PermissionBase
5
6
7 class Permission(PermissionBase):
8 """Permission backend implementation in local process memory.
9
10 Enable in configuration::
11
12 kinto.permission_backend = kinto.core.permission.memory
13
14 :noindex:
15 """
16
17 def __init__(self, *args, **kwargs):
18 super().__init__(*args, **kwargs)
19 self.flush()
20
21 def initialize_schema(self, dry_run=False):
22 # Nothing to do.
23 pass
24
25 def flush(self):
26 self._store = {}
27
28 @synchronized
29 def add_user_principal(self, user_id, principal):
30 user_key = 'user:{}'.format(user_id)
31 user_principals = self._store.get(user_key, set())
32 user_principals.add(principal)
33 self._store[user_key] = user_principals
34
35 @synchronized
36 def remove_user_principal(self, user_id, principal):
37 user_key = 'user:{}'.format(user_id)
38 user_principals = self._store.get(user_key, set())
39 try:
40 user_principals.remove(principal)
41 except KeyError:
42 pass
43 if len(user_principals) == 0:
44 if user_key in self._store:
45 del self._store[user_key]
46 else:
47 self._store[user_key] = user_principals
48
49 @synchronized
50 def remove_principal(self, principal):
51 for user_principals in self._store.values():
52 try:
53 user_principals.remove(principal)
54 except KeyError:
55 pass
56
57 @synchronized
58 def get_user_principals(self, user_id):
59 # Fetch the groups the user is in.
60 user_key = 'user:{}'.format(user_id)
61 members = self._store.get(user_key, set())
62 # Fetch the groups system.Authenticated is in.
63 group_authenticated = self._store.get('user:system.Authenticated', set())
64 return members | group_authenticated
65
66 @synchronized
67 def add_principal_to_ace(self, object_id, permission, principal):
68 permission_key = 'permission:{}:{}'.format(object_id, permission)
69 object_permission_principals = self._store.get(permission_key, set())
70 object_permission_principals.add(principal)
71 self._store[permission_key] = object_permission_principals
72
73 @synchronized
74 def remove_principal_from_ace(self, object_id, permission, principal):
75 permission_key = 'permission:{}:{}'.format(object_id, permission)
76 object_permission_principals = self._store.get(permission_key, set())
77 try:
78 object_permission_principals.remove(principal)
79 except KeyError:
80 pass
81 if len(object_permission_principals) == 0:
82 if permission_key in self._store:
83 del self._store[permission_key]
84 else:
85 self._store[permission_key] = object_permission_principals
86
87 @synchronized
88 def get_object_permission_principals(self, object_id, permission):
89 permission_key = 'permission:{}:{}'.format(object_id, permission)
90 members = self._store.get(permission_key, set())
91 return members
92
93 @synchronized
94 def get_accessible_objects(self, principals, bound_permissions=None, with_children=True):
95 principals = set(principals)
96 candidates = []
97 if bound_permissions is None:
98 for key, value in self._store.items():
99 _, object_id, permission = key.split(':', 2)
100 candidates.append((object_id, permission, value))
101 else:
102 for pattern, perm in bound_permissions:
103 id_match = '.*' if with_children else '[^/]+'
104 regexp = re.compile('^{}$'.format(pattern.replace('*', id_match)))
105 for key, value in self._store.items():
106 if key.endswith(perm):
107 object_id = key.split(':')[1]
108 if regexp.match(object_id):
109 candidates.append((object_id, perm, value))
110
111 perms_by_object_id = {}
112 for (object_id, perm, value) in candidates:
113 if len(principals & value) > 0:
114 perms_by_object_id.setdefault(object_id, set()).add(perm)
115 return perms_by_object_id
116
117 @synchronized
118 def get_authorized_principals(self, bound_permissions):
119 principals = set()
120 for obj_id, perm in bound_permissions:
121 principals |= self.get_object_permission_principals(obj_id, perm)
122 return principals
123
124 @synchronized
125 def get_objects_permissions(self, objects_ids, permissions=None):
126 result = []
127 for object_id in objects_ids:
128 if permissions is None:
129 aces = [k for k in self._store.keys()
130 if k.startswith('permission:{}:'.format(object_id))]
131 else:
132 aces = ['permission:{}:{}'.format(object_id, permission)
133 for permission in permissions]
134 perms = {}
135 for ace in aces:
136 # Should work with 'permission:/url/id:record:create'.
137 permission = ace.split(':', 2)[2]
138 perms[permission] = set(self._store[ace])
139 result.append(perms)
140 return result
141
142 @synchronized
143 def replace_object_permissions(self, object_id, permissions):
144 for permission, principals in permissions.items():
145 permission_key = 'permission:{}:{}'.format(object_id, permission)
146 if permission_key in self._store and len(principals) == 0:
147 del self._store[permission_key]
148 else:
149 self._store[permission_key] = set(principals)
150 return permissions
151
152 @synchronized
153 def delete_object_permissions(self, *object_id_list):
154 to_delete = []
155 for key in self._store.keys():
156 object_id = key.split(':')[1]
157 for pattern in object_id_list:
158 regexp = re.compile('^{}$'.format(pattern.replace('*', '.*')))
159 if regexp.match(object_id):
160 to_delete.append(key)
161 for k in to_delete:
162 del self._store[k]
163
164
165 def load_from_config(config):
166 return Permission()
167
[end of kinto/core/permission/memory.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kinto/core/permission/memory.py b/kinto/core/permission/memory.py
--- a/kinto/core/permission/memory.py
+++ b/kinto/core/permission/memory.py
@@ -145,7 +145,7 @@
permission_key = 'permission:{}:{}'.format(object_id, permission)
if permission_key in self._store and len(principals) == 0:
del self._store[permission_key]
- else:
+ elif principals:
self._store[permission_key] = set(principals)
return permissions
|
{"golden_diff": "diff --git a/kinto/core/permission/memory.py b/kinto/core/permission/memory.py\n--- a/kinto/core/permission/memory.py\n+++ b/kinto/core/permission/memory.py\n@@ -145,7 +145,7 @@\n permission_key = 'permission:{}:{}'.format(object_id, permission)\n if permission_key in self._store and len(principals) == 0:\n del self._store[permission_key]\n- else:\n+ elif principals:\n self._store[permission_key] = set(principals)\n return permissions\n", "issue": "Memory backend sometimes show empty permissions\nIf you do multiple requests on memory backend, the empty permissions cycle between showing and not showing. The same does not happen with postgres.\r\n\r\n```json\r\ngsurita-30820:kinto gsurita$ echo '{\"permissions\": {\"read\": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a\r\n\r\n{\r\n \"data\": {\r\n \"id\": \"b1\",\r\n \"last_modified\": 1485553456205\r\n },\r\n \"permissions\": {\r\n \"write\": [\r\n \"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb\"\r\n ]\r\n }\r\n}\r\n\r\ngsurita-30820:kinto gsurita$ echo '{\"permissions\": {\"read\": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a\r\n\r\n{\r\n \"data\": {\r\n \"id\": \"b1\",\r\n \"last_modified\": 1485553470501\r\n },\r\n \"permissions\": {\r\n \"collection:create\": [],\r\n \"group:create\": [],\r\n \"read\": [],\r\n \"write\": [\r\n \"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb\"\r\n ]\r\n }\r\n}\r\n\r\ngsurita-30820:kinto gsurita$ echo '{\"permissions\": {\"read\": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a\r\n\r\n{\r\n \"data\": {\r\n \"id\": \"b1\",\r\n \"last_modified\": 1485553471419\r\n },\r\n \"permissions\": {\r\n \"write\": [\r\n \"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb\"\r\n ]\r\n }\r\n}\r\n\r\ngsurita-30820:kinto gsurita$ echo '{\"permissions\": {\"read\": []}}' | http put localhost:8888/v1/buckets/b1 -a a:a\r\n\r\n{\r\n \"data\": {\r\n \"id\": \"b1\",\r\n \"last_modified\": 1485553472203\r\n },\r\n \"permissions\": {\r\n \"collection:create\": [],\r\n \"group:create\": [],\r\n \"read\": [],\r\n \"write\": [\r\n \"basicauth:80866b4d0726f35eda20b90bc479a38727c99c68d7c88a87f3b860726a79daeb\"\r\n ]\r\n }\r\n}\r\n\n", "before_files": [{"content": "import re\n\nfrom kinto.core.decorators import synchronized\nfrom kinto.core.permission import PermissionBase\n\n\nclass Permission(PermissionBase):\n \"\"\"Permission backend implementation in local process memory.\n\n Enable in configuration::\n\n kinto.permission_backend = kinto.core.permission.memory\n\n :noindex:\n \"\"\"\n\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self.flush()\n\n def initialize_schema(self, dry_run=False):\n # Nothing to do.\n pass\n\n def flush(self):\n self._store = {}\n\n @synchronized\n def add_user_principal(self, user_id, principal):\n user_key = 'user:{}'.format(user_id)\n user_principals = self._store.get(user_key, set())\n user_principals.add(principal)\n self._store[user_key] = user_principals\n\n @synchronized\n def remove_user_principal(self, user_id, principal):\n user_key = 'user:{}'.format(user_id)\n user_principals = self._store.get(user_key, set())\n try:\n user_principals.remove(principal)\n except KeyError:\n pass\n if len(user_principals) == 0:\n if user_key in self._store:\n del self._store[user_key]\n else:\n self._store[user_key] = user_principals\n\n @synchronized\n def remove_principal(self, principal):\n for user_principals in self._store.values():\n try:\n 
user_principals.remove(principal)\n except KeyError:\n pass\n\n @synchronized\n def get_user_principals(self, user_id):\n # Fetch the groups the user is in.\n user_key = 'user:{}'.format(user_id)\n members = self._store.get(user_key, set())\n # Fetch the groups system.Authenticated is in.\n group_authenticated = self._store.get('user:system.Authenticated', set())\n return members | group_authenticated\n\n @synchronized\n def add_principal_to_ace(self, object_id, permission, principal):\n permission_key = 'permission:{}:{}'.format(object_id, permission)\n object_permission_principals = self._store.get(permission_key, set())\n object_permission_principals.add(principal)\n self._store[permission_key] = object_permission_principals\n\n @synchronized\n def remove_principal_from_ace(self, object_id, permission, principal):\n permission_key = 'permission:{}:{}'.format(object_id, permission)\n object_permission_principals = self._store.get(permission_key, set())\n try:\n object_permission_principals.remove(principal)\n except KeyError:\n pass\n if len(object_permission_principals) == 0:\n if permission_key in self._store:\n del self._store[permission_key]\n else:\n self._store[permission_key] = object_permission_principals\n\n @synchronized\n def get_object_permission_principals(self, object_id, permission):\n permission_key = 'permission:{}:{}'.format(object_id, permission)\n members = self._store.get(permission_key, set())\n return members\n\n @synchronized\n def get_accessible_objects(self, principals, bound_permissions=None, with_children=True):\n principals = set(principals)\n candidates = []\n if bound_permissions is None:\n for key, value in self._store.items():\n _, object_id, permission = key.split(':', 2)\n candidates.append((object_id, permission, value))\n else:\n for pattern, perm in bound_permissions:\n id_match = '.*' if with_children else '[^/]+'\n regexp = re.compile('^{}$'.format(pattern.replace('*', id_match)))\n for key, value in self._store.items():\n if key.endswith(perm):\n object_id = key.split(':')[1]\n if regexp.match(object_id):\n candidates.append((object_id, perm, value))\n\n perms_by_object_id = {}\n for (object_id, perm, value) in candidates:\n if len(principals & value) > 0:\n perms_by_object_id.setdefault(object_id, set()).add(perm)\n return perms_by_object_id\n\n @synchronized\n def get_authorized_principals(self, bound_permissions):\n principals = set()\n for obj_id, perm in bound_permissions:\n principals |= self.get_object_permission_principals(obj_id, perm)\n return principals\n\n @synchronized\n def get_objects_permissions(self, objects_ids, permissions=None):\n result = []\n for object_id in objects_ids:\n if permissions is None:\n aces = [k for k in self._store.keys()\n if k.startswith('permission:{}:'.format(object_id))]\n else:\n aces = ['permission:{}:{}'.format(object_id, permission)\n for permission in permissions]\n perms = {}\n for ace in aces:\n # Should work with 'permission:/url/id:record:create'.\n permission = ace.split(':', 2)[2]\n perms[permission] = set(self._store[ace])\n result.append(perms)\n return result\n\n @synchronized\n def replace_object_permissions(self, object_id, permissions):\n for permission, principals in permissions.items():\n permission_key = 'permission:{}:{}'.format(object_id, permission)\n if permission_key in self._store and len(principals) == 0:\n del self._store[permission_key]\n else:\n self._store[permission_key] = set(principals)\n return permissions\n\n @synchronized\n def delete_object_permissions(self, 
*object_id_list):\n to_delete = []\n for key in self._store.keys():\n object_id = key.split(':')[1]\n for pattern in object_id_list:\n regexp = re.compile('^{}$'.format(pattern.replace('*', '.*')))\n if regexp.match(object_id):\n to_delete.append(key)\n for k in to_delete:\n del self._store[k]\n\n\ndef load_from_config(config):\n return Permission()\n", "path": "kinto/core/permission/memory.py"}]}
| 2,955 | 123 |
gh_patches_debug_23153 | rasdani/github-patches | git_diff | jazzband__pip-tools-1172 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Feature: pip-sync --dry-run should return a non-zero exit code if changes were to occur
#### What's the problem this feature will solve?
I'm looking to add a pre-commit hook to check that the environment is up to date:
```
- repo: local
hooks:
- id: pip-sync
name: pip-sync check
entry: pip-sync --dry-run
language: system
always_run: true
pass_filenames: false
```
#### Describe the solution you'd like
```
$ pip-sync --dry-run
Would install:
numpy==1.18.5
$ echo $?
2
```
#### Alternative Solutions
various | awk stuff
#### Additional context
NA
</issue>
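A trimmed sketch of the requested behaviour, in the same spirit as the patch included further down in this record: `sync()` still returns 0 when nothing would change, but returns a non-zero code when a dry run reports pending changes, so a pre-commit hook can fail. The signature is simplified and the actual pip calls are omitted.

```python
# Sketch only: report pending changes through the return value.
def sync(to_install, to_uninstall, dry_run=False):
    exit_code = 0
    if not to_install and not to_uninstall:
        return exit_code                   # environment already in sync
    if dry_run:
        if to_uninstall:
            print("Would uninstall:\n  " + "\n  ".join(sorted(to_uninstall)))
        if to_install:
            print("Would install:\n  " + "\n  ".join(sorted(to_install)))
        exit_code = 1                      # non-zero lets the hook fail
        return exit_code
    # ... the real function shells out to pip here ...
    return exit_code

print(sync({"numpy==1.18.5"}, set(), dry_run=True))   # prints the plan, then 1
```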
<code>
[start of piptools/sync.py]
1 import collections
2 import os
3 import sys
4 import tempfile
5 from subprocess import check_call # nosec
6
7 from pip._internal.commands.freeze import DEV_PKGS
8 from pip._internal.utils.compat import stdlib_pkgs
9
10 from . import click
11 from .exceptions import IncompatibleRequirements
12 from .utils import (
13 flat_map,
14 format_requirement,
15 get_hashes_from_ireq,
16 is_url_requirement,
17 key_from_ireq,
18 key_from_req,
19 )
20
21 PACKAGES_TO_IGNORE = (
22 ["-markerlib", "pip", "pip-tools", "pip-review", "pkg-resources"]
23 + list(stdlib_pkgs)
24 + list(DEV_PKGS)
25 )
26
27
28 def dependency_tree(installed_keys, root_key):
29 """
30 Calculate the dependency tree for the package `root_key` and return
31 a collection of all its dependencies. Uses a DFS traversal algorithm.
32
33 `installed_keys` should be a {key: requirement} mapping, e.g.
34 {'django': from_line('django==1.8')}
35 `root_key` should be the key to return the dependency tree for.
36 """
37 dependencies = set()
38 queue = collections.deque()
39
40 if root_key in installed_keys:
41 dep = installed_keys[root_key]
42 queue.append(dep)
43
44 while queue:
45 v = queue.popleft()
46 key = key_from_req(v)
47 if key in dependencies:
48 continue
49
50 dependencies.add(key)
51
52 for dep_specifier in v.requires():
53 dep_name = key_from_req(dep_specifier)
54 if dep_name in installed_keys:
55 dep = installed_keys[dep_name]
56
57 if dep_specifier.specifier.contains(dep.version):
58 queue.append(dep)
59
60 return dependencies
61
62
63 def get_dists_to_ignore(installed):
64 """
65 Returns a collection of package names to ignore when performing pip-sync,
66 based on the currently installed environment. For example, when pip-tools
67 is installed in the local environment, it should be ignored, including all
68 of its dependencies (e.g. click). When pip-tools is not installed
69 locally, click should also be installed/uninstalled depending on the given
70 requirements.
71 """
72 installed_keys = {key_from_req(r): r for r in installed}
73 return list(
74 flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE)
75 )
76
77
78 def merge(requirements, ignore_conflicts):
79 by_key = {}
80
81 for ireq in requirements:
82 # Limitation: URL requirements are merged by precise string match, so
83 # "file:///example.zip#egg=example", "file:///example.zip", and
84 # "example==1.0" will not merge with each other
85 if ireq.match_markers():
86 key = key_from_ireq(ireq)
87
88 if not ignore_conflicts:
89 existing_ireq = by_key.get(key)
90 if existing_ireq:
91 # NOTE: We check equality here since we can assume that the
92 # requirements are all pinned
93 if ireq.specifier != existing_ireq.specifier:
94 raise IncompatibleRequirements(ireq, existing_ireq)
95
96 # TODO: Always pick the largest specifier in case of a conflict
97 by_key[key] = ireq
98 return by_key.values()
99
100
101 def diff_key_from_ireq(ireq):
102 """
103 Calculate a key for comparing a compiled requirement with installed modules.
104 For URL requirements, only provide a useful key if the url includes
105 #egg=name==version, which will set ireq.req.name and ireq.specifier.
106 Otherwise return ireq.link so the key will not match and the package will
107 reinstall. Reinstall is necessary to ensure that packages will reinstall
108 if the URL is changed but the version is not.
109 """
110 if is_url_requirement(ireq):
111 if (
112 ireq.req
113 and (getattr(ireq.req, "key", None) or getattr(ireq.req, "name", None))
114 and ireq.specifier
115 ):
116 return key_from_ireq(ireq)
117 return str(ireq.link)
118 return key_from_ireq(ireq)
119
120
121 def diff(compiled_requirements, installed_dists):
122 """
123 Calculate which packages should be installed or uninstalled, given a set
124 of compiled requirements and a list of currently installed modules.
125 """
126 requirements_lut = {diff_key_from_ireq(r): r for r in compiled_requirements}
127
128 satisfied = set() # holds keys
129 to_install = set() # holds InstallRequirement objects
130 to_uninstall = set() # holds keys
131
132 pkgs_to_ignore = get_dists_to_ignore(installed_dists)
133 for dist in installed_dists:
134 key = key_from_req(dist)
135 if key not in requirements_lut or not requirements_lut[key].match_markers():
136 to_uninstall.add(key)
137 elif requirements_lut[key].specifier.contains(dist.version):
138 satisfied.add(key)
139
140 for key, requirement in requirements_lut.items():
141 if key not in satisfied and requirement.match_markers():
142 to_install.add(requirement)
143
144 # Make sure to not uninstall any packages that should be ignored
145 to_uninstall -= set(pkgs_to_ignore)
146
147 return (to_install, to_uninstall)
148
149
150 def sync(
151 to_install,
152 to_uninstall,
153 verbose=False,
154 dry_run=False,
155 install_flags=None,
156 ask=False,
157 ):
158 """
159 Install and uninstalls the given sets of modules.
160 """
161 if not to_uninstall and not to_install:
162 if verbose:
163 click.echo("Everything up-to-date")
164 return 0
165
166 pip_flags = []
167 if not verbose:
168 pip_flags += ["-q"]
169
170 if ask:
171 dry_run = True
172
173 if dry_run:
174 if to_uninstall:
175 click.echo("Would uninstall:")
176 for pkg in sorted(to_uninstall):
177 click.echo(" {}".format(pkg))
178
179 if to_install:
180 click.echo("Would install:")
181 for ireq in sorted(to_install, key=key_from_ireq):
182 click.echo(" {}".format(format_requirement(ireq)))
183
184 if ask and click.confirm("Would you like to proceed with these changes?"):
185 dry_run = False
186
187 if not dry_run:
188 if to_uninstall:
189 check_call( # nosec
190 [sys.executable, "-m", "pip", "uninstall", "-y"]
191 + pip_flags
192 + sorted(to_uninstall)
193 )
194
195 if to_install:
196 if install_flags is None:
197 install_flags = []
198 # prepare requirement lines
199 req_lines = []
200 for ireq in sorted(to_install, key=key_from_ireq):
201 ireq_hashes = get_hashes_from_ireq(ireq)
202 req_lines.append(format_requirement(ireq, hashes=ireq_hashes))
203
204 # save requirement lines to a temporary file
205 tmp_req_file = tempfile.NamedTemporaryFile(mode="wt", delete=False)
206 tmp_req_file.write("\n".join(req_lines))
207 tmp_req_file.close()
208
209 try:
210 check_call( # nosec
211 [sys.executable, "-m", "pip", "install", "-r", tmp_req_file.name]
212 + pip_flags
213 + install_flags
214 )
215 finally:
216 os.unlink(tmp_req_file.name)
217
218 return 0
219
[end of piptools/sync.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/piptools/sync.py b/piptools/sync.py
--- a/piptools/sync.py
+++ b/piptools/sync.py
@@ -158,10 +158,12 @@
"""
Install and uninstalls the given sets of modules.
"""
+ exit_code = 0
+
if not to_uninstall and not to_install:
if verbose:
click.echo("Everything up-to-date")
- return 0
+ return exit_code
pip_flags = []
if not verbose:
@@ -181,8 +183,11 @@
for ireq in sorted(to_install, key=key_from_ireq):
click.echo(" {}".format(format_requirement(ireq)))
+ exit_code = 1
+
if ask and click.confirm("Would you like to proceed with these changes?"):
dry_run = False
+ exit_code = 0
if not dry_run:
if to_uninstall:
@@ -215,4 +220,4 @@
finally:
os.unlink(tmp_req_file.name)
- return 0
+ return exit_code
|
{"golden_diff": "diff --git a/piptools/sync.py b/piptools/sync.py\n--- a/piptools/sync.py\n+++ b/piptools/sync.py\n@@ -158,10 +158,12 @@\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n+ exit_code = 0\n+\n if not to_uninstall and not to_install:\n if verbose:\n click.echo(\"Everything up-to-date\")\n- return 0\n+ return exit_code\n \n pip_flags = []\n if not verbose:\n@@ -181,8 +183,11 @@\n for ireq in sorted(to_install, key=key_from_ireq):\n click.echo(\" {}\".format(format_requirement(ireq)))\n \n+ exit_code = 1\n+\n if ask and click.confirm(\"Would you like to proceed with these changes?\"):\n dry_run = False\n+ exit_code = 0\n \n if not dry_run:\n if to_uninstall:\n@@ -215,4 +220,4 @@\n finally:\n os.unlink(tmp_req_file.name)\n \n- return 0\n+ return exit_code\n", "issue": "Feature: pip-sync --dry-run should return a non-zero exit code if changes were to occur\n#### What's the problem this feature will solve?\r\nI'm looking to add a pre-commit hook to check the environment is up to date\r\n\r\n```\r\n- repo: local\r\n hooks:\r\n - id: pip-sync\r\n name: pip-sync check\r\n entry: pip-sync --dry-run\r\n language: system\r\n always_run: true\r\n pass_filenames: false\r\n```\r\n\r\n#### Describe the solution you'd like\r\n```\r\n$ pip-sync --dry-run\r\nWould install:\r\n numpy==1.18.5\r\n$ $?\r\n2\r\n```\r\n\r\n\r\n#### Alternative Solutions\r\nvarious | awk stuff\r\n\r\n#### Additional context\r\nNA\r\n\n", "before_files": [{"content": "import collections\nimport os\nimport sys\nimport tempfile\nfrom subprocess import check_call # nosec\n\nfrom pip._internal.commands.freeze import DEV_PKGS\nfrom pip._internal.utils.compat import stdlib_pkgs\n\nfrom . import click\nfrom .exceptions import IncompatibleRequirements\nfrom .utils import (\n flat_map,\n format_requirement,\n get_hashes_from_ireq,\n is_url_requirement,\n key_from_ireq,\n key_from_req,\n)\n\nPACKAGES_TO_IGNORE = (\n [\"-markerlib\", \"pip\", \"pip-tools\", \"pip-review\", \"pkg-resources\"]\n + list(stdlib_pkgs)\n + list(DEV_PKGS)\n)\n\n\ndef dependency_tree(installed_keys, root_key):\n \"\"\"\n Calculate the dependency tree for the package `root_key` and return\n a collection of all its dependencies. Uses a DFS traversal algorithm.\n\n `installed_keys` should be a {key: requirement} mapping, e.g.\n {'django': from_line('django==1.8')}\n `root_key` should be the key to return the dependency tree for.\n \"\"\"\n dependencies = set()\n queue = collections.deque()\n\n if root_key in installed_keys:\n dep = installed_keys[root_key]\n queue.append(dep)\n\n while queue:\n v = queue.popleft()\n key = key_from_req(v)\n if key in dependencies:\n continue\n\n dependencies.add(key)\n\n for dep_specifier in v.requires():\n dep_name = key_from_req(dep_specifier)\n if dep_name in installed_keys:\n dep = installed_keys[dep_name]\n\n if dep_specifier.specifier.contains(dep.version):\n queue.append(dep)\n\n return dependencies\n\n\ndef get_dists_to_ignore(installed):\n \"\"\"\n Returns a collection of package names to ignore when performing pip-sync,\n based on the currently installed environment. For example, when pip-tools\n is installed in the local environment, it should be ignored, including all\n of its dependencies (e.g. click). 
When pip-tools is not installed\n locally, click should also be installed/uninstalled depending on the given\n requirements.\n \"\"\"\n installed_keys = {key_from_req(r): r for r in installed}\n return list(\n flat_map(lambda req: dependency_tree(installed_keys, req), PACKAGES_TO_IGNORE)\n )\n\n\ndef merge(requirements, ignore_conflicts):\n by_key = {}\n\n for ireq in requirements:\n # Limitation: URL requirements are merged by precise string match, so\n # \"file:///example.zip#egg=example\", \"file:///example.zip\", and\n # \"example==1.0\" will not merge with each other\n if ireq.match_markers():\n key = key_from_ireq(ireq)\n\n if not ignore_conflicts:\n existing_ireq = by_key.get(key)\n if existing_ireq:\n # NOTE: We check equality here since we can assume that the\n # requirements are all pinned\n if ireq.specifier != existing_ireq.specifier:\n raise IncompatibleRequirements(ireq, existing_ireq)\n\n # TODO: Always pick the largest specifier in case of a conflict\n by_key[key] = ireq\n return by_key.values()\n\n\ndef diff_key_from_ireq(ireq):\n \"\"\"\n Calculate a key for comparing a compiled requirement with installed modules.\n For URL requirements, only provide a useful key if the url includes\n #egg=name==version, which will set ireq.req.name and ireq.specifier.\n Otherwise return ireq.link so the key will not match and the package will\n reinstall. Reinstall is necessary to ensure that packages will reinstall\n if the URL is changed but the version is not.\n \"\"\"\n if is_url_requirement(ireq):\n if (\n ireq.req\n and (getattr(ireq.req, \"key\", None) or getattr(ireq.req, \"name\", None))\n and ireq.specifier\n ):\n return key_from_ireq(ireq)\n return str(ireq.link)\n return key_from_ireq(ireq)\n\n\ndef diff(compiled_requirements, installed_dists):\n \"\"\"\n Calculate which packages should be installed or uninstalled, given a set\n of compiled requirements and a list of currently installed modules.\n \"\"\"\n requirements_lut = {diff_key_from_ireq(r): r for r in compiled_requirements}\n\n satisfied = set() # holds keys\n to_install = set() # holds InstallRequirement objects\n to_uninstall = set() # holds keys\n\n pkgs_to_ignore = get_dists_to_ignore(installed_dists)\n for dist in installed_dists:\n key = key_from_req(dist)\n if key not in requirements_lut or not requirements_lut[key].match_markers():\n to_uninstall.add(key)\n elif requirements_lut[key].specifier.contains(dist.version):\n satisfied.add(key)\n\n for key, requirement in requirements_lut.items():\n if key not in satisfied and requirement.match_markers():\n to_install.add(requirement)\n\n # Make sure to not uninstall any packages that should be ignored\n to_uninstall -= set(pkgs_to_ignore)\n\n return (to_install, to_uninstall)\n\n\ndef sync(\n to_install,\n to_uninstall,\n verbose=False,\n dry_run=False,\n install_flags=None,\n ask=False,\n):\n \"\"\"\n Install and uninstalls the given sets of modules.\n \"\"\"\n if not to_uninstall and not to_install:\n if verbose:\n click.echo(\"Everything up-to-date\")\n return 0\n\n pip_flags = []\n if not verbose:\n pip_flags += [\"-q\"]\n\n if ask:\n dry_run = True\n\n if dry_run:\n if to_uninstall:\n click.echo(\"Would uninstall:\")\n for pkg in sorted(to_uninstall):\n click.echo(\" {}\".format(pkg))\n\n if to_install:\n click.echo(\"Would install:\")\n for ireq in sorted(to_install, key=key_from_ireq):\n click.echo(\" {}\".format(format_requirement(ireq)))\n\n if ask and click.confirm(\"Would you like to proceed with these changes?\"):\n dry_run = False\n\n if not dry_run:\n 
if to_uninstall:\n check_call( # nosec\n [sys.executable, \"-m\", \"pip\", \"uninstall\", \"-y\"]\n + pip_flags\n + sorted(to_uninstall)\n )\n\n if to_install:\n if install_flags is None:\n install_flags = []\n # prepare requirement lines\n req_lines = []\n for ireq in sorted(to_install, key=key_from_ireq):\n ireq_hashes = get_hashes_from_ireq(ireq)\n req_lines.append(format_requirement(ireq, hashes=ireq_hashes))\n\n # save requirement lines to a temporary file\n tmp_req_file = tempfile.NamedTemporaryFile(mode=\"wt\", delete=False)\n tmp_req_file.write(\"\\n\".join(req_lines))\n tmp_req_file.close()\n\n try:\n check_call( # nosec\n [sys.executable, \"-m\", \"pip\", \"install\", \"-r\", tmp_req_file.name]\n + pip_flags\n + install_flags\n )\n finally:\n os.unlink(tmp_req_file.name)\n\n return 0\n", "path": "piptools/sync.py"}]}
| 2,838 | 259 |
gh_patches_debug_8007 | rasdani/github-patches | git_diff | medtagger__MedTagger-401 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Error indicator when logging in or registering went wrong
## Current Behaviour
- currently, only an error icon is displayed when something goes wrong during logging in or registering a new account
## Expected Behaviour
- an error message should be displayed next to the error icon, so that the user knows what went wrong
</issue>
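The backend module below already raises `InvalidArgumentsException` with human-readable messages ('User does not exist.', 'Password does not match.'), so the expected behaviour largely amounts to surfacing that text to the client. A self-contained illustration follows; the stub exception class, the stubbed `sign_in_user`, and the `sign_in_endpoint` wrapper are assumptions, not MedTagger's real API layer.

```python
# Illustration only: pass the exception text through so the UI can show it
# next to the error icon.
class InvalidArgumentsException(Exception):
    """Stand-in for medtagger.api.InvalidArgumentsException."""

def sign_in_user(email, password):          # stand-in for business.sign_in_user
    raise InvalidArgumentsException('Password does not match.')

def sign_in_endpoint(email, password):
    try:
        return {"token": sign_in_user(email, password), "error": None}
    except InvalidArgumentsException as exc:
        return {"token": None, "error": str(exc)}

print(sign_in_endpoint("[email protected]", "wrong"))
# {'token': None, 'error': 'Password does not match.'}
```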
<code>
[start of backend/medtagger/api/auth/business.py]
1 """Module responsible for business logic in all Auth endpoint."""
2 from medtagger.api import InvalidArgumentsException
3 from medtagger.api.security import hash_password, verify_user_password, generate_auth_token
4 from medtagger.database.models import User
5 from medtagger.repositories import roles as RolesRepository, users as UsersRepository
6
7
8 def create_user(email: str, password: str, first_name: str, last_name: str) -> int:
9 """Create user with the given user information. Password is being hashed.
10
11 :param email: user email in string format
12 :param password: user password in string format
13 :param first_name: user first name in string format
14 :param last_name: user last name in string format
15
16 :return: id of the new user
17 """
18 user = UsersRepository.get_user_by_email(email)
19 if user:
20 raise InvalidArgumentsException('User with this email already exist')
21 password_hash = hash_password(password)
22 new_user = User(email, password_hash, first_name, last_name)
23 role = RolesRepository.get_role_with_name('volunteer')
24 if not role:
25 raise InvalidArgumentsException('Role does not exist.')
26 new_user.roles.append(role)
27 return UsersRepository.add_new_user(new_user)
28
29
30 def sign_in_user(email: str, password: str) -> str:
31 """Sign in user using given username and password.
32
33 :param email: user email in string format
34 :param password: user password in string format
35
36 :return: authentication token
37 """
38 user = UsersRepository.get_user_by_email(email)
39 if not user:
40 raise InvalidArgumentsException('User does not exist.')
41 if not verify_user_password(user, password):
42 raise InvalidArgumentsException('Password does not match.')
43 return generate_auth_token(user)
44
[end of backend/medtagger/api/auth/business.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py
--- a/backend/medtagger/api/auth/business.py
+++ b/backend/medtagger/api/auth/business.py
@@ -17,7 +17,7 @@
"""
user = UsersRepository.get_user_by_email(email)
if user:
- raise InvalidArgumentsException('User with this email already exist')
+ raise InvalidArgumentsException('User with this email already exists')
password_hash = hash_password(password)
new_user = User(email, password_hash, first_name, last_name)
role = RolesRepository.get_role_with_name('volunteer')
|
{"golden_diff": "diff --git a/backend/medtagger/api/auth/business.py b/backend/medtagger/api/auth/business.py\n--- a/backend/medtagger/api/auth/business.py\n+++ b/backend/medtagger/api/auth/business.py\n@@ -17,7 +17,7 @@\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n- raise InvalidArgumentsException('User with this email already exist')\n+ raise InvalidArgumentsException('User with this email already exists')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n", "issue": "Error indicator when logging in or registering went wrong\n## Current Behaviour\r\n - currently, only error icon is displayed when something went wrong during logging in or registering new account\r\n\r\n## Expected Behaviour \r\n - an error message should be displayed next to the error icon, so that user knows what went wrong\r\n\n", "before_files": [{"content": "\"\"\"Module responsible for business logic in all Auth endpoint.\"\"\"\nfrom medtagger.api import InvalidArgumentsException\nfrom medtagger.api.security import hash_password, verify_user_password, generate_auth_token\nfrom medtagger.database.models import User\nfrom medtagger.repositories import roles as RolesRepository, users as UsersRepository\n\n\ndef create_user(email: str, password: str, first_name: str, last_name: str) -> int:\n \"\"\"Create user with the given user information. Password is being hashed.\n\n :param email: user email in string format\n :param password: user password in string format\n :param first_name: user first name in string format\n :param last_name: user last name in string format\n\n :return: id of the new user\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if user:\n raise InvalidArgumentsException('User with this email already exist')\n password_hash = hash_password(password)\n new_user = User(email, password_hash, first_name, last_name)\n role = RolesRepository.get_role_with_name('volunteer')\n if not role:\n raise InvalidArgumentsException('Role does not exist.')\n new_user.roles.append(role)\n return UsersRepository.add_new_user(new_user)\n\n\ndef sign_in_user(email: str, password: str) -> str:\n \"\"\"Sign in user using given username and password.\n\n :param email: user email in string format\n :param password: user password in string format\n\n :return: authentication token\n \"\"\"\n user = UsersRepository.get_user_by_email(email)\n if not user:\n raise InvalidArgumentsException('User does not exist.')\n if not verify_user_password(user, password):\n raise InvalidArgumentsException('Password does not match.')\n return generate_auth_token(user)\n", "path": "backend/medtagger/api/auth/business.py"}]}
| 1,063 | 142 |
gh_patches_debug_40450 | rasdani/github-patches | git_diff | obspy__obspy-2950 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
GCF read fails on windows (recent python + numpy 1.22)
https://tests.obspy.org/115141/ & https://tests.obspy.org/115136/ on two independent machines show the error
I suspect it's numpy-dtype related:
```
Traceback (most recent call last):
File "C:\Miniconda3\envs\test\lib\site-packages\numpy\core\fromnumeric.py", line 57, in _wrapfunc
return bound(*args, **kwds)
TypeError: the resolved dtypes are not compatible with add.accumulate. Resolved (dtype('int32'), dtype('int32'), dtype('int32'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\a\obspy\obspy\obspy\io\gcf\tests\test_core.py", line 72, in test_read_via_module
st = _read_gcf(filename)
File "D:\a\obspy\obspy\obspy\io\gcf\core.py", line 89, in _read_gcf
hd = libgcf.read(f, **kwargs)
File "D:\a\obspy\obspy\obspy\io\gcf\libgcf.py", line 167, in read
return read_data_block(f, headonly=False, **kwargs)
File "D:\a\obspy\obspy\obspy\io\gcf\libgcf.py", line 144, in read_data_block
data = (fic + np.cumsum(data)).astype('i4')
File "<__array_function__ internals>", line 180, in cumsum
File "C:\Miniconda3\envs\test\lib\site-packages\numpy\core\fromnumeric.py", line 2569, in cumsum
return _wrapfunc(a, 'cumsum', axis=axis, dtype=dtype, out=out)
File "C:\Miniconda3\envs\test\lib\site-packages\numpy\core\fromnumeric.py", line 66, in _wrapfunc
return _wrapit(obj, method, *args, **kwds)
File "C:\Miniconda3\envs\test\lib\site-packages\numpy\core\fromnumeric.py", line 43, in _wrapit
result = getattr(asarray(obj), method)(*args, **kwds)
TypeError: the resolved dtypes are not compatible with add.accumulate. Resolved (dtype('int32'), dtype('int32'), dtype('int32'))
```
</issue>
<code>
[start of obspy/io/gcf/libgcf.py]
1 # -*- coding: utf-8 -*-
2 # reads Guralp Compressed Format (GCF) Files
3 # By Ran Novitsky Nof @ BSL, 2016
4 # [email protected]
5 # Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)
6 # more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro
7 # last access: June, 2016
8 import numpy as np
9
10 from obspy import UTCDateTime
11
12 SPS_D = { # Table 3.1: special sample rates
13 157: 0.1,
14 161: 0.125,
15 162: 0.2,
16 164: 0.25,
17 167: 0.5,
18 171: 400,
19 174: 500,
20 176: 1000,
21 179: 2000,
22 181: 4000}
23 TIME_OFFSETS_D = { # Table 3.1: Time fractional offset denominator
24 171: 8.,
25 174: 2.,
26 176: 4.,
27 179: 8.,
28 181: 16.}
29 COMPRESSION_D = { # Table 3.2: format field to data type
30 1: '>i4',
31 2: '>i2',
32 4: '>i1'}
33
34
35 def is_gcf(f):
36 """
37 Test if file is GCF by reading at least 1 data block
38 """
39 header, data = read_data_block(f)
40
41
42 def decode36(data):
43 """
44 Converts an integer into a base36 string.
45 """
46 # http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/Decoding_Base_36_numbers_C.htm
47 s = ''
48 while data:
49 imed = data % 36
50 if imed > 9:
51 pos = imed - 10 + ord('A')
52 else:
53 pos = imed + ord('0')
54 c = chr(pos)
55 s = c + s
56 data = data // 36
57 return s
58
59
60 def decode_date_time(data):
61 """
62 Decode date and time field.
63
64 The date code is a 32 bit value specifying the start time of the block.
65 Bits 0-16 contain the number of seconds since midnight,
66 and bits 17-31 the number of days since 17th November 1989.
67 """
68 # prevent numpy array
69 days = int(data >> 17)
70 secs = int(data & 0x1FFFF)
71 starttime = UTCDateTime('1989-11-17') + days * 86400 + secs
72 return starttime
73
74
75 def read_data_block(f, headonly=False, channel_prefix="HH", **kwargs):
76 """
77 Read one data block from GCF file.
78
79 more details can be found here:
80 http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/GCF_Specification.htm
81 f - file object to read from
82 if skipData is True, Only header is returned.
83 if not a data block (SPS=0) - returns None.
84 """
85 # get ID
86 sysid = f.read(4)
87 if not sysid:
88 raise EOFError # got to EOF
89 sysid = np.frombuffer(sysid, count=1, dtype='>u4')
90 if sysid >> 31 & 0b1 > 0:
91 sysid = (sysid << 6) >> 6
92 if isinstance(sysid, np.ndarray) and sysid.shape == (1,):
93 sysid = sysid[0]
94 else:
95 raise ValueError('sysid should be a single element np.ndarray')
96 sysid = decode36(sysid)
97 # get Stream ID
98 stid = np.frombuffer(f.read(4), count=1, dtype='>u4')
99 if isinstance(stid, np.ndarray) and stid.shape == (1,):
100 stid = stid[0]
101 else:
102 raise ValueError('stid should be a single element np.ndarray')
103 stid = decode36(stid)
104 # get Date & Time
105 data = np.frombuffer(f.read(4), count=1, dtype='>u4')
106 starttime = decode_date_time(data)
107 # get data format
108 # get reserved, SPS, data type compression,
109 # number of 32bit records (num_records)
110 reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,
111 dtype='>u1')
112 compression = compress & 0b00000111 # get compression code
113 t_offset = compress >> 4 # get time offset
114 if t_offset > 0:
115 starttime = starttime + t_offset / TIME_OFFSETS_D[sps]
116 if sps in SPS_D:
117 sps = SPS_D[sps] # get special SPS value if needed
118 if not sps:
119 f.seek(num_records * 4, 1) # skip if not a data block
120 if 1008 - num_records * 4 > 0:
121 # keep skipping to get 1008 record
122 f.seek(1008 - num_records * 4, 1)
123 return None
124 npts = num_records * compression # number of samples
125 header = {}
126 header['starttime'] = starttime
127 header['station'] = stid[:-2]
128 header['channel'] = (channel_prefix[:2] + stid[-2]).upper()
129 header['sampling_rate'] = float(sps)
130 header['npts'] = npts
131 if headonly:
132 f.seek(4 * (num_records + 2), 1) # skip data part (inc. FIC and RIC)
133 # skip to end of block if only partly filled with data
134 if 1000 - num_records * 4 > 0:
135 f.seek(1000 - num_records * 4, 1)
136 return header
137 else:
138 # get FIC
139 fic = np.frombuffer(f.read(4), count=1, dtype='>i4')
140 # get incremental data
141 data = np.frombuffer(f.read(4 * num_records), count=npts,
142 dtype=COMPRESSION_D[compression])
143 # construct time series
144 data = (fic + np.cumsum(data)).astype('i4')
145 # get RIC
146 ric = np.frombuffer(f.read(4), count=1, dtype='>i4')
147 # skip to end of block if only partly filled with data
148 if 1000 - num_records * 4 > 0:
149 f.seek(1000 - num_records * 4, 1)
150 # verify last data sample matches RIC
151 if not data[-1] == ric:
152 raise ValueError("Last sample mismatch with RIC")
153 return header, data
154
155
156 def read_header(f, **kwargs):
157 """
158 Reads header only from GCF file.
159 """
160 return read_data_block(f, headonly=True, **kwargs)
161
162
163 def read(f, **kwargs):
164 """
165 Reads header and data from GCF file.
166 """
167 return read_data_block(f, headonly=False, **kwargs)
168
[end of obspy/io/gcf/libgcf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/obspy/io/gcf/libgcf.py b/obspy/io/gcf/libgcf.py
--- a/obspy/io/gcf/libgcf.py
+++ b/obspy/io/gcf/libgcf.py
@@ -5,6 +5,8 @@
# Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)
# more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro
# last access: June, 2016
+import struct
+
import numpy as np
from obspy import UTCDateTime
@@ -27,9 +29,10 @@
179: 8.,
181: 16.}
COMPRESSION_D = { # Table 3.2: format field to data type
- 1: '>i4',
- 2: '>i2',
- 4: '>i1'}
+ 1: 'i', # 4 bytes
+ 2: 'h', # 2 bytes
+ 4: 'b', # 1 byte
+}
def is_gcf(f):
@@ -86,29 +89,20 @@
sysid = f.read(4)
if not sysid:
raise EOFError # got to EOF
- sysid = np.frombuffer(sysid, count=1, dtype='>u4')
+ sysid, = struct.unpack('>I', sysid)
if sysid >> 31 & 0b1 > 0:
sysid = (sysid << 6) >> 6
- if isinstance(sysid, np.ndarray) and sysid.shape == (1,):
- sysid = sysid[0]
- else:
- raise ValueError('sysid should be a single element np.ndarray')
sysid = decode36(sysid)
# get Stream ID
- stid = np.frombuffer(f.read(4), count=1, dtype='>u4')
- if isinstance(stid, np.ndarray) and stid.shape == (1,):
- stid = stid[0]
- else:
- raise ValueError('stid should be a single element np.ndarray')
+ stid, = struct.unpack('>I', f.read(4))
stid = decode36(stid)
# get Date & Time
- data = np.frombuffer(f.read(4), count=1, dtype='>u4')
+ data, = struct.unpack('>I', f.read(4))
starttime = decode_date_time(data)
# get data format
# get reserved, SPS, data type compression,
# number of 32bit records (num_records)
- reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,
- dtype='>u1')
+ reserved, sps, compress, num_records = struct.unpack('>4B', f.read(4))
compression = compress & 0b00000111 # get compression code
t_offset = compress >> 4 # get time offset
if t_offset > 0:
@@ -136,14 +130,14 @@
return header
else:
# get FIC
- fic = np.frombuffer(f.read(4), count=1, dtype='>i4')
+ fic, = struct.unpack('>i', f.read(4))
# get incremental data
- data = np.frombuffer(f.read(4 * num_records), count=npts,
- dtype=COMPRESSION_D[compression])
+ data = struct.unpack(f'>{npts}{COMPRESSION_D[compression]}',
+ f.read(4 * num_records))
# construct time series
data = (fic + np.cumsum(data)).astype('i4')
# get RIC
- ric = np.frombuffer(f.read(4), count=1, dtype='>i4')
+ ric, = struct.unpack('>i', f.read(4))
# skip to end of block if only partly filled with data
if 1000 - num_records * 4 > 0:
f.seek(1000 - num_records * 4, 1)
|
{"golden_diff": "diff --git a/obspy/io/gcf/libgcf.py b/obspy/io/gcf/libgcf.py\n--- a/obspy/io/gcf/libgcf.py\n+++ b/obspy/io/gcf/libgcf.py\n@@ -5,6 +5,8 @@\n # Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)\n # more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro\n # last access: June, 2016\n+import struct\n+\n import numpy as np\n \n from obspy import UTCDateTime\n@@ -27,9 +29,10 @@\n 179: 8.,\n 181: 16.}\n COMPRESSION_D = { # Table 3.2: format field to data type\n- 1: '>i4',\n- 2: '>i2',\n- 4: '>i1'}\n+ 1: 'i', # 4 bytes\n+ 2: 'h', # 2 bytes\n+ 4: 'b', # 1 byte\n+}\n \n \n def is_gcf(f):\n@@ -86,29 +89,20 @@\n sysid = f.read(4)\n if not sysid:\n raise EOFError # got to EOF\n- sysid = np.frombuffer(sysid, count=1, dtype='>u4')\n+ sysid, = struct.unpack('>I', sysid)\n if sysid >> 31 & 0b1 > 0:\n sysid = (sysid << 6) >> 6\n- if isinstance(sysid, np.ndarray) and sysid.shape == (1,):\n- sysid = sysid[0]\n- else:\n- raise ValueError('sysid should be a single element np.ndarray')\n sysid = decode36(sysid)\n # get Stream ID\n- stid = np.frombuffer(f.read(4), count=1, dtype='>u4')\n- if isinstance(stid, np.ndarray) and stid.shape == (1,):\n- stid = stid[0]\n- else:\n- raise ValueError('stid should be a single element np.ndarray')\n+ stid, = struct.unpack('>I', f.read(4))\n stid = decode36(stid)\n # get Date & Time\n- data = np.frombuffer(f.read(4), count=1, dtype='>u4')\n+ data, = struct.unpack('>I', f.read(4))\n starttime = decode_date_time(data)\n # get data format\n # get reserved, SPS, data type compression,\n # number of 32bit records (num_records)\n- reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,\n- dtype='>u1')\n+ reserved, sps, compress, num_records = struct.unpack('>4B', f.read(4))\n compression = compress & 0b00000111 # get compression code\n t_offset = compress >> 4 # get time offset\n if t_offset > 0:\n@@ -136,14 +130,14 @@\n return header\n else:\n # get FIC\n- fic = np.frombuffer(f.read(4), count=1, dtype='>i4')\n+ fic, = struct.unpack('>i', f.read(4))\n # get incremental data\n- data = np.frombuffer(f.read(4 * num_records), count=npts,\n- dtype=COMPRESSION_D[compression])\n+ data = struct.unpack(f'>{npts}{COMPRESSION_D[compression]}',\n+ f.read(4 * num_records))\n # construct time series\n data = (fic + np.cumsum(data)).astype('i4')\n # get RIC\n- ric = np.frombuffer(f.read(4), count=1, dtype='>i4')\n+ ric, = struct.unpack('>i', f.read(4))\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n", "issue": "GCF read fails on windows (recent python + numpy 1.22)\nhttps://tests.obspy.org/115141/ & https://tests.obspy.org/115136/ on two independent machines show the error\r\n\r\nI suspect it's numpy-dtype related:\r\n\r\n```\r\nTraceback (most recent call last):\r\nFile \"C:\\Miniconda3\\envs\\test\\lib\\site-packages\\numpy\\core\\fromnumeric.py\", line 57, in _wrapfunc\r\nreturn bound(*args, **kwds)\r\nTypeError: the resolved dtypes are not compatible with add.accumulate. 
Resolved (dtype('int32'), dtype('int32'), dtype('int32'))\r\nDuring handling of the above exception, another exception occurred:\r\nTraceback (most recent call last):\r\nFile \"D:\\a\\obspy\\obspy\\obspy\\io\\gcf\\tests\\test_core.py\", line 72, in test_read_via_module\r\nst = _read_gcf(filename)\r\nFile \"D:\\a\\obspy\\obspy\\obspy\\io\\gcf\\core.py\", line 89, in _read_gcf\r\nhd = libgcf.read(f, **kwargs)\r\nFile \"D:\\a\\obspy\\obspy\\obspy\\io\\gcf\\libgcf.py\", line 167, in read\r\nreturn read_data_block(f, headonly=False, **kwargs)\r\nFile \"D:\\a\\obspy\\obspy\\obspy\\io\\gcf\\libgcf.py\", line 144, in read_data_block\r\ndata = (fic + np.cumsum(data)).astype('i4')\r\nFile \"<__array_function__ internals>\", line 180, in cumsum\r\nFile \"C:\\Miniconda3\\envs\\test\\lib\\site-packages\\numpy\\core\\fromnumeric.py\", line 2569, in cumsum\r\nreturn _wrapfunc(a, 'cumsum', axis=axis, dtype=dtype, out=out)\r\nFile \"C:\\Miniconda3\\envs\\test\\lib\\site-packages\\numpy\\core\\fromnumeric.py\", line 66, in _wrapfunc\r\nreturn _wrapit(obj, method, *args, **kwds)\r\nFile \"C:\\Miniconda3\\envs\\test\\lib\\site-packages\\numpy\\core\\fromnumeric.py\", line 43, in _wrapit\r\nresult = getattr(asarray(obj), method)(*args, **kwds)\r\nTypeError: the resolved dtypes are not compatible with add.accumulate. Resolved (dtype('int32'), dtype('int32'), dtype('int32'))\r\n```\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# reads Guralp Compressed Format (GCF) Files\n# By Ran Novitsky Nof @ BSL, 2016\n# [email protected]\n# Based on Guralp's GCF reference (GCF-RFC-GCFR, Issue C, 2011-01-05)\n# more details available from: http://www.guralp.com/apps/ok?doc=GCF_Intro\n# last access: June, 2016\nimport numpy as np\n\nfrom obspy import UTCDateTime\n\nSPS_D = { # Table 3.1: special sample rates\n 157: 0.1,\n 161: 0.125,\n 162: 0.2,\n 164: 0.25,\n 167: 0.5,\n 171: 400,\n 174: 500,\n 176: 1000,\n 179: 2000,\n 181: 4000}\nTIME_OFFSETS_D = { # Table 3.1: Time fractional offset denominator\n 171: 8.,\n 174: 2.,\n 176: 4.,\n 179: 8.,\n 181: 16.}\nCOMPRESSION_D = { # Table 3.2: format field to data type\n 1: '>i4',\n 2: '>i2',\n 4: '>i1'}\n\n\ndef is_gcf(f):\n \"\"\"\n Test if file is GCF by reading at least 1 data block\n \"\"\"\n header, data = read_data_block(f)\n\n\ndef decode36(data):\n \"\"\"\n Converts an integer into a base36 string.\n \"\"\"\n # http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/Decoding_Base_36_numbers_C.htm\n s = ''\n while data:\n imed = data % 36\n if imed > 9:\n pos = imed - 10 + ord('A')\n else:\n pos = imed + ord('0')\n c = chr(pos)\n s = c + s\n data = data // 36\n return s\n\n\ndef decode_date_time(data):\n \"\"\"\n Decode date and time field.\n\n The date code is a 32 bit value specifying the start time of the block.\n Bits 0-16 contain the number of seconds since midnight,\n and bits 17-31 the number of days since 17th November 1989.\n \"\"\"\n # prevent numpy array\n days = int(data >> 17)\n secs = int(data & 0x1FFFF)\n starttime = UTCDateTime('1989-11-17') + days * 86400 + secs\n return starttime\n\n\ndef read_data_block(f, headonly=False, channel_prefix=\"HH\", **kwargs):\n \"\"\"\n Read one data block from GCF file.\n\n more details can be found here:\n http://geophysics.eas.gatech.edu/GTEQ/Scream4.4/GCF_Specification.htm\n f - file object to read from\n if skipData is True, Only header is returned.\n if not a data block (SPS=0) - returns None.\n \"\"\"\n # get ID\n sysid = f.read(4)\n if not sysid:\n raise EOFError # got to EOF\n sysid = np.frombuffer(sysid, 
count=1, dtype='>u4')\n if sysid >> 31 & 0b1 > 0:\n sysid = (sysid << 6) >> 6\n if isinstance(sysid, np.ndarray) and sysid.shape == (1,):\n sysid = sysid[0]\n else:\n raise ValueError('sysid should be a single element np.ndarray')\n sysid = decode36(sysid)\n # get Stream ID\n stid = np.frombuffer(f.read(4), count=1, dtype='>u4')\n if isinstance(stid, np.ndarray) and stid.shape == (1,):\n stid = stid[0]\n else:\n raise ValueError('stid should be a single element np.ndarray')\n stid = decode36(stid)\n # get Date & Time\n data = np.frombuffer(f.read(4), count=1, dtype='>u4')\n starttime = decode_date_time(data)\n # get data format\n # get reserved, SPS, data type compression,\n # number of 32bit records (num_records)\n reserved, sps, compress, num_records = np.frombuffer(f.read(4), count=4,\n dtype='>u1')\n compression = compress & 0b00000111 # get compression code\n t_offset = compress >> 4 # get time offset\n if t_offset > 0:\n starttime = starttime + t_offset / TIME_OFFSETS_D[sps]\n if sps in SPS_D:\n sps = SPS_D[sps] # get special SPS value if needed\n if not sps:\n f.seek(num_records * 4, 1) # skip if not a data block\n if 1008 - num_records * 4 > 0:\n # keep skipping to get 1008 record\n f.seek(1008 - num_records * 4, 1)\n return None\n npts = num_records * compression # number of samples\n header = {}\n header['starttime'] = starttime\n header['station'] = stid[:-2]\n header['channel'] = (channel_prefix[:2] + stid[-2]).upper()\n header['sampling_rate'] = float(sps)\n header['npts'] = npts\n if headonly:\n f.seek(4 * (num_records + 2), 1) # skip data part (inc. FIC and RIC)\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n return header\n else:\n # get FIC\n fic = np.frombuffer(f.read(4), count=1, dtype='>i4')\n # get incremental data\n data = np.frombuffer(f.read(4 * num_records), count=npts,\n dtype=COMPRESSION_D[compression])\n # construct time series\n data = (fic + np.cumsum(data)).astype('i4')\n # get RIC\n ric = np.frombuffer(f.read(4), count=1, dtype='>i4')\n # skip to end of block if only partly filled with data\n if 1000 - num_records * 4 > 0:\n f.seek(1000 - num_records * 4, 1)\n # verify last data sample matches RIC\n if not data[-1] == ric:\n raise ValueError(\"Last sample mismatch with RIC\")\n return header, data\n\n\ndef read_header(f, **kwargs):\n \"\"\"\n Reads header only from GCF file.\n \"\"\"\n return read_data_block(f, headonly=True, **kwargs)\n\n\ndef read(f, **kwargs):\n \"\"\"\n Reads header and data from GCF file.\n \"\"\"\n return read_data_block(f, headonly=False, **kwargs)\n", "path": "obspy/io/gcf/libgcf.py"}]}
| 3,225 | 970 |
gh_patches_debug_53690 | rasdani/github-patches | git_diff | sql-machine-learning__elasticdl-2180 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Elasticdl client crashes with invalid args
```
$ elasticdl -v
Traceback (most recent call last):
File "/usr/local/bin/elasticdl", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/elasticdl_client/main.py", line 97, in main
args, _ = parser.parse_known_args()
File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py", line 1787, in parse_known_args
namespace, args = self._parse_known_args(args, namespace)
File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py", line 2022, in _parse_known_args
', '.join(required_actions))
TypeError: sequence item 0: expected str instance, NoneType found
```
</issue>
<code>
[start of elasticdl_client/main.py]
1 # Copyright 2020 The ElasticDL Authors. All rights reserved.
2 # Licensed under the Apache License, Version 2.0 (the "License");
3 # you may not use this file except in compliance with the License.
4 # You may obtain a copy of the License at
5 #
6 # http://www.apache.org/licenses/LICENSE-2.0
7 #
8 # Unless required by applicable law or agreed to in writing, software
9 # distributed under the License is distributed on an "AS IS" BASIS,
10 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 import argparse
15 import sys
16
17 from elasticdl_client.api import (
18 build_zoo,
19 evaluate,
20 init_zoo,
21 predict,
22 push_zoo,
23 train,
24 )
25 from elasticdl_client.common import args
26
27
28 def build_argument_parser():
29 parser = argparse.ArgumentParser()
30 subparsers = parser.add_subparsers()
31 subparsers.required = True
32
33 # Initialize the parser for the `elasticdl zoo` commands
34 zoo_parser = subparsers.add_parser(
35 "zoo",
36 help="Initialize | Build | Push a docker image for the model zoo.",
37 )
38 zoo_subparsers = zoo_parser.add_subparsers()
39 zoo_subparsers.required = True
40
41 # elasticdl zoo init
42 zoo_init_parser = zoo_subparsers.add_parser(
43 "init", help="Initialize the model zoo."
44 )
45 zoo_init_parser.set_defaults(func=init_zoo)
46 args.add_zoo_init_params(zoo_init_parser)
47
48 # elasticdl zoo build
49 zoo_build_parser = zoo_subparsers.add_parser(
50 "build", help="Build a docker image for the model zoo."
51 )
52 zoo_build_parser.set_defaults(func=build_zoo)
53 args.add_zoo_build_params(zoo_build_parser)
54
55 # elasticdl zoo push
56 zoo_push_parser = zoo_subparsers.add_parser(
57 "push",
58 help="Push the docker image to a remote registry for the distributed"
59 "ElasticDL job.",
60 )
61 zoo_push_parser.set_defaults(func=push_zoo)
62 args.add_zoo_push_params(zoo_push_parser)
63
64 # elasticdl train
65 train_parser = subparsers.add_parser(
66 "train", help="Submit a ElasticDL distributed training job"
67 )
68 train_parser.set_defaults(func=train)
69 args.add_common_params(train_parser)
70 args.add_train_params(train_parser)
71
72 # elasticdl evaluate
73 evaluate_parser = subparsers.add_parser(
74 "evaluate", help="Submit a ElasticDL distributed evaluation job"
75 )
76 evaluate_parser.set_defaults(func=evaluate)
77 args.add_common_params(evaluate_parser)
78 args.add_evaluate_params(evaluate_parser)
79
80 # elasticdl predict
81 predict_parser = subparsers.add_parser(
82 "predict", help="Submit a ElasticDL distributed prediction job"
83 )
84 predict_parser.set_defaults(func=predict)
85 args.add_common_params(predict_parser)
86 args.add_predict_params(predict_parser)
87
88 return parser
89
90
91 def main():
92 parser = build_argument_parser()
93 if len(sys.argv) == 1:
94 parser.print_help(sys.stderr)
95 sys.exit(1)
96
97 args, _ = parser.parse_known_args()
98 args.func(args)
99
100
101 if __name__ == "__main__":
102 main()
103
[end of elasticdl_client/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/elasticdl_client/main.py b/elasticdl_client/main.py
--- a/elasticdl_client/main.py
+++ b/elasticdl_client/main.py
@@ -94,7 +94,12 @@
parser.print_help(sys.stderr)
sys.exit(1)
- args, _ = parser.parse_known_args()
+ try:
+ args, _ = parser.parse_known_args()
+ except TypeError:
+ parser.print_help(sys.stderr)
+ sys.exit(1)
+
args.func(args)
|
{"golden_diff": "diff --git a/elasticdl_client/main.py b/elasticdl_client/main.py\n--- a/elasticdl_client/main.py\n+++ b/elasticdl_client/main.py\n@@ -94,7 +94,12 @@\n parser.print_help(sys.stderr)\n sys.exit(1)\n \n- args, _ = parser.parse_known_args()\n+ try:\n+ args, _ = parser.parse_known_args()\n+ except TypeError:\n+ parser.print_help(sys.stderr)\n+ sys.exit(1)\n+\n args.func(args)\n", "issue": "Elasticdl client crashes with invalid args\n```\r\n$ elasticdl -v\r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/elasticdl\", line 8, in <module>\r\n sys.exit(main())\r\n File \"/usr/local/lib/python3.7/site-packages/elasticdl_client/main.py\", line 97, in main\r\n args, _ = parser.parse_known_args()\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py\", line 1787, in parse_known_args\r\n namespace, args = self._parse_known_args(args, namespace)\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/argparse.py\", line 2022, in _parse_known_args\r\n ', '.join(required_actions))\r\nTypeError: sequence item 0: expected str instance, NoneType found\r\n```\n", "before_files": [{"content": "# Copyright 2020 The ElasticDL Authors. All rights reserved.\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport sys\n\nfrom elasticdl_client.api import (\n build_zoo,\n evaluate,\n init_zoo,\n predict,\n push_zoo,\n train,\n)\nfrom elasticdl_client.common import args\n\n\ndef build_argument_parser():\n parser = argparse.ArgumentParser()\n subparsers = parser.add_subparsers()\n subparsers.required = True\n\n # Initialize the parser for the `elasticdl zoo` commands\n zoo_parser = subparsers.add_parser(\n \"zoo\",\n help=\"Initialize | Build | Push a docker image for the model zoo.\",\n )\n zoo_subparsers = zoo_parser.add_subparsers()\n zoo_subparsers.required = True\n\n # elasticdl zoo init\n zoo_init_parser = zoo_subparsers.add_parser(\n \"init\", help=\"Initialize the model zoo.\"\n )\n zoo_init_parser.set_defaults(func=init_zoo)\n args.add_zoo_init_params(zoo_init_parser)\n\n # elasticdl zoo build\n zoo_build_parser = zoo_subparsers.add_parser(\n \"build\", help=\"Build a docker image for the model zoo.\"\n )\n zoo_build_parser.set_defaults(func=build_zoo)\n args.add_zoo_build_params(zoo_build_parser)\n\n # elasticdl zoo push\n zoo_push_parser = zoo_subparsers.add_parser(\n \"push\",\n help=\"Push the docker image to a remote registry for the distributed\"\n \"ElasticDL job.\",\n )\n zoo_push_parser.set_defaults(func=push_zoo)\n args.add_zoo_push_params(zoo_push_parser)\n\n # elasticdl train\n train_parser = subparsers.add_parser(\n \"train\", help=\"Submit a ElasticDL distributed training job\"\n )\n train_parser.set_defaults(func=train)\n args.add_common_params(train_parser)\n args.add_train_params(train_parser)\n\n # elasticdl evaluate\n evaluate_parser = subparsers.add_parser(\n \"evaluate\", help=\"Submit a ElasticDL distributed evaluation job\"\n )\n 
evaluate_parser.set_defaults(func=evaluate)\n args.add_common_params(evaluate_parser)\n args.add_evaluate_params(evaluate_parser)\n\n # elasticdl predict\n predict_parser = subparsers.add_parser(\n \"predict\", help=\"Submit a ElasticDL distributed prediction job\"\n )\n predict_parser.set_defaults(func=predict)\n args.add_common_params(predict_parser)\n args.add_predict_params(predict_parser)\n\n return parser\n\n\ndef main():\n parser = build_argument_parser()\n if len(sys.argv) == 1:\n parser.print_help(sys.stderr)\n sys.exit(1)\n\n args, _ = parser.parse_known_args()\n args.func(args)\n\n\nif __name__ == \"__main__\":\n main()\n", "path": "elasticdl_client/main.py"}]}
| 1,662 | 115 |
gh_patches_debug_7059 | rasdani/github-patches | git_diff | modin-project__modin-6283 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Refactor ci.yml to reduce the amount of copy-pasting
</issue>
<code>
[start of setup.py]
1 from setuptools import setup, find_packages
2 import versioneer
3
4 with open("README.md", "r", encoding="utf-8") as fh:
5 long_description = fh.read()
6
7 dask_deps = ["dask>=2.22.0", "distributed>=2.22.0"]
8 ray_deps = ["ray[default]>=1.13.0", "pyarrow"]
9 unidist_deps = ["unidist[mpi]>=0.2.1"]
10 remote_deps = ["rpyc==4.1.5", "cloudpickle", "boto3"]
11 spreadsheet_deps = ["modin-spreadsheet>=0.1.0"]
12 sql_deps = ["dfsql>=0.4.2", "pyparsing<=2.4.7"]
13 all_deps = dask_deps + ray_deps + unidist_deps + remote_deps + spreadsheet_deps
14
15 # Distribute 'modin-autoimport-pandas.pth' along with binary and source distributions.
16 # This file provides the "import pandas before Ray init" feature if specific
17 # environment variable is set (see https://github.com/modin-project/modin/issues/4564).
18 cmdclass = versioneer.get_cmdclass()
19 extra_files = ["modin-autoimport-pandas.pth"]
20
21
22 class AddPthFileBuild(cmdclass["build_py"]):
23 def _get_data_files(self):
24 return (super()._get_data_files() or []) + [
25 (".", ".", self.build_lib, extra_files)
26 ]
27
28
29 class AddPthFileSDist(cmdclass["sdist"]):
30 def make_distribution(self):
31 self.filelist.extend(extra_files)
32 return super().make_distribution()
33
34
35 cmdclass["build_py"] = AddPthFileBuild
36 cmdclass["sdist"] = AddPthFileSDist
37
38 setup(
39 name="modin",
40 version=versioneer.get_version(),
41 cmdclass=cmdclass,
42 description="Modin: Make your pandas code run faster by changing one line of code.",
43 packages=find_packages(exclude=["scripts", "scripts.*"]),
44 include_package_data=True,
45 license="Apache 2",
46 url="https://github.com/modin-project/modin",
47 long_description=long_description,
48 long_description_content_type="text/markdown",
49 install_requires=[
50 "pandas>=2,<2.1",
51 "packaging",
52 "numpy>=1.18.5",
53 "fsspec",
54 "psutil",
55 ],
56 extras_require={
57 # can be installed by pip install modin[dask]
58 "dask": dask_deps,
59 "ray": ray_deps,
60 "unidist": unidist_deps,
61 "remote": remote_deps,
62 "spreadsheet": spreadsheet_deps,
63 "sql": sql_deps,
64 "all": all_deps,
65 },
66 python_requires=">=3.8",
67 )
68
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -5,7 +5,8 @@
long_description = fh.read()
dask_deps = ["dask>=2.22.0", "distributed>=2.22.0"]
-ray_deps = ["ray[default]>=1.13.0", "pyarrow"]
+# ray==2.5.0 broken: https://github.com/conda-forge/ray-packages-feedstock/issues/100
+ray_deps = ["ray[default]>=1.13.0,!=2.5.0", "pyarrow"]
unidist_deps = ["unidist[mpi]>=0.2.1"]
remote_deps = ["rpyc==4.1.5", "cloudpickle", "boto3"]
spreadsheet_deps = ["modin-spreadsheet>=0.1.0"]
|
{"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -5,7 +5,8 @@\n long_description = fh.read()\n \n dask_deps = [\"dask>=2.22.0\", \"distributed>=2.22.0\"]\n-ray_deps = [\"ray[default]>=1.13.0\", \"pyarrow\"]\n+# ray==2.5.0 broken: https://github.com/conda-forge/ray-packages-feedstock/issues/100\n+ray_deps = [\"ray[default]>=1.13.0,!=2.5.0\", \"pyarrow\"]\n unidist_deps = [\"unidist[mpi]>=0.2.1\"]\n remote_deps = [\"rpyc==4.1.5\", \"cloudpickle\", \"boto3\"]\n spreadsheet_deps = [\"modin-spreadsheet>=0.1.0\"]\n", "issue": "Refactor ci.yml to reduce the amount of copy-pasting\n\n", "before_files": [{"content": "from setuptools import setup, find_packages\nimport versioneer\n\nwith open(\"README.md\", \"r\", encoding=\"utf-8\") as fh:\n long_description = fh.read()\n\ndask_deps = [\"dask>=2.22.0\", \"distributed>=2.22.0\"]\nray_deps = [\"ray[default]>=1.13.0\", \"pyarrow\"]\nunidist_deps = [\"unidist[mpi]>=0.2.1\"]\nremote_deps = [\"rpyc==4.1.5\", \"cloudpickle\", \"boto3\"]\nspreadsheet_deps = [\"modin-spreadsheet>=0.1.0\"]\nsql_deps = [\"dfsql>=0.4.2\", \"pyparsing<=2.4.7\"]\nall_deps = dask_deps + ray_deps + unidist_deps + remote_deps + spreadsheet_deps\n\n# Distribute 'modin-autoimport-pandas.pth' along with binary and source distributions.\n# This file provides the \"import pandas before Ray init\" feature if specific\n# environment variable is set (see https://github.com/modin-project/modin/issues/4564).\ncmdclass = versioneer.get_cmdclass()\nextra_files = [\"modin-autoimport-pandas.pth\"]\n\n\nclass AddPthFileBuild(cmdclass[\"build_py\"]):\n def _get_data_files(self):\n return (super()._get_data_files() or []) + [\n (\".\", \".\", self.build_lib, extra_files)\n ]\n\n\nclass AddPthFileSDist(cmdclass[\"sdist\"]):\n def make_distribution(self):\n self.filelist.extend(extra_files)\n return super().make_distribution()\n\n\ncmdclass[\"build_py\"] = AddPthFileBuild\ncmdclass[\"sdist\"] = AddPthFileSDist\n\nsetup(\n name=\"modin\",\n version=versioneer.get_version(),\n cmdclass=cmdclass,\n description=\"Modin: Make your pandas code run faster by changing one line of code.\",\n packages=find_packages(exclude=[\"scripts\", \"scripts.*\"]),\n include_package_data=True,\n license=\"Apache 2\",\n url=\"https://github.com/modin-project/modin\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n install_requires=[\n \"pandas>=2,<2.1\",\n \"packaging\",\n \"numpy>=1.18.5\",\n \"fsspec\",\n \"psutil\",\n ],\n extras_require={\n # can be installed by pip install modin[dask]\n \"dask\": dask_deps,\n \"ray\": ray_deps,\n \"unidist\": unidist_deps,\n \"remote\": remote_deps,\n \"spreadsheet\": spreadsheet_deps,\n \"sql\": sql_deps,\n \"all\": all_deps,\n },\n python_requires=\">=3.8\",\n)\n", "path": "setup.py"}]}
| 1,278 | 195 |
gh_patches_debug_1143 | rasdani/github-patches | git_diff | akvo__akvo-rsr-3132 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
When logged in landing page should be "myRSR"
</issue>
<code>
[start of akvo/rsr/views/__init__.py]
1 # -*- coding: utf-8 -*-
2
3 """Akvo RSR is covered by the GNU Affero General Public License.
4
5 See more details in the license.txt file located at the root folder of the Akvo RSR module.
6 For additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.
7 """
8
9 from django.core.urlresolvers import reverse
10 from django.http import HttpResponseRedirect
11
12
13 def index(request):
14 """."""
15 return HttpResponseRedirect(reverse('project-directory', args=[]))
16
[end of akvo/rsr/views/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/akvo/rsr/views/__init__.py b/akvo/rsr/views/__init__.py
--- a/akvo/rsr/views/__init__.py
+++ b/akvo/rsr/views/__init__.py
@@ -11,5 +11,7 @@
def index(request):
- """."""
- return HttpResponseRedirect(reverse('project-directory', args=[]))
+ """Redirect user to project directory or My RSR."""
+
+ redirect_url = 'project-directory' if request.user.is_anonymous() else 'my_rsr'
+ return HttpResponseRedirect(reverse(redirect_url, args=[]))
|
{"golden_diff": "diff --git a/akvo/rsr/views/__init__.py b/akvo/rsr/views/__init__.py\n--- a/akvo/rsr/views/__init__.py\n+++ b/akvo/rsr/views/__init__.py\n@@ -11,5 +11,7 @@\n \n \n def index(request):\n- \"\"\".\"\"\"\n- return HttpResponseRedirect(reverse('project-directory', args=[]))\n+ \"\"\"Redirect user to project directory or My RSR.\"\"\"\n+\n+ redirect_url = 'project-directory' if request.user.is_anonymous() else 'my_rsr'\n+ return HttpResponseRedirect(reverse(redirect_url, args=[]))\n", "issue": "When logged in landing page should be \"myRSR\"\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\n\"\"\"Akvo RSR is covered by the GNU Affero General Public License.\n\nSee more details in the license.txt file located at the root folder of the Akvo RSR module.\nFor additional details on the GNU license please see < http://www.gnu.org/licenses/agpl.html >.\n\"\"\"\n\nfrom django.core.urlresolvers import reverse\nfrom django.http import HttpResponseRedirect\n\n\ndef index(request):\n \"\"\".\"\"\"\n return HttpResponseRedirect(reverse('project-directory', args=[]))\n", "path": "akvo/rsr/views/__init__.py"}]}
| 682 | 133 |
gh_patches_debug_23977 | rasdani/github-patches | git_diff | sktime__sktime-6005 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] `SARIMAX` fails when `X` is passed in `predict` but not used in `fit`
## Minimal Reproducible Example
```pycon
>>>
>>> from sktime.datasets import load_longley
>>> from sktime.forecasting.sarimax import SARIMAX
>>> from sktime.split import temporal_train_test_split
>>>
>>> y, X = load_longley()
>>>
>>> y_train, _, _, X_test = temporal_train_test_split(y, X)
>>>
>>> forecaster = SARIMAX()
>>>
>>> forecaster.fit(y_train)
SARIMAX()
>>>
>>> # works
>>> forecaster.predict(fh=[1, 2, 3, 4])
1959 66061.176439
1960 65682.034815
1961 65363.883253
1962 65096.910677
Freq: A-DEC, Name: TOTEMP, dtype: float64
>>>
>>> # fails
>>> forecaster.predict(fh=[1, 2, 3, 4], X=X_test)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/anirban/sktime-fork/sktime/forecasting/base/_base.py", line 412, in predict
y_pred = self._predict(fh=fh, X=X_inner)
File "/home/anirban/sktime-fork/sktime/forecasting/base/adapters/_statsmodels.py", line 108, in _predict
ind_drop = self._X.index
AttributeError: 'NoneType' object has no attribute 'index'
>>>
```
## Expectation
I was expecting no failures, and identical behaviour in both cases. I do get that behaviour if I try with `sktime.forecasting.arima.ARIMA`.
## Version
Operating System: Ubuntu 22.04.3 LTS (WSL)
Python: 3.10.12
Sktime: e51ec2472a
</issue>
<code>
[start of sktime/forecasting/base/adapters/_statsmodels.py]
1 # !/usr/bin/env python3 -u
2 # copyright: sktime developers, BSD-3-Clause License (see LICENSE file)
3 """Implements adapter for statsmodels forecasters to be used in sktime framework."""
4
5 __author__ = ["mloning", "ciaran-g"]
6 __all__ = ["_StatsModelsAdapter"]
7
8 import inspect
9
10 import numpy as np
11 import pandas as pd
12
13 from sktime.forecasting.base import BaseForecaster
14 from sktime.utils.warnings import warn
15
16
17 class _StatsModelsAdapter(BaseForecaster):
18 """Base class for interfacing statsmodels forecasting algorithms."""
19
20 _fitted_param_names = ()
21 _tags = {
22 # packaging info
23 # --------------
24 "authors": ["mloning", "ciaran-g"],
25 "maintainers": ["ciaran-g"],
26 "python_dependencies": "statsmodels",
27 # estimator type
28 # --------------
29 "ignores-exogeneous-X": True,
30 "requires-fh-in-fit": False,
31 "handles-missing-data": False,
32 }
33
34 def __init__(self, random_state=None):
35 self._forecaster = None
36 self.random_state = random_state
37 self._fitted_forecaster = None
38 super().__init__()
39
40 def _fit(self, y, X, fh):
41 """Fit to training data.
42
43 Parameters
44 ----------
45 y : pd.Series
46 Target time series to which to fit the forecaster.
47 fh : int, list or np.array, optional (default=None)
48 The forecasters horizon with the steps ahead to to predict.
49 X : pd.DataFrame, optional (default=None)
50 Exogenous variables are ignored
51
52 Returns
53 -------
54 self : returns an instance of self.
55 """
56 # statsmodels does not support the pd.Int64Index as required,
57 # so we coerce them here to pd.RangeIndex
58 if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):
59 y, X = _coerce_int_to_range_index(y, X)
60 self._fit_forecaster(y, X)
61 return self
62
63 def _fit_forecaster(self, y_train, X_train=None):
64 """Log used internally in fit."""
65 raise NotImplementedError("abstract method")
66
67 def _update(self, y, X=None, update_params=True):
68 """Update used internally in update."""
69 if update_params or self.is_composite():
70 super()._update(y, X, update_params=update_params)
71 else:
72 if not hasattr(self._fitted_forecaster, "append"):
73 warn(
74 f"NotImplementedWarning: {self.__class__.__name__} "
75 f"can not accept new data when update_params=False. "
76 f"Call with update_params=True to refit with new data.",
77 obj=self,
78 )
79 else:
80 # only append unseen data to fitted forecaster
81 index_diff = y.index.difference(
82 self._fitted_forecaster.fittedvalues.index
83 )
84 if index_diff.isin(y.index).all():
85 y = y.loc[index_diff]
86 self._fitted_forecaster = self._fitted_forecaster.append(y)
87
88 def _predict(self, fh, X):
89 """Make forecasts.
90
91 Parameters
92 ----------
93 fh : ForecastingHorizon
94 The forecasters horizon with the steps ahead to to predict.
95 Default is one-step ahead forecast,
96 i.e. np.array([1])
97 X : pd.DataFrame, optional (default=None)
98 Exogenous variables are ignored.
99
100 Returns
101 -------
102 y_pred : pd.Series
103 Returns series of predicted values.
104 """
105 # statsmodels requires zero-based indexing starting at the
106 # beginning of the training series when passing integers
107 start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
108 fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
109
110 # bug fix for evaluate function as test_plus_train indices are passed
111 # statsmodels exog must contain test indices only.
112 # For discussion see https://github.com/sktime/sktime/issues/3830
113 if X is not None:
114 ind_drop = self._X.index
115 X = X.loc[~X.index.isin(ind_drop)]
116 # Entire range of the forecast horizon is required
117 X = X.iloc[: (fh_int[-1] + 1)] # include end point
118
119 if "exog" in inspect.signature(self._forecaster.__init__).parameters.keys():
120 y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)
121 else:
122 y_pred = self._fitted_forecaster.predict(start=start, end=end)
123
124 # statsmodels forecasts all periods from start to end of forecasting
125 # horizon, but only return given time points in forecasting horizon
126 # if fh[0] > 1 steps ahead of cutoff then make relative to `start`
127 fh_int = fh_int - fh_int[0]
128 y_pred = y_pred.iloc[fh_int]
129 # ensure that name is not added nor removed
130 # otherwise this may upset conversion to pd.DataFrame
131 y_pred.name = self._y.name
132 return y_pred
133
134 @staticmethod
135 def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:
136 """Construct confidence interval at specified `alpha` for each timestep.
137
138 Parameters
139 ----------
140 prediction_results : PredictionResults
141 results class, as returned by ``self._fitted_forecaster.get_prediction``
142 alpha : float
143 one minus nominal coverage
144
145 Returns
146 -------
147 pd.DataFrame
148 confidence intervals at each timestep
149
150 The dataframe must have at least two columns ``lower`` and ``upper``, and
151 the row indices must be integers relative to ``self.cutoff``. Order of
152 columns do not matter, and row indices must be a superset of relative
153 integer horizon of ``fh``.
154 """
155 del prediction_results, alpha # tools like ``vulture`` may complain as unused
156
157 raise NotImplementedError("abstract method")
158
159 def _predict_interval(self, fh, X, coverage):
160 """Compute/return prediction interval forecasts.
161
162 private _predict_interval containing the core logic,
163 called from predict_interval and default _predict_quantiles
164
165 Parameters
166 ----------
167 fh : guaranteed to be ForecastingHorizon
168 The forecasting horizon with the steps ahead to to predict.
169 X : optional (default=None)
170 guaranteed to be of a type in self.get_tag("X_inner_mtype")
171 Exogeneous time series to predict from.
172 coverage : float or list of float, optional (default=0.95)
173 nominal coverage(s) of predictive interval(s)
174
175 Returns
176 -------
177 pred_int : pd.DataFrame
178 Column has multi-index: first level is variable name from y in fit,
179 second level coverage fractions for which intervals were computed.
180 in the same order as in input `coverage`.
181 Third level is string "lower" or "upper", for lower/upper interval end.
182 Row index is fh, with additional (upper) levels equal to instance levels,
183 from y seen in fit, if y_inner_mtype is Panel or Hierarchical.
184 Entries are forecasts of lower/upper interval end,
185 for var in col index, at nominal coverage in second col index,
186 lower/upper depending on third col index, for the row index.
187 Upper/lower interval end forecasts are equivalent to
188 quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.
189 """
190 implements_interval_adapter = self._has_implementation_of("_extract_conf_int")
191 implements_quantiles = self._has_implementation_of("_predict_quantiles")
192
193 if not implements_interval_adapter and implements_quantiles:
194 return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)
195
196 start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]
197 fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)
198 # if fh > 1 steps ahead of cutoff
199 fh_int = fh_int - fh_int[0]
200
201 get_prediction_arguments = {"start": start, "end": end}
202
203 if hasattr(self, "random_state"):
204 get_prediction_arguments["random_state"] = self.random_state
205
206 if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(
207 "exog"
208 ):
209 get_prediction_arguments["exog"] = X
210
211 prediction_results = self._fitted_forecaster.get_prediction(
212 **get_prediction_arguments
213 )
214
215 var_names = self._get_varnames()
216 var_name = var_names[0]
217 columns = pd.MultiIndex.from_product([var_names, coverage, ["lower", "upper"]])
218 preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))
219 preds_index = preds_index.iloc[fh_int].index
220 pred_int = pd.DataFrame(index=preds_index, columns=columns)
221
222 for c in coverage:
223 pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))
224
225 pred_int[(var_name, c, "lower")] = pred_statsmodels.iloc[fh_int]["lower"]
226 pred_int[(var_name, c, "upper")] = pred_statsmodels.iloc[fh_int]["upper"]
227
228 return pred_int
229
230 def _get_fitted_params(self):
231 """Get fitted parameters.
232
233 Returns
234 -------
235 fitted_params : dict
236 """
237 fitted_params = {}
238 for name in self._get_fitted_param_names():
239 if name in ["aic", "aicc", "bic", "hqic"]:
240 fitted_params[name] = getattr(self._fitted_forecaster, name, None)
241 else:
242 fitted_params[name] = self._fitted_forecaster.params.get(name)
243 return fitted_params
244
245 def _get_fitted_param_names(self):
246 """Get names of fitted parameters."""
247 return self._fitted_param_names
248
249
250 def _coerce_int_to_range_index(y, X=None):
251 new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)
252 try:
253 np.testing.assert_array_equal(y.index, new_index)
254 except AssertionError:
255 raise ValueError(
256 "Coercion of integer pd.Index to pd.RangeIndex "
257 "failed. Please provide `y_train` with a "
258 "pd.RangeIndex."
259 )
260 y.index = new_index
261 if X is not None:
262 X.index = new_index
263 return y, X
264
[end of sktime/forecasting/base/adapters/_statsmodels.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py
--- a/sktime/forecasting/base/adapters/_statsmodels.py
+++ b/sktime/forecasting/base/adapters/_statsmodels.py
@@ -110,13 +110,15 @@
# bug fix for evaluate function as test_plus_train indices are passed
# statsmodels exog must contain test indices only.
# For discussion see https://github.com/sktime/sktime/issues/3830
- if X is not None:
+ if X is not None and self._X is not None:
ind_drop = self._X.index
X = X.loc[~X.index.isin(ind_drop)]
# Entire range of the forecast horizon is required
X = X.iloc[: (fh_int[-1] + 1)] # include end point
if "exog" in inspect.signature(self._forecaster.__init__).parameters.keys():
+ if self._X is None:
+ X = None # change X passed in predict to None if X wasn't passed to fit
y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)
else:
y_pred = self._fitted_forecaster.predict(start=start, end=end)
|
{"golden_diff": "diff --git a/sktime/forecasting/base/adapters/_statsmodels.py b/sktime/forecasting/base/adapters/_statsmodels.py\n--- a/sktime/forecasting/base/adapters/_statsmodels.py\n+++ b/sktime/forecasting/base/adapters/_statsmodels.py\n@@ -110,13 +110,15 @@\n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n # For discussion see https://github.com/sktime/sktime/issues/3830\n- if X is not None:\n+ if X is not None and self._X is not None:\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n X = X.iloc[: (fh_int[-1] + 1)] # include end point\n \n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n+ if self._X is None:\n+ X = None # change X passed in predict to None if X wasn't passed to fit\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n else:\n y_pred = self._fitted_forecaster.predict(start=start, end=end)\n", "issue": "[BUG] `SARIMAX` fails when `X` is passed in `predict` but not used in `fit`\n## Minimal Reproducible Example\r\n\r\n```pycon\r\n>>> \r\n>>> from sktime.datasets import load_longley\r\n>>> from sktime.forecasting.sarimax import SARIMAX\r\n>>> from sktime.split import temporal_train_test_split\r\n>>> \r\n>>> y, X = load_longley()\r\n>>> \r\n>>> y_train, _, _, X_test = temporal_train_test_split(y, X)\r\n>>> \r\n>>> forecaster = SARIMAX()\r\n>>> \r\n>>> forecaster.fit(y_train)\r\nSARIMAX()\r\n>>> \r\n>>> # works\r\n>>> forecaster.predict(fh=[1, 2, 3, 4])\r\n1959 66061.176439\r\n1960 65682.034815\r\n1961 65363.883253\r\n1962 65096.910677\r\nFreq: A-DEC, Name: TOTEMP, dtype: float64\r\n>>> \r\n>>> # fails\r\n>>> forecaster.predict(fh=[1, 2, 3, 4], X=X_test)\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/anirban/sktime-fork/sktime/forecasting/base/_base.py\", line 412, in predict\r\n y_pred = self._predict(fh=fh, X=X_inner)\r\n File \"/home/anirban/sktime-fork/sktime/forecasting/base/adapters/_statsmodels.py\", line 108, in _predict\r\n ind_drop = self._X.index\r\nAttributeError: 'NoneType' object has no attribute 'index'\r\n>>> \r\n```\r\n\r\n## Expectation\r\n\r\nI was expecting no failures, and identical behaviour in both cases. 
I do get that behaviour if I try with `sktime.forecasting.arima.ARIMA`.\r\n\r\n## Version\r\n\r\nOperating System: Ubuntu 22.04.3 LTS (WSL)\r\nPython: 3.10.12\r\nSktime: e51ec2472a\r\n\n", "before_files": [{"content": "# !/usr/bin/env python3 -u\n# copyright: sktime developers, BSD-3-Clause License (see LICENSE file)\n\"\"\"Implements adapter for statsmodels forecasters to be used in sktime framework.\"\"\"\n\n__author__ = [\"mloning\", \"ciaran-g\"]\n__all__ = [\"_StatsModelsAdapter\"]\n\nimport inspect\n\nimport numpy as np\nimport pandas as pd\n\nfrom sktime.forecasting.base import BaseForecaster\nfrom sktime.utils.warnings import warn\n\n\nclass _StatsModelsAdapter(BaseForecaster):\n \"\"\"Base class for interfacing statsmodels forecasting algorithms.\"\"\"\n\n _fitted_param_names = ()\n _tags = {\n # packaging info\n # --------------\n \"authors\": [\"mloning\", \"ciaran-g\"],\n \"maintainers\": [\"ciaran-g\"],\n \"python_dependencies\": \"statsmodels\",\n # estimator type\n # --------------\n \"ignores-exogeneous-X\": True,\n \"requires-fh-in-fit\": False,\n \"handles-missing-data\": False,\n }\n\n def __init__(self, random_state=None):\n self._forecaster = None\n self.random_state = random_state\n self._fitted_forecaster = None\n super().__init__()\n\n def _fit(self, y, X, fh):\n \"\"\"Fit to training data.\n\n Parameters\n ----------\n y : pd.Series\n Target time series to which to fit the forecaster.\n fh : int, list or np.array, optional (default=None)\n The forecasters horizon with the steps ahead to to predict.\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored\n\n Returns\n -------\n self : returns an instance of self.\n \"\"\"\n # statsmodels does not support the pd.Int64Index as required,\n # so we coerce them here to pd.RangeIndex\n if isinstance(y, pd.Series) and pd.api.types.is_integer_dtype(y.index):\n y, X = _coerce_int_to_range_index(y, X)\n self._fit_forecaster(y, X)\n return self\n\n def _fit_forecaster(self, y_train, X_train=None):\n \"\"\"Log used internally in fit.\"\"\"\n raise NotImplementedError(\"abstract method\")\n\n def _update(self, y, X=None, update_params=True):\n \"\"\"Update used internally in update.\"\"\"\n if update_params or self.is_composite():\n super()._update(y, X, update_params=update_params)\n else:\n if not hasattr(self._fitted_forecaster, \"append\"):\n warn(\n f\"NotImplementedWarning: {self.__class__.__name__} \"\n f\"can not accept new data when update_params=False. \"\n f\"Call with update_params=True to refit with new data.\",\n obj=self,\n )\n else:\n # only append unseen data to fitted forecaster\n index_diff = y.index.difference(\n self._fitted_forecaster.fittedvalues.index\n )\n if index_diff.isin(y.index).all():\n y = y.loc[index_diff]\n self._fitted_forecaster = self._fitted_forecaster.append(y)\n\n def _predict(self, fh, X):\n \"\"\"Make forecasts.\n\n Parameters\n ----------\n fh : ForecastingHorizon\n The forecasters horizon with the steps ahead to to predict.\n Default is one-step ahead forecast,\n i.e. 
np.array([1])\n X : pd.DataFrame, optional (default=None)\n Exogenous variables are ignored.\n\n Returns\n -------\n y_pred : pd.Series\n Returns series of predicted values.\n \"\"\"\n # statsmodels requires zero-based indexing starting at the\n # beginning of the training series when passing integers\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n\n # bug fix for evaluate function as test_plus_train indices are passed\n # statsmodels exog must contain test indices only.\n # For discussion see https://github.com/sktime/sktime/issues/3830\n if X is not None:\n ind_drop = self._X.index\n X = X.loc[~X.index.isin(ind_drop)]\n # Entire range of the forecast horizon is required\n X = X.iloc[: (fh_int[-1] + 1)] # include end point\n\n if \"exog\" in inspect.signature(self._forecaster.__init__).parameters.keys():\n y_pred = self._fitted_forecaster.predict(start=start, end=end, exog=X)\n else:\n y_pred = self._fitted_forecaster.predict(start=start, end=end)\n\n # statsmodels forecasts all periods from start to end of forecasting\n # horizon, but only return given time points in forecasting horizon\n # if fh[0] > 1 steps ahead of cutoff then make relative to `start`\n fh_int = fh_int - fh_int[0]\n y_pred = y_pred.iloc[fh_int]\n # ensure that name is not added nor removed\n # otherwise this may upset conversion to pd.DataFrame\n y_pred.name = self._y.name\n return y_pred\n\n @staticmethod\n def _extract_conf_int(prediction_results, alpha) -> pd.DataFrame:\n \"\"\"Construct confidence interval at specified `alpha` for each timestep.\n\n Parameters\n ----------\n prediction_results : PredictionResults\n results class, as returned by ``self._fitted_forecaster.get_prediction``\n alpha : float\n one minus nominal coverage\n\n Returns\n -------\n pd.DataFrame\n confidence intervals at each timestep\n\n The dataframe must have at least two columns ``lower`` and ``upper``, and\n the row indices must be integers relative to ``self.cutoff``. 
Order of\n columns do not matter, and row indices must be a superset of relative\n integer horizon of ``fh``.\n \"\"\"\n del prediction_results, alpha # tools like ``vulture`` may complain as unused\n\n raise NotImplementedError(\"abstract method\")\n\n def _predict_interval(self, fh, X, coverage):\n \"\"\"Compute/return prediction interval forecasts.\n\n private _predict_interval containing the core logic,\n called from predict_interval and default _predict_quantiles\n\n Parameters\n ----------\n fh : guaranteed to be ForecastingHorizon\n The forecasting horizon with the steps ahead to to predict.\n X : optional (default=None)\n guaranteed to be of a type in self.get_tag(\"X_inner_mtype\")\n Exogeneous time series to predict from.\n coverage : float or list of float, optional (default=0.95)\n nominal coverage(s) of predictive interval(s)\n\n Returns\n -------\n pred_int : pd.DataFrame\n Column has multi-index: first level is variable name from y in fit,\n second level coverage fractions for which intervals were computed.\n in the same order as in input `coverage`.\n Third level is string \"lower\" or \"upper\", for lower/upper interval end.\n Row index is fh, with additional (upper) levels equal to instance levels,\n from y seen in fit, if y_inner_mtype is Panel or Hierarchical.\n Entries are forecasts of lower/upper interval end,\n for var in col index, at nominal coverage in second col index,\n lower/upper depending on third col index, for the row index.\n Upper/lower interval end forecasts are equivalent to\n quantile forecasts at alpha = 0.5 - c/2, 0.5 + c/2 for c in coverage.\n \"\"\"\n implements_interval_adapter = self._has_implementation_of(\"_extract_conf_int\")\n implements_quantiles = self._has_implementation_of(\"_predict_quantiles\")\n\n if not implements_interval_adapter and implements_quantiles:\n return BaseForecaster._predict_interval(self, fh, X=X, coverage=coverage)\n\n start, end = fh.to_absolute_int(self._y.index[0], self.cutoff)[[0, -1]]\n fh_int = fh.to_absolute_int(self._y.index[0], self.cutoff) - len(self._y)\n # if fh > 1 steps ahead of cutoff\n fh_int = fh_int - fh_int[0]\n\n get_prediction_arguments = {\"start\": start, \"end\": end}\n\n if hasattr(self, \"random_state\"):\n get_prediction_arguments[\"random_state\"] = self.random_state\n\n if inspect.signature(self._fitted_forecaster.get_prediction).parameters.get(\n \"exog\"\n ):\n get_prediction_arguments[\"exog\"] = X\n\n prediction_results = self._fitted_forecaster.get_prediction(\n **get_prediction_arguments\n )\n\n var_names = self._get_varnames()\n var_name = var_names[0]\n columns = pd.MultiIndex.from_product([var_names, coverage, [\"lower\", \"upper\"]])\n preds_index = self._extract_conf_int(prediction_results, (1 - coverage[0]))\n preds_index = preds_index.iloc[fh_int].index\n pred_int = pd.DataFrame(index=preds_index, columns=columns)\n\n for c in coverage:\n pred_statsmodels = self._extract_conf_int(prediction_results, (1 - c))\n\n pred_int[(var_name, c, \"lower\")] = pred_statsmodels.iloc[fh_int][\"lower\"]\n pred_int[(var_name, c, \"upper\")] = pred_statsmodels.iloc[fh_int][\"upper\"]\n\n return pred_int\n\n def _get_fitted_params(self):\n \"\"\"Get fitted parameters.\n\n Returns\n -------\n fitted_params : dict\n \"\"\"\n fitted_params = {}\n for name in self._get_fitted_param_names():\n if name in [\"aic\", \"aicc\", \"bic\", \"hqic\"]:\n fitted_params[name] = getattr(self._fitted_forecaster, name, None)\n else:\n fitted_params[name] = self._fitted_forecaster.params.get(name)\n return 
fitted_params\n\n def _get_fitted_param_names(self):\n \"\"\"Get names of fitted parameters.\"\"\"\n return self._fitted_param_names\n\n\ndef _coerce_int_to_range_index(y, X=None):\n new_index = pd.RangeIndex(y.index[0], y.index[-1] + 1)\n try:\n np.testing.assert_array_equal(y.index, new_index)\n except AssertionError:\n raise ValueError(\n \"Coercion of integer pd.Index to pd.RangeIndex \"\n \"failed. Please provide `y_train` with a \"\n \"pd.RangeIndex.\"\n )\n y.index = new_index\n if X is not None:\n X.index = new_index\n return y, X\n", "path": "sktime/forecasting/base/adapters/_statsmodels.py"}]}
| 4,014 | 299 |
gh_patches_debug_5387
|
rasdani/github-patches
|
git_diff
|
pre-commit__pre-commit-1262
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Slightly broken links in output
`pre-commit autoupdate` outputs repository links:
```
Updating https://github.com/psf/black...already up to date.
Updating https://github.com/prettier/prettier...already up to date.
```
In iTerm2 on a Mac using Fish Shell—and probably lots of other setups as well—you can click the repository links (by holding down the _Command_ key):
<img width="668" alt="Screenshot 2020-01-01 at 15 21 32" src="https://user-images.githubusercontent.com/8469540/71642362-6fcd2800-2caa-11ea-9e00-d463dcdf9682.png">
But the link is slightly broken because there is no space after it—we're getting https://github.com/asottile/seed-isort-config...already instead of https://github.com/asottile/seed-isort-config.
This is a tiny issue, but it would be nice if we could fix it. I'll try to make a pull request to show what I mean.
</issue>
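A minimal sketch of the effect described above, assuming only that the terminal extends a detected URL up to the next whitespace character; the repository URL is the one quoted in the issue, and the trailing text stands in for the `already up to date.` message written on the same line:

```python
# Sketch: how the two strings differ on a single terminal line.  Without a
# space, the dots and the word "already" sit flush against the URL, so the
# terminal's link detection swallows them.
repo = 'https://github.com/asottile/seed-isort-config'

broken = 'Updating {}...'.format(repo) + 'already up to date.'
fixed = 'Updating {} ... '.format(repo) + 'already up to date.'

print(broken)  # Updating https://github.com/asottile/seed-isort-config...already up to date.
print(fixed)   # Updating https://github.com/asottile/seed-isort-config ... already up to date.
```

The accepted change later in this record simply writes `'Updating {} ... '` so that whitespace terminates the link.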
<code>
[start of pre_commit/commands/autoupdate.py]
1 from __future__ import print_function
2 from __future__ import unicode_literals
3
4 import collections
5 import os.path
6 import re
7
8 import six
9 from aspy.yaml import ordered_dump
10 from aspy.yaml import ordered_load
11
12 import pre_commit.constants as C
13 from pre_commit import git
14 from pre_commit import output
15 from pre_commit.clientlib import InvalidManifestError
16 from pre_commit.clientlib import load_config
17 from pre_commit.clientlib import load_manifest
18 from pre_commit.clientlib import LOCAL
19 from pre_commit.clientlib import META
20 from pre_commit.commands.migrate_config import migrate_config
21 from pre_commit.util import CalledProcessError
22 from pre_commit.util import cmd_output
23 from pre_commit.util import cmd_output_b
24 from pre_commit.util import tmpdir
25
26
27 class RevInfo(collections.namedtuple('RevInfo', ('repo', 'rev', 'frozen'))):
28 __slots__ = ()
29
30 @classmethod
31 def from_config(cls, config):
32 return cls(config['repo'], config['rev'], None)
33
34 def update(self, tags_only, freeze):
35 if tags_only:
36 tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--abbrev=0')
37 else:
38 tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--exact')
39
40 with tmpdir() as tmp:
41 git.init_repo(tmp, self.repo)
42 cmd_output_b('git', 'fetch', 'origin', 'HEAD', '--tags', cwd=tmp)
43
44 try:
45 rev = cmd_output(*tag_cmd, cwd=tmp)[1].strip()
46 except CalledProcessError:
47 cmd = ('git', 'rev-parse', 'FETCH_HEAD')
48 rev = cmd_output(*cmd, cwd=tmp)[1].strip()
49
50 frozen = None
51 if freeze:
52 exact = cmd_output('git', 'rev-parse', rev, cwd=tmp)[1].strip()
53 if exact != rev:
54 rev, frozen = exact, rev
55 return self._replace(rev=rev, frozen=frozen)
56
57
58 class RepositoryCannotBeUpdatedError(RuntimeError):
59 pass
60
61
62 def _check_hooks_still_exist_at_rev(repo_config, info, store):
63 try:
64 path = store.clone(repo_config['repo'], info.rev)
65 manifest = load_manifest(os.path.join(path, C.MANIFEST_FILE))
66 except InvalidManifestError as e:
67 raise RepositoryCannotBeUpdatedError(six.text_type(e))
68
69 # See if any of our hooks were deleted with the new commits
70 hooks = {hook['id'] for hook in repo_config['hooks']}
71 hooks_missing = hooks - {hook['id'] for hook in manifest}
72 if hooks_missing:
73 raise RepositoryCannotBeUpdatedError(
74 'Cannot update because the tip of master is missing these hooks:\n'
75 '{}'.format(', '.join(sorted(hooks_missing))),
76 )
77
78
79 REV_LINE_RE = re.compile(r'^(\s+)rev:(\s*)([^\s#]+)(.*)(\r?\n)$', re.DOTALL)
80 REV_LINE_FMT = '{}rev:{}{}{}{}'
81
82
83 def _original_lines(path, rev_infos, retry=False):
84 """detect `rev:` lines or reformat the file"""
85 with open(path) as f:
86 original = f.read()
87
88 lines = original.splitlines(True)
89 idxs = [i for i, line in enumerate(lines) if REV_LINE_RE.match(line)]
90 if len(idxs) == len(rev_infos):
91 return lines, idxs
92 elif retry:
93 raise AssertionError('could not find rev lines')
94 else:
95 with open(path, 'w') as f:
96 f.write(ordered_dump(ordered_load(original), **C.YAML_DUMP_KWARGS))
97 return _original_lines(path, rev_infos, retry=True)
98
99
100 def _write_new_config(path, rev_infos):
101 lines, idxs = _original_lines(path, rev_infos)
102
103 for idx, rev_info in zip(idxs, rev_infos):
104 if rev_info is None:
105 continue
106 match = REV_LINE_RE.match(lines[idx])
107 assert match is not None
108 new_rev_s = ordered_dump({'rev': rev_info.rev}, **C.YAML_DUMP_KWARGS)
109 new_rev = new_rev_s.split(':', 1)[1].strip()
110 if rev_info.frozen is not None:
111 comment = ' # {}'.format(rev_info.frozen)
112 else:
113 comment = match.group(4)
114 lines[idx] = REV_LINE_FMT.format(
115 match.group(1), match.group(2), new_rev, comment, match.group(5),
116 )
117
118 with open(path, 'w') as f:
119 f.write(''.join(lines))
120
121
122 def autoupdate(config_file, store, tags_only, freeze, repos=()):
123 """Auto-update the pre-commit config to the latest versions of repos."""
124 migrate_config(config_file, quiet=True)
125 retv = 0
126 rev_infos = []
127 changed = False
128
129 config = load_config(config_file)
130 for repo_config in config['repos']:
131 if repo_config['repo'] in {LOCAL, META}:
132 continue
133
134 info = RevInfo.from_config(repo_config)
135 if repos and info.repo not in repos:
136 rev_infos.append(None)
137 continue
138
139 output.write('Updating {}...'.format(info.repo))
140 new_info = info.update(tags_only=tags_only, freeze=freeze)
141 try:
142 _check_hooks_still_exist_at_rev(repo_config, new_info, store)
143 except RepositoryCannotBeUpdatedError as error:
144 output.write_line(error.args[0])
145 rev_infos.append(None)
146 retv = 1
147 continue
148
149 if new_info.rev != info.rev:
150 changed = True
151 if new_info.frozen:
152 updated_to = '{} (frozen)'.format(new_info.frozen)
153 else:
154 updated_to = new_info.rev
155 msg = 'updating {} -> {}.'.format(info.rev, updated_to)
156 output.write_line(msg)
157 rev_infos.append(new_info)
158 else:
159 output.write_line('already up to date.')
160 rev_infos.append(None)
161
162 if changed:
163 _write_new_config(config_file, rev_infos)
164
165 return retv
166
[end of pre_commit/commands/autoupdate.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py
--- a/pre_commit/commands/autoupdate.py
+++ b/pre_commit/commands/autoupdate.py
@@ -136,7 +136,7 @@
rev_infos.append(None)
continue
- output.write('Updating {}...'.format(info.repo))
+ output.write('Updating {} ... '.format(info.repo))
new_info = info.update(tags_only=tags_only, freeze=freeze)
try:
_check_hooks_still_exist_at_rev(repo_config, new_info, store)
|
{"golden_diff": "diff --git a/pre_commit/commands/autoupdate.py b/pre_commit/commands/autoupdate.py\n--- a/pre_commit/commands/autoupdate.py\n+++ b/pre_commit/commands/autoupdate.py\n@@ -136,7 +136,7 @@\n rev_infos.append(None)\n continue\n \n- output.write('Updating {}...'.format(info.repo))\n+ output.write('Updating {} ... '.format(info.repo))\n new_info = info.update(tags_only=tags_only, freeze=freeze)\n try:\n _check_hooks_still_exist_at_rev(repo_config, new_info, store)\n", "issue": "Slightly broken links in output\n`pre-commit autoupdate` outputs repository links:\r\n\r\n```\r\nUpdating https://github.com/psf/black...already up to date.\r\nUpdating https://github.com/prettier/prettier...already up to date.\r\n```\r\n\r\nIn iTerm2 on a Mac using Fish Shell\u2014and probably lots of other setups as well\u2014you can click the repository links (by holding down the _Command_ key):\r\n\r\n<img width=\"668\" alt=\"Screenshot 2020-01-01 at 15 21 32\" src=\"https://user-images.githubusercontent.com/8469540/71642362-6fcd2800-2caa-11ea-9e00-d463dcdf9682.png\">\r\n\r\nBut the link is slightly broken because there is no space after it\u2014we're getting https://github.com/asottile/seed-isort-config...already instead of https://github.com/asottile/seed-isort-config.\r\n\r\nThis is a tiny issue, but it would be nice if we could fix it. I'll try to make a pull request to show what I mean.\n", "before_files": [{"content": "from __future__ import print_function\nfrom __future__ import unicode_literals\n\nimport collections\nimport os.path\nimport re\n\nimport six\nfrom aspy.yaml import ordered_dump\nfrom aspy.yaml import ordered_load\n\nimport pre_commit.constants as C\nfrom pre_commit import git\nfrom pre_commit import output\nfrom pre_commit.clientlib import InvalidManifestError\nfrom pre_commit.clientlib import load_config\nfrom pre_commit.clientlib import load_manifest\nfrom pre_commit.clientlib import LOCAL\nfrom pre_commit.clientlib import META\nfrom pre_commit.commands.migrate_config import migrate_config\nfrom pre_commit.util import CalledProcessError\nfrom pre_commit.util import cmd_output\nfrom pre_commit.util import cmd_output_b\nfrom pre_commit.util import tmpdir\n\n\nclass RevInfo(collections.namedtuple('RevInfo', ('repo', 'rev', 'frozen'))):\n __slots__ = ()\n\n @classmethod\n def from_config(cls, config):\n return cls(config['repo'], config['rev'], None)\n\n def update(self, tags_only, freeze):\n if tags_only:\n tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--abbrev=0')\n else:\n tag_cmd = ('git', 'describe', 'FETCH_HEAD', '--tags', '--exact')\n\n with tmpdir() as tmp:\n git.init_repo(tmp, self.repo)\n cmd_output_b('git', 'fetch', 'origin', 'HEAD', '--tags', cwd=tmp)\n\n try:\n rev = cmd_output(*tag_cmd, cwd=tmp)[1].strip()\n except CalledProcessError:\n cmd = ('git', 'rev-parse', 'FETCH_HEAD')\n rev = cmd_output(*cmd, cwd=tmp)[1].strip()\n\n frozen = None\n if freeze:\n exact = cmd_output('git', 'rev-parse', rev, cwd=tmp)[1].strip()\n if exact != rev:\n rev, frozen = exact, rev\n return self._replace(rev=rev, frozen=frozen)\n\n\nclass RepositoryCannotBeUpdatedError(RuntimeError):\n pass\n\n\ndef _check_hooks_still_exist_at_rev(repo_config, info, store):\n try:\n path = store.clone(repo_config['repo'], info.rev)\n manifest = load_manifest(os.path.join(path, C.MANIFEST_FILE))\n except InvalidManifestError as e:\n raise RepositoryCannotBeUpdatedError(six.text_type(e))\n\n # See if any of our hooks were deleted with the new commits\n hooks = {hook['id'] for hook in 
repo_config['hooks']}\n hooks_missing = hooks - {hook['id'] for hook in manifest}\n if hooks_missing:\n raise RepositoryCannotBeUpdatedError(\n 'Cannot update because the tip of master is missing these hooks:\\n'\n '{}'.format(', '.join(sorted(hooks_missing))),\n )\n\n\nREV_LINE_RE = re.compile(r'^(\\s+)rev:(\\s*)([^\\s#]+)(.*)(\\r?\\n)$', re.DOTALL)\nREV_LINE_FMT = '{}rev:{}{}{}{}'\n\n\ndef _original_lines(path, rev_infos, retry=False):\n \"\"\"detect `rev:` lines or reformat the file\"\"\"\n with open(path) as f:\n original = f.read()\n\n lines = original.splitlines(True)\n idxs = [i for i, line in enumerate(lines) if REV_LINE_RE.match(line)]\n if len(idxs) == len(rev_infos):\n return lines, idxs\n elif retry:\n raise AssertionError('could not find rev lines')\n else:\n with open(path, 'w') as f:\n f.write(ordered_dump(ordered_load(original), **C.YAML_DUMP_KWARGS))\n return _original_lines(path, rev_infos, retry=True)\n\n\ndef _write_new_config(path, rev_infos):\n lines, idxs = _original_lines(path, rev_infos)\n\n for idx, rev_info in zip(idxs, rev_infos):\n if rev_info is None:\n continue\n match = REV_LINE_RE.match(lines[idx])\n assert match is not None\n new_rev_s = ordered_dump({'rev': rev_info.rev}, **C.YAML_DUMP_KWARGS)\n new_rev = new_rev_s.split(':', 1)[1].strip()\n if rev_info.frozen is not None:\n comment = ' # {}'.format(rev_info.frozen)\n else:\n comment = match.group(4)\n lines[idx] = REV_LINE_FMT.format(\n match.group(1), match.group(2), new_rev, comment, match.group(5),\n )\n\n with open(path, 'w') as f:\n f.write(''.join(lines))\n\n\ndef autoupdate(config_file, store, tags_only, freeze, repos=()):\n \"\"\"Auto-update the pre-commit config to the latest versions of repos.\"\"\"\n migrate_config(config_file, quiet=True)\n retv = 0\n rev_infos = []\n changed = False\n\n config = load_config(config_file)\n for repo_config in config['repos']:\n if repo_config['repo'] in {LOCAL, META}:\n continue\n\n info = RevInfo.from_config(repo_config)\n if repos and info.repo not in repos:\n rev_infos.append(None)\n continue\n\n output.write('Updating {}...'.format(info.repo))\n new_info = info.update(tags_only=tags_only, freeze=freeze)\n try:\n _check_hooks_still_exist_at_rev(repo_config, new_info, store)\n except RepositoryCannotBeUpdatedError as error:\n output.write_line(error.args[0])\n rev_infos.append(None)\n retv = 1\n continue\n\n if new_info.rev != info.rev:\n changed = True\n if new_info.frozen:\n updated_to = '{} (frozen)'.format(new_info.frozen)\n else:\n updated_to = new_info.rev\n msg = 'updating {} -> {}.'.format(info.rev, updated_to)\n output.write_line(msg)\n rev_infos.append(new_info)\n else:\n output.write_line('already up to date.')\n rev_infos.append(None)\n\n if changed:\n _write_new_config(config_file, rev_infos)\n\n return retv\n", "path": "pre_commit/commands/autoupdate.py"}]}
| 2,512 | 133 |
gh_patches_debug_7097
|
rasdani/github-patches
|
git_diff
|
learningequality__kolibri-3406
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Showing 120% score in exam report
### Observed behavior
After submitting an exam, when a coach user views the progress of each learner in the exam report, the coach sees a 120% score. We have attached a screenshot and the database file, so you can easily reproduce this issue.
### Expected behavior
Score must be between 0-100%.
### Steps to reproduce
1. Copy attached database file in .kolibri folder.
2. login with username "pm" and password "sc".
3. Click on Coach.
4. Click on Class 4A.
5. Click on Exams.
6. See report of the Unit 2B-Final exam.
7. See learner Junaid Shaikh.
### Context
* Kolibri version : Kolibri 0.4.9
* Operating system : Ubuntu 14.04
* Browser : Chrome
### Screenshots

### Database
[db.sqlite3.zip](https://github.com/learningequality/kolibri/files/1617728/db.sqlite3.zip)
</issue>
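A plain-Python sketch of how summing `correct` over every attempt log can push a score past 100% when a learner has more than one attempt log for the same item; the accepted patch later in this record sums `correct` over distinct items only and uses the exam's question count for `progress`. The data below is hypothetical:

```python
# Sketch (hypothetical data): summing `correct` over every attempt log counts a
# retried question twice, so the total can exceed the number of questions.
attempt_logs = [
    {"item": "q1", "correct": 1},
    {"item": "q1", "correct": 1},  # second attempt on the same item
    {"item": "q2", "correct": 1},
]
question_count = 2

naive_score = sum(log["correct"] for log in attempt_logs)   # 3 -> 150%

per_item = {}
for log in attempt_logs:                                     # keep one result per item
    per_item[log["item"]] = log["correct"]
deduped_score = sum(per_item.values())                       # 2 -> 100%

print(100 * naive_score / question_count, 100 * deduped_score / question_count)
```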
<code>
[start of kolibri/logger/serializers.py]
1 from django.db.models import Sum
2 from django.utils.timezone import now
3 from kolibri.auth.models import FacilityUser
4 from kolibri.core.serializers import KolibriModelSerializer
5 from kolibri.logger.models import AttemptLog, ContentSessionLog, ContentSummaryLog, ExamAttemptLog, ExamLog, MasteryLog, UserSessionLog
6 from rest_framework import serializers
7
8
9 class ContentSessionLogSerializer(KolibriModelSerializer):
10
11 extra_fields = serializers.JSONField(default='{}')
12
13 class Meta:
14 model = ContentSessionLog
15 fields = ('pk', 'user', 'content_id', 'channel_id', 'start_timestamp',
16 'end_timestamp', 'time_spent', 'kind', 'extra_fields', 'progress')
17
18 class ExamLogSerializer(KolibriModelSerializer):
19 progress = serializers.SerializerMethodField()
20 score = serializers.SerializerMethodField()
21
22 def get_progress(self, obj):
23 return obj.attemptlogs.count()
24
25 def get_score(self, obj):
26 return obj.attemptlogs.aggregate(Sum('correct')).get('correct__sum')
27
28 class Meta:
29 model = ExamLog
30 fields = ('id', 'exam', 'user', 'closed', 'progress', 'score', 'completion_timestamp')
31 read_only_fields = ('completion_timestamp', )
32
33 def update(self, instance, validated_data):
34 # This has changed, set the completion timestamp
35 if validated_data.get('closed') and not instance.closed:
36 instance.completion_timestamp = now()
37 return super(ExamLogSerializer, self).update(instance, validated_data)
38
39 class MasteryLogSerializer(KolibriModelSerializer):
40
41 pastattempts = serializers.SerializerMethodField()
42 totalattempts = serializers.SerializerMethodField()
43 mastery_criterion = serializers.JSONField(default='{}')
44
45 class Meta:
46 model = MasteryLog
47 fields = ('id', 'summarylog', 'start_timestamp', 'pastattempts', 'totalattempts', 'user',
48 'end_timestamp', 'completion_timestamp', 'mastery_criterion', 'mastery_level', 'complete')
49
50 def get_pastattempts(self, obj):
51 # will return a list of the latest 10 correct and hint_taken fields for each attempt.
52 return AttemptLog.objects.filter(masterylog__summarylog=obj.summarylog).values('correct', 'hinted').order_by('-start_timestamp')[:10]
53
54 def get_totalattempts(self, obj):
55 return AttemptLog.objects.filter(masterylog__summarylog=obj.summarylog).count()
56
57 class AttemptLogSerializer(KolibriModelSerializer):
58 answer = serializers.JSONField(default='{}')
59 interaction_history = serializers.JSONField(default='[]')
60
61 class Meta:
62 model = AttemptLog
63 fields = ('id', 'masterylog', 'start_timestamp', 'sessionlog',
64 'end_timestamp', 'completion_timestamp', 'item', 'time_spent', 'user',
65 'complete', 'correct', 'hinted', 'answer', 'simple_answer', 'interaction_history')
66
67 class ExamAttemptLogSerializer(KolibriModelSerializer):
68 answer = serializers.JSONField(default='{}', allow_null=True)
69 interaction_history = serializers.JSONField(default='[]')
70
71 class Meta:
72 model = ExamAttemptLog
73 fields = ('id', 'examlog', 'start_timestamp', 'channel_id', 'content_id',
74 'end_timestamp', 'completion_timestamp', 'item', 'time_spent', 'user',
75 'complete', 'correct', 'hinted', 'answer', 'simple_answer', 'interaction_history')
76
77 def validate(self, data):
78 # Only do this validation when both are being set
79 # not necessary on PATCH, for example
80 if data.get('examlog') and data.get('user'):
81 try:
82 if data['examlog'].user != data['user']:
83 raise serializers.ValidationError('User field and user for related exam log are not the same')
84 except ExamLog.DoesNotExist:
85 raise serializers.ValidationError('Invalid exam log')
86 return data
87
88 class ContentSummaryLogSerializer(KolibriModelSerializer):
89
90 currentmasterylog = serializers.SerializerMethodField()
91 extra_fields = serializers.JSONField(default='{}')
92
93 class Meta:
94 model = ContentSummaryLog
95 fields = ('pk', 'user', 'content_id', 'channel_id', 'start_timestamp', 'currentmasterylog',
96 'end_timestamp', 'completion_timestamp', 'time_spent', 'progress', 'kind', 'extra_fields')
97
98 def get_currentmasterylog(self, obj):
99 try:
100 current_log = obj.masterylogs.latest('end_timestamp')
101 return MasteryLogSerializer(current_log).data
102 except MasteryLog.DoesNotExist:
103 return None
104
105 class UserSessionLogSerializer(KolibriModelSerializer):
106
107 class Meta:
108 model = UserSessionLog
109 fields = ('pk', 'user', 'channels', 'start_timestamp', 'last_interaction_timestamp', 'pages')
110
111 class TotalContentProgressSerializer(serializers.ModelSerializer):
112
113 progress = serializers.SerializerMethodField()
114
115 class Meta:
116 model = FacilityUser
117 fields = ('progress', 'id')
118
119 def get_progress(self, obj):
120 return obj.contentsummarylog_set.filter(progress=1).aggregate(Sum('progress')).get('progress__sum')
121
[end of kolibri/logger/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/kolibri/logger/serializers.py b/kolibri/logger/serializers.py
--- a/kolibri/logger/serializers.py
+++ b/kolibri/logger/serializers.py
@@ -20,10 +20,10 @@
score = serializers.SerializerMethodField()
def get_progress(self, obj):
- return obj.attemptlogs.count()
+ return obj.exam.question_count
def get_score(self, obj):
- return obj.attemptlogs.aggregate(Sum('correct')).get('correct__sum')
+ return obj.attemptlogs.values_list('item').order_by('completion_timestamp').distinct().aggregate(Sum('correct')).get('correct__sum')
class Meta:
model = ExamLog
|
{"golden_diff": "diff --git a/kolibri/logger/serializers.py b/kolibri/logger/serializers.py\n--- a/kolibri/logger/serializers.py\n+++ b/kolibri/logger/serializers.py\n@@ -20,10 +20,10 @@\n score = serializers.SerializerMethodField()\n \n def get_progress(self, obj):\n- return obj.attemptlogs.count()\n+ return obj.exam.question_count\n \n def get_score(self, obj):\n- return obj.attemptlogs.aggregate(Sum('correct')).get('correct__sum')\n+ return obj.attemptlogs.values_list('item').order_by('completion_timestamp').distinct().aggregate(Sum('correct')).get('correct__sum')\n \n class Meta:\n model = ExamLog\n", "issue": "Showing 120% score in exam report\n### Observed behavior\r\nAfter submitting exam, when coach user watching progress of each user in exam report. Coach user see 120% score in exam report. We have attached screenshot and database file,so you can easily re-generate this issue.\r\n\r\n### Expected behavior\r\nScore must be between 0-100%.\r\n\r\n### Steps to reproduce\r\n1. Copy attached database file in .kolibri folder.\r\n2. login with username \"pm\" and password \"sc\".\r\n3. Click on Coach.\r\n4. Click on Class 4A.\r\n5. Click on Exams.\r\n6. See report of the Unit 2B-Final exam.\r\n7. See learner Junaid Shaikh.\r\n\r\n### Context\r\n * Kolibri version : Kolibri 0.4.9\r\n * Operating system : Ubuntu 14.04\r\n * Browser : Chrome \r\n\r\n### Screenshots\r\n\r\n\r\n### Database\r\n[db.sqlite3.zip](https://github.com/learningequality/kolibri/files/1617728/db.sqlite3.zip)\r\n\r\n\r\n\n", "before_files": [{"content": "from django.db.models import Sum\nfrom django.utils.timezone import now\nfrom kolibri.auth.models import FacilityUser\nfrom kolibri.core.serializers import KolibriModelSerializer\nfrom kolibri.logger.models import AttemptLog, ContentSessionLog, ContentSummaryLog, ExamAttemptLog, ExamLog, MasteryLog, UserSessionLog\nfrom rest_framework import serializers\n\n\nclass ContentSessionLogSerializer(KolibriModelSerializer):\n\n extra_fields = serializers.JSONField(default='{}')\n\n class Meta:\n model = ContentSessionLog\n fields = ('pk', 'user', 'content_id', 'channel_id', 'start_timestamp',\n 'end_timestamp', 'time_spent', 'kind', 'extra_fields', 'progress')\n\nclass ExamLogSerializer(KolibriModelSerializer):\n progress = serializers.SerializerMethodField()\n score = serializers.SerializerMethodField()\n\n def get_progress(self, obj):\n return obj.attemptlogs.count()\n\n def get_score(self, obj):\n return obj.attemptlogs.aggregate(Sum('correct')).get('correct__sum')\n\n class Meta:\n model = ExamLog\n fields = ('id', 'exam', 'user', 'closed', 'progress', 'score', 'completion_timestamp')\n read_only_fields = ('completion_timestamp', )\n\n def update(self, instance, validated_data):\n # This has changed, set the completion timestamp\n if validated_data.get('closed') and not instance.closed:\n instance.completion_timestamp = now()\n return super(ExamLogSerializer, self).update(instance, validated_data)\n\nclass MasteryLogSerializer(KolibriModelSerializer):\n\n pastattempts = serializers.SerializerMethodField()\n totalattempts = serializers.SerializerMethodField()\n mastery_criterion = serializers.JSONField(default='{}')\n\n class Meta:\n model = MasteryLog\n fields = ('id', 'summarylog', 'start_timestamp', 'pastattempts', 'totalattempts', 'user',\n 'end_timestamp', 'completion_timestamp', 'mastery_criterion', 'mastery_level', 'complete')\n\n def get_pastattempts(self, obj):\n # will return a list of the latest 10 correct and hint_taken fields for each attempt.\n return 
AttemptLog.objects.filter(masterylog__summarylog=obj.summarylog).values('correct', 'hinted').order_by('-start_timestamp')[:10]\n\n def get_totalattempts(self, obj):\n return AttemptLog.objects.filter(masterylog__summarylog=obj.summarylog).count()\n\nclass AttemptLogSerializer(KolibriModelSerializer):\n answer = serializers.JSONField(default='{}')\n interaction_history = serializers.JSONField(default='[]')\n\n class Meta:\n model = AttemptLog\n fields = ('id', 'masterylog', 'start_timestamp', 'sessionlog',\n 'end_timestamp', 'completion_timestamp', 'item', 'time_spent', 'user',\n 'complete', 'correct', 'hinted', 'answer', 'simple_answer', 'interaction_history')\n\nclass ExamAttemptLogSerializer(KolibriModelSerializer):\n answer = serializers.JSONField(default='{}', allow_null=True)\n interaction_history = serializers.JSONField(default='[]')\n\n class Meta:\n model = ExamAttemptLog\n fields = ('id', 'examlog', 'start_timestamp', 'channel_id', 'content_id',\n 'end_timestamp', 'completion_timestamp', 'item', 'time_spent', 'user',\n 'complete', 'correct', 'hinted', 'answer', 'simple_answer', 'interaction_history')\n\n def validate(self, data):\n # Only do this validation when both are being set\n # not necessary on PATCH, for example\n if data.get('examlog') and data.get('user'):\n try:\n if data['examlog'].user != data['user']:\n raise serializers.ValidationError('User field and user for related exam log are not the same')\n except ExamLog.DoesNotExist:\n raise serializers.ValidationError('Invalid exam log')\n return data\n\nclass ContentSummaryLogSerializer(KolibriModelSerializer):\n\n currentmasterylog = serializers.SerializerMethodField()\n extra_fields = serializers.JSONField(default='{}')\n\n class Meta:\n model = ContentSummaryLog\n fields = ('pk', 'user', 'content_id', 'channel_id', 'start_timestamp', 'currentmasterylog',\n 'end_timestamp', 'completion_timestamp', 'time_spent', 'progress', 'kind', 'extra_fields')\n\n def get_currentmasterylog(self, obj):\n try:\n current_log = obj.masterylogs.latest('end_timestamp')\n return MasteryLogSerializer(current_log).data\n except MasteryLog.DoesNotExist:\n return None\n\nclass UserSessionLogSerializer(KolibriModelSerializer):\n\n class Meta:\n model = UserSessionLog\n fields = ('pk', 'user', 'channels', 'start_timestamp', 'last_interaction_timestamp', 'pages')\n\nclass TotalContentProgressSerializer(serializers.ModelSerializer):\n\n progress = serializers.SerializerMethodField()\n\n class Meta:\n model = FacilityUser\n fields = ('progress', 'id')\n\n def get_progress(self, obj):\n return obj.contentsummarylog_set.filter(progress=1).aggregate(Sum('progress')).get('progress__sum')\n", "path": "kolibri/logger/serializers.py"}]}
| 2,187 | 162 |
gh_patches_debug_4446
|
rasdani/github-patches
|
git_diff
|
zenml-io__zenml-317
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Repeated Paragraph in the documentation for `core-concepts`
In the file `core-concepts.md`, the section on [`Pipeline`](https://github.com/zenml-io/zenml/blob/b94dff83f0e7c8ab29e99d6b42a0c906a3512b63/docs/book/introduction/core-concepts.md?plain=1#L27-L41) includes a repeated paragraph. The first paragraph in the pipeline section is repeated in the third paragraph of the same section. 
```markdown
Within your repository, you will have one or more pipelines as part of your experimentation workflow. A ZenML
pipeline is a sequence of tasks that execute in a specific order and yield artifacts. The artifacts are stored
within the artifact store and indexed via the metadata store. Each individual task within a pipeline is known as a
step. The standard pipelines within ZenML are designed to have easy interfaces to add pre-decided steps, with the
order also pre-decided. Other sorts of pipelines can be created as well from scratch.
Pipelines are designed as simple functions. They are created by using decorators appropriate to the specific use case
you have. The moment it is `run`, a pipeline is compiled and passed directly to the orchestrator, to be run in the
orchestrator environment.
Within your repository, you will have one or more pipelines as part of your experimentation workflow. A ZenML
pipeline is a sequence of tasks that execute in a specific order and yield artifacts. The artifacts are stored
within the artifact store and indexed via the metadata store. Each individual task within a pipeline is known as a
step. The standard pipelines (like `TrainingPipeline`) within ZenML are designed to have easy interfaces to add
pre-decided steps, with the order also pre-decided. Other sorts of pipelines can be created as well from scratch.
```
</issue>
<code>
[start of src/zenml/materializers/built_in_materializer.py]
1 # Copyright (c) ZenML GmbH 2021. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at:
6 #
7 # https://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express
12 # or implied. See the License for the specific language governing
13 # permissions and limitations under the License.
14 import os
15 from typing import Any, Type
16
17 from zenml.artifacts import DataAnalysisArtifact, DataArtifact
18 from zenml.logger import get_logger
19 from zenml.materializers.base_materializer import BaseMaterializer
20 from zenml.utils import yaml_utils
21
22 logger = get_logger(__name__)
23 DEFAULT_FILENAME = "data.json"
24
25
26 class BuiltInMaterializer(BaseMaterializer):
27 """Read/Write JSON files."""
28
29 # TODO [LOW]: consider adding typing.Dict and typing.List
30 # since these are the 'correct' way to annotate these types.
31
32 ASSOCIATED_ARTIFACT_TYPES = [
33 DataArtifact,
34 DataAnalysisArtifact,
35 ]
36 ASSOCIATED_TYPES = [
37 int,
38 str,
39 bytes,
40 dict,
41 float,
42 list,
43 tuple,
44 bool,
45 ]
46
47 def handle_input(self, data_type: Type[Any]) -> Any:
48 """Reads basic primitive types from json."""
49 super().handle_input(data_type)
50 filepath = os.path.join(self.artifact.uri, DEFAULT_FILENAME)
51 contents = yaml_utils.read_json(filepath)
52 if type(contents) != data_type:
53 # TODO [ENG-142]: Raise error or try to coerce
54 logger.debug(
55 f"Contents {contents} was type {type(contents)} but expected "
56 f"{data_type}"
57 )
58 return contents
59
60 def handle_return(self, data: Any) -> None:
61 """Handles basic built-in types and stores them as json"""
62 super().handle_return(data)
63 filepath = os.path.join(self.artifact.uri, DEFAULT_FILENAME)
64 yaml_utils.write_json(filepath, data)
65
[end of src/zenml/materializers/built_in_materializer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/src/zenml/materializers/built_in_materializer.py b/src/zenml/materializers/built_in_materializer.py
--- a/src/zenml/materializers/built_in_materializer.py
+++ b/src/zenml/materializers/built_in_materializer.py
@@ -26,7 +26,7 @@
class BuiltInMaterializer(BaseMaterializer):
"""Read/Write JSON files."""
- # TODO [LOW]: consider adding typing.Dict and typing.List
+ # TODO [ENG-322]: consider adding typing.Dict and typing.List
# since these are the 'correct' way to annotate these types.
ASSOCIATED_ARTIFACT_TYPES = [
|
{"golden_diff": "diff --git a/src/zenml/materializers/built_in_materializer.py b/src/zenml/materializers/built_in_materializer.py\n--- a/src/zenml/materializers/built_in_materializer.py\n+++ b/src/zenml/materializers/built_in_materializer.py\n@@ -26,7 +26,7 @@\n class BuiltInMaterializer(BaseMaterializer):\n \"\"\"Read/Write JSON files.\"\"\"\n \n- # TODO [LOW]: consider adding typing.Dict and typing.List\n+ # TODO [ENG-322]: consider adding typing.Dict and typing.List\n # since these are the 'correct' way to annotate these types.\n \n ASSOCIATED_ARTIFACT_TYPES = [\n", "issue": "Repeated Paragraph in the documentation for `core-concepts`\nIn the file `core-concepts.md`, the section on [`Pipeline`](https://github.com/zenml-io/zenml/blob/b94dff83f0e7c8ab29e99d6b42a0c906a3512b63/docs/book/introduction/core-concepts.md?plain=1#L27-L41) includes a repeated paragraph. The first paragraph in the the pipeline section is repeated in the 3rd paragraph of the same section. \r\n\r\n```markdown\r\nWithin your repository, you will have one or more pipelines as part of your experimentation workflow. A ZenML \r\npipeline is a sequence of tasks that execute in a specific order and yield artifacts. The artifacts are stored \r\nwithin the artifact store and indexed via the metadata store. Each individual task within a pipeline is known as a \r\nstep. The standard pipelines within ZenML are designed to have easy interfaces to add pre-decided steps, with the \r\norder also pre-decided. Other sorts of pipelines can be created as well from scratch.\r\n\r\nPipelines are designed as simple functions. They are created by using decorators appropriate to the specific use case \r\nyou have. The moment it is `run`, a pipeline is compiled and passed directly to the orchestrator, to be run in the \r\norchestrator environment.\r\n\r\nWithin your repository, you will have one or more pipelines as part of your experimentation workflow. A ZenML \r\npipeline is a sequence of tasks that execute in a specific order and yield artifacts. The artifacts are stored \r\nwithin the artifact store and indexed via the metadata store. Each individual task within a pipeline is known as a \r\nstep. The standard pipelines (like `TrainingPipeline`) within ZenML are designed to have easy interfaces to add \r\npre-decided steps, with the order also pre-decided. Other sorts of pipelines can be created as well from scratch.\r\n```\n", "before_files": [{"content": "# Copyright (c) ZenML GmbH 2021. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at:\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express\n# or implied. 
See the License for the specific language governing\n# permissions and limitations under the License.\nimport os\nfrom typing import Any, Type\n\nfrom zenml.artifacts import DataAnalysisArtifact, DataArtifact\nfrom zenml.logger import get_logger\nfrom zenml.materializers.base_materializer import BaseMaterializer\nfrom zenml.utils import yaml_utils\n\nlogger = get_logger(__name__)\nDEFAULT_FILENAME = \"data.json\"\n\n\nclass BuiltInMaterializer(BaseMaterializer):\n \"\"\"Read/Write JSON files.\"\"\"\n\n # TODO [LOW]: consider adding typing.Dict and typing.List\n # since these are the 'correct' way to annotate these types.\n\n ASSOCIATED_ARTIFACT_TYPES = [\n DataArtifact,\n DataAnalysisArtifact,\n ]\n ASSOCIATED_TYPES = [\n int,\n str,\n bytes,\n dict,\n float,\n list,\n tuple,\n bool,\n ]\n\n def handle_input(self, data_type: Type[Any]) -> Any:\n \"\"\"Reads basic primitive types from json.\"\"\"\n super().handle_input(data_type)\n filepath = os.path.join(self.artifact.uri, DEFAULT_FILENAME)\n contents = yaml_utils.read_json(filepath)\n if type(contents) != data_type:\n # TODO [ENG-142]: Raise error or try to coerce\n logger.debug(\n f\"Contents {contents} was type {type(contents)} but expected \"\n f\"{data_type}\"\n )\n return contents\n\n def handle_return(self, data: Any) -> None:\n \"\"\"Handles basic built-in types and stores them as json\"\"\"\n super().handle_return(data)\n filepath = os.path.join(self.artifact.uri, DEFAULT_FILENAME)\n yaml_utils.write_json(filepath, data)\n", "path": "src/zenml/materializers/built_in_materializer.py"}]}
| 1,567 | 150 |
gh_patches_debug_237
|
rasdani/github-patches
|
git_diff
|
keras-team__keras-2992
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Why TF-IDF matrix generated by keras.preprocessing.text.Tokenizer() has negative values?
Say, if we run the following script:
> > > import keras
> > > tk = keras.preprocessing.text.Tokenizer()
> > > texts = ['I love you.', 'I love you, too.']
> > > tk.fit_on_texts(texts)
> > > tk.texts_to_matrix(texts, mode='tfidf')
The output will be:
array([[ 0. , -1.09861229, -1.09861229, -1.09861229, 0. ],
[ 0. , -1.38629436, -1.38629436, -1.38629436, -1.38629436]])
But it seems that tf-idf values should be non-negative?
By the way, is there a neat way to get the word by its index, or the vocabulary (in the order of word indices) of the Tokenizer() class? Say, sometimes I want to know what's the most frequent word in the documents, then I want to access the word with index 1.
I can do it by running:
> > > vocab = tk.word_index.items()
> > > vocab.sort(key=lambda x:x[1])
This gives:
> > > vocab
[('i', 1), ('you', 2), ('love', 3), ('too', 4)]
But is it somehow hacky?
Thank you!
</issue>
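A small sketch of the Python 2 division pitfall that the accepted patch for this record (adding `from __future__ import division`) targets, using the tf-idf formula from `sequences_to_matrix` above; the exact negative numbers in the report come from an older release, so this only illustrates how the idf term can collapse under floor division:

```python
# Sketch: the idf term from sequences_to_matrix, evaluated for a word that
# appears in both of the two example texts.  Under Python 2 floor division the
# ratio 2 / (1 + 2) truncates to 0, so idf = log(1 + 0) = 0; with true division
# it stays fractional.
from __future__ import division  # no-op on Python 3, changes `/` on Python 2

import numpy as np

document_count = 2     # two texts were fitted
docs_with_word = 2     # 'i', 'love' and 'you' occur in both texts
count_in_text = 1      # each occurs once per text

tf = 1 + np.log(count_in_text)
idf = np.log(1 + document_count / (1 + docs_with_word))
print(tf * idf)        # ~0.51 with true division, 0.0 with floor division
```

For the second question, the `word_index` sort shown in the report works; building a reverse mapping such as `index_word = {i: w for w, i in tk.word_index.items()}` is a common alternative, though nothing in the file above provides one directly.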
<code>
[start of keras/preprocessing/text.py]
1 # -*- coding: utf-8 -*-
2 '''These preprocessing utilities would greatly benefit
3 from a fast Cython rewrite.
4 '''
5 from __future__ import absolute_import
6
7 import string
8 import sys
9 import numpy as np
10 from six.moves import range
11 from six.moves import zip
12
13 if sys.version_info < (3,):
14 maketrans = string.maketrans
15 else:
16 maketrans = str.maketrans
17
18
19 def base_filter():
20 f = string.punctuation
21 f = f.replace("'", '')
22 f += '\t\n'
23 return f
24
25
26 def text_to_word_sequence(text, filters=base_filter(), lower=True, split=" "):
27 '''prune: sequence of characters to filter out
28 '''
29 if lower:
30 text = text.lower()
31 text = text.translate(maketrans(filters, split*len(filters)))
32 seq = text.split(split)
33 return [_f for _f in seq if _f]
34
35
36 def one_hot(text, n, filters=base_filter(), lower=True, split=" "):
37 seq = text_to_word_sequence(text, filters=filters, lower=lower, split=split)
38 return [(abs(hash(w)) % (n - 1) + 1) for w in seq]
39
40
41 class Tokenizer(object):
42 def __init__(self, nb_words=None, filters=base_filter(),
43 lower=True, split=' ', char_level=False):
44 '''The class allows to vectorize a text corpus, by turning each
45 text into either a sequence of integers (each integer being the index
46 of a token in a dictionary) or into a vector where the coefficient
47 for each token could be binary, based on word count, based on tf-idf...
48
49 # Arguments
50 nb_words: the maximum number of words to keep, based
51 on word frequency. Only the most common `nb_words` words will
52 be kept.
53 filters: a string where each element is a character that will be
54 filtered from the texts. The default is all punctuation, plus
55 tabs and line breaks, minus the `'` character.
56 lower: boolean. Whether to convert the texts to lowercase.
57 split: character or string to use for token splitting.
58 char_level: if True, every character will be treated as a word.
59
60 By default, all punctuation is removed, turning the texts into
61 space-separated sequences of words
62 (words maybe include the `'` character). These sequences are then
63 split into lists of tokens. They will then be indexed or vectorized.
64
65 `0` is a reserved index that won't be assigned to any word.
66 '''
67 self.word_counts = {}
68 self.word_docs = {}
69 self.filters = filters
70 self.split = split
71 self.lower = lower
72 self.nb_words = nb_words
73 self.document_count = 0
74 self.char_level = char_level
75
76 def fit_on_texts(self, texts):
77 '''Required before using texts_to_sequences or texts_to_matrix
78
79 # Arguments
80 texts: can be a list of strings,
81 or a generator of strings (for memory-efficiency)
82 '''
83 self.document_count = 0
84 for text in texts:
85 self.document_count += 1
86 seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)
87 for w in seq:
88 if w in self.word_counts:
89 self.word_counts[w] += 1
90 else:
91 self.word_counts[w] = 1
92 for w in set(seq):
93 if w in self.word_docs:
94 self.word_docs[w] += 1
95 else:
96 self.word_docs[w] = 1
97
98 wcounts = list(self.word_counts.items())
99 wcounts.sort(key=lambda x: x[1], reverse=True)
100 sorted_voc = [wc[0] for wc in wcounts]
101 self.word_index = dict(list(zip(sorted_voc, list(range(1, len(sorted_voc) + 1)))))
102
103 self.index_docs = {}
104 for w, c in list(self.word_docs.items()):
105 self.index_docs[self.word_index[w]] = c
106
107 def fit_on_sequences(self, sequences):
108 '''Required before using sequences_to_matrix
109 (if fit_on_texts was never called)
110 '''
111 self.document_count = len(sequences)
112 self.index_docs = {}
113 for seq in sequences:
114 seq = set(seq)
115 for i in seq:
116 if i not in self.index_docs:
117 self.index_docs[i] = 1
118 else:
119 self.index_docs[i] += 1
120
121 def texts_to_sequences(self, texts):
122 '''Transforms each text in texts in a sequence of integers.
123 Only top "nb_words" most frequent words will be taken into account.
124 Only words known by the tokenizer will be taken into account.
125
126 Returns a list of sequences.
127 '''
128 res = []
129 for vect in self.texts_to_sequences_generator(texts):
130 res.append(vect)
131 return res
132
133 def texts_to_sequences_generator(self, texts):
134 '''Transforms each text in texts in a sequence of integers.
135 Only top "nb_words" most frequent words will be taken into account.
136 Only words known by the tokenizer will be taken into account.
137
138 Yields individual sequences.
139
140 # Arguments:
141 texts: list of strings.
142 '''
143 nb_words = self.nb_words
144 for text in texts:
145 seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)
146 vect = []
147 for w in seq:
148 i = self.word_index.get(w)
149 if i is not None:
150 if nb_words and i >= nb_words:
151 continue
152 else:
153 vect.append(i)
154 yield vect
155
156 def texts_to_matrix(self, texts, mode='binary'):
157 '''Convert a list of texts to a Numpy matrix,
158 according to some vectorization mode.
159
160 # Arguments:
161 texts: list of strings.
162 modes: one of "binary", "count", "tfidf", "freq"
163 '''
164 sequences = self.texts_to_sequences(texts)
165 return self.sequences_to_matrix(sequences, mode=mode)
166
167 def sequences_to_matrix(self, sequences, mode='binary'):
168 '''Converts a list of sequences into a Numpy matrix,
169 according to some vectorization mode.
170
171 # Arguments:
172 sequences: list of sequences
173 (a sequence is a list of integer word indices).
174 modes: one of "binary", "count", "tfidf", "freq"
175 '''
176 if not self.nb_words:
177 if self.word_index:
178 nb_words = len(self.word_index) + 1
179 else:
180 raise Exception('Specify a dimension (nb_words argument), '
181 'or fit on some text data first.')
182 else:
183 nb_words = self.nb_words
184
185 if mode == 'tfidf' and not self.document_count:
186 raise Exception('Fit the Tokenizer on some data '
187 'before using tfidf mode.')
188
189 X = np.zeros((len(sequences), nb_words))
190 for i, seq in enumerate(sequences):
191 if not seq:
192 continue
193 counts = {}
194 for j in seq:
195 if j >= nb_words:
196 continue
197 if j not in counts:
198 counts[j] = 1.
199 else:
200 counts[j] += 1
201 for j, c in list(counts.items()):
202 if mode == 'count':
203 X[i][j] = c
204 elif mode == 'freq':
205 X[i][j] = c / len(seq)
206 elif mode == 'binary':
207 X[i][j] = 1
208 elif mode == 'tfidf':
209 # Use weighting scheme 2 in
210 # https://en.wikipedia.org/wiki/Tf%E2%80%93idf
211 tf = 1 + np.log(c)
212 idf = np.log(1 + self.document_count / (1 + self.index_docs.get(j, 0)))
213 X[i][j] = tf * idf
214 else:
215 raise Exception('Unknown vectorization mode: ' + str(mode))
216 return X
217
[end of keras/preprocessing/text.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/keras/preprocessing/text.py b/keras/preprocessing/text.py
--- a/keras/preprocessing/text.py
+++ b/keras/preprocessing/text.py
@@ -3,6 +3,7 @@
from a fast Cython rewrite.
'''
from __future__ import absolute_import
+from __future__ import division
import string
import sys
|
{"golden_diff": "diff --git a/keras/preprocessing/text.py b/keras/preprocessing/text.py\n--- a/keras/preprocessing/text.py\n+++ b/keras/preprocessing/text.py\n@@ -3,6 +3,7 @@\n from a fast Cython rewrite.\n '''\n from __future__ import absolute_import\n+from __future__ import division\n \n import string\n import sys\n", "issue": "Why TF-IDF matrix generated by keras.preprocessing.text.Tokenizer() has negative values?\nSay, if run the following script:\n\n> > > import keras\n> > > tk = keras.preprocessing.text.Tokenizer()\n> > > texts = ['I love you.', 'I love you, too.']\n> > > tk.fit_on_texts(texts)\n> > > tk.texts_to_matrix(texts, mode='tfidf')\n\nThe output will be:\narray([[ 0. , -1.09861229, -1.09861229, -1.09861229, 0. ],\n [ 0. , -1.38629436, -1.38629436, -1.38629436, -1.38629436]])\n\nBut tf-idf values seems should be non-negative?\n\nBy the way, is there a neat way to get the word by its index, or the vocabulary (in the order of word indices) of the Tokenizer() class? Say, sometimes I want to know what's the most frequent word in the documents, then I want to access word with index 1.\n\nI can do it by running:\n\n> > > vocab = tk.word_index.items()\n> > > vocab.sort(key=lambda x:x[1])\n\nThis gives:\n\n> > > vocab\n\n[('i', 1), ('you', 2), ('love', 3), ('too', 4)]\nBut is it somehow hacky?\n\nThank you!\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n'''These preprocessing utilities would greatly benefit\nfrom a fast Cython rewrite.\n'''\nfrom __future__ import absolute_import\n\nimport string\nimport sys\nimport numpy as np\nfrom six.moves import range\nfrom six.moves import zip\n\nif sys.version_info < (3,):\n maketrans = string.maketrans\nelse:\n maketrans = str.maketrans\n\n\ndef base_filter():\n f = string.punctuation\n f = f.replace(\"'\", '')\n f += '\\t\\n'\n return f\n\n\ndef text_to_word_sequence(text, filters=base_filter(), lower=True, split=\" \"):\n '''prune: sequence of characters to filter out\n '''\n if lower:\n text = text.lower()\n text = text.translate(maketrans(filters, split*len(filters)))\n seq = text.split(split)\n return [_f for _f in seq if _f]\n\n\ndef one_hot(text, n, filters=base_filter(), lower=True, split=\" \"):\n seq = text_to_word_sequence(text, filters=filters, lower=lower, split=split)\n return [(abs(hash(w)) % (n - 1) + 1) for w in seq]\n\n\nclass Tokenizer(object):\n def __init__(self, nb_words=None, filters=base_filter(),\n lower=True, split=' ', char_level=False):\n '''The class allows to vectorize a text corpus, by turning each\n text into either a sequence of integers (each integer being the index\n of a token in a dictionary) or into a vector where the coefficient\n for each token could be binary, based on word count, based on tf-idf...\n\n # Arguments\n nb_words: the maximum number of words to keep, based\n on word frequency. Only the most common `nb_words` words will\n be kept.\n filters: a string where each element is a character that will be\n filtered from the texts. The default is all punctuation, plus\n tabs and line breaks, minus the `'` character.\n lower: boolean. Whether to convert the texts to lowercase.\n split: character or string to use for token splitting.\n char_level: if True, every character will be treated as a word.\n\n By default, all punctuation is removed, turning the texts into\n space-separated sequences of words\n (words maybe include the `'` character). These sequences are then\n split into lists of tokens. 
They will then be indexed or vectorized.\n\n `0` is a reserved index that won't be assigned to any word.\n '''\n self.word_counts = {}\n self.word_docs = {}\n self.filters = filters\n self.split = split\n self.lower = lower\n self.nb_words = nb_words\n self.document_count = 0\n self.char_level = char_level\n\n def fit_on_texts(self, texts):\n '''Required before using texts_to_sequences or texts_to_matrix\n\n # Arguments\n texts: can be a list of strings,\n or a generator of strings (for memory-efficiency)\n '''\n self.document_count = 0\n for text in texts:\n self.document_count += 1\n seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)\n for w in seq:\n if w in self.word_counts:\n self.word_counts[w] += 1\n else:\n self.word_counts[w] = 1\n for w in set(seq):\n if w in self.word_docs:\n self.word_docs[w] += 1\n else:\n self.word_docs[w] = 1\n\n wcounts = list(self.word_counts.items())\n wcounts.sort(key=lambda x: x[1], reverse=True)\n sorted_voc = [wc[0] for wc in wcounts]\n self.word_index = dict(list(zip(sorted_voc, list(range(1, len(sorted_voc) + 1)))))\n\n self.index_docs = {}\n for w, c in list(self.word_docs.items()):\n self.index_docs[self.word_index[w]] = c\n\n def fit_on_sequences(self, sequences):\n '''Required before using sequences_to_matrix\n (if fit_on_texts was never called)\n '''\n self.document_count = len(sequences)\n self.index_docs = {}\n for seq in sequences:\n seq = set(seq)\n for i in seq:\n if i not in self.index_docs:\n self.index_docs[i] = 1\n else:\n self.index_docs[i] += 1\n\n def texts_to_sequences(self, texts):\n '''Transforms each text in texts in a sequence of integers.\n Only top \"nb_words\" most frequent words will be taken into account.\n Only words known by the tokenizer will be taken into account.\n\n Returns a list of sequences.\n '''\n res = []\n for vect in self.texts_to_sequences_generator(texts):\n res.append(vect)\n return res\n\n def texts_to_sequences_generator(self, texts):\n '''Transforms each text in texts in a sequence of integers.\n Only top \"nb_words\" most frequent words will be taken into account.\n Only words known by the tokenizer will be taken into account.\n\n Yields individual sequences.\n\n # Arguments:\n texts: list of strings.\n '''\n nb_words = self.nb_words\n for text in texts:\n seq = text if self.char_level else text_to_word_sequence(text, self.filters, self.lower, self.split)\n vect = []\n for w in seq:\n i = self.word_index.get(w)\n if i is not None:\n if nb_words and i >= nb_words:\n continue\n else:\n vect.append(i)\n yield vect\n\n def texts_to_matrix(self, texts, mode='binary'):\n '''Convert a list of texts to a Numpy matrix,\n according to some vectorization mode.\n\n # Arguments:\n texts: list of strings.\n modes: one of \"binary\", \"count\", \"tfidf\", \"freq\"\n '''\n sequences = self.texts_to_sequences(texts)\n return self.sequences_to_matrix(sequences, mode=mode)\n\n def sequences_to_matrix(self, sequences, mode='binary'):\n '''Converts a list of sequences into a Numpy matrix,\n according to some vectorization mode.\n\n # Arguments:\n sequences: list of sequences\n (a sequence is a list of integer word indices).\n modes: one of \"binary\", \"count\", \"tfidf\", \"freq\"\n '''\n if not self.nb_words:\n if self.word_index:\n nb_words = len(self.word_index) + 1\n else:\n raise Exception('Specify a dimension (nb_words argument), '\n 'or fit on some text data first.')\n else:\n nb_words = self.nb_words\n\n if mode == 'tfidf' and not self.document_count:\n raise 
Exception('Fit the Tokenizer on some data '\n 'before using tfidf mode.')\n\n X = np.zeros((len(sequences), nb_words))\n for i, seq in enumerate(sequences):\n if not seq:\n continue\n counts = {}\n for j in seq:\n if j >= nb_words:\n continue\n if j not in counts:\n counts[j] = 1.\n else:\n counts[j] += 1\n for j, c in list(counts.items()):\n if mode == 'count':\n X[i][j] = c\n elif mode == 'freq':\n X[i][j] = c / len(seq)\n elif mode == 'binary':\n X[i][j] = 1\n elif mode == 'tfidf':\n # Use weighting scheme 2 in\n # https://en.wikipedia.org/wiki/Tf%E2%80%93idf\n tf = 1 + np.log(c)\n idf = np.log(1 + self.document_count / (1 + self.index_docs.get(j, 0)))\n X[i][j] = tf * idf\n else:\n raise Exception('Unknown vectorization mode: ' + str(mode))\n return X\n", "path": "keras/preprocessing/text.py"}]}
| 3,174 | 80 |
gh_patches_debug_7770 | rasdani/github-patches | git_diff | pandas-dev__pandas-8238 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
BUG: rolling_window yields unexpected results with win_type='triang'
Here's the example in the documentation, modified to have non-zero mean:
```
n = 100
ser = pandas.Series(randn(n)+10, index=pandas.date_range('1/1/2000', periods=n))
pandas.rolling_window(ser, 5, 'triang').plot()
pandas.rolling_window(ser, 5, 'boxcar').plot()
```
The rolling boxcar window is centered around 10, as expected.
The triang window is centered around 6. That suggests that the weights in the window don't add up to 1.
Either that or my expectation of how it should work is wrong?
</issue>
<code>
[start of pandas/util/print_versions.py]
1 import os
2 import platform
3 import sys
4 import struct
5 import subprocess
6 import codecs
7
8
9 def get_sys_info():
10 "Returns system information as a dict"
11
12 blob = []
13
14 # get full commit hash
15 commit = None
16 if os.path.isdir(".git") and os.path.isdir("pandas"):
17 try:
18 pipe = subprocess.Popen('git log --format="%H" -n 1'.split(" "),
19 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
20 so, serr = pipe.communicate()
21 except:
22 pass
23 else:
24 if pipe.returncode == 0:
25 commit = so
26 try:
27 commit = so.decode('utf-8')
28 except ValueError:
29 pass
30 commit = commit.strip().strip('"')
31
32 blob.append(('commit', commit))
33
34 try:
35 sysname, nodename, release, version, machine, processor = platform.uname(
36 )
37 blob.extend([
38 ("python", "%d.%d.%d.%s.%s" % sys.version_info[:]),
39 ("python-bits", struct.calcsize("P") * 8),
40 ("OS", "%s" % (sysname)),
41 ("OS-release", "%s" % (release)),
42 # ("Version", "%s" % (version)),
43 ("machine", "%s" % (machine)),
44 ("processor", "%s" % (processor)),
45 ("byteorder", "%s" % sys.byteorder),
46 ("LC_ALL", "%s" % os.environ.get('LC_ALL', "None")),
47 ("LANG", "%s" % os.environ.get('LANG', "None")),
48
49 ])
50 except:
51 pass
52
53 return blob
54
55
56 def show_versions(as_json=False):
57 import imp
58 sys_info = get_sys_info()
59
60 deps = [
61 # (MODULE_NAME, f(mod) -> mod version)
62 ("pandas", lambda mod: mod.__version__),
63 ("nose", lambda mod: mod.__version__),
64 ("Cython", lambda mod: mod.__version__),
65 ("numpy", lambda mod: mod.version.version),
66 ("scipy", lambda mod: mod.version.version),
67 ("statsmodels", lambda mod: mod.__version__),
68 ("IPython", lambda mod: mod.__version__),
69 ("sphinx", lambda mod: mod.__version__),
70 ("patsy", lambda mod: mod.__version__),
71 ("scikits.timeseries", lambda mod: mod.__version__),
72 ("dateutil", lambda mod: mod.__version__),
73 ("pytz", lambda mod: mod.VERSION),
74 ("bottleneck", lambda mod: mod.__version__),
75 ("tables", lambda mod: mod.__version__),
76 ("numexpr", lambda mod: mod.__version__),
77 ("matplotlib", lambda mod: mod.__version__),
78 ("openpyxl", lambda mod: mod.__version__),
79 ("xlrd", lambda mod: mod.__VERSION__),
80 ("xlwt", lambda mod: mod.__VERSION__),
81 ("xlsxwriter", lambda mod: mod.__version__),
82 ("lxml", lambda mod: mod.etree.__version__),
83 ("bs4", lambda mod: mod.__version__),
84 ("html5lib", lambda mod: mod.__version__),
85 ("httplib2", lambda mod: mod.__version__),
86 ("apiclient", lambda mod: mod.__version__),
87 ("rpy2", lambda mod: mod.__version__),
88 ("sqlalchemy", lambda mod: mod.__version__),
89 ("pymysql", lambda mod: mod.__version__),
90 ("psycopg2", lambda mod: mod.__version__),
91 ]
92
93 deps_blob = list()
94 for (modname, ver_f) in deps:
95 try:
96 try:
97 mod = imp.load_module(modname, *imp.find_module(modname))
98 except (ImportError):
99 import importlib
100 mod = importlib.import_module(modname)
101 ver = ver_f(mod)
102 deps_blob.append((modname, ver))
103 except:
104 deps_blob.append((modname, None))
105
106 if (as_json):
107 # 2.6-safe
108 try:
109 import json
110 except:
111 import simplejson as json
112
113 j = dict(system=dict(sys_info), dependencies=dict(deps_blob))
114
115 if as_json == True:
116 print(j)
117 else:
118 with codecs.open(as_json, "wb", encoding='utf8') as f:
119 json.dump(j, f, indent=2)
120
121 else:
122
123 print("\nINSTALLED VERSIONS")
124 print("------------------")
125
126 for k, stat in sys_info:
127 print("%s: %s" % (k, stat))
128
129 print("")
130 for k, stat in deps_blob:
131 print("%s: %s" % (k, stat))
132
133
134 def main():
135 # optparse is 2.6-safe
136 from optparse import OptionParser
137 parser = OptionParser()
138 parser.add_option("-j", "--json", metavar="FILE", nargs=1,
139 help="Save output as JSON into file, pass in '-' to output to stdout")
140
141 (options, args) = parser.parse_args()
142
143 if options.json == "-":
144 options.json = True
145
146 show_versions(as_json=options.json)
147
148 return 0
149
150 if __name__ == "__main__":
151 sys.exit(main())
152
[end of pandas/util/print_versions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/pandas/util/print_versions.py b/pandas/util/print_versions.py
--- a/pandas/util/print_versions.py
+++ b/pandas/util/print_versions.py
@@ -68,7 +68,6 @@
("IPython", lambda mod: mod.__version__),
("sphinx", lambda mod: mod.__version__),
("patsy", lambda mod: mod.__version__),
- ("scikits.timeseries", lambda mod: mod.__version__),
("dateutil", lambda mod: mod.__version__),
("pytz", lambda mod: mod.VERSION),
("bottleneck", lambda mod: mod.__version__),
|
{"golden_diff": "diff --git a/pandas/util/print_versions.py b/pandas/util/print_versions.py\n--- a/pandas/util/print_versions.py\n+++ b/pandas/util/print_versions.py\n@@ -68,7 +68,6 @@\n (\"IPython\", lambda mod: mod.__version__),\n (\"sphinx\", lambda mod: mod.__version__),\n (\"patsy\", lambda mod: mod.__version__),\n- (\"scikits.timeseries\", lambda mod: mod.__version__),\n (\"dateutil\", lambda mod: mod.__version__),\n (\"pytz\", lambda mod: mod.VERSION),\n (\"bottleneck\", lambda mod: mod.__version__),\n", "issue": "BUG: rolling_window yields unexpected results with win_type='triang'\nHere's the example in the documentation, modified to have non-zero mean:\n\n```\nn = 100\nser = pandas.Series(randn(n)+10, index=pandas.date_range('1/1/2000', periods=n))\npandas.rolling_window(ser, 5, 'triang').plot()\npandas.rolling_window(ser, 5, 'boxcar').plot()\n```\n\nThe rolling boxcar window is centered around 10, as expected.\n\nThe triang window is centered around 6. That suggests that the weights in the window don't add up to 1.\n\nEither that or my expectation of how it should work is wrong?\n\n", "before_files": [{"content": "import os\nimport platform\nimport sys\nimport struct\nimport subprocess\nimport codecs\n\n\ndef get_sys_info():\n \"Returns system information as a dict\"\n\n blob = []\n\n # get full commit hash\n commit = None\n if os.path.isdir(\".git\") and os.path.isdir(\"pandas\"):\n try:\n pipe = subprocess.Popen('git log --format=\"%H\" -n 1'.split(\" \"),\n stdout=subprocess.PIPE, stderr=subprocess.PIPE)\n so, serr = pipe.communicate()\n except:\n pass\n else:\n if pipe.returncode == 0:\n commit = so\n try:\n commit = so.decode('utf-8')\n except ValueError:\n pass\n commit = commit.strip().strip('\"')\n\n blob.append(('commit', commit))\n\n try:\n sysname, nodename, release, version, machine, processor = platform.uname(\n )\n blob.extend([\n (\"python\", \"%d.%d.%d.%s.%s\" % sys.version_info[:]),\n (\"python-bits\", struct.calcsize(\"P\") * 8),\n (\"OS\", \"%s\" % (sysname)),\n (\"OS-release\", \"%s\" % (release)),\n # (\"Version\", \"%s\" % (version)),\n (\"machine\", \"%s\" % (machine)),\n (\"processor\", \"%s\" % (processor)),\n (\"byteorder\", \"%s\" % sys.byteorder),\n (\"LC_ALL\", \"%s\" % os.environ.get('LC_ALL', \"None\")),\n (\"LANG\", \"%s\" % os.environ.get('LANG', \"None\")),\n\n ])\n except:\n pass\n\n return blob\n\n\ndef show_versions(as_json=False):\n import imp\n sys_info = get_sys_info()\n\n deps = [\n # (MODULE_NAME, f(mod) -> mod version)\n (\"pandas\", lambda mod: mod.__version__),\n (\"nose\", lambda mod: mod.__version__),\n (\"Cython\", lambda mod: mod.__version__),\n (\"numpy\", lambda mod: mod.version.version),\n (\"scipy\", lambda mod: mod.version.version),\n (\"statsmodels\", lambda mod: mod.__version__),\n (\"IPython\", lambda mod: mod.__version__),\n (\"sphinx\", lambda mod: mod.__version__),\n (\"patsy\", lambda mod: mod.__version__),\n (\"scikits.timeseries\", lambda mod: mod.__version__),\n (\"dateutil\", lambda mod: mod.__version__),\n (\"pytz\", lambda mod: mod.VERSION),\n (\"bottleneck\", lambda mod: mod.__version__),\n (\"tables\", lambda mod: mod.__version__),\n (\"numexpr\", lambda mod: mod.__version__),\n (\"matplotlib\", lambda mod: mod.__version__),\n (\"openpyxl\", lambda mod: mod.__version__),\n (\"xlrd\", lambda mod: mod.__VERSION__),\n (\"xlwt\", lambda mod: mod.__VERSION__),\n (\"xlsxwriter\", lambda mod: mod.__version__),\n (\"lxml\", lambda mod: mod.etree.__version__),\n (\"bs4\", lambda mod: mod.__version__),\n (\"html5lib\", 
lambda mod: mod.__version__),\n (\"httplib2\", lambda mod: mod.__version__),\n (\"apiclient\", lambda mod: mod.__version__),\n (\"rpy2\", lambda mod: mod.__version__),\n (\"sqlalchemy\", lambda mod: mod.__version__),\n (\"pymysql\", lambda mod: mod.__version__),\n (\"psycopg2\", lambda mod: mod.__version__),\n ]\n\n deps_blob = list()\n for (modname, ver_f) in deps:\n try:\n try:\n mod = imp.load_module(modname, *imp.find_module(modname))\n except (ImportError):\n import importlib\n mod = importlib.import_module(modname)\n ver = ver_f(mod)\n deps_blob.append((modname, ver))\n except:\n deps_blob.append((modname, None))\n\n if (as_json):\n # 2.6-safe\n try:\n import json\n except:\n import simplejson as json\n\n j = dict(system=dict(sys_info), dependencies=dict(deps_blob))\n\n if as_json == True:\n print(j)\n else:\n with codecs.open(as_json, \"wb\", encoding='utf8') as f:\n json.dump(j, f, indent=2)\n\n else:\n\n print(\"\\nINSTALLED VERSIONS\")\n print(\"------------------\")\n\n for k, stat in sys_info:\n print(\"%s: %s\" % (k, stat))\n\n print(\"\")\n for k, stat in deps_blob:\n print(\"%s: %s\" % (k, stat))\n\n\ndef main():\n # optparse is 2.6-safe\n from optparse import OptionParser\n parser = OptionParser()\n parser.add_option(\"-j\", \"--json\", metavar=\"FILE\", nargs=1,\n help=\"Save output as JSON into file, pass in '-' to output to stdout\")\n\n (options, args) = parser.parse_args()\n\n if options.json == \"-\":\n options.json = True\n\n show_versions(as_json=options.json)\n\n return 0\n\nif __name__ == \"__main__\":\n sys.exit(main())\n", "path": "pandas/util/print_versions.py"}]}
| 2,198 | 143 |
gh_patches_debug_22686 | rasdani/github-patches | git_diff | WordPress__openverse-api-932 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`piexif.dump` errors are not safely handled
## Sentry link
<!-- The public (aka "share") Sentry issue link. -->
https://sentry.io/share/issue/a80d52de7f89436586ed0250cd0a32d2/
## Description
<!-- Example: We are trying to access property foo of ImportantClass but the instance is null. -->
<!-- Include any additional information you may have, including potential remedies if any come to mind, and the general context of the code (what causes it to run in the app). -->
The call to `piexif.dump` should be wrapped in a `try/except` to prevent these errors in the watermark endpoint.
<!-- Mention whether this is a known regression, i.e., the feature used to work and now does not. -->
## Reproduction
<!-- Share the steps to reproduce the issue, if you were able to, OR a note sharing that you tried to reproduce but weren’t able to. -->
Visit https://api-dev.openverse.engineering/v1/images/a913fde1-d524-4059-bd4f-9bd687578cc3/watermark/ to see an example of this failure.
</issue>
<code>
[start of api/catalog/api/utils/watermark.py]
1 import logging
2 import os
3 from enum import Flag, auto
4 from io import BytesIO
5 from textwrap import wrap
6
7 from django.conf import settings
8
9 import piexif
10 import requests
11 from PIL import Image, ImageDraw, ImageFont
12 from sentry_sdk import capture_exception
13
14
15 parent_logger = logging.getLogger(__name__)
16
17
18 BREAKPOINT_DIMENSION = 400 # 400px
19 MARGIN_RATIO = 0.04 # 4%
20 FONT_RATIO = 0.04 # 4%
21
22 FRAME_COLOR = "#fff" # White frame
23 TEXT_COLOR = "#000" # Black text
24 HEADERS = {
25 "User-Agent": settings.OUTBOUND_USER_AGENT_TEMPLATE.format(purpose="Watermark")
26 }
27
28
29 class Dimension(Flag):
30 """
31 This enum represents the two dimensions of an image
32 """
33
34 HEIGHT = auto()
35 WIDTH = auto()
36 BOTH = HEIGHT | WIDTH
37 NONE = 0
38
39
40 # Utils
41
42
43 def _smaller_dimension(width, height):
44 """
45 Determine which image dimensions are below the breakpoint dimensions
46 :param width: the width of the image
47 :param height: the height of the image
48 :return: True if the image is small, False otherwise
49 """
50
51 smaller_dimension = Dimension.NONE
52 if width < BREAKPOINT_DIMENSION:
53 smaller_dimension = smaller_dimension | Dimension.WIDTH
54 if height < BREAKPOINT_DIMENSION:
55 smaller_dimension = smaller_dimension | Dimension.HEIGHT
56 return smaller_dimension
57
58
59 def _get_font_path(monospace=False):
60 """
61 Return the path to the TTF font file
62 :param monospace: True for monospaced font, False for variable-width font
63 :return: the path to the TTF font file
64 """
65
66 font_name = "SourceCodePro-Bold.ttf" if monospace else "SourceSansPro-Bold.ttf"
67 font_path = os.path.join(os.path.dirname(__file__), "fonts", font_name)
68
69 return font_path
70
71
72 def _fit_in_width(text, font, max_width):
73 """
74 Break the given text so that it fits in the given space
75 :param text: the text to fit in the limited width
76 :param font: the font containing size and other info
77 :param max_width: the maximum width the text is allowed to take
78 :return: the fitted text
79 """
80
81 char_width, _ = font.getsize("x") # x has the closest to average width
82 max_chars = max_width // char_width
83
84 text = "\n".join(["\n".join(wrap(line, max_chars)) for line in text.split("\n")])
85
86 return text
87
88
89 # Framing
90
91
92 def _create_frame(dimensions):
93 """
94 Creates an frame with the given dimensions
95 :param dimensions: a tuple containing the width and height of the frame
96 :return: a white frame with the given dimensions
97 """
98
99 return Image.new("RGB", dimensions, FRAME_COLOR)
100
101
102 def _frame_image(image, frame, left_margin, top_margin):
103 """
104 Fix the image in the frame with the specified spacing
105 :param image: the image to frame
106 :param frame: the frame in which to fit the image
107 :param left_margin: the margin to the left of the image
108 :param top_margin: the margin to the top of the image
109 :return: the framed image
110 """
111
112 frame.paste(image, (left_margin, top_margin))
113 return frame
114
115
116 # Attribution
117
118
119 def _full_license(image_info):
120 """
121 Get the full license from the image info
122 :param image_info: the information about a particular image
123 :return: the full license text for the image
124 """
125
126 license_name = image_info["license"].upper()
127 license_version = image_info["license_version"].upper()
128 prefix = "" if license_name == "CC0" else "CC "
129
130 return f"{prefix}{license_name} {license_version}"
131
132
133 def _get_attribution_text(image_info):
134 """
135 Generate the attribution text from the image info
136 :param image_info: the info pertaining to the licensing of the image
137 :return: the attribution text
138 """
139
140 title = image_info["title"]
141 creator = image_info["creator"]
142 full_license = _full_license(image_info)
143
144 return f'"{title}" by {creator} is licensed under {full_license}.'
145
146
147 # Actions
148
149
150 def _open_image(url):
151 """
152 Read an image from a URL and convert it into a PIL Image object
153 :param url: the URL from where to read the image
154 :return: the PIL image object with the EXIF data
155 """
156 logger = parent_logger.getChild("_open_image")
157 try:
158 response = requests.get(url, headers=HEADERS)
159 img_bytes = BytesIO(response.content)
160 img = Image.open(img_bytes)
161 # Preserve EXIF metadata
162 if "exif" in img.info:
163 exif = piexif.load(img.info["exif"])
164 else:
165 exif = None
166 return img, exif
167 except requests.exceptions.RequestException as e:
168 capture_exception(e)
169 logger.error(f"Error loading image data: {e}")
170 return None, None
171
172
173 def _print_attribution_on_image(img, image_info):
174 """
175 Add a frame around the image and put the attribution text on the bottom
176 :param img: the image to frame and attribute
177 :param image_info: the information about a particular image
178 :return: return the framed and attributed image
179 """
180
181 width, height = img.size
182 smaller_dimension = _smaller_dimension(width, height)
183
184 if smaller_dimension is Dimension.NONE:
185 margin = round(MARGIN_RATIO * min(width, height))
186 font_size = round(FONT_RATIO * min(width, height))
187 new_width = width
188 else:
189 margin = round(MARGIN_RATIO * BREAKPOINT_DIMENSION)
190 font_size = round(FONT_RATIO * BREAKPOINT_DIMENSION)
191 new_width = (
192 BREAKPOINT_DIMENSION if Dimension.WIDTH in smaller_dimension else width
193 )
194
195 font = ImageFont.truetype(_get_font_path(), size=font_size)
196
197 text = _get_attribution_text(image_info)
198 text = _fit_in_width(text, font, new_width)
199 _, attribution_height = font.getsize_multiline(text)
200
201 frame_width = margin + new_width + margin
202 frame_height = margin + height + margin + attribution_height + margin
203 left_margin = (frame_width - width) // 2
204
205 frame = _create_frame(
206 (
207 frame_width,
208 frame_height,
209 )
210 )
211 _frame_image(img, frame, left_margin, margin)
212
213 draw = ImageDraw.Draw(frame)
214 text_position_x = margin
215 text_position_y = margin + height + margin
216 draw.text(
217 xy=(
218 text_position_x,
219 text_position_y,
220 ),
221 text=text,
222 font=font,
223 fill=TEXT_COLOR,
224 )
225
226 return frame
227
228
229 def watermark(image_url, info, draw_frame=True):
230 """
231 Returns a PIL Image with a watermark and embedded metadata.
232
233 :param image_url: The URL of the image.
234 :param info: A dictionary with keys title, creator, license, and
235 license_version
236 :param draw_frame: Whether to draw an attribution frame.
237 :returns: A PIL Image and its EXIF data, if included.
238 """
239
240 img, exif = _open_image(image_url)
241 if not draw_frame:
242 return img, exif
243 frame = _print_attribution_on_image(img, info)
244 return frame, exif
245
[end of api/catalog/api/utils/watermark.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/api/catalog/api/utils/watermark.py b/api/catalog/api/utils/watermark.py
--- a/api/catalog/api/utils/watermark.py
+++ b/api/catalog/api/utils/watermark.py
@@ -1,5 +1,6 @@
import logging
import os
+import struct
from enum import Flag, auto
from io import BytesIO
from textwrap import wrap
@@ -158,17 +159,18 @@
response = requests.get(url, headers=HEADERS)
img_bytes = BytesIO(response.content)
img = Image.open(img_bytes)
- # Preserve EXIF metadata
- if "exif" in img.info:
- exif = piexif.load(img.info["exif"])
- else:
- exif = None
- return img, exif
except requests.exceptions.RequestException as e:
capture_exception(e)
logger.error(f"Error loading image data: {e}")
return None, None
+ try:
+ # Preserve EXIF metadata
+ exif = piexif.load(img.info["exif"]) if "exif" in img.info else None
+ return img, exif
+ except struct.error:
+ return img, None
+
def _print_attribution_on_image(img, image_info):
"""
|
{"golden_diff": "diff --git a/api/catalog/api/utils/watermark.py b/api/catalog/api/utils/watermark.py\n--- a/api/catalog/api/utils/watermark.py\n+++ b/api/catalog/api/utils/watermark.py\n@@ -1,5 +1,6 @@\n import logging\n import os\n+import struct\n from enum import Flag, auto\n from io import BytesIO\n from textwrap import wrap\n@@ -158,17 +159,18 @@\n response = requests.get(url, headers=HEADERS)\n img_bytes = BytesIO(response.content)\n img = Image.open(img_bytes)\n- # Preserve EXIF metadata\n- if \"exif\" in img.info:\n- exif = piexif.load(img.info[\"exif\"])\n- else:\n- exif = None\n- return img, exif\n except requests.exceptions.RequestException as e:\n capture_exception(e)\n logger.error(f\"Error loading image data: {e}\")\n return None, None\n \n+ try:\n+ # Preserve EXIF metadata\n+ exif = piexif.load(img.info[\"exif\"]) if \"exif\" in img.info else None\n+ return img, exif\n+ except struct.error:\n+ return img, None\n+\n \n def _print_attribution_on_image(img, image_info):\n \"\"\"\n", "issue": "`piexif.dump` errors are not safely handled\n## Sentry link\r\n\r\n<!-- The public (aka \"share\") Sentry issue link. -->\r\nhttps://sentry.io/share/issue/a80d52de7f89436586ed0250cd0a32d2/\r\n\r\n## Description\r\n\r\n<!-- Example: We are trying to access property foo of ImportantClass but the instance is null. -->\r\n<!-- Include any additional information you may have, including potential remedies if any come to mind, and the general context of the code (what causes it to run in the app). -->\r\nThe call to `piexif.dump` should be wrapped in a `try/except` to prevent these errors in the watermark endpoint.\r\n\r\n<!-- Mention whether this is a known regression, i.e., the feature used to work and now does not. -->\r\n\r\n## Reproduction\r\n\r\n<!-- Share the steps to reproduce the issue, if you were able to, OR a note sharing that you tried to reproduce but weren\u2019t able to. 
-->\r\nVisit https://api-dev.openverse.engineering/v1/images/a913fde1-d524-4059-bd4f-9bd687578cc3/watermark/ to see an example of this failure.\n", "before_files": [{"content": "import logging\nimport os\nfrom enum import Flag, auto\nfrom io import BytesIO\nfrom textwrap import wrap\n\nfrom django.conf import settings\n\nimport piexif\nimport requests\nfrom PIL import Image, ImageDraw, ImageFont\nfrom sentry_sdk import capture_exception\n\n\nparent_logger = logging.getLogger(__name__)\n\n\nBREAKPOINT_DIMENSION = 400 # 400px\nMARGIN_RATIO = 0.04 # 4%\nFONT_RATIO = 0.04 # 4%\n\nFRAME_COLOR = \"#fff\" # White frame\nTEXT_COLOR = \"#000\" # Black text\nHEADERS = {\n \"User-Agent\": settings.OUTBOUND_USER_AGENT_TEMPLATE.format(purpose=\"Watermark\")\n}\n\n\nclass Dimension(Flag):\n \"\"\"\n This enum represents the two dimensions of an image\n \"\"\"\n\n HEIGHT = auto()\n WIDTH = auto()\n BOTH = HEIGHT | WIDTH\n NONE = 0\n\n\n# Utils\n\n\ndef _smaller_dimension(width, height):\n \"\"\"\n Determine which image dimensions are below the breakpoint dimensions\n :param width: the width of the image\n :param height: the height of the image\n :return: True if the image is small, False otherwise\n \"\"\"\n\n smaller_dimension = Dimension.NONE\n if width < BREAKPOINT_DIMENSION:\n smaller_dimension = smaller_dimension | Dimension.WIDTH\n if height < BREAKPOINT_DIMENSION:\n smaller_dimension = smaller_dimension | Dimension.HEIGHT\n return smaller_dimension\n\n\ndef _get_font_path(monospace=False):\n \"\"\"\n Return the path to the TTF font file\n :param monospace: True for monospaced font, False for variable-width font\n :return: the path to the TTF font file\n \"\"\"\n\n font_name = \"SourceCodePro-Bold.ttf\" if monospace else \"SourceSansPro-Bold.ttf\"\n font_path = os.path.join(os.path.dirname(__file__), \"fonts\", font_name)\n\n return font_path\n\n\ndef _fit_in_width(text, font, max_width):\n \"\"\"\n Break the given text so that it fits in the given space\n :param text: the text to fit in the limited width\n :param font: the font containing size and other info\n :param max_width: the maximum width the text is allowed to take\n :return: the fitted text\n \"\"\"\n\n char_width, _ = font.getsize(\"x\") # x has the closest to average width\n max_chars = max_width // char_width\n\n text = \"\\n\".join([\"\\n\".join(wrap(line, max_chars)) for line in text.split(\"\\n\")])\n\n return text\n\n\n# Framing\n\n\ndef _create_frame(dimensions):\n \"\"\"\n Creates an frame with the given dimensions\n :param dimensions: a tuple containing the width and height of the frame\n :return: a white frame with the given dimensions\n \"\"\"\n\n return Image.new(\"RGB\", dimensions, FRAME_COLOR)\n\n\ndef _frame_image(image, frame, left_margin, top_margin):\n \"\"\"\n Fix the image in the frame with the specified spacing\n :param image: the image to frame\n :param frame: the frame in which to fit the image\n :param left_margin: the margin to the left of the image\n :param top_margin: the margin to the top of the image\n :return: the framed image\n \"\"\"\n\n frame.paste(image, (left_margin, top_margin))\n return frame\n\n\n# Attribution\n\n\ndef _full_license(image_info):\n \"\"\"\n Get the full license from the image info\n :param image_info: the information about a particular image\n :return: the full license text for the image\n \"\"\"\n\n license_name = image_info[\"license\"].upper()\n license_version = image_info[\"license_version\"].upper()\n prefix = \"\" if license_name == \"CC0\" else \"CC \"\n\n return 
f\"{prefix}{license_name} {license_version}\"\n\n\ndef _get_attribution_text(image_info):\n \"\"\"\n Generate the attribution text from the image info\n :param image_info: the info pertaining to the licensing of the image\n :return: the attribution text\n \"\"\"\n\n title = image_info[\"title\"]\n creator = image_info[\"creator\"]\n full_license = _full_license(image_info)\n\n return f'\"{title}\" by {creator} is licensed under {full_license}.'\n\n\n# Actions\n\n\ndef _open_image(url):\n \"\"\"\n Read an image from a URL and convert it into a PIL Image object\n :param url: the URL from where to read the image\n :return: the PIL image object with the EXIF data\n \"\"\"\n logger = parent_logger.getChild(\"_open_image\")\n try:\n response = requests.get(url, headers=HEADERS)\n img_bytes = BytesIO(response.content)\n img = Image.open(img_bytes)\n # Preserve EXIF metadata\n if \"exif\" in img.info:\n exif = piexif.load(img.info[\"exif\"])\n else:\n exif = None\n return img, exif\n except requests.exceptions.RequestException as e:\n capture_exception(e)\n logger.error(f\"Error loading image data: {e}\")\n return None, None\n\n\ndef _print_attribution_on_image(img, image_info):\n \"\"\"\n Add a frame around the image and put the attribution text on the bottom\n :param img: the image to frame and attribute\n :param image_info: the information about a particular image\n :return: return the framed and attributed image\n \"\"\"\n\n width, height = img.size\n smaller_dimension = _smaller_dimension(width, height)\n\n if smaller_dimension is Dimension.NONE:\n margin = round(MARGIN_RATIO * min(width, height))\n font_size = round(FONT_RATIO * min(width, height))\n new_width = width\n else:\n margin = round(MARGIN_RATIO * BREAKPOINT_DIMENSION)\n font_size = round(FONT_RATIO * BREAKPOINT_DIMENSION)\n new_width = (\n BREAKPOINT_DIMENSION if Dimension.WIDTH in smaller_dimension else width\n )\n\n font = ImageFont.truetype(_get_font_path(), size=font_size)\n\n text = _get_attribution_text(image_info)\n text = _fit_in_width(text, font, new_width)\n _, attribution_height = font.getsize_multiline(text)\n\n frame_width = margin + new_width + margin\n frame_height = margin + height + margin + attribution_height + margin\n left_margin = (frame_width - width) // 2\n\n frame = _create_frame(\n (\n frame_width,\n frame_height,\n )\n )\n _frame_image(img, frame, left_margin, margin)\n\n draw = ImageDraw.Draw(frame)\n text_position_x = margin\n text_position_y = margin + height + margin\n draw.text(\n xy=(\n text_position_x,\n text_position_y,\n ),\n text=text,\n font=font,\n fill=TEXT_COLOR,\n )\n\n return frame\n\n\ndef watermark(image_url, info, draw_frame=True):\n \"\"\"\n Returns a PIL Image with a watermark and embedded metadata.\n\n :param image_url: The URL of the image.\n :param info: A dictionary with keys title, creator, license, and\n license_version\n :param draw_frame: Whether to draw an attribution frame.\n :returns: A PIL Image and its EXIF data, if included.\n \"\"\"\n\n img, exif = _open_image(image_url)\n if not draw_frame:\n return img, exif\n frame = _print_attribution_on_image(img, info)\n return frame, exif\n", "path": "api/catalog/api/utils/watermark.py"}]}
| 3,113 | 289 |
gh_patches_debug_37926 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-5842 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Samsonite spider finds dealers, not official stores
This spider is wrong, e.g., the stores in Norway are not official Samsonite stores but dealers carrying the Samsonite brand
E.g., this is Chillout Travel Store, not a Samsonite store
https://www.alltheplaces.xyz/map/#15.79/59.920398/10.757257
The website does list official stores and dealers separately, so it should be possible to import the right type?
https://www.samsonite.no/samsonite-store/?search=dealer&city=&country=no&lat=59.920469259204786&lng=10.755597088646583&radius=20
_Originally posted by @eisams in https://github.com/alltheplaces/alltheplaces/issues/4385#issuecomment-1586255246_
</issue>
<code>
[start of locations/spiders/samsonite_eu.py]
1 import scrapy
2 import xmltodict
3
4 from locations.dict_parser import DictParser
5
6
7 class SamsoniteEuSpider(scrapy.Spider):
8 name = "samsonite_eu"
9 item_attributes = {
10 "brand": "Samsonite",
11 "brand_wikidata": "Q1203426",
12 }
13 allowed_domains = ["samsonite.com"]
14
15 def start_requests(self):
16 country_eu = [
17 "AL",
18 "CZ",
19 "DE",
20 "DK",
21 "CY",
22 "AT",
23 "BE",
24 "BG",
25 "CH",
26 "EE",
27 "EL",
28 "ES",
29 "FI",
30 "FR",
31 "HR",
32 "HU",
33 "IE",
34 "IS",
35 "IT",
36 "LT",
37 "LU",
38 "NL",
39 "NO",
40 "LV",
41 "ME",
42 "MT",
43 "MK",
44 "LI",
45 "PL",
46 "SI",
47 "SK",
48 "TR",
49 "UK",
50 "RS",
51 "SE",
52 "PT",
53 "RO",
54 ]
55 template = "https://storelocator.samsonite.eu/data-exchange/getDealerLocatorMapV2_Radius.aspx?s=sams&country={}&search=dealer&lat=48.85799300000001&lng=2.381153&radius=100000"
56 for country in country_eu:
57 yield scrapy.Request(url=template.format(country), callback=self.parse)
58
59 def parse(self, response):
60 data = xmltodict.parse(response.text)
61 if data.get("dealers"):
62 stores = data.get("dealers", {}).get("dealer")
63 stores = stores if type(stores) == list else [stores]
64 for store in stores:
65 item = DictParser.parse(store)
66 item["ref"] = store.get("fld_Deal_Id")
67 item["street_address"] = store.get("fld_Deal_Address1")
68 item["city"] = store.get("fld_Deal_City1")
69 item["postcode"] = store.get("fld_Deal_Zip")
70 item["country"] = store.get("fld_Coun_Name")
71 item["phone"] = store.get("fld_Deal_Phone")
72 item["email"] = store.get("fld_Deal_Email")
73
74 yield item
75
[end of locations/spiders/samsonite_eu.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/samsonite_eu.py b/locations/spiders/samsonite_eu.py
--- a/locations/spiders/samsonite_eu.py
+++ b/locations/spiders/samsonite_eu.py
@@ -1,15 +1,13 @@
import scrapy
import xmltodict
-from locations.dict_parser import DictParser
+from locations.items import Feature, add_social_media
class SamsoniteEuSpider(scrapy.Spider):
name = "samsonite_eu"
- item_attributes = {
- "brand": "Samsonite",
- "brand_wikidata": "Q1203426",
- }
+ CHIC_ACCENT = {"brand": "Chic Accent"}
+ SAMSONITE = {"brand": "Samsonite", "brand_wikidata": "Q1203426"}
allowed_domains = ["samsonite.com"]
def start_requests(self):
@@ -51,6 +49,7 @@
"SE",
"PT",
"RO",
+ "GB",
]
template = "https://storelocator.samsonite.eu/data-exchange/getDealerLocatorMapV2_Radius.aspx?s=sams&country={}&search=dealer&lat=48.85799300000001&lng=2.381153&radius=100000"
for country in country_eu:
@@ -62,13 +61,31 @@
stores = data.get("dealers", {}).get("dealer")
stores = stores if type(stores) == list else [stores]
for store in stores:
- item = DictParser.parse(store)
+ if store["fld_Deal_DeCl_ID"] != "9":
+ continue
+ item = Feature()
+ item["lat"] = store["Latitude"]
+ item["lon"] = store["Longitude"]
item["ref"] = store.get("fld_Deal_Id")
item["street_address"] = store.get("fld_Deal_Address1")
item["city"] = store.get("fld_Deal_City1")
item["postcode"] = store.get("fld_Deal_Zip")
item["country"] = store.get("fld_Coun_Name")
- item["phone"] = store.get("fld_Deal_Phone")
- item["email"] = store.get("fld_Deal_Email")
+ item["email"] = store.get("fld_Deal_Email") or ""
+ item["website"] = store["fld_Deal_DetailPageUrl"]
+
+ if "chicaccent.com" in item["email"]:
+ item.update(self.CHIC_ACCENT)
+ else:
+ item.update(self.SAMSONITE)
+
+ if phone := store.get("fld_Deal_Phone"):
+ phone = store["fld_Deal_Prefix"] + phone.lower()
+
+ if "whatsapp" in phone:
+ phone, whats_app = phone.split("whatsapp")
+ add_social_media(item, "WhatsApp", whats_app.strip(" :"))
+
+ item["phone"] = phone
yield item
|
{"golden_diff": "diff --git a/locations/spiders/samsonite_eu.py b/locations/spiders/samsonite_eu.py\n--- a/locations/spiders/samsonite_eu.py\n+++ b/locations/spiders/samsonite_eu.py\n@@ -1,15 +1,13 @@\n import scrapy\n import xmltodict\n \n-from locations.dict_parser import DictParser\n+from locations.items import Feature, add_social_media\n \n \n class SamsoniteEuSpider(scrapy.Spider):\n name = \"samsonite_eu\"\n- item_attributes = {\n- \"brand\": \"Samsonite\",\n- \"brand_wikidata\": \"Q1203426\",\n- }\n+ CHIC_ACCENT = {\"brand\": \"Chic Accent\"}\n+ SAMSONITE = {\"brand\": \"Samsonite\", \"brand_wikidata\": \"Q1203426\"}\n allowed_domains = [\"samsonite.com\"]\n \n def start_requests(self):\n@@ -51,6 +49,7 @@\n \"SE\",\n \"PT\",\n \"RO\",\n+ \"GB\",\n ]\n template = \"https://storelocator.samsonite.eu/data-exchange/getDealerLocatorMapV2_Radius.aspx?s=sams&country={}&search=dealer&lat=48.85799300000001&lng=2.381153&radius=100000\"\n for country in country_eu:\n@@ -62,13 +61,31 @@\n stores = data.get(\"dealers\", {}).get(\"dealer\")\n stores = stores if type(stores) == list else [stores]\n for store in stores:\n- item = DictParser.parse(store)\n+ if store[\"fld_Deal_DeCl_ID\"] != \"9\":\n+ continue\n+ item = Feature()\n+ item[\"lat\"] = store[\"Latitude\"]\n+ item[\"lon\"] = store[\"Longitude\"]\n item[\"ref\"] = store.get(\"fld_Deal_Id\")\n item[\"street_address\"] = store.get(\"fld_Deal_Address1\")\n item[\"city\"] = store.get(\"fld_Deal_City1\")\n item[\"postcode\"] = store.get(\"fld_Deal_Zip\")\n item[\"country\"] = store.get(\"fld_Coun_Name\")\n- item[\"phone\"] = store.get(\"fld_Deal_Phone\")\n- item[\"email\"] = store.get(\"fld_Deal_Email\")\n+ item[\"email\"] = store.get(\"fld_Deal_Email\") or \"\"\n+ item[\"website\"] = store[\"fld_Deal_DetailPageUrl\"]\n+\n+ if \"chicaccent.com\" in item[\"email\"]:\n+ item.update(self.CHIC_ACCENT)\n+ else:\n+ item.update(self.SAMSONITE)\n+\n+ if phone := store.get(\"fld_Deal_Phone\"):\n+ phone = store[\"fld_Deal_Prefix\"] + phone.lower()\n+\n+ if \"whatsapp\" in phone:\n+ phone, whats_app = phone.split(\"whatsapp\")\n+ add_social_media(item, \"WhatsApp\", whats_app.strip(\" :\"))\n+\n+ item[\"phone\"] = phone\n \n yield item\n", "issue": "Samsonite spider finds dealers, not official stores\nThis spider is wrong, e.g., the stores in Norway are not official Samsonite stores but dealers carrying the Samsonite brand\r\n\r\nE.g., this is Chillout Travel Store, not a Samsonite store\r\nhttps://www.alltheplaces.xyz/map/#15.79/59.920398/10.757257\r\n\r\nThe website does list official stores and dealers separately, so it should be possible to import the right type?\r\nhttps://www.samsonite.no/samsonite-store/?search=dealer&city=&country=no&lat=59.920469259204786&lng=10.755597088646583&radius=20\r\n\r\n_Originally posted by @eisams in https://github.com/alltheplaces/alltheplaces/issues/4385#issuecomment-1586255246_\r\n \n", "before_files": [{"content": "import scrapy\nimport xmltodict\n\nfrom locations.dict_parser import DictParser\n\n\nclass SamsoniteEuSpider(scrapy.Spider):\n name = \"samsonite_eu\"\n item_attributes = {\n \"brand\": \"Samsonite\",\n \"brand_wikidata\": \"Q1203426\",\n }\n allowed_domains = [\"samsonite.com\"]\n\n def start_requests(self):\n country_eu = [\n \"AL\",\n \"CZ\",\n \"DE\",\n \"DK\",\n \"CY\",\n \"AT\",\n \"BE\",\n \"BG\",\n \"CH\",\n \"EE\",\n \"EL\",\n \"ES\",\n \"FI\",\n \"FR\",\n \"HR\",\n \"HU\",\n \"IE\",\n \"IS\",\n \"IT\",\n \"LT\",\n \"LU\",\n \"NL\",\n \"NO\",\n \"LV\",\n \"ME\",\n \"MT\",\n \"MK\",\n \"LI\",\n 
\"PL\",\n \"SI\",\n \"SK\",\n \"TR\",\n \"UK\",\n \"RS\",\n \"SE\",\n \"PT\",\n \"RO\",\n ]\n template = \"https://storelocator.samsonite.eu/data-exchange/getDealerLocatorMapV2_Radius.aspx?s=sams&country={}&search=dealer&lat=48.85799300000001&lng=2.381153&radius=100000\"\n for country in country_eu:\n yield scrapy.Request(url=template.format(country), callback=self.parse)\n\n def parse(self, response):\n data = xmltodict.parse(response.text)\n if data.get(\"dealers\"):\n stores = data.get(\"dealers\", {}).get(\"dealer\")\n stores = stores if type(stores) == list else [stores]\n for store in stores:\n item = DictParser.parse(store)\n item[\"ref\"] = store.get(\"fld_Deal_Id\")\n item[\"street_address\"] = store.get(\"fld_Deal_Address1\")\n item[\"city\"] = store.get(\"fld_Deal_City1\")\n item[\"postcode\"] = store.get(\"fld_Deal_Zip\")\n item[\"country\"] = store.get(\"fld_Coun_Name\")\n item[\"phone\"] = store.get(\"fld_Deal_Phone\")\n item[\"email\"] = store.get(\"fld_Deal_Email\")\n\n yield item\n", "path": "locations/spiders/samsonite_eu.py"}]}
| 1,430 | 699 |
gh_patches_debug_25598 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3459 |
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider public_storage is broken
During the global build at 2021-08-04-14-42-45, spider **public_storage** failed with **834 features** and **1879 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-04-14-42-45/logs/public_storage.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-04-14-42-45/output/public_storage.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-04-14-42-45/output/public_storage.geojson))
</issue>
<code>
[start of locations/spiders/public_storage.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4
5 from locations.items import GeojsonPointItem
6 from locations.hours import OpeningHours
7
8
9 class PublicStorageSpider(scrapy.Spider):
10 name = "public_storage"
11 item_attributes = { 'brand': "Public Storage" }
12 allowed_domains = ["www.publicstorage.com"]
13 start_urls = (
14 'https://www.publicstorage.com/sitemap_plp.xml',
15 )
16
17 def parse(self, response):
18 response.selector.remove_namespaces()
19 city_urls = response.xpath('//url/loc/text()').extract()
20 for path in city_urls:
21 yield scrapy.Request(
22 path.strip(),
23 callback=self.parse_store,
24 )
25
26 def parse_hours(self, hours):
27 opening_hours = OpeningHours()
28
29 for hour in hours:
30 for day in hour['dayOfWeek']:
31 opening_hours.add_range(
32 day=day[:2],
33 open_time=hour["opens"],
34 close_time=hour["closes"],
35 )
36
37 return opening_hours.as_opening_hours()
38
39 def parse_store(self, response):
40 data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
41 data = data['@graph'][0]
42
43 properties = {
44 "ref": data['@id'],
45 "opening_hours": self.parse_hours(data['openingHoursSpecification']),
46 "addr_full": data['address']['streetAddress'],
47 "city": data['address']['addressLocality'],
48 "state": data['address']['addressRegion'],
49 "postcode": data['address']['postalCode'],
50 "phone": data['telephone'],
51 "lat": data['geo']['latitude'],
52 "lon": data['geo']['longitude'],
53 }
54
55 yield GeojsonPointItem(**properties)
56
[end of locations/spiders/public_storage.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
|
diff --git a/locations/spiders/public_storage.py b/locations/spiders/public_storage.py
--- a/locations/spiders/public_storage.py
+++ b/locations/spiders/public_storage.py
@@ -20,9 +20,13 @@
for path in city_urls:
yield scrapy.Request(
path.strip(),
- callback=self.parse_store,
+ callback=self.load_store,
)
+ def load_store(self, response):
+ ldjson = response.xpath('//link[@type="application/ld+json"]/@href').get()
+ yield scrapy.Request(response.urljoin(ldjson), callback=self.parse_store)
+
def parse_hours(self, hours):
opening_hours = OpeningHours()
@@ -37,11 +41,11 @@
return opening_hours.as_opening_hours()
def parse_store(self, response):
- data = json.loads(response.xpath('//script[@type="application/ld+json"]/text()').extract_first())
- data = data['@graph'][0]
+ data = response.json()['@graph'][0]
properties = {
"ref": data['@id'],
+ "website": data['url'],
"opening_hours": self.parse_hours(data['openingHoursSpecification']),
"addr_full": data['address']['streetAddress'],
"city": data['address']['addressLocality'],
|
{"golden_diff": "diff --git a/locations/spiders/public_storage.py b/locations/spiders/public_storage.py\n--- a/locations/spiders/public_storage.py\n+++ b/locations/spiders/public_storage.py\n@@ -20,9 +20,13 @@\n for path in city_urls:\n yield scrapy.Request(\n path.strip(),\n- callback=self.parse_store,\n+ callback=self.load_store,\n )\n \n+ def load_store(self, response):\n+ ldjson = response.xpath('//link[@type=\"application/ld+json\"]/@href').get()\n+ yield scrapy.Request(response.urljoin(ldjson), callback=self.parse_store)\n+\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n \n@@ -37,11 +41,11 @@\n return opening_hours.as_opening_hours()\n \n def parse_store(self, response):\n- data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n- data = data['@graph'][0]\n+ data = response.json()['@graph'][0]\n \n properties = {\n \"ref\": data['@id'],\n+ \"website\": data['url'],\n \"opening_hours\": self.parse_hours(data['openingHoursSpecification']),\n \"addr_full\": data['address']['streetAddress'],\n \"city\": data['address']['addressLocality'],\n", "issue": "Spider public_storage is broken\nDuring the global build at 2021-08-04-14-42-45, spider **public_storage** failed with **834 features** and **1879 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-04-14-42-45/logs/public_storage.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-04-14-42-45/output/public_storage.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-04-14-42-45/output/public_storage.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass PublicStorageSpider(scrapy.Spider):\n name = \"public_storage\"\n item_attributes = { 'brand': \"Public Storage\" }\n allowed_domains = [\"www.publicstorage.com\"]\n start_urls = (\n 'https://www.publicstorage.com/sitemap_plp.xml',\n )\n\n def parse(self, response):\n response.selector.remove_namespaces()\n city_urls = response.xpath('//url/loc/text()').extract()\n for path in city_urls:\n yield scrapy.Request(\n path.strip(),\n callback=self.parse_store,\n )\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n\n for hour in hours:\n for day in hour['dayOfWeek']:\n opening_hours.add_range(\n day=day[:2],\n open_time=hour[\"opens\"],\n close_time=hour[\"closes\"],\n )\n\n return opening_hours.as_opening_hours()\n\n def parse_store(self, response):\n data = json.loads(response.xpath('//script[@type=\"application/ld+json\"]/text()').extract_first())\n data = data['@graph'][0]\n\n properties = {\n \"ref\": data['@id'],\n \"opening_hours\": self.parse_hours(data['openingHoursSpecification']),\n \"addr_full\": data['address']['streetAddress'],\n \"city\": data['address']['addressLocality'],\n \"state\": data['address']['addressRegion'],\n \"postcode\": data['address']['postalCode'],\n \"phone\": data['telephone'],\n \"lat\": data['geo']['latitude'],\n \"lon\": data['geo']['longitude'],\n }\n\n yield GeojsonPointItem(**properties)\n", "path": "locations/spiders/public_storage.py"}]}
| 1,203 | 292 |