| problem_id | source | task_type | in_source_id | prompt | golden_diff | verification_info | num_tokens_prompt | num_tokens_diff |
|---|---|---|---|---|---|---|---|---|
| stringlengths 18-22 | stringclasses 1 value | stringclasses 1 value | stringlengths 13-58 | stringlengths 1.71k-9.01k | stringlengths 151-4.94k | stringlengths 465-11.3k | int64 557-2.05k | int64 48-1.02k |
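
The columns above can be consumed programmatically. The following is a minimal loading sketch; it assumes the standard `datasets` API, that the dataset is published under the `rasdani/github-patches` id shown in the `source` column, and that the split is named `train` (an assumption).

```python
# Minimal sketch. Assumptions: dataset id "rasdani/github-patches" and a
# "train" split; both are inferred from context, not confirmed by this page.
from datasets import load_dataset

ds = load_dataset("rasdani/github-patches", split="train")
row = ds[0]
print(row["problem_id"])          # e.g. "gh_patches_debug_29959"
print(row["prompt"][:300])        # issue + code + patch-format instructions
print(row["golden_diff"][:300])   # reference patch used for verification
```

Each record below is rendered in column order: the identifier fields, the full `prompt`, the `golden_diff`, the `verification_info` JSON, and the two token counts.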
gh_patches_debug_29959 | rasdani/github-patches | git_diff | joke2k__faker-1800 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Latvian ssn generator is not correct
* Faker version: 17.0.0
* OS: Windows
Latvian ssn generator generates ssn in format 'ddd-dd-dddd', which is not correct.
### Steps to reproduce
```
from faker import Faker
fake = Faker('lv_LV')
fake.ssn()
```
### Expected behavior
It should be 12 characters long DDMMYY-CZZZQ e.g. 170334-19862
### Actual behavior
fake.ssn() for latvian locale outputs ssn in format 'ddd-dd-dddd'
'604-87-6475'
'824-21-9489'
'411-57-8156'
'938-77-8408'
</issue>
<code>
[start of faker/providers/ssn/lv_LV/__init__.py]
1 from .. import Provider as BaseProvider
2
3
4 class Provider(BaseProvider):
5 """
6 A Faker provider for the Latvian VAT IDs
7 """
8
9 vat_id_formats = ("LV###########",)
10
11 def vat_id(self) -> str:
12 """
13 http://ec.europa.eu/taxation_customs/vies/faq.html#item_11
14 :return: a random Latvian VAT ID
15 """
16
17 return self.bothify(self.random_element(self.vat_id_formats))
18
[end of faker/providers/ssn/lv_LV/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/faker/providers/ssn/lv_LV/__init__.py b/faker/providers/ssn/lv_LV/__init__.py
--- a/faker/providers/ssn/lv_LV/__init__.py
+++ b/faker/providers/ssn/lv_LV/__init__.py
@@ -1,7 +1,56 @@
-from .. import Provider as BaseProvider
+import datetime
+from .. import Provider as SsnProvider
+
+
+class Provider(SsnProvider):
+ def ssn(self, min_age: int = 0, max_age: int = 105) -> str:
+ """
+ Returns 11 character Latvian personal identity code (Personas kods).
+ This function assigns random age to person.
+
+ Personal code consists of eleven characters of the form DDMMYYCZZZQ, where
+ DDMMYY is the date of birth, C the century sign, ZZZ the individual
+ number and Q the control character (checksum). The number for the
+ century is either 0 (1800–1899), 1 (1900–1999), or 2 (2000–2099).
+ """
+
+ def _checksum(ssn_without_checksum):
+ weights = [1, 6, 3, 7, 9, 10, 5, 8, 4, 2]
+ weighted_sum = sum(int(digit) * weight for digit, weight in zip(ssn_without_checksum, weights))
+ reminder = (1 - weighted_sum) % 11
+ if reminder == 10:
+ return 0
+ elif reminder < -1:
+ return reminder + 11
+ return reminder
+
+ age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))
+ birthday = datetime.date.today() - age
+ ssn_date = "%02d%02d%s" % (
+ birthday.day,
+ birthday.month,
+ str(birthday.year)[-2:],
+ )
+ century = self._get_century_code(birthday.year) # Century
+ suffix = self.generator.random.randrange(111, 999)
+ checksum = _checksum(f"{ssn_date}{century:01d}{suffix:03d}")
+ ssn = f"{ssn_date}-{century:01d}{suffix:03d}{checksum:01d}"
+ return ssn
+
+ @staticmethod
+ def _get_century_code(year: int) -> int:
+ """Returns the century code for a given year"""
+ if 2000 <= year < 3000:
+ code = 2
+ elif 1900 <= year < 2000:
+ code = 1
+ elif 1800 <= year < 1900:
+ code = 0
+ else:
+ raise ValueError("SSN do not support people born before the year 1800 or after the year 2999")
+ return code
-class Provider(BaseProvider):
"""
A Faker provider for the Latvian VAT IDs
"""
| {"golden_diff": "diff --git a/faker/providers/ssn/lv_LV/__init__.py b/faker/providers/ssn/lv_LV/__init__.py\n--- a/faker/providers/ssn/lv_LV/__init__.py\n+++ b/faker/providers/ssn/lv_LV/__init__.py\n@@ -1,7 +1,56 @@\n-from .. import Provider as BaseProvider\n+import datetime\n \n+from .. import Provider as SsnProvider\n+\n+\n+class Provider(SsnProvider):\n+ def ssn(self, min_age: int = 0, max_age: int = 105) -> str:\n+ \"\"\"\n+ Returns 11 character Latvian personal identity code (Personas kods).\n+ This function assigns random age to person.\n+\n+ Personal code consists of eleven characters of the form DDMMYYCZZZQ, where\n+ DDMMYY is the date of birth, C the century sign, ZZZ the individual\n+ number and Q the control character (checksum). The number for the\n+ century is either 0 (1800\u20131899), 1 (1900\u20131999), or 2 (2000\u20132099).\n+ \"\"\"\n+\n+ def _checksum(ssn_without_checksum):\n+ weights = [1, 6, 3, 7, 9, 10, 5, 8, 4, 2]\n+ weighted_sum = sum(int(digit) * weight for digit, weight in zip(ssn_without_checksum, weights))\n+ reminder = (1 - weighted_sum) % 11\n+ if reminder == 10:\n+ return 0\n+ elif reminder < -1:\n+ return reminder + 11\n+ return reminder\n+\n+ age = datetime.timedelta(days=self.generator.random.randrange(min_age * 365, max_age * 365))\n+ birthday = datetime.date.today() - age\n+ ssn_date = \"%02d%02d%s\" % (\n+ birthday.day,\n+ birthday.month,\n+ str(birthday.year)[-2:],\n+ )\n+ century = self._get_century_code(birthday.year) # Century\n+ suffix = self.generator.random.randrange(111, 999)\n+ checksum = _checksum(f\"{ssn_date}{century:01d}{suffix:03d}\")\n+ ssn = f\"{ssn_date}-{century:01d}{suffix:03d}{checksum:01d}\"\n+ return ssn\n+\n+ @staticmethod\n+ def _get_century_code(year: int) -> int:\n+ \"\"\"Returns the century code for a given year\"\"\"\n+ if 2000 <= year < 3000:\n+ code = 2\n+ elif 1900 <= year < 2000:\n+ code = 1\n+ elif 1800 <= year < 1900:\n+ code = 0\n+ else:\n+ raise ValueError(\"SSN do not support people born before the year 1800 or after the year 2999\")\n+ return code\n \n-class Provider(BaseProvider):\n \"\"\"\n A Faker provider for the Latvian VAT IDs\n \"\"\"\n", "issue": "Latvian ssn generator is not correct\n* Faker version: 17.0.0\r\n* OS: Windows\r\n\r\n\r\nLatvian ssn generator generates ssn in format 'ddd-dd-dddd', which is not correct. \r\n\r\n### Steps to reproduce\r\n\r\n```\r\nfrom faker import Faker\r\nfake = Faker('lv_LV')\r\nfake.ssn()\r\n\r\n```\r\n\r\n### Expected behavior\r\n\r\nIt should be 12 characters long DDMMYY-CZZZQ e.g. 170334-19862\r\n\r\n### Actual behavior\r\n\r\nfake.ssn() for latvian locale outputs ssn in format 'ddd-dd-dddd'\r\n'604-87-6475'\r\n'824-21-9489'\r\n'411-57-8156'\r\n'938-77-8408'\r\n\n", "before_files": [{"content": "from .. import Provider as BaseProvider\n\n\nclass Provider(BaseProvider):\n \"\"\"\n A Faker provider for the Latvian VAT IDs\n \"\"\"\n\n vat_id_formats = (\"LV###########\",)\n\n def vat_id(self) -> str:\n \"\"\"\n http://ec.europa.eu/taxation_customs/vies/faq.html#item_11\n :return: a random Latvian VAT ID\n \"\"\"\n\n return self.bothify(self.random_element(self.vat_id_formats))\n", "path": "faker/providers/ssn/lv_LV/__init__.py"}]} | 876 | 739 |
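
As a quick illustration of the format fix in the record above: the issue asks for `DDMMYY-CZZZQ` (for example `170334-19862`), and the golden diff builds exactly six date digits, a hyphen, then the century digit, a three-digit suffix, and the checksum digit. A sanity-check sketch, reusing the reproducer's locale call:

```python
# Sketch of a post-fix sanity check; Faker("lv_LV").ssn() is taken from the
# issue's reproducer, the pattern from the patched return value.
import re

from faker import Faker

fake = Faker("lv_LV")
value = fake.ssn()
assert re.fullmatch(r"\d{6}-\d{5}", value), value  # DDMMYY-CZZZQ shape
```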
gh_patches_debug_11013 | rasdani/github-patches | git_diff | holoviz__panel-6293 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
pn.template attribute not found when lazy loading panel
panel==1.3.8
As Panel takes ~2secs to import I would like to lazy load it in one of my packages. This package provides different utility functions. Only some are panel related.
Lazy loading of modules in Python is defined here https://docs.python.org/3/library/importlib.html#implementing-lazy-imports. When I use that I get an `AttributeError` because the attribute `template` is not defined on `panel`. Without lazy loading this works.
## Minimum reproducible example
`lazy_loader.py`
```python
import importlib.util
import sys
def lazy_import(name):
spec = importlib.util.find_spec(name)
loader = importlib.util.LazyLoader(spec.loader)
spec.loader = loader
module = importlib.util.module_from_spec(spec)
sys.modules[name] = module
loader.exec_module(module)
return module
pn = lazy_import("panel")
# import panel as pn
def extension():
return pn.extension()
```
`app.py`
```python
from lazy_loader import extension
import panel as pn
extension()
pn.template.FastListTemplate(title="Hello", main=["world"]).servable()
```
```bash
panel serve app.py --autoreload --index app
```

¨
```bash
AttributeError: module 'panel' has no attribute 'template'
Traceback (most recent call last):
File "/home/jovyan/repos/aw-lib/.venv/lib/python3.11/site-packages/bokeh/application/handlers/code_runner.py", line 229, in run
exec(self._code, module.__dict__)
File "/home/jovyan/repos/aw-lib/app.py", line 6, in <module>
pn.template.FastListTemplate(title="Hello", main=["world"]).servable()
^^^^^^^^^^^
AttributeError: module 'panel' has no attribute 'template'
```
</issue>
<code>
[start of panel/__init__.py]
1 """
2 Panel is a high level app and dashboarding framework
3 ====================================================
4
5 Panel is an open-source Python library that lets you create custom
6 interactive web apps and dashboards by connecting user-defined widgets
7 to plots, images, tables, or text.
8
9 Panel works with the tools you know and ❤️.
10
11 Check out https://panel.holoviz.org/
12
13 .. figure:: https://user-images.githubusercontent.com/42288570/152672367-6c239073-0ea0-4a2b-a4c0-817e8090e877.gif
14 :alt: Panel Dashboard
15
16 Panel Dashboard
17
18 How to develop a Panel app in 3 simple steps
19 --------------------------------------------
20
21 - Write the app
22
23 >>> import panel as pn
24 >>> pn.extension(sizing_mode="stretch_width", template="fast")
25 >>> pn.state.template.param.update(title="My Data App")
26 >>> pn.panel(some_python_object).servable()
27
28 - Run your app
29
30 $ panel serve my_script.py --autoreload --show
31
32 or
33
34 $ panel serve my_notebook.ipynb --autoreload --show
35
36 The app will be available in your browser!
37
38 - Change your code and save it
39
40 The app will reload with your changes!
41
42 You can also add automatic reload to jupyterlab. Check out
43 https://blog.holoviz.org/panel_0.12.0.html#JupyterLab-previews
44
45 To learn more about Panel check out
46 https://panel.holoviz.org/getting_started/index.html
47 """
48 from param import rx
49
50 from . import chat # noqa
51 from . import layout # noqa
52 from . import links # noqa
53 from . import pane # noqa
54 from . import param # noqa
55 from . import pipeline # noqa
56 from . import reactive # noqa
57 from . import viewable # noqa
58 from . import widgets # noqa
59 from .config import __version__, config, panel_extension as extension # noqa
60 from .depends import bind, depends # noqa
61 from .interact import interact # noqa
62 from .io import ( # noqa
63 _jupyter_server_extension_paths, cache, ipywidget, serve, state,
64 )
65 from .layout import ( # noqa
66 Accordion, Card, Column, FlexBox, FloatPanel, GridBox, GridSpec, GridStack,
67 HSpacer, Row, Spacer, Swipe, Tabs, VSpacer, WidgetBox,
68 )
69 from .pane import panel # noqa
70 from .param import Param, ReactiveExpr # noqa
71 from .template import Template # noqa
72 from .widgets import indicators, widget # noqa
73
74 __all__ = (
75 "__version__",
76 "Accordion",
77 "Card",
78 "chat",
79 "Column",
80 "FlexBox",
81 "FloatPanel",
82 "GridBox",
83 "GridSpec",
84 "GridStack",
85 "HSpacer",
86 "Param",
87 "ReactiveExpr",
88 "Row",
89 "Spacer",
90 "Tabs",
91 "Template",
92 "VSpacer",
93 "WidgetBox",
94 "bind",
95 "cache",
96 "config",
97 "depends",
98 "extension",
99 "indicators",
100 "interact",
101 "ipywidget",
102 "layout",
103 "links",
104 "pane",
105 "panel",
106 "param",
107 "pipeline",
108 "rx",
109 "serve",
110 "state",
111 "viewable",
112 "widgets",
113 "widget"
114 )
115
[end of panel/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/panel/__init__.py b/panel/__init__.py
--- a/panel/__init__.py
+++ b/panel/__init__.py
@@ -54,6 +54,7 @@
from . import param # noqa
from . import pipeline # noqa
from . import reactive # noqa
+from . import template # noqa
from . import viewable # noqa
from . import widgets # noqa
from .config import __version__, config, panel_extension as extension # noqa
@@ -108,6 +109,7 @@
"rx",
"serve",
"state",
+ "template",
"viewable",
"widgets",
"widget"
| {"golden_diff": "diff --git a/panel/__init__.py b/panel/__init__.py\n--- a/panel/__init__.py\n+++ b/panel/__init__.py\n@@ -54,6 +54,7 @@\n from . import param # noqa\n from . import pipeline # noqa\n from . import reactive # noqa\n+from . import template # noqa\n from . import viewable # noqa\n from . import widgets # noqa\n from .config import __version__, config, panel_extension as extension # noqa\n@@ -108,6 +109,7 @@\n     \"rx\",\n     \"serve\",\n     \"state\",\n+    \"template\",\n     \"viewable\",\n     \"widgets\",\n     \"widget\"\n", "issue": "pn.template attribute not found when lazy loading panel\npanel==1.3.8\r\n\r\nAs Panel takes ~2secs to import I would like to lazy load it in one of my packages. This package provides different utility functions. Only some are panel related.\r\n\r\nLazy loading of modules in Python is defined here https://docs.python.org/3/library/importlib.html#implementing-lazy-imports. When I use that I get an `AttributeError` because the attribute `template` is not defined on `panel`. Without lazy loading this works.\r\n\r\n## Minimum reproducible example\r\n\r\n`lazy_loader.py`\r\n\r\n```python\r\nimport importlib.util\r\nimport sys\r\n\r\ndef lazy_import(name):\r\n    spec = importlib.util.find_spec(name)\r\n    loader = importlib.util.LazyLoader(spec.loader)\r\n    spec.loader = loader\r\n    module = importlib.util.module_from_spec(spec)\r\n    sys.modules[name] = module\r\n    loader.exec_module(module)\r\n    return module\r\n\r\npn = lazy_import(\"panel\")\r\n# import panel as pn\r\n\r\ndef extension():\r\n    return pn.extension()\r\n```\r\n\r\n`app.py`\r\n\r\n```python\r\nfrom lazy_loader import extension\r\nimport panel as pn\r\n\r\nextension()\r\n\r\npn.template.FastListTemplate(title=\"Hello\", main=[\"world\"]).servable()\r\n```\r\n\r\n```bash\r\npanel serve app.py --autoreload --index app\r\n```\r\n\r\n\r\n\u00a8\r\n\r\n```bash\r\nAttributeError: module 'panel' has no attribute 'template'\r\n\r\nTraceback (most recent call last):\r\n  File \"/home/jovyan/repos/aw-lib/.venv/lib/python3.11/site-packages/bokeh/application/handlers/code_runner.py\", line 229, in run\r\n    exec(self._code, module.__dict__)\r\n  File \"/home/jovyan/repos/aw-lib/app.py\", line 6, in <module>\r\n    pn.template.FastListTemplate(title=\"Hello\", main=[\"world\"]).servable()\r\n    ^^^^^^^^^^^\r\nAttributeError: module 'panel' has no attribute 'template'\r\n```\n", "before_files": [{"content": "\"\"\"\nPanel is a high level app and dashboarding framework\n====================================================\n\nPanel is an open-source Python library that lets you create custom\ninteractive web apps and dashboards by connecting user-defined widgets\nto plots, images, tables, or text.\n\nPanel works with the tools you know and \u2764\ufe0f.\n\nCheck out https://panel.holoviz.org/\n\n.. figure:: https://user-images.githubusercontent.com/42288570/152672367-6c239073-0ea0-4a2b-a4c0-817e8090e877.gif\n   :alt: Panel Dashboard\n\n   Panel Dashboard\n\nHow to develop a Panel app in 3 simple steps\n--------------------------------------------\n\n- Write the app\n\n>>> import panel as pn\n>>> pn.extension(sizing_mode=\"stretch_width\", template=\"fast\")\n>>> pn.state.template.param.update(title=\"My Data App\")\n>>> pn.panel(some_python_object).servable()\n\n- Run your app\n\n$ panel serve my_script.py --autoreload --show\n\nor\n\n$ panel serve my_notebook.ipynb --autoreload --show\n\nThe app will be available in your browser!\n\n- Change your code and save it\n\nThe app will reload with your changes!\n\nYou can also add automatic reload to jupyterlab. Check out\nhttps://blog.holoviz.org/panel_0.12.0.html#JupyterLab-previews\n\nTo learn more about Panel check out\nhttps://panel.holoviz.org/getting_started/index.html\n\"\"\"\nfrom param import rx\n\nfrom . import chat # noqa\nfrom . import layout # noqa\nfrom . import links # noqa\nfrom . import pane # noqa\nfrom . import param # noqa\nfrom . import pipeline # noqa\nfrom . import reactive # noqa\nfrom . import viewable # noqa\nfrom . import widgets # noqa\nfrom .config import __version__, config, panel_extension as extension # noqa\nfrom .depends import bind, depends # noqa\nfrom .interact import interact # noqa\nfrom .io import ( # noqa\n    _jupyter_server_extension_paths, cache, ipywidget, serve, state,\n)\nfrom .layout import ( # noqa\n    Accordion, Card, Column, FlexBox, FloatPanel, GridBox, GridSpec, GridStack,\n    HSpacer, Row, Spacer, Swipe, Tabs, VSpacer, WidgetBox,\n)\nfrom .pane import panel # noqa\nfrom .param import Param, ReactiveExpr # noqa\nfrom .template import Template # noqa\nfrom .widgets import indicators, widget # noqa\n\n__all__ = (\n    \"__version__\",\n    \"Accordion\",\n    \"Card\",\n    \"chat\",\n    \"Column\",\n    \"FlexBox\",\n    \"FloatPanel\",\n    \"GridBox\",\n    \"GridSpec\",\n    \"GridStack\",\n    \"HSpacer\",\n    \"Param\",\n    \"ReactiveExpr\",\n    \"Row\",\n    \"Spacer\",\n    \"Tabs\",\n    \"Template\",\n    \"VSpacer\",\n    \"WidgetBox\",\n    \"bind\",\n    \"cache\",\n    \"config\",\n    \"depends\",\n    \"extension\",\n    \"indicators\",\n    \"interact\",\n    \"ipywidget\",\n    \"layout\",\n    \"links\",\n    \"pane\",\n    \"panel\",\n    \"param\",\n    \"pipeline\",\n    \"rx\",\n    \"serve\",\n    \"state\",\n    \"viewable\",\n    \"widgets\",\n    \"widget\"\n)\n", "path": "panel/__init__.py"}]} 2,008 | 160 |
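
A regression-style check for the record above can be built from the issue's own reproducer; the `lazy_import` helper is copied verbatim from the record, and the assertion is expected to hold once `panel/__init__.py` imports the `template` submodule eagerly, as the golden diff does. This is a sketch, not part of the upstream test suite.

```python
# Sketch. lazy_import is copied from the issue; the assertion reflects the
# behavior the golden diff is meant to restore.
import importlib.util
import sys

def lazy_import(name):
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module

pn = lazy_import("panel")
pn.extension()  # first attribute access triggers the deferred module execution
assert hasattr(pn, "template"), "panel.template should be bound after loading"
```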
gh_patches_debug_40115 | rasdani/github-patches | git_diff | Pylons__pyramid-2902 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
add_translation_dirs behaves the wrong way around
This is a follow-up to #1473. See the reproducer and notes there.
I would propose an API change that changes behaviour of add_translation_dirs to append specs at the end of existing specs instead of prepending it. That way, API users simply need to understand "last spec wins". This matches much closer to the mental model of "add".
It matches the current documented behaviour of how the method is to _behave_ in case of multiple calls.
</issue>
<code>
[start of pyramid/config/i18n.py]
1 from pyramid.interfaces import (
2 ILocaleNegotiator,
3 ITranslationDirectories,
4 )
5
6 from pyramid.exceptions import ConfigurationError
7 from pyramid.path import AssetResolver
8 from pyramid.util import action_method
9
10 class I18NConfiguratorMixin(object):
11 @action_method
12 def set_locale_negotiator(self, negotiator):
13 """
14 Set the :term:`locale negotiator` for this application. The
15 :term:`locale negotiator` is a callable which accepts a
16 :term:`request` object and which returns a :term:`locale
17 name`. The ``negotiator`` argument should be the locale
18 negotiator implementation or a :term:`dotted Python name`
19 which refers to such an implementation.
20
21 Later calls to this method override earlier calls; there can
22 be only one locale negotiator active at a time within an
23 application. See :ref:`activating_translation` for more
24 information.
25
26 .. note::
27
28 Using the ``locale_negotiator`` argument to the
29 :class:`pyramid.config.Configurator` constructor can be used to
30 achieve the same purpose.
31 """
32 def register():
33 self._set_locale_negotiator(negotiator)
34 intr = self.introspectable('locale negotiator', None,
35 self.object_description(negotiator),
36 'locale negotiator')
37 intr['negotiator'] = negotiator
38 self.action(ILocaleNegotiator, register, introspectables=(intr,))
39
40 def _set_locale_negotiator(self, negotiator):
41 locale_negotiator = self.maybe_dotted(negotiator)
42 self.registry.registerUtility(locale_negotiator, ILocaleNegotiator)
43
44 @action_method
45 def add_translation_dirs(self, *specs):
46 """ Add one or more :term:`translation directory` paths to the
47 current configuration state. The ``specs`` argument is a
48 sequence that may contain absolute directory paths
49 (e.g. ``/usr/share/locale``) or :term:`asset specification`
50 names naming a directory path (e.g. ``some.package:locale``)
51 or a combination of the two.
52
53 Example:
54
55 .. code-block:: python
56
57 config.add_translation_dirs('/usr/share/locale',
58 'some.package:locale')
59
60 The translation directories are defined as a list in which
61 translations defined later have precedence over translations defined
62 earlier.
63
64 If multiple specs are provided in a single call to
65 ``add_translation_dirs``, the directories will be inserted in the
66 order they're provided (earlier items are trumped by later items).
67
68 .. warning::
69
70 Consecutive calls to ``add_translation_dirs`` will sort the
71 directories such that the later calls will add folders with
72 lower precedence than earlier calls.
73
74 """
75 introspectables = []
76
77 def register():
78 directories = []
79 resolver = AssetResolver(self.package_name)
80
81 # defer spec resolution until register to allow for asset
82 # overrides to take place in an earlier config phase
83 for spec in specs[::-1]: # reversed
84 # the trailing slash helps match asset overrides for folders
85 if not spec.endswith('/'):
86 spec += '/'
87 asset = resolver.resolve(spec)
88 directory = asset.abspath()
89 if not asset.isdir():
90 raise ConfigurationError('"%s" is not a directory' %
91 directory)
92 intr = self.introspectable('translation directories', directory,
93 spec, 'translation directory')
94 intr['directory'] = directory
95 intr['spec'] = spec
96 introspectables.append(intr)
97 directories.append(directory)
98
99 tdirs = self.registry.queryUtility(ITranslationDirectories)
100 if tdirs is None:
101 tdirs = []
102 self.registry.registerUtility(tdirs, ITranslationDirectories)
103 for directory in directories:
104 tdirs.insert(0, directory)
105
106 self.action(None, register, introspectables=introspectables)
107
108
[end of pyramid/config/i18n.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/config/i18n.py b/pyramid/config/i18n.py
--- a/pyramid/config/i18n.py
+++ b/pyramid/config/i18n.py
@@ -42,7 +42,7 @@
self.registry.registerUtility(locale_negotiator, ILocaleNegotiator)
@action_method
- def add_translation_dirs(self, *specs):
+ def add_translation_dirs(self, *specs, **kw):
""" Add one or more :term:`translation directory` paths to the
current configuration state. The ``specs`` argument is a
sequence that may contain absolute directory paths
@@ -61,18 +61,27 @@
translations defined later have precedence over translations defined
earlier.
+ By default, consecutive calls to ``add_translation_dirs`` will add
+ directories to the start of the list. This means later calls to
+ ``add_translation_dirs`` will have their translations trumped by
+ earlier calls. If you explicitly need this call to trump an earlier
+ call then you may set ``override`` to ``True``.
+
If multiple specs are provided in a single call to
``add_translation_dirs``, the directories will be inserted in the
order they're provided (earlier items are trumped by later items).
- .. warning::
+ .. versionchanged:: 1.8
- Consecutive calls to ``add_translation_dirs`` will sort the
- directories such that the later calls will add folders with
- lower precedence than earlier calls.
+ The ``override`` parameter was added to allow a later call
+ to ``add_translation_dirs`` to override an earlier call, inserting
+ folders at the beginning of the translation directory list.
"""
introspectables = []
+ override = kw.pop('override', False)
+ if kw:
+ raise TypeError('invalid keyword arguments: %s', sorted(kw.keys()))
def register():
directories = []
@@ -80,7 +89,7 @@
# defer spec resolution until register to allow for asset
# overrides to take place in an earlier config phase
- for spec in specs[::-1]: # reversed
+ for spec in specs:
# the trailing slash helps match asset overrides for folders
if not spec.endswith('/'):
spec += '/'
@@ -100,8 +109,11 @@
if tdirs is None:
tdirs = []
self.registry.registerUtility(tdirs, ITranslationDirectories)
- for directory in directories:
- tdirs.insert(0, directory)
+ if override:
+ tdirs.extend(directories)
+ else:
+ for directory in reversed(directories):
+ tdirs.insert(0, directory)
self.action(None, register, introspectables=introspectables)
| {"golden_diff": "diff --git a/pyramid/config/i18n.py b/pyramid/config/i18n.py\n--- a/pyramid/config/i18n.py\n+++ b/pyramid/config/i18n.py\n@@ -42,7 +42,7 @@\n         self.registry.registerUtility(locale_negotiator, ILocaleNegotiator)\n \n     @action_method\n-    def add_translation_dirs(self, *specs):\n+    def add_translation_dirs(self, *specs, **kw):\n         \"\"\" Add one or more :term:`translation directory` paths to the\n         current configuration state. The ``specs`` argument is a\n         sequence that may contain absolute directory paths\n@@ -61,18 +61,27 @@\n         translations defined later have precedence over translations defined\n         earlier.\n \n+        By default, consecutive calls to ``add_translation_dirs`` will add\n+        directories to the start of the list. This means later calls to\n+        ``add_translation_dirs`` will have their translations trumped by\n+        earlier calls. If you explicitly need this call to trump an earlier\n+        call then you may set ``override`` to ``True``.\n+\n         If multiple specs are provided in a single call to\n         ``add_translation_dirs``, the directories will be inserted in the\n         order they're provided (earlier items are trumped by later items).\n \n-        .. warning::\n+        .. versionchanged:: 1.8\n \n-            Consecutive calls to ``add_translation_dirs`` will sort the\n-            directories such that the later calls will add folders with\n-            lower precedence than earlier calls.\n+            The ``override`` parameter was added to allow a later call\n+            to ``add_translation_dirs`` to override an earlier call, inserting\n+            folders at the beginning of the translation directory list.\n \n         \"\"\"\n         introspectables = []\n+        override = kw.pop('override', False)\n+        if kw:\n+            raise TypeError('invalid keyword arguments: %s', sorted(kw.keys()))\n \n         def register():\n             directories = []\n@@ -80,7 +89,7 @@\n \n             # defer spec resolution until register to allow for asset\n             # overrides to take place in an earlier config phase\n-            for spec in specs[::-1]: # reversed\n+            for spec in specs:\n                 # the trailing slash helps match asset overrides for folders\n                 if not spec.endswith('/'):\n                     spec += '/'\n@@ -100,8 +109,11 @@\n             if tdirs is None:\n                 tdirs = []\n                 self.registry.registerUtility(tdirs, ITranslationDirectories)\n-            for directory in directories:\n-                tdirs.insert(0, directory)\n+            if override:\n+                tdirs.extend(directories)\n+            else:\n+                for directory in reversed(directories):\n+                    tdirs.insert(0, directory)\n \n         self.action(None, register, introspectables=introspectables)\n", "issue": "add_translation_dirs behaves the wrong way around\nThis is a follow-up to #1473. See the reproducer and notes there.\n\nI would propose an API change that changes behaviour of add_translation_dirs to append specs at the end of existing specs instead of prepending it. That way, API users simply need to understand \"last spec wins\". This matches much closer to the mental model of \"add\".\n\nIt matches the current documented behaviour of how the method is to _behave_ in case of multiple calls.\n\n", "before_files": [{"content": "from pyramid.interfaces import (\n    ILocaleNegotiator,\n    ITranslationDirectories,\n    )\n\nfrom pyramid.exceptions import ConfigurationError\nfrom pyramid.path import AssetResolver\nfrom pyramid.util import action_method\n\nclass I18NConfiguratorMixin(object):\n    @action_method\n    def set_locale_negotiator(self, negotiator):\n        \"\"\"\n        Set the :term:`locale negotiator` for this application. The\n        :term:`locale negotiator` is a callable which accepts a\n        :term:`request` object and which returns a :term:`locale\n        name`. The ``negotiator`` argument should be the locale\n        negotiator implementation or a :term:`dotted Python name`\n        which refers to such an implementation.\n\n        Later calls to this method override earlier calls; there can\n        be only one locale negotiator active at a time within an\n        application. See :ref:`activating_translation` for more\n        information.\n\n        .. note::\n\n           Using the ``locale_negotiator`` argument to the\n           :class:`pyramid.config.Configurator` constructor can be used to\n           achieve the same purpose.\n        \"\"\"\n        def register():\n            self._set_locale_negotiator(negotiator)\n        intr = self.introspectable('locale negotiator', None,\n                                   self.object_description(negotiator),\n                                   'locale negotiator')\n        intr['negotiator'] = negotiator\n        self.action(ILocaleNegotiator, register, introspectables=(intr,))\n\n    def _set_locale_negotiator(self, negotiator):\n        locale_negotiator = self.maybe_dotted(negotiator)\n        self.registry.registerUtility(locale_negotiator, ILocaleNegotiator)\n\n    @action_method\n    def add_translation_dirs(self, *specs):\n        \"\"\" Add one or more :term:`translation directory` paths to the\n        current configuration state. The ``specs`` argument is a\n        sequence that may contain absolute directory paths\n        (e.g. ``/usr/share/locale``) or :term:`asset specification`\n        names naming a directory path (e.g. ``some.package:locale``)\n        or a combination of the two.\n\n        Example:\n\n        .. code-block:: python\n\n           config.add_translation_dirs('/usr/share/locale',\n                                       'some.package:locale')\n\n        The translation directories are defined as a list in which\n        translations defined later have precedence over translations defined\n        earlier.\n\n        If multiple specs are provided in a single call to\n        ``add_translation_dirs``, the directories will be inserted in the\n        order they're provided (earlier items are trumped by later items).\n\n        .. warning::\n\n            Consecutive calls to ``add_translation_dirs`` will sort the\n            directories such that the later calls will add folders with\n            lower precedence than earlier calls.\n\n        \"\"\"\n        introspectables = []\n\n        def register():\n            directories = []\n            resolver = AssetResolver(self.package_name)\n\n            # defer spec resolution until register to allow for asset\n            # overrides to take place in an earlier config phase\n            for spec in specs[::-1]: # reversed\n                # the trailing slash helps match asset overrides for folders\n                if not spec.endswith('/'):\n                    spec += '/'\n                asset = resolver.resolve(spec)\n                directory = asset.abspath()\n                if not asset.isdir():\n                    raise ConfigurationError('\"%s\" is not a directory' %\n                                             directory)\n                intr = self.introspectable('translation directories', directory,\n                                           spec, 'translation directory')\n                intr['directory'] = directory\n                intr['spec'] = spec\n                introspectables.append(intr)\n                directories.append(directory)\n\n            tdirs = self.registry.queryUtility(ITranslationDirectories)\n            if tdirs is None:\n                tdirs = []\n                self.registry.registerUtility(tdirs, ITranslationDirectories)\n            for directory in directories:\n                tdirs.insert(0, directory)\n\n        self.action(None, register, introspectables=introspectables)\n\n", "path": "pyramid/config/i18n.py"}]} 1,705 | 629 |
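
To make the ordering semantics of the record above concrete: after the patch, earlier calls keep precedence by default, and the new `override` keyword lets a later call win. In the sketch below the `Configurator` setup and the asset-spec names are placeholders; each spec must resolve to a real directory at configuration time.

```python
# Sketch of the patched API; package names are placeholders.
from pyramid.config import Configurator

config = Configurator()
config.add_translation_dirs("base_app:locale")   # earlier call: wins by default
config.add_translation_dirs("addon:locale")      # later call: trumped by base_app
config.add_translation_dirs("overrides:locale", override=True)  # explicitly trumps earlier calls
```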
gh_patches_debug_43176 | rasdani/github-patches | git_diff | encode__starlette-134 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Support `shutdown` as a synonym for `cleanup`
* Support either `cleanup` or `shutdown` as the ASGI lifespan message name.
* Update uvicorn to move to shutdown - https://github.com/encode/uvicorn/issues/233
* Finally, after a small period of time, drop `cleanup`
Easy PR for a contributor to jump on would be addressing the first part of this, and supporting either name.
</issue>
<code>
[start of starlette/lifespan.py]
1 import asyncio
2 import logging
3 import traceback
4 import typing
5 from types import TracebackType
6 from starlette.types import ASGIApp, ASGIInstance, Receive, Message, Send
7
8
9 STATE_TRANSITION_ERROR = "Got invalid state transition on lifespan protocol."
10
11
12 class LifespanHandler:
13 def __init__(self) -> None:
14 self.startup_handlers = [] # type: typing.List[typing.Callable]
15 self.cleanup_handlers = [] # type: typing.List[typing.Callable]
16
17 def on_event(self, event_type: str) -> typing.Callable:
18 def decorator(func: typing.Callable) -> typing.Callable:
19 self.add_event_handler(event_type, func)
20 return func
21
22 return decorator
23
24 def add_event_handler(self, event_type: str, func: typing.Callable) -> None:
25 assert event_type in ("startup", "cleanup")
26
27 if event_type == "startup":
28 self.startup_handlers.append(func)
29 else:
30 self.cleanup_handlers.append(func)
31
32 async def run_startup(self) -> None:
33 for handler in self.startup_handlers:
34 if asyncio.iscoroutinefunction(handler):
35 await handler()
36 else:
37 handler()
38
39 async def run_cleanup(self) -> None:
40 for handler in self.cleanup_handlers:
41 if asyncio.iscoroutinefunction(handler):
42 await handler()
43 else:
44 handler()
45
46 def __call__(self, scope: Message) -> ASGIInstance:
47 assert scope["type"] == "lifespan"
48 return self.run_lifespan
49
50 async def run_lifespan(self, receive: Receive, send: Send) -> None:
51 message = await receive()
52 assert message["type"] == "lifespan.startup"
53 await self.run_startup()
54 await send({"type": "lifespan.startup.complete"})
55 message = await receive()
56 assert message["type"] == "lifespan.cleanup"
57 await self.run_cleanup()
58 await send({"type": "lifespan.cleanup.complete"})
59
60
61 class LifespanContext:
62 def __init__(
63 self, app: ASGIApp, startup_timeout: int = 10, cleanup_timeout: int = 10
64 ) -> None:
65 self.startup_timeout = startup_timeout
66 self.cleanup_timeout = cleanup_timeout
67 self.startup_event = asyncio.Event()
68 self.cleanup_event = asyncio.Event()
69 self.receive_queue = asyncio.Queue() # type: asyncio.Queue
70 self.asgi = app({"type": "lifespan"}) # type: ASGIInstance
71
72 def __enter__(self) -> None:
73 loop = asyncio.get_event_loop()
74 loop.create_task(self.run_lifespan())
75 loop.run_until_complete(self.wait_startup())
76
77 def __exit__(
78 self,
79 exc_type: typing.Type[BaseException],
80 exc: BaseException,
81 tb: TracebackType,
82 ) -> None:
83 loop = asyncio.get_event_loop()
84 loop.run_until_complete(self.wait_cleanup())
85
86 async def run_lifespan(self) -> None:
87 try:
88 await self.asgi(self.receive, self.send)
89 finally:
90 self.startup_event.set()
91 self.cleanup_event.set()
92
93 async def send(self, message: Message) -> None:
94 if message["type"] == "lifespan.startup.complete":
95 assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR
96 assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR
97 self.startup_event.set()
98 else:
99 assert message["type"] == "lifespan.cleanup.complete"
100 assert self.startup_event.is_set(), STATE_TRANSITION_ERROR
101 assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR
102 self.cleanup_event.set()
103
104 async def receive(self) -> Message:
105 return await self.receive_queue.get()
106
107 async def wait_startup(self) -> None:
108 await self.receive_queue.put({"type": "lifespan.startup"})
109 await asyncio.wait_for(self.startup_event.wait(), timeout=self.startup_timeout)
110
111 async def wait_cleanup(self) -> None:
112 await self.receive_queue.put({"type": "lifespan.cleanup"})
113 await asyncio.wait_for(self.cleanup_event.wait(), timeout=self.cleanup_timeout)
114
[end of starlette/lifespan.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/starlette/lifespan.py b/starlette/lifespan.py
--- a/starlette/lifespan.py
+++ b/starlette/lifespan.py
@@ -22,7 +22,7 @@
return decorator
def add_event_handler(self, event_type: str, func: typing.Callable) -> None:
- assert event_type in ("startup", "cleanup")
+ assert event_type in ("startup", "shutdown", "cleanup")
if event_type == "startup":
self.startup_handlers.append(func)
@@ -53,19 +53,26 @@
await self.run_startup()
await send({"type": "lifespan.startup.complete"})
message = await receive()
- assert message["type"] == "lifespan.cleanup"
+ assert (
+ message["type"] == "lifespan.shutdown"
+ or message["type"] == "lifespan.cleanup"
+ )
await self.run_cleanup()
- await send({"type": "lifespan.cleanup.complete"})
+ if message["type"] == "lifespan.shutdown":
+ await send({"type": "lifespan.shutdown.complete"})
+
+ if message["type"] == "lifespan.cleanup":
+ await send({"type": "lifespan.cleanup.complete"}) # pragma: no cover
class LifespanContext:
def __init__(
- self, app: ASGIApp, startup_timeout: int = 10, cleanup_timeout: int = 10
+ self, app: ASGIApp, startup_timeout: int = 10, shutdown_timeout: int = 10
) -> None:
self.startup_timeout = startup_timeout
- self.cleanup_timeout = cleanup_timeout
+ self.shutdown_timeout = shutdown_timeout
self.startup_event = asyncio.Event()
- self.cleanup_event = asyncio.Event()
+ self.shutdown_event = asyncio.Event()
self.receive_queue = asyncio.Queue() # type: asyncio.Queue
self.asgi = app({"type": "lifespan"}) # type: ASGIInstance
@@ -81,25 +88,25 @@
tb: TracebackType,
) -> None:
loop = asyncio.get_event_loop()
- loop.run_until_complete(self.wait_cleanup())
+ loop.run_until_complete(self.wait_shutdown())
async def run_lifespan(self) -> None:
try:
await self.asgi(self.receive, self.send)
finally:
self.startup_event.set()
- self.cleanup_event.set()
+ self.shutdown_event.set()
async def send(self, message: Message) -> None:
if message["type"] == "lifespan.startup.complete":
assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR
- assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR
+ assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
self.startup_event.set()
else:
- assert message["type"] == "lifespan.cleanup.complete"
+ assert message["type"] == "lifespan.shutdown.complete"
assert self.startup_event.is_set(), STATE_TRANSITION_ERROR
- assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR
- self.cleanup_event.set()
+ assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR
+ self.shutdown_event.set()
async def receive(self) -> Message:
return await self.receive_queue.get()
@@ -108,6 +115,8 @@
await self.receive_queue.put({"type": "lifespan.startup"})
await asyncio.wait_for(self.startup_event.wait(), timeout=self.startup_timeout)
- async def wait_cleanup(self) -> None:
- await self.receive_queue.put({"type": "lifespan.cleanup"})
- await asyncio.wait_for(self.cleanup_event.wait(), timeout=self.cleanup_timeout)
+ async def wait_shutdown(self) -> None:
+ await self.receive_queue.put({"type": "lifespan.shutdown"})
+ await asyncio.wait_for(
+ self.shutdown_event.wait(), timeout=self.shutdown_timeout
+ )
| {"golden_diff": "diff --git a/starlette/lifespan.py b/starlette/lifespan.py\n--- a/starlette/lifespan.py\n+++ b/starlette/lifespan.py\n@@ -22,7 +22,7 @@\n         return decorator\n \n     def add_event_handler(self, event_type: str, func: typing.Callable) -> None:\n-        assert event_type in (\"startup\", \"cleanup\")\n+        assert event_type in (\"startup\", \"shutdown\", \"cleanup\")\n \n         if event_type == \"startup\":\n             self.startup_handlers.append(func)\n@@ -53,19 +53,26 @@\n         await self.run_startup()\n         await send({\"type\": \"lifespan.startup.complete\"})\n         message = await receive()\n-        assert message[\"type\"] == \"lifespan.cleanup\"\n+        assert (\n+            message[\"type\"] == \"lifespan.shutdown\"\n+            or message[\"type\"] == \"lifespan.cleanup\"\n+        )\n         await self.run_cleanup()\n-        await send({\"type\": \"lifespan.cleanup.complete\"})\n+        if message[\"type\"] == \"lifespan.shutdown\":\n+            await send({\"type\": \"lifespan.shutdown.complete\"})\n+\n+        if message[\"type\"] == \"lifespan.cleanup\":\n+            await send({\"type\": \"lifespan.cleanup.complete\"})  # pragma: no cover\n \n \n class LifespanContext:\n     def __init__(\n-        self, app: ASGIApp, startup_timeout: int = 10, cleanup_timeout: int = 10\n+        self, app: ASGIApp, startup_timeout: int = 10, shutdown_timeout: int = 10\n     ) -> None:\n         self.startup_timeout = startup_timeout\n-        self.cleanup_timeout = cleanup_timeout\n+        self.shutdown_timeout = shutdown_timeout\n         self.startup_event = asyncio.Event()\n-        self.cleanup_event = asyncio.Event()\n+        self.shutdown_event = asyncio.Event()\n         self.receive_queue = asyncio.Queue()  # type: asyncio.Queue\n         self.asgi = app({\"type\": \"lifespan\"})  # type: ASGIInstance\n \n@@ -81,25 +88,25 @@\n         tb: TracebackType,\n     ) -> None:\n         loop = asyncio.get_event_loop()\n-        loop.run_until_complete(self.wait_cleanup())\n+        loop.run_until_complete(self.wait_shutdown())\n \n     async def run_lifespan(self) -> None:\n         try:\n             await self.asgi(self.receive, self.send)\n         finally:\n             self.startup_event.set()\n-            self.cleanup_event.set()\n+            self.shutdown_event.set()\n \n     async def send(self, message: Message) -> None:\n         if message[\"type\"] == \"lifespan.startup.complete\":\n             assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n-            assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR\n+            assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n             self.startup_event.set()\n         else:\n-            assert message[\"type\"] == \"lifespan.cleanup.complete\"\n+            assert message[\"type\"] == \"lifespan.shutdown.complete\"\n             assert self.startup_event.is_set(), STATE_TRANSITION_ERROR\n-            assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR\n-            self.cleanup_event.set()\n+            assert not self.shutdown_event.is_set(), STATE_TRANSITION_ERROR\n+            self.shutdown_event.set()\n \n     async def receive(self) -> Message:\n         return await self.receive_queue.get()\n@@ -108,6 +115,8 @@\n         await self.receive_queue.put({\"type\": \"lifespan.startup\"})\n         await asyncio.wait_for(self.startup_event.wait(), timeout=self.startup_timeout)\n \n-    async def wait_cleanup(self) -> None:\n-        await self.receive_queue.put({\"type\": \"lifespan.cleanup\"})\n-        await asyncio.wait_for(self.cleanup_event.wait(), timeout=self.cleanup_timeout)\n+    async def wait_shutdown(self) -> None:\n+        await self.receive_queue.put({\"type\": \"lifespan.shutdown\"})\n+        await asyncio.wait_for(\n+            self.shutdown_event.wait(), timeout=self.shutdown_timeout\n+        )\n", "issue": "Support `shutdown` as a synonym for `cleanup`\n* Support either `cleanup` or `shutdown` as the ASGI lifespan message name.\r\n* Update uvicorn to move to shutdown - https://github.com/encode/uvicorn/issues/233\r\n* Finally, after a small period of time, drop `cleanup`\r\n\r\nEasy PR for a contributor to jump on would be addressing the first part of this, and supporting either name.\n", "before_files": [{"content": "import asyncio\nimport logging\nimport traceback\nimport typing\nfrom types import TracebackType\nfrom starlette.types import ASGIApp, ASGIInstance, Receive, Message, Send\n\n\nSTATE_TRANSITION_ERROR = \"Got invalid state transition on lifespan protocol.\"\n\n\nclass LifespanHandler:\n    def __init__(self) -> None:\n        self.startup_handlers = []  # type: typing.List[typing.Callable]\n        self.cleanup_handlers = []  # type: typing.List[typing.Callable]\n\n    def on_event(self, event_type: str) -> typing.Callable:\n        def decorator(func: typing.Callable) -> typing.Callable:\n            self.add_event_handler(event_type, func)\n            return func\n\n        return decorator\n\n    def add_event_handler(self, event_type: str, func: typing.Callable) -> None:\n        assert event_type in (\"startup\", \"cleanup\")\n\n        if event_type == \"startup\":\n            self.startup_handlers.append(func)\n        else:\n            self.cleanup_handlers.append(func)\n\n    async def run_startup(self) -> None:\n        for handler in self.startup_handlers:\n            if asyncio.iscoroutinefunction(handler):\n                await handler()\n            else:\n                handler()\n\n    async def run_cleanup(self) -> None:\n        for handler in self.cleanup_handlers:\n            if asyncio.iscoroutinefunction(handler):\n                await handler()\n            else:\n                handler()\n\n    def __call__(self, scope: Message) -> ASGIInstance:\n        assert scope[\"type\"] == \"lifespan\"\n        return self.run_lifespan\n\n    async def run_lifespan(self, receive: Receive, send: Send) -> None:\n        message = await receive()\n        assert message[\"type\"] == \"lifespan.startup\"\n        await self.run_startup()\n        await send({\"type\": \"lifespan.startup.complete\"})\n        message = await receive()\n        assert message[\"type\"] == \"lifespan.cleanup\"\n        await self.run_cleanup()\n        await send({\"type\": \"lifespan.cleanup.complete\"})\n\n\nclass LifespanContext:\n    def __init__(\n        self, app: ASGIApp, startup_timeout: int = 10, cleanup_timeout: int = 10\n    ) -> None:\n        self.startup_timeout = startup_timeout\n        self.cleanup_timeout = cleanup_timeout\n        self.startup_event = asyncio.Event()\n        self.cleanup_event = asyncio.Event()\n        self.receive_queue = asyncio.Queue()  # type: asyncio.Queue\n        self.asgi = app({\"type\": \"lifespan\"})  # type: ASGIInstance\n\n    def __enter__(self) -> None:\n        loop = asyncio.get_event_loop()\n        loop.create_task(self.run_lifespan())\n        loop.run_until_complete(self.wait_startup())\n\n    def __exit__(\n        self,\n        exc_type: typing.Type[BaseException],\n        exc: BaseException,\n        tb: TracebackType,\n    ) -> None:\n        loop = asyncio.get_event_loop()\n        loop.run_until_complete(self.wait_cleanup())\n\n    async def run_lifespan(self) -> None:\n        try:\n            await self.asgi(self.receive, self.send)\n        finally:\n            self.startup_event.set()\n            self.cleanup_event.set()\n\n    async def send(self, message: Message) -> None:\n        if message[\"type\"] == \"lifespan.startup.complete\":\n            assert not self.startup_event.is_set(), STATE_TRANSITION_ERROR\n            assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR\n            self.startup_event.set()\n        else:\n            assert message[\"type\"] == \"lifespan.cleanup.complete\"\n            assert self.startup_event.is_set(), STATE_TRANSITION_ERROR\n            assert not self.cleanup_event.is_set(), STATE_TRANSITION_ERROR\n            self.cleanup_event.set()\n\n    async def receive(self) -> Message:\n        return await self.receive_queue.get()\n\n    async def wait_startup(self) -> None:\n        await self.receive_queue.put({\"type\": \"lifespan.startup\"})\n        await asyncio.wait_for(self.startup_event.wait(), timeout=self.startup_timeout)\n\n    async def wait_cleanup(self) -> None:\n        await self.receive_queue.put({\"type\": \"lifespan.cleanup\"})\n        await asyncio.wait_for(self.cleanup_event.wait(), timeout=self.cleanup_timeout)\n", "path": "starlette/lifespan.py"}]} 1,762 | 889 |
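
For the record above, registering a handler under the new message name uses the same decorator API as the `LifespanHandler` shown in the prompt; after the patch, `"shutdown"` is accepted alongside the legacy `"cleanup"`. A usage sketch against the module version shown in this record:

```python
# Sketch against the starlette version shown in this record.
from starlette.lifespan import LifespanHandler

handler = LifespanHandler()

@handler.on_event("startup")
async def open_resources():
    ...

@handler.on_event("shutdown")  # accepted after the patch; "cleanup" still works
async def close_resources():
    ...
```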
gh_patches_debug_32409 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-1044 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HTML filtering requires a "re save" on each page for the settings to take effect.
Using: Plone 5.0b4.dev0 (5007)
Been testing the HTML filtering settings using iFrame video.
When I add an "iframe" video the saved page does not show the video (as expected). But when I toggle ON the "Disable HTML Filtering" checkbox the video still does not display on the page (even after emptying cache). If I edit the video page and immediately "Save" (making no edits) the video displays as expected. The reverse is also true. If I have video correctly displaying on the page (with html filtering disabled) and uncheck "Disable HTML Filtering" checkbox in the HTML Filtering panel, go back to the video page and reload the video still displays. If I edit the page and immediately save the page the video properly does not display. Is this the expected behavior?
In addition. If I add "iFrame" to the "Custom tags" dialogue box I cannot get the video to display with "Disable HTML Filtering" checkbox OFF. Even if I do the "Edit" and "Save" trick from above.
</issue>
<code>
[start of Products/CMFPlone/controlpanel/browser/filter.py]
1 # -*- coding: utf-8 -*-
2 from Products.CMFCore.utils import getToolByName
3 from Products.CMFPlone import PloneMessageFactory as _ # NOQA
4 from Products.CMFPlone.interfaces import IFilterSchema
5 from Products.Five.browser.pagetemplatefile import ViewPageTemplateFile
6 from Products.statusmessages.interfaces import IStatusMessage
7 from plone.autoform.form import AutoExtensibleForm
8 from plone.z3cform import layout
9 from z3c.form import button
10 from z3c.form import form
11 from Products.PortalTransforms.transforms.safe_html import VALID_TAGS
12 from Products.PortalTransforms.transforms.safe_html import NASTY_TAGS
13
14
15 class FilterControlPanel(AutoExtensibleForm, form.EditForm):
16 id = "FilterControlPanel"
17 label = _(u"HTML Filtering Settings")
18 description = _(
19 'description_html_filtering',
20 default=u"HTML generation is heavily cached across Plone. "
21 u"After changing settings here, you may have to edit "
22 u"existing content to see the changes in these filter settings "
23 u"or restart your server.")
24 schema = IFilterSchema
25 form_name = _(u"HTML Filtering Settings")
26 control_panel_view = "filter-controlpanel"
27
28 def updateActions(self): # NOQA
29 """Have to override this because we only have Save, not Cancel
30 """
31 super(FilterControlPanel, self).updateActions()
32 self.actions['save'].addClass("context")
33
34 @button.buttonAndHandler(_(u"Save"), name='save')
35 def handleSave(self, action): # NOQA
36 data, errors = self.extractData()
37 if errors:
38 self.status = self.formErrorsMessage
39 return
40
41 # Save in portal tools
42 safe_html = getattr(
43 getToolByName(self.context, 'portal_transforms'),
44 'safe_html',
45 None)
46
47 nasty_tags = data['nasty_tags']
48 custom_tags = data['custom_tags']
49 stripped_tags = data['stripped_tags']
50
51 valid = safe_html._config['valid_tags']
52
53 # start with removing tags that do not belong in valid tags
54 for value in nasty_tags + stripped_tags:
55 if value in valid:
56 del valid[value]
57 # add in custom tags
58 for custom in custom_tags:
59 if value not in valid:
60 valid[custom] = 1
61 # then, check if something was previously prevented but is no longer
62 for tag in set(VALID_TAGS.keys()) - set(valid.keys()):
63 if tag not in nasty_tags and tag not in stripped_tags:
64 valid[tag] = VALID_TAGS[tag]
65
66 # nasty tags are simple, just set the value here
67 nasty_value = {}
68 for tag in nasty_tags:
69 nasty_value[tag] = NASTY_TAGS.get(tag, VALID_TAGS.get(tag, 1))
70 safe_html._config['nasty_tags'] = nasty_value
71
72 disable_filtering = int(data['disable_filtering'])
73 if disable_filtering != safe_html._config['disable_transform']:
74 safe_html._config['disable_transform'] = disable_filtering
75
76 for attr in ('stripped_combinations', 'class_blacklist', 'stripped_attributes',
77 'style_whitelist'):
78 value = data[attr]
79 if value is None:
80 if attr == 'stripped_combinations':
81 value = {}
82 else:
83 value = []
84 if value != safe_html._config[attr]:
85 safe_html._config[attr] = value
86
87 # always reload the transform
88 safe_html._p_changed = True
89 safe_html.reload()
90
91 self.applyChanges(data)
92 IStatusMessage(self.request).addStatusMessage(
93 _(u"Changes saved."),
94 "info")
95 self.request.response.redirect(self.request.getURL())
96
97
98 class ControlPanelFormWrapper(layout.FormWrapper):
99 """Use this form as the plone.z3cform layout wrapper to get the control
100 panel layout.
101 """
102 index = ViewPageTemplateFile('filter_controlpanel.pt')
103
104
105 FilterControlPanelView = layout.wrap_form(
106 FilterControlPanel, ControlPanelFormWrapper)
107
[end of Products/CMFPlone/controlpanel/browser/filter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/controlpanel/browser/filter.py b/Products/CMFPlone/controlpanel/browser/filter.py
--- a/Products/CMFPlone/controlpanel/browser/filter.py
+++ b/Products/CMFPlone/controlpanel/browser/filter.py
@@ -15,12 +15,7 @@
class FilterControlPanel(AutoExtensibleForm, form.EditForm):
id = "FilterControlPanel"
label = _(u"HTML Filtering Settings")
- description = _(
- 'description_html_filtering',
- default=u"HTML generation is heavily cached across Plone. "
- u"After changing settings here, you may have to edit "
- u"existing content to see the changes in these filter settings "
- u"or restart your server.")
+ description = ""
schema = IFilterSchema
form_name = _(u"HTML Filtering Settings")
control_panel_view = "filter-controlpanel"
@@ -73,8 +68,8 @@
if disable_filtering != safe_html._config['disable_transform']:
safe_html._config['disable_transform'] = disable_filtering
- for attr in ('stripped_combinations', 'class_blacklist', 'stripped_attributes',
- 'style_whitelist'):
+ for attr in ('stripped_combinations', 'class_blacklist',
+ 'stripped_attributes', 'style_whitelist'):
value = data[attr]
if value is None:
if attr == 'stripped_combinations':
@@ -92,6 +87,11 @@
IStatusMessage(self.request).addStatusMessage(
_(u"Changes saved."),
"info")
+ IStatusMessage(self.request).addStatusMessage(
+ _(u"HTML generation is heavily cached across Plone. You may "
+ u"have to edit existing content or restart your server to see "
+ u"the changes."),
+ "warning")
self.request.response.redirect(self.request.getURL())
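The diff above trades a permanently visible form description for a one-time status message after a successful save. A minimal sketch of that `IStatusMessage` pattern in isolation, assuming the standard `Products.statusmessages` API and treating the message factory `_` as an injected callable:

```python
# Sketch only: `request` is the Zope request and `_` the translation
# message factory; both come from the surrounding Plone view.
from Products.statusmessages.interfaces import IStatusMessage

def notify_after_save(request, _):
    messages = IStatusMessage(request)
    messages.addStatusMessage(_(u"Changes saved."), "info")
    messages.addStatusMessage(
        _(u"HTML generation is heavily cached across Plone. You may "
          u"have to edit existing content or restart your server to "
          u"see the changes."),
        "warning")
```

Emitting the caching caveat as a "warning"-typed message shows it only on the request that actually changed the settings, instead of cluttering the form for every visitor.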
| {"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/filter.py b/Products/CMFPlone/controlpanel/browser/filter.py\n--- a/Products/CMFPlone/controlpanel/browser/filter.py\n+++ b/Products/CMFPlone/controlpanel/browser/filter.py\n@@ -15,12 +15,7 @@\n class FilterControlPanel(AutoExtensibleForm, form.EditForm):\n id = \"FilterControlPanel\"\n label = _(u\"HTML Filtering Settings\")\n- description = _(\n- 'description_html_filtering',\n- default=u\"HTML generation is heavily cached across Plone. \"\n- u\"After changing settings here, you may have to edit \"\n- u\"existing content to see the changes in these filter settings \"\n- u\"or restart your server.\")\n+ description = \"\"\n schema = IFilterSchema\n form_name = _(u\"HTML Filtering Settings\")\n control_panel_view = \"filter-controlpanel\"\n@@ -73,8 +68,8 @@\n if disable_filtering != safe_html._config['disable_transform']:\n safe_html._config['disable_transform'] = disable_filtering\n \n- for attr in ('stripped_combinations', 'class_blacklist', 'stripped_attributes',\n- 'style_whitelist'):\n+ for attr in ('stripped_combinations', 'class_blacklist',\n+ 'stripped_attributes', 'style_whitelist'):\n value = data[attr]\n if value is None:\n if attr == 'stripped_combinations':\n@@ -92,6 +87,11 @@\n IStatusMessage(self.request).addStatusMessage(\n _(u\"Changes saved.\"),\n \"info\")\n+ IStatusMessage(self.request).addStatusMessage(\n+ _(u\"HTML generation is heavily cached across Plone. You may \"\n+ u\"have to edit existing content or restart your server to see \"\n+ u\"the changes.\"),\n+ \"warning\")\n self.request.response.redirect(self.request.getURL())\n", "issue": "HTML filtering requires a \"re save\" on each page for the settings to take effect. \nUsing: Plone 5.0b4.dev0 (5007)\n\nBeen testing the HTML filtering settings using iFrame video. \nWhen I add an \"iframe\" video the saved page does not show the video (as expected). But when I toggle ON the \"Disable HTML Filtering\" checkbox the video still does not display on the page (even after emptying cache). If I edit the video page and immediately \"Save\" (making no edits) the video displays as expected. The reverse is also true. If I have video correctly displaying on the page (with html filtering disabled) and uncheck \"Disable HTML Filtering\" checkbox in the HTML Filtering panel, go back to the video page and reload the video still displays. If I edit the page and immediately save the page the video properly does not display. Is this the expected behavior?\n\nIn addition. If I add \"iFrame\" to the \"Custom tags\" dialogue box I cannot get the video to display with \"Disable HTML Filtering\" checkbox OFF. 
gh_patches_debug_9181 | rasdani/github-patches | git_diff | certbot__certbot-9218 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
snap_config.prepare_env can hang forever if snapd is malfunctioning
Reported at https://community.letsencrypt.org/t/sudo-certbot-renew-hangs-forever/138649
There's some evidence in that thread that `snapd` can get into a state where it just stops responding to commands (whether via `snap`, `snapctl`, or the REST API directly).
Certbot should guard against the possibility of hanging forever, by setting timeouts on relevant network operations:
https://github.com/certbot/certbot/blob/9ca7f76505b10b2f395ddffc4ddc1cbc8afb516b/certbot/certbot/_internal/snap_config.py#L87-L89
https://github.com/certbot/certbot/blob/9ca7f76505b10b2f395ddffc4ddc1cbc8afb516b/certbot/certbot/_internal/snap_config.py#L56-L57
Edit: to clarify, although the reported issue involves the previous shell-based snap wrapper, I have reproduced the same hang and lack of effective timeout with the current Python implementation.
</issue>
<code>
[start of certbot/certbot/_internal/snap_config.py]
1 """Module configuring Certbot in a snap environment"""
2 import logging
3 import socket
4 from typing import Iterable
5 from typing import List
6 from typing import Optional
7
8 from requests import Session
9 from requests.adapters import HTTPAdapter
10 from requests.exceptions import HTTPError
11 from requests.exceptions import RequestException
12
13 from certbot.compat import os
14 from certbot.errors import Error
15
16 try:
17 from urllib3.connection import HTTPConnection
18 from urllib3.connectionpool import HTTPConnectionPool
19 except ImportError:
20 # Stub imports for oldest requirements, that will never be used in snaps.
21 HTTPConnection = object # type: ignore[misc,assignment]
22 HTTPConnectionPool = object # type: ignore[misc,assignment]
23
24
25 _ARCH_TRIPLET_MAP = {
26 'arm64': 'aarch64-linux-gnu',
27 'armhf': 'arm-linux-gnueabihf',
28 'i386': 'i386-linux-gnu',
29 'ppc64el': 'powerpc64le-linux-gnu',
30 'powerpc': 'powerpc-linux-gnu',
31 'amd64': 'x86_64-linux-gnu',
32 's390x': 's390x-linux-gnu',
33 }
34
35 LOGGER = logging.getLogger(__name__)
36
37
38 def prepare_env(cli_args: List[str]) -> List[str]:
39 """
40 Prepare runtime environment for a certbot execution in snap.
41 :param list cli_args: List of command line arguments
42 :return: Update list of command line arguments
43 :rtype: list
44 """
45 snap_arch = os.environ.get('SNAP_ARCH')
46
47 if snap_arch not in _ARCH_TRIPLET_MAP:
48 raise Error('Unrecognized value of SNAP_ARCH: {0}'.format(snap_arch))
49
50 os.environ['CERTBOT_AUGEAS_PATH'] = '{0}/usr/lib/{1}/libaugeas.so.0'.format(
51 os.environ.get('SNAP'), _ARCH_TRIPLET_MAP[snap_arch])
52
53 with Session() as session:
54 session.mount('http://snapd/', _SnapdAdapter())
55
56 try:
57 response = session.get('http://snapd/v2/connections?snap=certbot&interface=content')
58 response.raise_for_status()
59 except RequestException as e:
60 if isinstance(e, HTTPError) and e.response.status_code == 404:
61 LOGGER.error('An error occurred while fetching Certbot snap plugins: '
62 'your version of snapd is outdated.')
63 LOGGER.error('Please run "sudo snap install core; sudo snap refresh core" '
64 'in your terminal and try again.')
65 else:
66 LOGGER.error('An error occurred while fetching Certbot snap plugins: '
67 'make sure the snapd service is running.')
68 raise e
69
70 data = response.json()
71 connections = ['/snap/{0}/current/lib/python3.8/site-packages/'.format(item['slot']['snap'])
72 for item in data.get('result', {}).get('established', [])
73 if item.get('plug', {}).get('plug') == 'plugin'
74 and item.get('plug-attrs', {}).get('content') == 'certbot-1']
75
76 os.environ['CERTBOT_PLUGIN_PATH'] = ':'.join(connections)
77
78 cli_args.append('--preconfigured-renewal')
79
80 return cli_args
81
82
83 class _SnapdConnection(HTTPConnection):
84 def __init__(self) -> None:
85 super().__init__("localhost")
86 self.sock: Optional[socket.socket] = None
87
88 def connect(self) -> None:
89 self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
90 self.sock.connect("/run/snapd.socket")
91
92
93 class _SnapdConnectionPool(HTTPConnectionPool):
94 def __init__(self) -> None:
95 super().__init__("localhost")
96
97 def _new_conn(self) -> _SnapdConnection:
98 return _SnapdConnection()
99
100
101 class _SnapdAdapter(HTTPAdapter):
102 def get_connection(self, url: str,
103 proxies: Optional[Iterable[str]] = None) -> _SnapdConnectionPool:
104 return _SnapdConnectionPool()
105
[end of certbot/certbot/_internal/snap_config.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/certbot/certbot/_internal/snap_config.py b/certbot/certbot/_internal/snap_config.py
--- a/certbot/certbot/_internal/snap_config.py
+++ b/certbot/certbot/_internal/snap_config.py
@@ -54,7 +54,8 @@
session.mount('http://snapd/', _SnapdAdapter())
try:
- response = session.get('http://snapd/v2/connections?snap=certbot&interface=content')
+ response = session.get('http://snapd/v2/connections?snap=certbot&interface=content',
+ timeout=30.0)
response.raise_for_status()
except RequestException as e:
if isinstance(e, HTTPError) and e.response.status_code == 404:
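The fix caps only the snapd HTTP call. A minimal sketch of the idea, plus a socket-level guard that is my assumption and not part of the shipped change — the custom `_SnapdConnection` in the listing could time out its Unix-socket connect the same way:

```python
# Sketch: the 30-second figure mirrors the diff; the socket-level
# timeout is an extra safeguard, not what the project shipped.
import socket

SNAPD_TIMEOUT = 30.0

def open_snapd_socket(path="/run/snapd.socket"):
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.settimeout(SNAPD_TIMEOUT)  # raise instead of blocking forever
    sock.connect(path)
    return sock
```

Note that `requests` applies its `timeout` to the connect phase and to each read separately, so it bounds individual stalls rather than total wall-clock time.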
| {"golden_diff": "diff --git a/certbot/certbot/_internal/snap_config.py b/certbot/certbot/_internal/snap_config.py\n--- a/certbot/certbot/_internal/snap_config.py\n+++ b/certbot/certbot/_internal/snap_config.py\n@@ -54,7 +54,8 @@\n session.mount('http://snapd/', _SnapdAdapter())\n \n try:\n- response = session.get('http://snapd/v2/connections?snap=certbot&interface=content')\n+ response = session.get('http://snapd/v2/connections?snap=certbot&interface=content',\n+ timeout=30.0)\n response.raise_for_status()\n except RequestException as e:\n if isinstance(e, HTTPError) and e.response.status_code == 404:\n", "issue": "snap_config.prepare_env can hang forever if snapd is malfunctioning\nReported at https://community.letsencrypt.org/t/sudo-certbot-renew-hangs-forever/138649\r\n\r\nThere's some evidence in that thread that `snapd` can get into a state where it just stops responding to commands (whether via `snap`, `snapctl`, or the REST API directly).\r\n\r\nCertbot should guard against the possibility of hanging forever, by setting timeouts on relevant network operations:\r\n\r\nhttps://github.com/certbot/certbot/blob/9ca7f76505b10b2f395ddffc4ddc1cbc8afb516b/certbot/certbot/_internal/snap_config.py#L87-L89\r\n\r\nhttps://github.com/certbot/certbot/blob/9ca7f76505b10b2f395ddffc4ddc1cbc8afb516b/certbot/certbot/_internal/snap_config.py#L56-L57\r\n\r\nEdit: to clarify, although the reported issue involves the previous shell-based snap wrapper, I have reproduced the same hang and lack of effective timeout with the current Python implementation.\n", "before_files": [{"content": "\"\"\"Module configuring Certbot in a snap environment\"\"\"\nimport logging\nimport socket\nfrom typing import Iterable\nfrom typing import List\nfrom typing import Optional\n\nfrom requests import Session\nfrom requests.adapters import HTTPAdapter\nfrom requests.exceptions import HTTPError\nfrom requests.exceptions import RequestException\n\nfrom certbot.compat import os\nfrom certbot.errors import Error\n\ntry:\n from urllib3.connection import HTTPConnection\n from urllib3.connectionpool import HTTPConnectionPool\nexcept ImportError:\n # Stub imports for oldest requirements, that will never be used in snaps.\n HTTPConnection = object # type: ignore[misc,assignment]\n HTTPConnectionPool = object # type: ignore[misc,assignment]\n\n\n_ARCH_TRIPLET_MAP = {\n 'arm64': 'aarch64-linux-gnu',\n 'armhf': 'arm-linux-gnueabihf',\n 'i386': 'i386-linux-gnu',\n 'ppc64el': 'powerpc64le-linux-gnu',\n 'powerpc': 'powerpc-linux-gnu',\n 'amd64': 'x86_64-linux-gnu',\n 's390x': 's390x-linux-gnu',\n}\n\nLOGGER = logging.getLogger(__name__)\n\n\ndef prepare_env(cli_args: List[str]) -> List[str]:\n \"\"\"\n Prepare runtime environment for a certbot execution in snap.\n :param list cli_args: List of command line arguments\n :return: Update list of command line arguments\n :rtype: list\n \"\"\"\n snap_arch = os.environ.get('SNAP_ARCH')\n\n if snap_arch not in _ARCH_TRIPLET_MAP:\n raise Error('Unrecognized value of SNAP_ARCH: {0}'.format(snap_arch))\n\n os.environ['CERTBOT_AUGEAS_PATH'] = '{0}/usr/lib/{1}/libaugeas.so.0'.format(\n os.environ.get('SNAP'), _ARCH_TRIPLET_MAP[snap_arch])\n\n with Session() as session:\n session.mount('http://snapd/', _SnapdAdapter())\n\n try:\n response = session.get('http://snapd/v2/connections?snap=certbot&interface=content')\n response.raise_for_status()\n except RequestException as e:\n if isinstance(e, HTTPError) and e.response.status_code == 404:\n LOGGER.error('An error occurred while fetching Certbot snap 
gh_patches_debug_2671 | rasdani/github-patches | git_diff | saleor__saleor-1389 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add robots meta tag and "nofollow" link attribute
1. Fragile pages should not be indexed by search engines.
```
<meta name="robots" content="nofollow, noindex">
```
- [x] Add above meta tag to order's confirmation page
2. Pages that bring little to no content value should not be crawled
```
<meta name="robots" content="nofollow">
```
- [x] Add above meta tag to sign in/sign up/cart pages
3. Add link attribute
- [x] Links pointing to the above pages should have the attribute `rel="nofollow"` set
</issue>
<code>
[start of saleor/core/views.py]
1 from __future__ import unicode_literals
2
3 from django.template.response import TemplateResponse
4 from django.contrib import messages
5 from django.conf import settings
6 from django.utils.translation import pgettext_lazy
7 from impersonate.views import impersonate as orig_impersonate
8
9 from ..dashboard.views import staff_member_required
10 from ..product.utils import products_with_availability, products_for_homepage
11 from ..userprofile.models import User
12
13
14 def home(request):
15 products = products_for_homepage()[:8]
16 products = products_with_availability(
17 products, discounts=request.discounts, local_currency=request.currency)
18 return TemplateResponse(
19 request, 'home.html',
20 {'products': products, 'parent': None})
21
22
23 @staff_member_required
24 def styleguide(request):
25 return TemplateResponse(request, 'styleguide.html')
26
27
28 def impersonate(request, uid):
29 response = orig_impersonate(request, uid)
30 if request.session.modified:
31 msg = pgettext_lazy(
32 'Impersonation message',
33 'You are now logged as {}'.format(User.objects.get(pk=uid)))
34 messages.success(request, msg)
35 return response
36
[end of saleor/core/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/saleor/core/views.py b/saleor/core/views.py
--- a/saleor/core/views.py
+++ b/saleor/core/views.py
@@ -2,7 +2,6 @@
from django.template.response import TemplateResponse
from django.contrib import messages
-from django.conf import settings
from django.utils.translation import pgettext_lazy
from impersonate.views import impersonate as orig_impersonate
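The diff above only drops the now-unused `settings` import; the `robots` meta tags and `rel="nofollow"` attributes requested in the issue live in template files outside this change. A hedged sketch of one way a Django view could drive the tag — the template name and context key below are assumptions, not Saleor's actual markup:

```python
# Hypothetical sketch: a base template would render
# <meta name="robots" content="{{ meta_robots }}"> when the key is set.
from django.template.response import TemplateResponse

def order_confirmation(request):
    return TemplateResponse(
        request, 'order/confirmation.html',
        {'meta_robots': 'nofollow, noindex'})
```

Links pointing at such pages would carry `rel="nofollow"` in the templates as well.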
| {"golden_diff": "diff --git a/saleor/core/views.py b/saleor/core/views.py\n--- a/saleor/core/views.py\n+++ b/saleor/core/views.py\n@@ -2,7 +2,6 @@\n \n from django.template.response import TemplateResponse\n from django.contrib import messages\n-from django.conf import settings\n from django.utils.translation import pgettext_lazy\n from impersonate.views import impersonate as orig_impersonate\n", "issue": "Add robots meta tag and \"nofollow\" link attribute\n1. Fragile pages should be not indexed by search engines.\r\n```\r\n<meta name=\u201drobots\u201d content=\u201dnofollow, noindex\u201d>\r\n```\r\n- [x] Add above meta tag to order's confirmation page\r\n\r\n2. Pages that brings no to little content value should not be crawled\r\n```\r\n<meta name=\u201drobots\u201d content=\u201dnofollow\u201d>\r\n```\r\n- [x] Add above meta tag to sign in/sign up/cart pages \r\n3. Add link attribute\r\n- [x] Links pointing to above pages should have set attribute `rel=\"nofollow\"` \n", "before_files": [{"content": "from __future__ import unicode_literals\n\nfrom django.template.response import TemplateResponse\nfrom django.contrib import messages\nfrom django.conf import settings\nfrom django.utils.translation import pgettext_lazy\nfrom impersonate.views import impersonate as orig_impersonate\n\nfrom ..dashboard.views import staff_member_required\nfrom ..product.utils import products_with_availability, products_for_homepage\nfrom ..userprofile.models import User\n\n\ndef home(request):\n products = products_for_homepage()[:8]\n products = products_with_availability(\n products, discounts=request.discounts, local_currency=request.currency)\n return TemplateResponse(\n request, 'home.html',\n {'products': products, 'parent': None})\n\n\n@staff_member_required\ndef styleguide(request):\n return TemplateResponse(request, 'styleguide.html')\n\n\ndef impersonate(request, uid):\n response = orig_impersonate(request, uid)\n if request.session.modified:\n msg = pgettext_lazy(\n 'Impersonation message',\n 'You are now logged as {}'.format(User.objects.get(pk=uid)))\n messages.success(request, msg)\n return response\n", "path": "saleor/core/views.py"}]} | 960 | 90 |
gh_patches_debug_9597 | rasdani/github-patches | git_diff | google__turbinia-1017 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Cron.py unicode decode error
It looks like input evidence caused a Unicode error for cron. I think this would only be raised when receiving unreadable input, but I'm filing this in case we want to catch this exception differently or look into it further.
```
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/turbinia-20220216-py3.8.egg/turbinia/workers/__init__.py", line 1005, in run_wrapper
self.result = self.run(evidence, self.result)
File "/usr/local/lib/python3.8/dist-packages/turbinia-20220216-py3.8.egg/turbinia/workers/cron.py", line 54, in run
crontab = input_file.read()
File "/usr/lib/python3.8/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xdc in position 0: invalid continuation byte
```
</issue>
<code>
[start of turbinia/workers/cron.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2021 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Task for analysing cron files."""
16
17 from __future__ import unicode_literals
18
19 import os
20 import re
21
22 from turbinia.evidence import EvidenceState as state
23 from turbinia.evidence import ReportText
24 from turbinia.lib import text_formatter as fmt
25 from turbinia.workers import TurbiniaTask
26 from turbinia.workers import Priority
27
28
29 class CronAnalysisTask(TurbiniaTask):
30 """Task to analyze crontabs."""
31
32 REQUIRED_STATES = [
33 state.ATTACHED, state.CONTAINER_MOUNTED, state.DECOMPRESSED
34 ]
35
36 def run(self, evidence, result):
37 """Run the cron analysis worker.
38
39 Args:
40 evidence (Evidence object): The evidence we will process.
41 result (TurbiniaTaskResult): The object to place task results into.
42
43 Returns:
44 TurbiniaTaskResult object.
45 """
46 # Where to store the resulting output file.
47 output_file_name = 'cron_analysis.txt'
48 output_file_path = os.path.join(self.output_dir, output_file_name)
49 # Set the output file as the data source for the output evidence.
50 output_evidence = ReportText(source_path=output_file_path)
51
52 # Read the input file
53 with open(evidence.local_path, 'r') as input_file:
54 crontab = input_file.read()
55
56 (report, priority, summary) = self.analyse_crontab(crontab)
57 output_evidence.text_data = report
58 result.report_priority = priority
59 result.report_data = report
60
61 # Write the report to the output file.
62 with open(output_file_path, 'wb') as fh:
63 fh.write(output_evidence.text_data.encode('utf-8'))
64
65 # Add the resulting evidence to the result object.
66 result.add_evidence(output_evidence, evidence.config)
67 result.close(self, success=True, status=summary)
68 return result
69
70 def analyse_crontab(self, crontab):
71 """Analyses a Cron file.
72
73 Args:
74 crontab (str): file content.
75
76 Returns:
77 Tuple(
78 report_text(str): The report data
79 report_priority(int): The priority of the report (0 - 100)
80 summary(str): A summary of the report (used for task status)
81 )
82 """
83 findings = []
84 wget_or_curl = re.compile(r'(wget|curl)', re.IGNORECASE | re.MULTILINE)
85 pipe_to_sh = re.compile(r'\|(.*)sh ', re.IGNORECASE | re.MULTILINE)
86 get_piped_to_sh = re.compile(
87 r'((wget|curl).*\|)+(.*sh)', re.IGNORECASE | re.MULTILINE)
88
89 if re.search(get_piped_to_sh, crontab):
90 findings.append(fmt.bullet('Remote file retrieval piped to a shell.'))
91 elif re.search(wget_or_curl, crontab):
92 findings.append(fmt.bullet('Remote file retrieval'))
93 elif re.search(pipe_to_sh, crontab):
94 findings.append(fmt.bullet('File piped to shell'))
95
96 if findings:
97 summary = 'Potentially backdoored crontab found.'
98 findings.insert(0, fmt.heading4(fmt.bold(summary)))
99 report = '\n'.join(findings)
100 return (report, Priority.HIGH, summary)
101
102 report = 'No issues found in crontabs'
103 return (report, Priority.LOW, report)
104
[end of turbinia/workers/cron.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/turbinia/workers/cron.py b/turbinia/workers/cron.py
--- a/turbinia/workers/cron.py
+++ b/turbinia/workers/cron.py
@@ -51,7 +51,14 @@
# Read the input file
with open(evidence.local_path, 'r') as input_file:
- crontab = input_file.read()
+ try:
+ crontab = input_file.read()
+ except UnicodeDecodeError as exception:
+ message = 'Error parsing cron file {0:s}: {1!s}'.format(
+ evidence.local_path, exception)
+ result.log(message)
+ result.close(self, success=False, status=message)
+ return result
(report, priority, summary) = self.analyse_crontab(crontab)
output_evidence.text_data = report
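The fix fails the task cleanly when the crontab is not valid UTF-8. A minimal sketch of that choice next to an alternative the project did not take:

```python
# Sketch: read_crontab mirrors the shipped fix; read_crontab_lossy is
# an alternative that substitutes U+FFFD for undecodable bytes.
def read_crontab(path):
    try:
        with open(path, 'r') as input_file:
            return input_file.read()
    except UnicodeDecodeError:
        return None  # caller closes the task with success=False

def read_crontab_lossy(path):
    with open(path, 'r', errors='replace') as input_file:
        return input_file.read()
```

Failing fast keeps undecodable input out of the regex analysis instead of scanning mojibake for backdoor patterns.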
| {"golden_diff": "diff --git a/turbinia/workers/cron.py b/turbinia/workers/cron.py\n--- a/turbinia/workers/cron.py\n+++ b/turbinia/workers/cron.py\n@@ -51,7 +51,14 @@\n \n # Read the input file\n with open(evidence.local_path, 'r') as input_file:\n- crontab = input_file.read()\n+ try:\n+ crontab = input_file.read()\n+ except UnicodeDecodeError as exception:\n+ message = 'Error parsing cron file {0:s}: {1!s}'.format(\n+ evidence.local_path, exception)\n+ result.log(message)\n+ result.close(self, success=False, status=message)\n+ return result\n \n (report, priority, summary) = self.analyse_crontab(crontab)\n output_evidence.text_data = report\n", "issue": "Cron.py unicode decode error\nIt looks like input evidence caused a unicode error for cron. I think this would only be raised when receiving unreadable input but filing this if we want to catch this exception differently/look into it further\r\n\r\n```\r\n Traceback (most recent call last):\r\n File \"/usr/local/lib/python3.8/dist-packages/turbinia-20220216-py3.8.egg/turbinia/workers/__init__.py\", line 1005, in run_wrapper\r\n self.result = self.run(evidence, self.result)\r\n File \"/usr/local/lib/python3.8/dist-packages/turbinia-20220216-py3.8.egg/turbinia/workers/cron.py\", line 54, in run\r\n crontab = input_file.read()\r\n File \"/usr/lib/python3.8/codecs.py\", line 322, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xdc in position 0: invalid continuation byte\r\n```\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2021 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Task for analysing cron files.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport os\nimport re\n\nfrom turbinia.evidence import EvidenceState as state\nfrom turbinia.evidence import ReportText\nfrom turbinia.lib import text_formatter as fmt\nfrom turbinia.workers import TurbiniaTask\nfrom turbinia.workers import Priority\n\n\nclass CronAnalysisTask(TurbiniaTask):\n \"\"\"Task to analyze crontabs.\"\"\"\n\n REQUIRED_STATES = [\n state.ATTACHED, state.CONTAINER_MOUNTED, state.DECOMPRESSED\n ]\n\n def run(self, evidence, result):\n \"\"\"Run the cron analysis worker.\n\n Args:\n evidence (Evidence object): The evidence we will process.\n result (TurbiniaTaskResult): The object to place task results into.\n\n Returns:\n TurbiniaTaskResult object.\n \"\"\"\n # Where to store the resulting output file.\n output_file_name = 'cron_analysis.txt'\n output_file_path = os.path.join(self.output_dir, output_file_name)\n # Set the output file as the data source for the output evidence.\n output_evidence = ReportText(source_path=output_file_path)\n\n # Read the input file\n with open(evidence.local_path, 'r') as input_file:\n crontab = input_file.read()\n\n (report, priority, summary) = self.analyse_crontab(crontab)\n output_evidence.text_data = report\n result.report_priority = priority\n result.report_data = report\n\n # Write the report to the 
gh_patches_debug_20118 | rasdani/github-patches | git_diff | google__turbinia-321 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
turbinia_job_graph.py doesn't support the new job manager
turbinia_job_graph.py needs to be updated to support the new job manager (from #257).
</issue>
<code>
[start of tools/turbinia_job_graph.py]
1 # -*- coding: utf-8 -*-
2 # Copyright 2018 Google Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15 """Graph to visualise job/evidence relationships."""
16
17 from __future__ import unicode_literals
18
19 import argparse
20 import graphviz
21
22 from turbinia.jobs import get_jobs as turbinia_jobs
23
24
25 def create_graph():
26 """Create graph of relationships between Turbinia jobs and evidence.
27
28 Returns:
29 Instance of graphviz.dot.Digraph
30 """
31 dot = graphviz.Digraph(comment='Turbinia Evidence graph', format='png')
32 for job in turbinia_jobs():
33 dot.node(job.name)
34 for evidence in job.evidence_input:
35 dot.node(evidence.__name__, shape='box')
36 dot.edge(evidence.__name__, job.name)
37
38 for evidence in job.evidence_output:
39 dot.node(evidence.__name__, shape='box')
40 dot.edge(job.name, evidence.__name__)
41 return dot
42
43
44 if __name__ == '__main__':
45 parser = argparse.ArgumentParser(
46 description='Create Turbinia evidence graph.')
47 parser.add_argument('filename', type=unicode, help='where to save the file')
48 args = parser.parse_args()
49
50 graph = create_graph()
51 output_file = args.filename.replace('.png', '')
52
53 try:
54 rendered_graph = graph.render(filename=output_file, cleanup=True)
55 print('Graph generated and saved to: {0}'.format(rendered_graph))
56 except graphviz.ExecutableNotFound:
57 print('Graphviz is not installed - Run: apt-get install graphviz')
58
[end of tools/turbinia_job_graph.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/tools/turbinia_job_graph.py b/tools/turbinia_job_graph.py
--- a/tools/turbinia_job_graph.py
+++ b/tools/turbinia_job_graph.py
@@ -19,7 +19,7 @@
import argparse
import graphviz
-from turbinia.jobs import get_jobs as turbinia_jobs
+from turbinia.jobs import manager as jobs_manager
def create_graph():
@@ -29,15 +29,15 @@
Instance of graphviz.dot.Digraph
"""
dot = graphviz.Digraph(comment='Turbinia Evidence graph', format='png')
- for job in turbinia_jobs():
- dot.node(job.name)
+ for _, job in jobs_manager.JobsManager.GetJobs():
+ dot.node(job.NAME)
for evidence in job.evidence_input:
dot.node(evidence.__name__, shape='box')
- dot.edge(evidence.__name__, job.name)
+ dot.edge(evidence.__name__, job.NAME)
for evidence in job.evidence_output:
dot.node(evidence.__name__, shape='box')
- dot.edge(job.name, evidence.__name__)
+ dot.edge(job.NAME, evidence.__name__)
return dot
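Under the job manager from #257, jobs come from a registry instead of a `get_jobs()` helper, and the class-level attribute is `NAME`. A minimal sketch of iterating that registry, assuming `GetJobs()` yields `(name, job_class)` pairs as the diff implies:

```python
# Sketch of the registry iteration used in the fix above.
from turbinia.jobs import manager as jobs_manager

for _, job in jobs_manager.JobsManager.GetJobs():
    print(job.NAME, [evidence.__name__ for evidence in job.evidence_input])
```

(Separately, the listing's `type=unicode` in the argparse call is Python 2-only and would need `type=str` under Python 3, but that is outside the scope of this issue.)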
| {"golden_diff": "diff --git a/tools/turbinia_job_graph.py b/tools/turbinia_job_graph.py\n--- a/tools/turbinia_job_graph.py\n+++ b/tools/turbinia_job_graph.py\n@@ -19,7 +19,7 @@\n import argparse\n import graphviz\n \n-from turbinia.jobs import get_jobs as turbinia_jobs\n+from turbinia.jobs import manager as jobs_manager\n \n \n def create_graph():\n@@ -29,15 +29,15 @@\n Instance of graphviz.dot.Digraph\n \"\"\"\n dot = graphviz.Digraph(comment='Turbinia Evidence graph', format='png')\n- for job in turbinia_jobs():\n- dot.node(job.name)\n+ for _, job in jobs_manager.JobsManager.GetJobs():\n+ dot.node(job.NAME)\n for evidence in job.evidence_input:\n dot.node(evidence.__name__, shape='box')\n- dot.edge(evidence.__name__, job.name)\n+ dot.edge(evidence.__name__, job.NAME)\n \n for evidence in job.evidence_output:\n dot.node(evidence.__name__, shape='box')\n- dot.edge(job.name, evidence.__name__)\n+ dot.edge(job.NAME, evidence.__name__)\n return dot\n", "issue": "turbinia_job_graph.py doesn't support new job manager\nturbinia_job_graph.py needs to be updated to support the new job manager (from #257).\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright 2018 Google Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\"\"\"Graph to visualise job/evidence relationships.\"\"\"\n\nfrom __future__ import unicode_literals\n\nimport argparse\nimport graphviz\n\nfrom turbinia.jobs import get_jobs as turbinia_jobs\n\n\ndef create_graph():\n \"\"\"Create graph of relationships between Turbinia jobs and evidence.\n\n Returns:\n Instance of graphviz.dot.Digraph\n \"\"\"\n dot = graphviz.Digraph(comment='Turbinia Evidence graph', format='png')\n for job in turbinia_jobs():\n dot.node(job.name)\n for evidence in job.evidence_input:\n dot.node(evidence.__name__, shape='box')\n dot.edge(evidence.__name__, job.name)\n\n for evidence in job.evidence_output:\n dot.node(evidence.__name__, shape='box')\n dot.edge(job.name, evidence.__name__)\n return dot\n\n\nif __name__ == '__main__':\n parser = argparse.ArgumentParser(\n description='Create Turbinia evidence graph.')\n parser.add_argument('filename', type=unicode, help='where to save the file')\n args = parser.parse_args()\n\n graph = create_graph()\n output_file = args.filename.replace('.png', '')\n\n try:\n rendered_graph = graph.render(filename=output_file, cleanup=True)\n print('Graph generated and saved to: {0}'.format(rendered_graph))\n except graphviz.ExecutableNotFound:\n print('Graphviz is not installed - Run: apt-get install graphviz')\n", "path": "tools/turbinia_job_graph.py"}]} | 1,131 | 270 |
gh_patches_debug_11740 | rasdani/github-patches | git_diff | locustio__locust-2715 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Locust insists on using IPv6 despite being in an IPv4 stack.
### Prerequisites
- [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/)
- [X] I am reporting a bug, not asking a question
### Description
I have an environment that is IPv4-only, but there is an IPv6 address listed in /etc/hosts. When I try to connect to the master using the following command, it results in an error:
```
# locust -f ./p4_basic_code.py --worker --master-host locust-master -L DEBUG
[2024-04-25 06:24:31,138] ip-192-168-0-235.ap-northeast-3.compute.internal/DEBUG/locust.runners: Failed to connect to master locust-master:5557, retry 1/60.
```
Changing [self.socket.setsockopt(zmq.IPV6, 1)](https://github.com/locustio/locust/blob/master/locust/rpc/zmqrpc.py#L18) to 0 resolves the issue.
```
# cat /etc/hosts
127.0.0.1 localhost
192.168.0.235 ip-192-168-0-235.ap-northeast-3.compute.internal
127.255.0.1 locust-master
2600:f0f0:0:0:0:0:0:1 locust-master
```
I recommend detecting whether the current environment supports IPv6 before enabling it.
### Command line
locust -f ./p4_basic_code.py --worker --master-host locust-master -L DEBUG
### Locustfile contents
```python3
N/A
```
### Python version
3.11
### Locust version
2.26.0
### Operating system
Debian 12
</issue>
<code>
[start of locust/rpc/zmqrpc.py]
1 from locust.exception import RPCError, RPCReceiveError, RPCSendError
2 from locust.util.exception_handler import retry
3
4 import msgpack.exceptions as msgerr
5 import zmq.error as zmqerr
6 import zmq.green as zmq
7
8 from .protocol import Message
9
10
11 class BaseSocket:
12 def __init__(self, sock_type):
13 context = zmq.Context()
14 self.socket = context.socket(sock_type)
15
16 self.socket.setsockopt(zmq.TCP_KEEPALIVE, 1)
17 self.socket.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 30)
18 self.socket.setsockopt(zmq.IPV6, 1)
19
20 @retry()
21 def send(self, msg):
22 try:
23 self.socket.send(msg.serialize(), zmq.NOBLOCK)
24 except zmqerr.ZMQError as e:
25 raise RPCSendError("ZMQ sent failure") from e
26
27 @retry()
28 def send_to_client(self, msg):
29 try:
30 self.socket.send_multipart([msg.node_id.encode(), msg.serialize()])
31 except zmqerr.ZMQError as e:
32 raise RPCSendError("ZMQ sent failure") from e
33
34 def recv(self):
35 try:
36 data = self.socket.recv()
37 msg = Message.unserialize(data)
38 except msgerr.ExtraData as e:
39 raise RPCReceiveError("ZMQ interrupted message") from e
40 except zmqerr.ZMQError as e:
41 raise RPCError("ZMQ network broken") from e
42 return msg
43
44 def recv_from_client(self):
45 try:
46 data = self.socket.recv_multipart()
47 addr = data[0].decode()
48 except UnicodeDecodeError as e:
49 raise RPCReceiveError("ZMQ interrupted or corrupted message") from e
50 except zmqerr.ZMQError as e:
51 raise RPCError("ZMQ network broken") from e
52 try:
53 msg = Message.unserialize(data[1])
54 except (UnicodeDecodeError, msgerr.ExtraData) as e:
55 raise RPCReceiveError("ZMQ interrupted or corrupted message", addr=addr) from e
56 return addr, msg
57
58 def close(self, linger=None):
59 self.socket.close(linger=linger)
60
61
62 class Server(BaseSocket):
63 def __init__(self, host, port):
64 BaseSocket.__init__(self, zmq.ROUTER)
65 if port == 0:
66 self.port = self.socket.bind_to_random_port(f"tcp://{host}")
67 else:
68 try:
69 self.socket.bind("tcp://%s:%i" % (host, port))
70 self.port = port
71 except zmqerr.ZMQError as e:
72 raise RPCError(f"Socket bind failure: {e}")
73
74
75 class Client(BaseSocket):
76 def __init__(self, host, port, identity):
77 BaseSocket.__init__(self, zmq.DEALER)
78 self.socket.setsockopt(zmq.IDENTITY, identity.encode())
79 self.socket.connect("tcp://%s:%i" % (host, port))
80
[end of locust/rpc/zmqrpc.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/locust/rpc/zmqrpc.py b/locust/rpc/zmqrpc.py
--- a/locust/rpc/zmqrpc.py
+++ b/locust/rpc/zmqrpc.py
@@ -4,6 +4,7 @@
import msgpack.exceptions as msgerr
import zmq.error as zmqerr
import zmq.green as zmq
+from urllib3.util.connection import HAS_IPV6
from .protocol import Message
@@ -15,7 +16,8 @@
self.socket.setsockopt(zmq.TCP_KEEPALIVE, 1)
self.socket.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 30)
- self.socket.setsockopt(zmq.IPV6, 1)
+ if HAS_IPV6:
+ self.socket.setsockopt(zmq.IPV6, 1)
@retry()
def send(self, msg):
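The fix gates the option on urllib3's `HAS_IPV6`, which is computed by actually trying to bind an IPv6 socket, so it should be `False` in the reporter's IPv4-only stack even though the interpreter was built with IPv6 support. A minimal sketch of the guard:

```python
# Sketch of the shipped guard; socket.has_ipv6 from the standard
# library would be a weaker check (compile-time support only).
import zmq.green as zmq
from urllib3.util.connection import HAS_IPV6

def configure_socket_options(sock):
    sock.setsockopt(zmq.TCP_KEEPALIVE, 1)
    sock.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 30)
    if HAS_IPV6:
        sock.setsockopt(zmq.IPV6, 1)
```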
| {"golden_diff": "diff --git a/locust/rpc/zmqrpc.py b/locust/rpc/zmqrpc.py\n--- a/locust/rpc/zmqrpc.py\n+++ b/locust/rpc/zmqrpc.py\n@@ -4,6 +4,7 @@\n import msgpack.exceptions as msgerr\n import zmq.error as zmqerr\n import zmq.green as zmq\n+from urllib3.util.connection import HAS_IPV6\n \n from .protocol import Message\n \n@@ -15,7 +16,8 @@\n \n self.socket.setsockopt(zmq.TCP_KEEPALIVE, 1)\n self.socket.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 30)\n- self.socket.setsockopt(zmq.IPV6, 1)\n+ if HAS_IPV6:\n+ self.socket.setsockopt(zmq.IPV6, 1)\n \n @retry()\n def send(self, msg):\n", "issue": "Locust insists on using IPv6 despite being in an IPv4 stack.\n### Prerequisites\r\n\r\n- [X] I am using [the latest version of Locust](https://github.com/locustio/locust/releases/)\r\n- [X] I am reporting a bug, not asking a question\r\n\r\n### Description\r\n\r\nI have an environment that is IPv4-only, but there is an IPv6 address listed in /etc/hosts. When I try to connect to the master using the following command, it results in an error:\r\n\r\n```\r\n# locust -f ./p4_basic_code.py --worker --master-host locust-master -L DEBUG\r\n[2024-04-25 06:24:31,138] ip-192-168-0-235.ap-northeast-3.compute.internal/DEBUG/locust.runners: Failed to connect to master locust-master:5557, retry 1/60.\r\n```\r\n\r\nBy changing [self.socket.setsockopt(zmq.IPV6, 1)](https://github.com/locustio/locust/blob/master/locust/rpc/zmqrpc.py#L18) to 0 resolves the issue.\r\n```\r\n# cat /etc/hosts\r\n127.0.0.1 localhost\r\n192.168.0.235 ip-192-168-0-235.ap-northeast-3.compute.internal\r\n127.255.0.1 locust-master\r\n2600:f0f0:0:0:0:0:0:1 locust-master\r\n```\r\n\r\nI recommend detecting whether the current environment supports IPv6 before enabling it.\r\n\r\n### Command line\r\n\r\nlocust -f ./p4_basic_code.py --worker --master-host locust-master -L DEBUG\r\n\r\n### Locustfile contents\r\n\r\n```python3\r\nN/A\r\n```\r\n\r\n\r\n### Python version\r\n\r\n3.11\r\n\r\n### Locust version\r\n\r\n2.26.0\r\n\r\n### Operating system\r\n\r\nDebian 12\n", "before_files": [{"content": "from locust.exception import RPCError, RPCReceiveError, RPCSendError\nfrom locust.util.exception_handler import retry\n\nimport msgpack.exceptions as msgerr\nimport zmq.error as zmqerr\nimport zmq.green as zmq\n\nfrom .protocol import Message\n\n\nclass BaseSocket:\n def __init__(self, sock_type):\n context = zmq.Context()\n self.socket = context.socket(sock_type)\n\n self.socket.setsockopt(zmq.TCP_KEEPALIVE, 1)\n self.socket.setsockopt(zmq.TCP_KEEPALIVE_IDLE, 30)\n self.socket.setsockopt(zmq.IPV6, 1)\n\n @retry()\n def send(self, msg):\n try:\n self.socket.send(msg.serialize(), zmq.NOBLOCK)\n except zmqerr.ZMQError as e:\n raise RPCSendError(\"ZMQ sent failure\") from e\n\n @retry()\n def send_to_client(self, msg):\n try:\n self.socket.send_multipart([msg.node_id.encode(), msg.serialize()])\n except zmqerr.ZMQError as e:\n raise RPCSendError(\"ZMQ sent failure\") from e\n\n def recv(self):\n try:\n data = self.socket.recv()\n msg = Message.unserialize(data)\n except msgerr.ExtraData as e:\n raise RPCReceiveError(\"ZMQ interrupted message\") from e\n except zmqerr.ZMQError as e:\n raise RPCError(\"ZMQ network broken\") from e\n return msg\n\n def recv_from_client(self):\n try:\n data = self.socket.recv_multipart()\n addr = data[0].decode()\n except UnicodeDecodeError as e:\n raise RPCReceiveError(\"ZMQ interrupted or corrupted message\") from e\n except zmqerr.ZMQError as e:\n raise RPCError(\"ZMQ network broken\") from e\n try:\n msg = 
gh_patches_debug_1391 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1683 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
skill-seen broken with redis database?
I've been testing opsdroid with a Redis database and the seen skill appears to be having problems serializing Python datetime objects.
user: when did you last see user?
opsdroid: Whoops there has been an error.
opsdroid: Check the log for details.
This is the opsdroid log with DEBUG logging enabled:
```
notrexroof_1 | DEBUG opsdroid.memory: Putting seen to memory.
notrexroof_1 | DEBUG opsdroid.database.redis: Putting seen into Redis.
notrexroof_1 | ERROR opsdroid.core: Exception when running skill 'seen'.
notrexroof_1 | Traceback (most recent call last):
notrexroof_1 | File "/usr/local/lib/python3.8/site-packages/opsdroid/core.py", line 427, in run_skill
notrexroof_1 | return await skill(self, config, event)
notrexroof_1 | File "/root/.local/share/opsdroid/opsdroid-modules/skill/seen/__init__.py", line 16, in last_seen
notrexroof_1 | await message.respond("I last saw {} {}".format(name, human(seen[name], precision=1)))
notrexroof_1 | File "/root/.local/share/opsdroid/site-packages/ago.py", line 55, in human
notrexroof_1 | delta = get_delta_from_subject(subject)
notrexroof_1 | File "/root/.local/share/opsdroid/site-packages/ago.py", line 16, in get_delta_from_subject
notrexroof_1 | subject = float(subject)
notrexroof_1 | TypeError: float() argument must be a string or a number, not 'dict'
```
I know this hasn't been touched in a few years, but I'm wondering if there is a general issue with serializing objects into a Redis database within opsdroid.
</issue>
<code>
[start of opsdroid/database/redis/__init__.py]
1 """Module for storing data within Redis."""
2 import json
3 import logging
4
5 import aioredis
6 from aioredis import parser
7 from voluptuous import Any
8
9 from opsdroid.database import Database
10 from opsdroid.helper import JSONEncoder, JSONDecoder
11
12 _LOGGER = logging.getLogger(__name__)
13 CONFIG_SCHEMA = {"host": str, "port": Any(int, str), "database": int, "password": str}
14
15
16 class RedisDatabase(Database):
17 """Database class for storing data within a Redis instance."""
18
19 def __init__(self, config, opsdroid=None):
20 """Initialise the redis database.
21
22 Set basic properties of the database. Initialise properties like
23 name, connection arguments, database file, table name and config.
24
25 Args:
26 config (dict): The configuration of the database which consists
27 of `file` and `table` name of the sqlite database
28 specified in `configuration.yaml` file.
29 opsdroid (OpsDroid): An instance of opsdroid.core.
30
31 """
32 super().__init__(config, opsdroid=opsdroid)
33 self.config = config
34 self.client = None
35 self.host = self.config.get("host", "localhost")
36 self.port = self.config.get("port", 6379)
37 self.database = self.config.get("database", 0)
38 self.password = self.config.get("password", None)
39 _LOGGER.debug(_("Loaded Redis database connector."))
40
41 async def connect(self):
42 """Connect to the database.
43
44 This method will connect to a Redis database. By default it will
45 connect to Redis on localhost on port 6379
46
47 """
48 try:
49 self.client = await aioredis.create_pool(
50 address=(self.host, int(self.port)),
51 db=self.database,
52 password=self.password,
53 parser=parser.PyReader,
54 )
55
56 _LOGGER.info(
57 _("Connected to Redis database %s from %s on port %s."),
58 self.database,
59 self.host,
60 self.port,
61 )
62 except OSError:
63 _LOGGER.warning(
64 _("Unable to connect to Redis database on address: %s port: %s."),
65 self.host,
66 self.port,
67 )
68
69 async def put(self, key, data):
70 """Store the data object in Redis against the key.
71
72 Args:
73 key (string): The key to store the data object under.
74 data (object): The data object to store.
75
76 """
77 if self.client:
78 _LOGGER.debug(_("Putting %s into Redis."), key)
79 await self.client.execute("SET", key, json.dumps(data, cls=JSONEncoder))
80
81 async def get(self, key):
82 """Get data from Redis for a given key.
83
84 Args:
85 key (string): The key to lookup in the database.
86
87 Returns:
88 object or None: The data object stored for that key, or None if no
89 object found for that key.
90
91 """
92 if self.client:
93 _LOGGER.debug(_("Getting %s from Redis."), key)
94 data = await self.client.execute("GET", key)
95
96 if data:
97 return json.loads(data, encoding=JSONDecoder)
98
99 return None
100
101 async def delete(self, key):
102 """Delete data from Redis for a given key.
103
104 Args:
105 key (string): The key to delete in the database.
106
107 """
108 if self.client:
109 _LOGGER.debug(_("Deleting %s from Redis."), key)
110 await self.client.execute("DEL", key)
111
112 async def disconnect(self):
113 """Disconnect from the database."""
114 if self.client:
115 self.client.close()
116
[end of opsdroid/database/redis/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/database/redis/__init__.py b/opsdroid/database/redis/__init__.py
--- a/opsdroid/database/redis/__init__.py
+++ b/opsdroid/database/redis/__init__.py
@@ -94,7 +94,7 @@
data = await self.client.execute("GET", key)
if data:
- return json.loads(data, encoding=JSONDecoder)
+ return json.loads(data, object_hook=JSONDecoder())
return None
| {"golden_diff": "diff --git a/opsdroid/database/redis/__init__.py b/opsdroid/database/redis/__init__.py\n--- a/opsdroid/database/redis/__init__.py\n+++ b/opsdroid/database/redis/__init__.py\n@@ -94,7 +94,7 @@\n data = await self.client.execute(\"GET\", key)\n \n if data:\n- return json.loads(data, encoding=JSONDecoder)\n+ return json.loads(data, object_hook=JSONDecoder())\n \n return None\n", "issue": "skill-seen broken with redis database?\nI've been testing opsdroid with a redis database and the seen skill appears to be having problems serializing python datetime objects.\r\n\r\nuser: when did you last see user?\r\nopsdroid: Whoops there has been an error.\r\nopsdroid: Check the log for details.\r\n\r\nthis is the opsdroid log with DEBUG logging enabled:\r\n\r\n```\r\nnotrexroof_1 | DEBUG opsdroid.memory: Putting seen to memory.\r\nnotrexroof_1 | DEBUG opsdroid.database.redis: Putting seen into Redis.\r\nnotrexroof_1 | ERROR opsdroid.core: Exception when running skill 'seen'.\r\nnotrexroof_1 | Traceback (most recent call last):\r\nnotrexroof_1 | File \"/usr/local/lib/python3.8/site-packages/opsdroid/core.py\", line 427, in run_skill\r\nnotrexroof_1 | return await skill(self, config, event)\r\nnotrexroof_1 | File \"/root/.local/share/opsdroid/opsdroid-modules/skill/seen/__init__.py\", line 16, in last_seen\r\nnotrexroof_1 | await message.respond(\"I last saw {} {}\".format(name, human(seen[name], precision=1)))\r\nnotrexroof_1 | File \"/root/.local/share/opsdroid/site-packages/ago.py\", line 55, in human\r\nnotrexroof_1 | delta = get_delta_from_subject(subject)\r\nnotrexroof_1 | File \"/root/.local/share/opsdroid/site-packages/ago.py\", line 16, in get_delta_from_subject\r\nnotrexroof_1 | subject = float(subject)\r\nnotrexroof_1 | TypeError: float() argument must be a string or a number, not 'dict'\r\n```\r\n\r\nI know this hasn't been touched in a few years, but I'm wondering if there is a general issue with serializing objects into a redis database within opsdroid.\r\n\r\n\n", "before_files": [{"content": "\"\"\"Module for storing data within Redis.\"\"\"\nimport json\nimport logging\n\nimport aioredis\nfrom aioredis import parser\nfrom voluptuous import Any\n\nfrom opsdroid.database import Database\nfrom opsdroid.helper import JSONEncoder, JSONDecoder\n\n_LOGGER = logging.getLogger(__name__)\nCONFIG_SCHEMA = {\"host\": str, \"port\": Any(int, str), \"database\": int, \"password\": str}\n\n\nclass RedisDatabase(Database):\n \"\"\"Database class for storing data within a Redis instance.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Initialise the redis database.\n\n Set basic properties of the database. Initialise properties like\n name, connection arguments, database file, table name and config.\n\n Args:\n config (dict): The configuration of the database which consists\n of `file` and `table` name of the sqlite database\n specified in `configuration.yaml` file.\n opsdroid (OpsDroid): An instance of opsdroid.core.\n\n \"\"\"\n super().__init__(config, opsdroid=opsdroid)\n self.config = config\n self.client = None\n self.host = self.config.get(\"host\", \"localhost\")\n self.port = self.config.get(\"port\", 6379)\n self.database = self.config.get(\"database\", 0)\n self.password = self.config.get(\"password\", None)\n _LOGGER.debug(_(\"Loaded Redis database connector.\"))\n\n async def connect(self):\n \"\"\"Connect to the database.\n\n This method will connect to a Redis database. 
By default it will\n connect to Redis on localhost on port 6379\n\n \"\"\"\n try:\n self.client = await aioredis.create_pool(\n address=(self.host, int(self.port)),\n db=self.database,\n password=self.password,\n parser=parser.PyReader,\n )\n\n _LOGGER.info(\n _(\"Connected to Redis database %s from %s on port %s.\"),\n self.database,\n self.host,\n self.port,\n )\n except OSError:\n _LOGGER.warning(\n _(\"Unable to connect to Redis database on address: %s port: %s.\"),\n self.host,\n self.port,\n )\n\n async def put(self, key, data):\n \"\"\"Store the data object in Redis against the key.\n\n Args:\n key (string): The key to store the data object under.\n data (object): The data object to store.\n\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Putting %s into Redis.\"), key)\n await self.client.execute(\"SET\", key, json.dumps(data, cls=JSONEncoder))\n\n async def get(self, key):\n \"\"\"Get data from Redis for a given key.\n\n Args:\n key (string): The key to lookup in the database.\n\n Returns:\n object or None: The data object stored for that key, or None if no\n object found for that key.\n\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Getting %s from Redis.\"), key)\n data = await self.client.execute(\"GET\", key)\n\n if data:\n return json.loads(data, encoding=JSONDecoder)\n\n return None\n\n async def delete(self, key):\n \"\"\"Delete data from Redis for a given key.\n\n Args:\n key (string): The key to delete in the database.\n\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Deleting %s from Redis.\"), key)\n await self.client.execute(\"DEL\", key)\n\n async def disconnect(self):\n \"\"\"Disconnect from the database.\"\"\"\n if self.client:\n self.client.close()\n", "path": "opsdroid/database/redis/__init__.py"}]} | 1,999 | 116 |
gh_patches_debug_13451 | rasdani/github-patches | git_diff | CTPUG__wafer-312 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Schedule Editor does not clear extra fields on existing items
When replacing an existing item in the schedule editor, the notes, css_class and details fields are not replaced or cleared.
While this can be useful to leave css_class untouched, it is surprising behaviour and usually the wrong thing to do for notes and details.
I think we must clear notes and details when this happens.
We should probably clear all extra fields when replacing an item, for the sake of predictability.
</issue>
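A sketch of the replace path the issue is asking for; the field names mirror wafer's `ScheduleItem`, but the function itself is illustrative, not wafer code (the golden diff later in this record makes the same reset inside `ScheduleItemSerializer.create`):

```python
def replace_existing_item(existing, talk, page, slots):
    """Swap the content of a scheduled slot and drop stale extras."""
    existing.talk = talk
    existing.page = page
    existing.slots = slots
    # Reset everything the schedule editor cannot edit, so the replaced
    # item does not inherit notes/details meant for its predecessor.
    existing.details = ''
    existing.notes = ''
    existing.css_class = ''
    existing.expand = False
    existing.save()
    return existing
```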
<code>
[start of wafer/schedule/serializers.py]
1 from rest_framework import serializers
2
3 from wafer.talks.models import Talk
4 from wafer.pages.models import Page
5 from wafer.schedule.models import ScheduleItem, Venue, Slot
6
7
8 class ScheduleItemSerializer(serializers.HyperlinkedModelSerializer):
9 page = serializers.PrimaryKeyRelatedField(
10 allow_null=True, queryset=Page.objects.all())
11 talk = serializers.PrimaryKeyRelatedField(
12 allow_null=True, queryset=Talk.objects.all())
13 venue = serializers.PrimaryKeyRelatedField(
14 allow_null=True, queryset=Venue.objects.all())
15 slots = serializers.PrimaryKeyRelatedField(
16 allow_null=True, many=True, queryset=Slot.objects.all())
17
18 class Meta:
19 model = ScheduleItem
20 fields = ('id', 'talk', 'page', 'venue', 'slots')
21
22 def create(self, validated_data):
23 venue_id = validated_data['venue']
24 slots = validated_data['slots']
25 talk = validated_data.get('talk')
26 page = validated_data.get('page')
27
28 try:
29 existing_schedule_item = ScheduleItem.objects.get(
30 venue_id=venue_id, slots__in=slots)
31 except ScheduleItem.DoesNotExist:
32 pass
33 else:
34 existing_schedule_item.talk = talk
35 existing_schedule_item.page = page
36 existing_schedule_item.slots = slots
37 existing_schedule_item.save()
38 return existing_schedule_item
39 return super(ScheduleItemSerializer, self).create(validated_data)
40
41
42
[end of wafer/schedule/serializers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/wafer/schedule/serializers.py b/wafer/schedule/serializers.py
--- a/wafer/schedule/serializers.py
+++ b/wafer/schedule/serializers.py
@@ -34,6 +34,12 @@
existing_schedule_item.talk = talk
existing_schedule_item.page = page
existing_schedule_item.slots = slots
+ # Clear any existing details that aren't editable by the
+ # schedule edit view
+ existing_schedule_item.details = ''
+ existing_schedule_item.notes = ''
+ existing_schedule_item.css_class = ''
+ existing_schedule_item.expand = False
existing_schedule_item.save()
return existing_schedule_item
return super(ScheduleItemSerializer, self).create(validated_data)
| {"golden_diff": "diff --git a/wafer/schedule/serializers.py b/wafer/schedule/serializers.py\n--- a/wafer/schedule/serializers.py\n+++ b/wafer/schedule/serializers.py\n@@ -34,6 +34,12 @@\n existing_schedule_item.talk = talk\n existing_schedule_item.page = page\n existing_schedule_item.slots = slots\n+ # Clear any existing details that aren't editable by the\n+ # schedule edit view\n+ existing_schedule_item.details = ''\n+ existing_schedule_item.notes = ''\n+ existing_schedule_item.css_class = ''\n+ existing_schedule_item.expand = False\n existing_schedule_item.save()\n return existing_schedule_item\n return super(ScheduleItemSerializer, self).create(validated_data)\n", "issue": "Schedule Editor does not clear extra fields on existing items\nWhen replacing an existing item in the schedule editor, the notes, css_class and details fields are not replaced or cleared.\n\nWhile this can be useful to leave css_class untouched, it is surprising behaviour and usually the wrong thing to do for notes and details. \n\nI think we must clear notes and details when this happens.\n\nWe should probably clear all extra fields when replacing an item, for the sake of predictablity,\n\n", "before_files": [{"content": "from rest_framework import serializers\n\nfrom wafer.talks.models import Talk\nfrom wafer.pages.models import Page\nfrom wafer.schedule.models import ScheduleItem, Venue, Slot\n\n\nclass ScheduleItemSerializer(serializers.HyperlinkedModelSerializer):\n page = serializers.PrimaryKeyRelatedField(\n allow_null=True, queryset=Page.objects.all())\n talk = serializers.PrimaryKeyRelatedField(\n allow_null=True, queryset=Talk.objects.all())\n venue = serializers.PrimaryKeyRelatedField(\n allow_null=True, queryset=Venue.objects.all())\n slots = serializers.PrimaryKeyRelatedField(\n allow_null=True, many=True, queryset=Slot.objects.all())\n\n class Meta:\n model = ScheduleItem\n fields = ('id', 'talk', 'page', 'venue', 'slots')\n\n def create(self, validated_data):\n venue_id = validated_data['venue']\n slots = validated_data['slots']\n talk = validated_data.get('talk')\n page = validated_data.get('page')\n\n try:\n existing_schedule_item = ScheduleItem.objects.get(\n venue_id=venue_id, slots__in=slots)\n except ScheduleItem.DoesNotExist:\n pass\n else:\n existing_schedule_item.talk = talk\n existing_schedule_item.page = page\n existing_schedule_item.slots = slots\n existing_schedule_item.save()\n return existing_schedule_item\n return super(ScheduleItemSerializer, self).create(validated_data)\n\n\n", "path": "wafer/schedule/serializers.py"}]} | 1,010 | 170 |
gh_patches_debug_24514 | rasdani/github-patches | git_diff | bridgecrewio__checkov-2025 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Check CKV_AZURE_50 fails incorrectly for resource 'azurerm_virtual_machine'
CKV_AZURE_50 always fails for the resource "azurerm_virtual_machine" because the check inspects the attribute "allow_extension_operations", which that resource type never defines:
https://github.com/bridgecrewio/checkov/blob/25388a34231e09ac17b266ad9db0b4c0e806e956/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py#L15
The Terraform resource "azurerm_virtual_machine" does not have an attribute named "allow_extension_operations" (see the [Terraform resource docs](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine)).
</issue>
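For reference, the corrected check ends up shaped like this (condensed from the golden diff further down): it scopes itself to the resource types that actually expose `allow_extension_operations` and drops the pass-on-missing default:

```python
from checkov.common.models.enums import CheckCategories
from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck

class AzureInstanceExtensions(BaseResourceValueCheck):
    def __init__(self) -> None:
        super().__init__(
            name="Ensure Virtual Machine Extensions are not Installed",
            id="CKV_AZURE_50",
            categories=[CheckCategories.GENERAL_SECURITY],
            # Only these resource types carry an allow_extension_operations argument.
            supported_resources=["azurerm_linux_virtual_machine",
                                 "azurerm_windows_virtual_machine"],
        )

    def get_inspected_key(self) -> str:
        return "allow_extension_operations"

    def get_expected_value(self):
        return False
```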
<code>
[start of checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py]
1 from checkov.common.models.enums import CheckCategories, CheckResult
2 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
3
4
5 class AzureInstanceExtensions(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure Virtual Machine Extensions are not Installed"
8 id = "CKV_AZURE_50"
9 supported_resources = ['azurerm_virtual_machine', 'azurerm_linux_virtual_machine']
10 categories = [CheckCategories.GENERAL_SECURITY]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
12 missing_block_result=CheckResult.PASSED)
13
14 def get_inspected_key(self):
15 return 'allow_extension_operations'
16
17 def get_expected_value(self):
18 return False
19
20
21 check = AzureInstanceExtensions()
22
[end of checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py b/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py
--- a/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py
+++ b/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py
@@ -1,20 +1,21 @@
-from checkov.common.models.enums import CheckCategories, CheckResult
+from typing import Any
+
+from checkov.common.models.enums import CheckCategories
from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
class AzureInstanceExtensions(BaseResourceValueCheck):
- def __init__(self):
+ def __init__(self) -> None:
name = "Ensure Virtual Machine Extensions are not Installed"
id = "CKV_AZURE_50"
- supported_resources = ['azurerm_virtual_machine', 'azurerm_linux_virtual_machine']
+ supported_resources = ["azurerm_linux_virtual_machine", "azurerm_windows_virtual_machine"]
categories = [CheckCategories.GENERAL_SECURITY]
- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,
- missing_block_result=CheckResult.PASSED)
+ super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
- def get_inspected_key(self):
- return 'allow_extension_operations'
+ def get_inspected_key(self) -> str:
+ return "allow_extension_operations"
- def get_expected_value(self):
+ def get_expected_value(self) -> Any:
return False
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py b/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py\n--- a/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py\n+++ b/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py\n@@ -1,20 +1,21 @@\n-from checkov.common.models.enums import CheckCategories, CheckResult\n+from typing import Any\n+\n+from checkov.common.models.enums import CheckCategories\n from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n \n \n class AzureInstanceExtensions(BaseResourceValueCheck):\n- def __init__(self):\n+ def __init__(self) -> None:\n name = \"Ensure Virtual Machine Extensions are not Installed\"\n id = \"CKV_AZURE_50\"\n- supported_resources = ['azurerm_virtual_machine', 'azurerm_linux_virtual_machine']\n+ supported_resources = [\"azurerm_linux_virtual_machine\", \"azurerm_windows_virtual_machine\"]\n categories = [CheckCategories.GENERAL_SECURITY]\n- super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n- missing_block_result=CheckResult.PASSED)\n+ super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n \n- def get_inspected_key(self):\n- return 'allow_extension_operations'\n+ def get_inspected_key(self) -> str:\n+ return \"allow_extension_operations\"\n \n- def get_expected_value(self):\n+ def get_expected_value(self) -> Any:\n return False\n", "issue": "Check CKV_AZURE_50 fails incorrectly for resource 'azurerm_virtual_machine'\nCKV_AZURE_50 always fails for the resource \"azurerm_virtual_machine\" due to an incorrect check of the existence of the attribute \"allow_extension_operations\":\r\n\r\nhttps://github.com/bridgecrewio/checkov/blob/25388a34231e09ac17b266ad9db0b4c0e806e956/checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py#L15\r\n\r\nThe Terraform resource \"azurerm_virtual_machine\" does not have an attribute named \"allow_extension_operations\" (see [Terraform Resouce Docu](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/virtual_machine).\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckCategories, CheckResult\nfrom checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\n\n\nclass AzureInstanceExtensions(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure Virtual Machine Extensions are not Installed\"\n id = \"CKV_AZURE_50\"\n supported_resources = ['azurerm_virtual_machine', 'azurerm_linux_virtual_machine']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources,\n missing_block_result=CheckResult.PASSED)\n\n def get_inspected_key(self):\n return 'allow_extension_operations'\n\n def get_expected_value(self):\n return False\n\n\ncheck = AzureInstanceExtensions()\n", "path": "checkov/terraform/checks/resource/azure/AzureInstanceExtensions.py"}]} | 942 | 354 |
gh_patches_debug_9846 | rasdani/github-patches | git_diff | microsoft__playwright-python-472 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Interactive mode (REPL) (v 1.8.0a1) Error
```python
from playwright.sync_api import sync_playwright
playwright = sync_playwright().start()
```
```pytb
Traceback (most recent call last):
File "C:\Python37\lib\site-packages\playwright\_impl\_transport.py", line 27, in _get_stderr_fileno
return sys.stderr.fileno()
io.UnsupportedOperation: fileno
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
playwright = sync_playwright().start()
File "C:\Python37\lib\site-packages\playwright\sync_api\_context_manager.py", line 73, in start
return self.__enter__()
File "C:\Python37\lib\site-packages\playwright\sync_api\_context_manager.py", line 67, in __enter__
dispatcher_fiber.switch()
File "C:\Python37\lib\site-packages\playwright\sync_api\_context_manager.py", line 48, in greenlet_main
loop.run_until_complete(self._connection.run_as_sync())
File "C:\Python37\lib\asyncio\base_events.py", line 587, in run_until_complete
return future.result()
File "C:\Python37\lib\site-packages\playwright\_impl\_connection.py", line 134, in run_as_sync
await self.run()
File "C:\Python37\lib\site-packages\playwright\_impl\_connection.py", line 139, in run
await self._transport.run()
File "C:\Python37\lib\site-packages\playwright\_impl\_transport.py", line 62, in run
stderr=_get_stderr_fileno(),
File "C:\Python37\lib\site-packages\playwright\_impl\_transport.py", line 34, in _get_stderr_fileno
return sys.__stderr__.fileno()
AttributeError: 'NoneType' object has no attribute 'fileno'
```
</issue>
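The crash reduces to `sys.__stderr__` being `None` under IDLE (no attached console), so the fallback needs its own guard. A sketch of the defensive lookup, equivalent to the one-line fix in the golden diff below:

```python
import io
import sys
from typing import Optional

def _get_stderr_fileno() -> Optional[int]:
    try:
        return sys.stderr.fileno()
    except (AttributeError, io.UnsupportedOperation):
        # IDLE replaces sys.stderr with a pseudo-file that has no fileno(),
        # and sys.__stderr__ can be None when there is no real console.
        if not hasattr(sys, "__stderr__") or sys.__stderr__ is None:
            return None
        return sys.__stderr__.fileno()
```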
<code>
[start of playwright/_impl/_transport.py]
1 # Copyright (c) Microsoft Corporation.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import asyncio
16 import io
17 import json
18 import os
19 import sys
20 from pathlib import Path
21 from typing import Dict, Optional
22
23
24 # Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77
25 def _get_stderr_fileno() -> Optional[int]:
26 try:
27 return sys.stderr.fileno()
28 except (AttributeError, io.UnsupportedOperation):
29 # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
30 # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
31 # This is potentially dangerous, but the best we can do.
32 if not hasattr(sys, "__stderr__"):
33 return None
34 return sys.__stderr__.fileno()
35
36
37 class Transport:
38 def __init__(self, driver_executable: Path) -> None:
39 super().__init__()
40 self.on_message = lambda _: None
41 self._stopped = False
42 self._driver_executable = driver_executable
43 self._loop: asyncio.AbstractEventLoop
44
45 def stop(self) -> None:
46 self._stopped = True
47 self._output.close()
48
49 async def run(self) -> None:
50 self._loop = asyncio.get_running_loop()
51
52 driver_env = os.environ.copy()
53 # VSCode's JavaScript Debug Terminal provides it but driver/pkg does not support it
54 driver_env.pop("NODE_OPTIONS", None)
55
56 proc = await asyncio.create_subprocess_exec(
57 str(self._driver_executable),
58 "run-driver",
59 env=driver_env,
60 stdin=asyncio.subprocess.PIPE,
61 stdout=asyncio.subprocess.PIPE,
62 stderr=_get_stderr_fileno(),
63 limit=32768,
64 )
65 assert proc.stdout
66 assert proc.stdin
67 self._output = proc.stdin
68
69 while not self._stopped:
70 try:
71 buffer = await proc.stdout.readexactly(4)
72 length = int.from_bytes(buffer, byteorder="little", signed=False)
73 buffer = bytes(0)
74 while length:
75 to_read = min(length, 32768)
76 data = await proc.stdout.readexactly(to_read)
77 length -= to_read
78 if len(buffer):
79 buffer = buffer + data
80 else:
81 buffer = data
82 obj = json.loads(buffer)
83
84 if "DEBUGP" in os.environ: # pragma: no cover
85 print("\x1b[33mRECV>\x1b[0m", json.dumps(obj, indent=2))
86 self.on_message(obj)
87 except asyncio.IncompleteReadError:
88 break
89 await asyncio.sleep(0)
90
91 def send(self, message: Dict) -> None:
92 msg = json.dumps(message)
93 if "DEBUGP" in os.environ: # pragma: no cover
94 print("\x1b[32mSEND>\x1b[0m", json.dumps(message, indent=2))
95 data = msg.encode()
96 self._output.write(
97 len(data).to_bytes(4, byteorder="little", signed=False) + data
98 )
99
[end of playwright/_impl/_transport.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/playwright/_impl/_transport.py b/playwright/_impl/_transport.py
--- a/playwright/_impl/_transport.py
+++ b/playwright/_impl/_transport.py
@@ -29,7 +29,7 @@
# pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.
# https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors
# This is potentially dangerous, but the best we can do.
- if not hasattr(sys, "__stderr__"):
+ if not hasattr(sys, "__stderr__") or not sys.__stderr__:
return None
return sys.__stderr__.fileno()
| {"golden_diff": "diff --git a/playwright/_impl/_transport.py b/playwright/_impl/_transport.py\n--- a/playwright/_impl/_transport.py\n+++ b/playwright/_impl/_transport.py\n@@ -29,7 +29,7 @@\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the best we can do.\n- if not hasattr(sys, \"__stderr__\"):\n+ if not hasattr(sys, \"__stderr__\") or not sys.__stderr__:\n return None\n return sys.__stderr__.fileno()\n", "issue": "Interactive mode (REPL) (v 1.8.0a1) Error\n**from playwright.sync_api import sync_playwright\r\nplaywright = sync_playwright().start()**\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\_impl\\_transport.py\", line 27, in _get_stderr_fileno\r\n return sys.stderr.fileno()\r\nio.UnsupportedOperation: fileno\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<pyshell#1>\", line 1, in <module>\r\n playwright = sync_playwright().start()\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\sync_api\\_context_manager.py\", line 73, in start\r\n return self.__enter__()\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\sync_api\\_context_manager.py\", line 67, in __enter__\r\n dispatcher_fiber.switch()\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\sync_api\\_context_manager.py\", line 48, in greenlet_main\r\n loop.run_until_complete(self._connection.run_as_sync())\r\n File \"C:\\Python37\\lib\\asyncio\\base_events.py\", line 587, in run_until_complete\r\n return future.result()\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\_impl\\_connection.py\", line 134, in run_as_sync\r\n await self.run()\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\_impl\\_connection.py\", line 139, in run\r\n await self._transport.run()\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\_impl\\_transport.py\", line 62, in run\r\n stderr=_get_stderr_fileno(),\r\n File \"C:\\Python37\\lib\\site-packages\\playwright\\_impl\\_transport.py\", line 34, in _get_stderr_fileno\r\n return sys.__stderr__.fileno()\r\nAttributeError: 'NoneType' object has no attribute 'fileno'\n", "before_files": [{"content": "# Copyright (c) Microsoft Corporation.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport asyncio\nimport io\nimport json\nimport os\nimport sys\nfrom pathlib import Path\nfrom typing import Dict, Optional\n\n\n# Sourced from: https://github.com/pytest-dev/pytest/blob/da01ee0a4bb0af780167ecd228ab3ad249511302/src/_pytest/faulthandler.py#L69-L77\ndef _get_stderr_fileno() -> Optional[int]:\n try:\n return sys.stderr.fileno()\n except (AttributeError, io.UnsupportedOperation):\n # pytest-xdist monkeypatches sys.stderr with an object that is not an actual file.\n # https://docs.python.org/3/library/faulthandler.html#issue-with-file-descriptors\n # This is potentially dangerous, but the 
best we can do.\n if not hasattr(sys, \"__stderr__\"):\n return None\n return sys.__stderr__.fileno()\n\n\nclass Transport:\n def __init__(self, driver_executable: Path) -> None:\n super().__init__()\n self.on_message = lambda _: None\n self._stopped = False\n self._driver_executable = driver_executable\n self._loop: asyncio.AbstractEventLoop\n\n def stop(self) -> None:\n self._stopped = True\n self._output.close()\n\n async def run(self) -> None:\n self._loop = asyncio.get_running_loop()\n\n driver_env = os.environ.copy()\n # VSCode's JavaScript Debug Terminal provides it but driver/pkg does not support it\n driver_env.pop(\"NODE_OPTIONS\", None)\n\n proc = await asyncio.create_subprocess_exec(\n str(self._driver_executable),\n \"run-driver\",\n env=driver_env,\n stdin=asyncio.subprocess.PIPE,\n stdout=asyncio.subprocess.PIPE,\n stderr=_get_stderr_fileno(),\n limit=32768,\n )\n assert proc.stdout\n assert proc.stdin\n self._output = proc.stdin\n\n while not self._stopped:\n try:\n buffer = await proc.stdout.readexactly(4)\n length = int.from_bytes(buffer, byteorder=\"little\", signed=False)\n buffer = bytes(0)\n while length:\n to_read = min(length, 32768)\n data = await proc.stdout.readexactly(to_read)\n length -= to_read\n if len(buffer):\n buffer = buffer + data\n else:\n buffer = data\n obj = json.loads(buffer)\n\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[33mRECV>\\x1b[0m\", json.dumps(obj, indent=2))\n self.on_message(obj)\n except asyncio.IncompleteReadError:\n break\n await asyncio.sleep(0)\n\n def send(self, message: Dict) -> None:\n msg = json.dumps(message)\n if \"DEBUGP\" in os.environ: # pragma: no cover\n print(\"\\x1b[32mSEND>\\x1b[0m\", json.dumps(message, indent=2))\n data = msg.encode()\n self._output.write(\n len(data).to_bytes(4, byteorder=\"little\", signed=False) + data\n )\n", "path": "playwright/_impl/_transport.py"}]} | 2,031 | 150 |
gh_patches_debug_829 | rasdani/github-patches | git_diff | ivy-llc__ivy-15926 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
log
</issue>
<code>
[start of ivy/functional/frontends/paddle/tensor/math.py]
1 # global
2 import ivy
3 from ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import (
5 to_ivy_arrays_and_back,
6 )
7
8
9 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
10 @to_ivy_arrays_and_back
11 def sin(x, name=None):
12 return ivy.sin(x)
13
14
15 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
16 @to_ivy_arrays_and_back
17 def cos(x, name=None):
18 return ivy.cos(x)
19
20
21 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
22 @to_ivy_arrays_and_back
23 def acos(x, name=None):
24 return ivy.acos(x)
25
26
27 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
28 @to_ivy_arrays_and_back
29 def cosh(x, name=None):
30 return ivy.cosh(x)
31
32
33 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
34 @to_ivy_arrays_and_back
35 def tanh(x, name=None):
36 return ivy.tanh(x)
37
38
39 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
40 @to_ivy_arrays_and_back
41 def acosh(x, name=None):
42 return ivy.acosh(x)
43
44
45 @with_supported_dtypes({"2.4.2 and below": ("float32", "float64")}, "paddle")
46 @to_ivy_arrays_and_back
47 def asin(x, name=None):
48 return ivy.asin(x)
49
[end of ivy/functional/frontends/paddle/tensor/math.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py
--- a/ivy/functional/frontends/paddle/tensor/math.py
+++ b/ivy/functional/frontends/paddle/tensor/math.py
@@ -46,3 +46,9 @@
@to_ivy_arrays_and_back
def asin(x, name=None):
return ivy.asin(x)
+
+
+@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
+@to_ivy_arrays_and_back
+def log(x, name=None):
+ return ivy.log(x)
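A possible smoke test after applying the diff above; the numpy backend is an arbitrary choice and the module path follows the file being patched:

```python
import ivy
from ivy.functional.frontends.paddle.tensor import math as paddle_math

ivy.set_backend("numpy")
x = ivy.array([1.0, 2.718281828459045])
print(paddle_math.log(x))  # expected: roughly [0., 1.]
```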
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/tensor/math.py b/ivy/functional/frontends/paddle/tensor/math.py\n--- a/ivy/functional/frontends/paddle/tensor/math.py\n+++ b/ivy/functional/frontends/paddle/tensor/math.py\n@@ -46,3 +46,9 @@\n @to_ivy_arrays_and_back\n def asin(x, name=None):\n return ivy.asin(x)\n+\n+\n+@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n+@to_ivy_arrays_and_back\n+def log(x, name=None):\n+ return ivy.log(x)\n", "issue": "log\n\n", "before_files": [{"content": "# global\nimport ivy\nfrom ivy.func_wrapper import with_unsupported_dtypes, with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import (\n to_ivy_arrays_and_back,\n)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef sin(x, name=None):\n return ivy.sin(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cos(x, name=None):\n return ivy.cos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acos(x, name=None):\n return ivy.acos(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef cosh(x, name=None):\n return ivy.cosh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef tanh(x, name=None):\n return ivy.tanh(x)\n\n\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef acosh(x, name=None):\n return ivy.acosh(x)\n\n\n@with_supported_dtypes({\"2.4.2 and below\": (\"float32\", \"float64\")}, \"paddle\")\n@to_ivy_arrays_and_back\ndef asin(x, name=None):\n return ivy.asin(x)\n", "path": "ivy/functional/frontends/paddle/tensor/math.py"}]} | 1,078 | 156 |
gh_patches_debug_36271 | rasdani/github-patches | git_diff | opsdroid__opsdroid-1835 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade redis database backend to use aioredis v2
Looks like v2 of `aioredis` is out and has breaking changes which affect us. In #1809 I've pinned us to v1 for now but we should upgrade things to work with v2.
Specifically, importing the parser fails:
https://github.com/opsdroid/opsdroid/blob/a45490d1bdceca39b49880e20262b55ea0be101d/opsdroid/database/redis/__init__.py#L6
```python-traceback
ImportError while importing test module '/home/runner/work/opsdroid/opsdroid/opsdroid/database/redis/tests/test_redis.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
/opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
opsdroid/database/redis/tests/test_redis.py:8: in <module>
from opsdroid.database.redis import RedisDatabase
opsdroid/database/redis/__init__.py:6: in <module>
from aioredis import parser
E ImportError: cannot import name 'parser' from 'aioredis' (/home/runner/work/opsdroid/opsdroid/.tox/py37-e2e/lib/python3.7/site-packages/aioredis/__init__.py)
```
</issue>
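In rough strokes, the v1 to v2 changes that matter here, mirrored from the golden diff below: the pool and parser machinery is gone, the client is constructed directly and connects lazily, raw commands go through `execute_command`, and `close()` must be awaited. A sketch, not opsdroid code:

```python
from aioredis import Redis  # aioredis 2.x exposes a redis-py style client

async def connect(host="localhost", port=6379, database=0, password=None):
    # v1: client = await aioredis.create_pool(address=(host, port), ...)
    client = Redis(host=host, port=int(port), db=database, password=password)
    await client.ping()  # the client is lazy; ping forces a real connection
    return client

async def roundtrip(client):
    # v1's client.execute(...) becomes execute_command(...) in v2
    await client.execute_command("SET", "key", "value")
    value = await client.execute_command("GET", "key")
    await client.close()  # close() is a coroutine in v2
    return value
```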
<code>
[start of opsdroid/database/redis/__init__.py]
1 """Module for storing data within Redis."""
2 import json
3 import logging
4
5 import aioredis
6 from aioredis import parser
7 from voluptuous import Any
8
9 from opsdroid.database import Database
10 from opsdroid.helper import JSONEncoder, JSONDecoder
11
12 _LOGGER = logging.getLogger(__name__)
13 CONFIG_SCHEMA = {"host": str, "port": Any(int, str), "database": int, "password": str}
14
15
16 class RedisDatabase(Database):
17 """Database class for storing data within a Redis instance."""
18
19 def __init__(self, config, opsdroid=None):
20 """Initialise the redis database.
21
22 Set basic properties of the database. Initialise properties like
23 name, connection arguments, database file, table name and config.
24
25 Args:
26 config (dict): The configuration of the database which consists
27 of `file` and `table` name of the sqlite database
28 specified in `configuration.yaml` file.
29 opsdroid (OpsDroid): An instance of opsdroid.core.
30
31 """
32 super().__init__(config, opsdroid=opsdroid)
33 self.config = config
34 self.client = None
35 self.host = self.config.get("host", "localhost")
36 self.port = self.config.get("port", 6379)
37 self.database = self.config.get("database", 0)
38 self.password = self.config.get("password", None)
39 _LOGGER.debug(_("Loaded Redis database connector."))
40
41 async def connect(self):
42 """Connect to the database.
43
44 This method will connect to a Redis database. By default it will
45 connect to Redis on localhost on port 6379
46
47 """
48 try:
49 self.client = await aioredis.create_pool(
50 address=(self.host, int(self.port)),
51 db=self.database,
52 password=self.password,
53 parser=parser.PyReader,
54 )
55
56 _LOGGER.info(
57 _("Connected to Redis database %s from %s on port %s."),
58 self.database,
59 self.host,
60 self.port,
61 )
62 except OSError:
63 _LOGGER.warning(
64 _("Unable to connect to Redis database on address: %s port: %s."),
65 self.host,
66 self.port,
67 )
68
69 async def put(self, key, data):
70 """Store the data object in Redis against the key.
71
72 Args:
73 key (string): The key to store the data object under.
74 data (object): The data object to store.
75
76 """
77 if self.client:
78 _LOGGER.debug(_("Putting %s into Redis."), key)
79 await self.client.execute("SET", key, json.dumps(data, cls=JSONEncoder))
80
81 async def get(self, key):
82 """Get data from Redis for a given key.
83
84 Args:
85 key (string): The key to lookup in the database.
86
87 Returns:
88 object or None: The data object stored for that key, or None if no
89 object found for that key.
90
91 """
92 if self.client:
93 _LOGGER.debug(_("Getting %s from Redis."), key)
94 data = await self.client.execute("GET", key)
95
96 if data:
97 return json.loads(data, object_hook=JSONDecoder())
98
99 return None
100
101 async def delete(self, key):
102 """Delete data from Redis for a given key.
103
104 Args:
105 key (string): The key to delete in the database.
106
107 """
108 if self.client:
109 _LOGGER.debug(_("Deleting %s from Redis."), key)
110 await self.client.execute("DEL", key)
111
112 async def disconnect(self):
113 """Disconnect from the database."""
114 if self.client:
115 self.client.close()
116
[end of opsdroid/database/redis/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/opsdroid/database/redis/__init__.py b/opsdroid/database/redis/__init__.py
--- a/opsdroid/database/redis/__init__.py
+++ b/opsdroid/database/redis/__init__.py
@@ -2,8 +2,7 @@
import json
import logging
-import aioredis
-from aioredis import parser
+from aioredis import Redis
from voluptuous import Any
from opsdroid.database import Database
@@ -46,12 +45,13 @@
"""
try:
- self.client = await aioredis.create_pool(
- address=(self.host, int(self.port)),
+ self.client = Redis(
+ host=self.host,
+ port=int(self.port),
db=self.database,
password=self.password,
- parser=parser.PyReader,
)
+ await self.client.ping() # to actually initiate a connection
_LOGGER.info(
_("Connected to Redis database %s from %s on port %s."),
@@ -76,7 +76,9 @@
"""
if self.client:
_LOGGER.debug(_("Putting %s into Redis."), key)
- await self.client.execute("SET", key, json.dumps(data, cls=JSONEncoder))
+ await self.client.execute_command(
+ "SET", key, json.dumps(data, cls=JSONEncoder)
+ )
async def get(self, key):
"""Get data from Redis for a given key.
@@ -91,7 +93,7 @@
"""
if self.client:
_LOGGER.debug(_("Getting %s from Redis."), key)
- data = await self.client.execute("GET", key)
+ data = await self.client.execute_command("GET", key)
if data:
return json.loads(data, object_hook=JSONDecoder())
@@ -107,9 +109,9 @@
"""
if self.client:
_LOGGER.debug(_("Deleting %s from Redis."), key)
- await self.client.execute("DEL", key)
+ await self.client.execute_command("DEL", key)
async def disconnect(self):
"""Disconnect from the database."""
if self.client:
- self.client.close()
+ await self.client.close()
| {"golden_diff": "diff --git a/opsdroid/database/redis/__init__.py b/opsdroid/database/redis/__init__.py\n--- a/opsdroid/database/redis/__init__.py\n+++ b/opsdroid/database/redis/__init__.py\n@@ -2,8 +2,7 @@\n import json\n import logging\n \n-import aioredis\n-from aioredis import parser\n+from aioredis import Redis\n from voluptuous import Any\n \n from opsdroid.database import Database\n@@ -46,12 +45,13 @@\n \n \"\"\"\n try:\n- self.client = await aioredis.create_pool(\n- address=(self.host, int(self.port)),\n+ self.client = Redis(\n+ host=self.host,\n+ port=int(self.port),\n db=self.database,\n password=self.password,\n- parser=parser.PyReader,\n )\n+ await self.client.ping() # to actually initiate a connection\n \n _LOGGER.info(\n _(\"Connected to Redis database %s from %s on port %s.\"),\n@@ -76,7 +76,9 @@\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Putting %s into Redis.\"), key)\n- await self.client.execute(\"SET\", key, json.dumps(data, cls=JSONEncoder))\n+ await self.client.execute_command(\n+ \"SET\", key, json.dumps(data, cls=JSONEncoder)\n+ )\n \n async def get(self, key):\n \"\"\"Get data from Redis for a given key.\n@@ -91,7 +93,7 @@\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Getting %s from Redis.\"), key)\n- data = await self.client.execute(\"GET\", key)\n+ data = await self.client.execute_command(\"GET\", key)\n \n if data:\n return json.loads(data, object_hook=JSONDecoder())\n@@ -107,9 +109,9 @@\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Deleting %s from Redis.\"), key)\n- await self.client.execute(\"DEL\", key)\n+ await self.client.execute_command(\"DEL\", key)\n \n async def disconnect(self):\n \"\"\"Disconnect from the database.\"\"\"\n if self.client:\n- self.client.close()\n+ await self.client.close()\n", "issue": "Upgrade redis database backend to use aioredis v2\nLooks like v2 of `aioredis` is out and has breaking changes which affect us. 
In #1809 I've pinned us to v1 for now but we should upgrade things to work with v2.\r\n\r\nSpecifically importing the parser fails\r\n\r\nhttps://github.com/opsdroid/opsdroid/blob/a45490d1bdceca39b49880e20262b55ea0be101d/opsdroid/database/redis/__init__.py#L6\r\n\r\n```python-traceback\r\n ImportError while importing test module '/home/runner/work/opsdroid/opsdroid/opsdroid/database/redis/tests/test_redis.py'.\r\n Hint: make sure your test modules/packages have valid Python names.\r\n Traceback:\r\n /opt/hostedtoolcache/Python/3.7.11/x64/lib/python3.7/importlib/__init__.py:127: in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n opsdroid/database/redis/tests/test_redis.py:8: in <module>\r\n from opsdroid.database.redis import RedisDatabase\r\n opsdroid/database/redis/__init__.py:6: in <module>\r\n from aioredis import parser\r\n E ImportError: cannot import name 'parser' from 'aioredis' (/home/runner/work/opsdroid/opsdroid/.tox/py37-e2e/lib/python3.7/site-packages/aioredis/__init__.py)\r\n```\n", "before_files": [{"content": "\"\"\"Module for storing data within Redis.\"\"\"\nimport json\nimport logging\n\nimport aioredis\nfrom aioredis import parser\nfrom voluptuous import Any\n\nfrom opsdroid.database import Database\nfrom opsdroid.helper import JSONEncoder, JSONDecoder\n\n_LOGGER = logging.getLogger(__name__)\nCONFIG_SCHEMA = {\"host\": str, \"port\": Any(int, str), \"database\": int, \"password\": str}\n\n\nclass RedisDatabase(Database):\n \"\"\"Database class for storing data within a Redis instance.\"\"\"\n\n def __init__(self, config, opsdroid=None):\n \"\"\"Initialise the redis database.\n\n Set basic properties of the database. Initialise properties like\n name, connection arguments, database file, table name and config.\n\n Args:\n config (dict): The configuration of the database which consists\n of `file` and `table` name of the sqlite database\n specified in `configuration.yaml` file.\n opsdroid (OpsDroid): An instance of opsdroid.core.\n\n \"\"\"\n super().__init__(config, opsdroid=opsdroid)\n self.config = config\n self.client = None\n self.host = self.config.get(\"host\", \"localhost\")\n self.port = self.config.get(\"port\", 6379)\n self.database = self.config.get(\"database\", 0)\n self.password = self.config.get(\"password\", None)\n _LOGGER.debug(_(\"Loaded Redis database connector.\"))\n\n async def connect(self):\n \"\"\"Connect to the database.\n\n This method will connect to a Redis database. 
By default it will\n connect to Redis on localhost on port 6379\n\n \"\"\"\n try:\n self.client = await aioredis.create_pool(\n address=(self.host, int(self.port)),\n db=self.database,\n password=self.password,\n parser=parser.PyReader,\n )\n\n _LOGGER.info(\n _(\"Connected to Redis database %s from %s on port %s.\"),\n self.database,\n self.host,\n self.port,\n )\n except OSError:\n _LOGGER.warning(\n _(\"Unable to connect to Redis database on address: %s port: %s.\"),\n self.host,\n self.port,\n )\n\n async def put(self, key, data):\n \"\"\"Store the data object in Redis against the key.\n\n Args:\n key (string): The key to store the data object under.\n data (object): The data object to store.\n\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Putting %s into Redis.\"), key)\n await self.client.execute(\"SET\", key, json.dumps(data, cls=JSONEncoder))\n\n async def get(self, key):\n \"\"\"Get data from Redis for a given key.\n\n Args:\n key (string): The key to lookup in the database.\n\n Returns:\n object or None: The data object stored for that key, or None if no\n object found for that key.\n\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Getting %s from Redis.\"), key)\n data = await self.client.execute(\"GET\", key)\n\n if data:\n return json.loads(data, object_hook=JSONDecoder())\n\n return None\n\n async def delete(self, key):\n \"\"\"Delete data from Redis for a given key.\n\n Args:\n key (string): The key to delete in the database.\n\n \"\"\"\n if self.client:\n _LOGGER.debug(_(\"Deleting %s from Redis.\"), key)\n await self.client.execute(\"DEL\", key)\n\n async def disconnect(self):\n \"\"\"Disconnect from the database.\"\"\"\n if self.client:\n self.client.close()\n", "path": "opsdroid/database/redis/__init__.py"}]} | 1,915 | 491 |
gh_patches_debug_73 | rasdani/github-patches | git_diff | pypa__setuptools-1043 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No module named 'six'
Flask's Tox/Travis builds all started failing. Looks like a new version of setuptools was just released that has a problem with six.
~~~pytb
Obtaining file:///home/david/Projects/flask
Installing collected packages: Flask
Running setup.py develop for Flask
Complete output from command /home/david/Projects/flask/.tox/py/bin/python3 -c "import setuptools, tokenize;__file__='/home/david/Projects/flask/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" develop --no-deps:
/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py:336: UserWarning: Normalizing '0.13-dev' to '0.13.dev0'
normalized_version,
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/david/Projects/flask/setup.py", line 109, in <module>
'''
File "/usr/lib64/python3.6/distutils/core.py", line 134, in setup
ok = dist.parse_command_line()
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py", line 363, in parse_command_line
result = _Distribution.parse_command_line(self)
File "/usr/lib64/python3.6/distutils/dist.py", line 472, in parse_command_line
args = self._parse_command_opts(parser, args)
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py", line 674, in _parse_command_opts
nargs = _Distribution._parse_command_opts(self, parser, args)
File "/usr/lib64/python3.6/distutils/dist.py", line 528, in _parse_command_opts
cmd_class = self.get_command_class(command)
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py", line 495, in get_command_class
self.cmdclass[command] = cmdclass = ep.load()
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2303, in load
return self.resolve()
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/pkg_resources/__init__.py", line 2309, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/command/develop.py", line 11, in <module>
from setuptools.command.easy_install import easy_install
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/command/easy_install.py", line 49, in <module>
from setuptools.py27compat import rmtree_safe
File "/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/py27compat.py", line 7, in <module>
import six
ModuleNotFoundError: No module named 'six'
~~~
Example failed build log: https://travis-ci.org/pallets/flask/jobs/238166427#L242
</issue>
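The fix that landed (see the golden diff below) is a one-line switch to setuptools' vendored copy of `six`, so a fresh environment no longer needs the external package. Illustratively, for setuptools of this era:

```python
# Fails in a fresh virtualenv, because it resolves against site-packages:
# import six

# Works regardless, because it resolves against the copy setuptools
# bundles and re-exports through its `extern` shim:
from setuptools.extern import six

print(six.PY2, six.PY3)
```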
<code>
[start of setuptools/py27compat.py]
1 """
2 Compatibility Support for Python 2.7 and earlier
3 """
4
5 import platform
6
7 import six
8
9
10 def get_all_headers(message, key):
11 """
12 Given an HTTPMessage, return all headers matching a given key.
13 """
14 return message.get_all(key)
15
16
17 if six.PY2:
18 def get_all_headers(message, key):
19 return message.getheaders(key)
20
21
22 linux_py2_ascii = (
23 platform.system() == 'Linux' and
24 six.PY2
25 )
26
27 rmtree_safe = str if linux_py2_ascii else lambda x: x
28 """Workaround for http://bugs.python.org/issue24672"""
29
[end of setuptools/py27compat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setuptools/py27compat.py b/setuptools/py27compat.py
--- a/setuptools/py27compat.py
+++ b/setuptools/py27compat.py
@@ -4,7 +4,7 @@
import platform
-import six
+from setuptools.extern import six
def get_all_headers(message, key):
| {"golden_diff": "diff --git a/setuptools/py27compat.py b/setuptools/py27compat.py\n--- a/setuptools/py27compat.py\n+++ b/setuptools/py27compat.py\n@@ -4,7 +4,7 @@\n \n import platform\n \n-import six\n+from setuptools.extern import six\n \n \n def get_all_headers(message, key):\n", "issue": "No module named 'six'\nFlask's Tox/Travis builds all started failing. Looks like a new version of setuptools was just released that has a problem with six.\r\n\r\n~~~pytb\r\nObtaining file:///home/david/Projects/flask\r\nInstalling collected packages: Flask\r\n Running setup.py develop for Flask\r\n Complete output from command /home/david/Projects/flask/.tox/py/bin/python3 -c \"import setuptools, tokenize;__file__='/home/david/Projects/flask/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\\r\\n', '\\n');f.close();exec(compile(code, __file__, 'exec'))\" develop --no-deps:\r\n /home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py:336: UserWarning: Normalizing '0.13-dev' to '0.13.dev0'\r\n normalized_version,\r\n Traceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"/home/david/Projects/flask/setup.py\", line 109, in <module>\r\n '''\r\n File \"/usr/lib64/python3.6/distutils/core.py\", line 134, in setup\r\n ok = dist.parse_command_line()\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py\", line 363, in parse_command_line\r\n result = _Distribution.parse_command_line(self)\r\n File \"/usr/lib64/python3.6/distutils/dist.py\", line 472, in parse_command_line\r\n args = self._parse_command_opts(parser, args)\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py\", line 674, in _parse_command_opts\r\n nargs = _Distribution._parse_command_opts(self, parser, args)\r\n File \"/usr/lib64/python3.6/distutils/dist.py\", line 528, in _parse_command_opts\r\n cmd_class = self.get_command_class(command)\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/dist.py\", line 495, in get_command_class\r\n self.cmdclass[command] = cmdclass = ep.load()\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/pkg_resources/__init__.py\", line 2303, in load\r\n return self.resolve()\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/pkg_resources/__init__.py\", line 2309, in resolve\r\n module = __import__(self.module_name, fromlist=['__name__'], level=0)\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/command/develop.py\", line 11, in <module>\r\n from setuptools.command.easy_install import easy_install\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/command/easy_install.py\", line 49, in <module>\r\n from setuptools.py27compat import rmtree_safe\r\n File \"/home/david/Projects/flask/.tox/py/lib/python3.6/site-packages/setuptools/py27compat.py\", line 7, in <module>\r\n import six\r\n ModuleNotFoundError: No module named 'six'\r\n~~~\r\n\r\nExample failed build log: https://travis-ci.org/pallets/flask/jobs/238166427#L242\n", "before_files": [{"content": "\"\"\"\nCompatibility Support for Python 2.7 and earlier\n\"\"\"\n\nimport platform\n\nimport six\n\n\ndef get_all_headers(message, key):\n \"\"\"\n Given an HTTPMessage, return all headers matching a given key.\n \"\"\"\n return message.get_all(key)\n\n\nif six.PY2:\n def get_all_headers(message, key):\n return message.getheaders(key)\n\n\nlinux_py2_ascii = 
(\n platform.system() == 'Linux' and\n six.PY2\n)\n\nrmtree_safe = str if linux_py2_ascii else lambda x: x\n\"\"\"Workaround for http://bugs.python.org/issue24672\"\"\"\n", "path": "setuptools/py27compat.py"}]} | 1,531 | 74 |
gh_patches_debug_6810 | rasdani/github-patches | git_diff | Pylons__pyramid-2671 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
reify docstring doesn't render properly
The testsetup section is omitted from the rendered docs at http://pyramid.readthedocs.io/en/latest/api/decorator.html#pyramid.decorator.reify
Unfortunately this section is essential for understanding the example.
</issue>
<code>
[start of pyramid/decorator.py]
1 from functools import update_wrapper
2
3
4 class reify(object):
5 """ Use as a class method decorator. It operates almost exactly like the
6 Python ``@property`` decorator, but it puts the result of the method it
7 decorates into the instance dict after the first call, effectively
8 replacing the function it decorates with an instance variable. It is, in
9 Python parlance, a non-data descriptor. An example:
10
11 .. testsetup::
12
13 from pyramid.decorator import reify
14
15 class Foo(object):
16 @reify
17 def jammy(self):
18 print('jammy called')
19 return 1
20
21 And usage of Foo:
22
23 .. doctest::
24
25 >>> f = Foo()
26 >>> v = f.jammy
27 jammy called
28 >>> print(v)
29 1
30 >>> f.jammy
31 1
32 >>> # jammy func not called the second time; it replaced itself with 1
33 >>> # Note: reassignment is possible
34 >>> f.jammy = 2
35 >>> f.jammy
36 2
37 """
38 def __init__(self, wrapped):
39 self.wrapped = wrapped
40 update_wrapper(self, wrapped)
41
42 def __get__(self, inst, objtype=None):
43 if inst is None:
44 return self
45 val = self.wrapped(inst)
46 setattr(inst, self.wrapped.__name__, val)
47 return val
48
49
[end of pyramid/decorator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyramid/decorator.py b/pyramid/decorator.py
--- a/pyramid/decorator.py
+++ b/pyramid/decorator.py
@@ -8,6 +8,16 @@
replacing the function it decorates with an instance variable. It is, in
Python parlance, a non-data descriptor. An example:
+ .. code-block:: python
+
+ from pyramid.decorator import reify
+
+ class Foo(object):
+ @reify
+ def jammy(self):
+ print('jammy called')
+ return 1
+
.. testsetup::
from pyramid.decorator import reify
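As a side note on the mechanism the docstring documents: `f.jammy` prints only once, and `f.jammy = 2` sticks, because `reify` is a non-data descriptor. It defines `__get__` but no `__set__`, so the first access writes an ordinary instance attribute that shadows the descriptor from then on. A compact demo of that behaviour, restating the class from the record's code section:
```python
class reify:
    # Trimmed restatement of the class in pyramid/decorator.py above.
    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __get__(self, inst, objtype=None):
        if inst is None:
            return self
        val = self.wrapped(inst)
        # No __set__ is defined, so this instance attribute wins every
        # later lookup; the wrapped function never runs again.
        setattr(inst, self.wrapped.__name__, val)
        return val

class Foo:
    @reify
    def jammy(self):
        print('jammy called')
        return 1

f = Foo()
assert f.jammy == 1   # prints 'jammy called' exactly once
assert f.jammy == 1   # served from f.__dict__, no second print
f.jammy = 2           # plain assignment; reassignment is possible
assert f.jammy == 2
```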
| {"golden_diff": "diff --git a/pyramid/decorator.py b/pyramid/decorator.py\n--- a/pyramid/decorator.py\n+++ b/pyramid/decorator.py\n@@ -8,6 +8,16 @@\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n \n+ .. code-block:: python\n+\n+ from pyramid.decorator import reify\n+\n+ class Foo(object):\n+ @reify\n+ def jammy(self):\n+ print('jammy called')\n+ return 1\n+\n .. testsetup::\n \n from pyramid.decorator import reify\n", "issue": "reify docstring doesn't render properly\nThe testsetup section is omitted from the rendered docs at http://pyramid.readthedocs.io/en/latest/api/decorator.html#pyramid.decorator.reify\n\nUnfortunately this section is essential for understanding the example.\n\n", "before_files": [{"content": "from functools import update_wrapper\n\n\nclass reify(object):\n \"\"\" Use as a class method decorator. It operates almost exactly like the\n Python ``@property`` decorator, but it puts the result of the method it\n decorates into the instance dict after the first call, effectively\n replacing the function it decorates with an instance variable. It is, in\n Python parlance, a non-data descriptor. An example:\n\n .. testsetup::\n\n from pyramid.decorator import reify\n\n class Foo(object):\n @reify\n def jammy(self):\n print('jammy called')\n return 1\n\n And usage of Foo:\n\n .. doctest::\n\n >>> f = Foo()\n >>> v = f.jammy\n jammy called\n >>> print(v)\n 1\n >>> f.jammy\n 1\n >>> # jammy func not called the second time; it replaced itself with 1\n >>> # Note: reassignment is possible\n >>> f.jammy = 2\n >>> f.jammy\n 2\n \"\"\"\n def __init__(self, wrapped):\n self.wrapped = wrapped\n update_wrapper(self, wrapped)\n\n def __get__(self, inst, objtype=None):\n if inst is None:\n return self\n val = self.wrapped(inst)\n setattr(inst, self.wrapped.__name__, val)\n return val\n\n", "path": "pyramid/decorator.py"}]} | 995 | 150 |
gh_patches_debug_10906 | rasdani/github-patches | git_diff | plone__Products.CMFPlone-973 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Group administration: AttributeError: 'NoneType' object has no attribute 'getGroupTitleOrName'
/Plone2/@@usergroup-groupmembership?groupname=None
gives me
Here is the full error message:
Display traceback as text
Traceback (innermost last):
Module ZPublisher.Publish, line 138, in publish
Module ZPublisher.mapply, line 77, in mapply
Module ZPublisher.Publish, line 48, in call_object
Module Products.CMFPlone.controlpanel.browser.usergroups_groupmembership, line 69, in **call**
Module Products.CMFPlone.controlpanel.browser.usergroups_groupmembership, line 16, in update
AttributeError: 'NoneType' object has no attribute 'getGroupTitleOrName'
This happens when you click on "new group" and then on the "group members" tab.
</issue>
<code>
[start of Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py]
1 from Products.CMFPlone import PloneMessageFactory as _
2 from zExceptions import Forbidden
3 from Products.CMFCore.utils import getToolByName
4 from Products.CMFPlone.controlpanel.browser.usergroups import \
5 UsersGroupsControlPanelView
6 from Products.CMFPlone.utils import normalizeString
7
8
9 class GroupMembershipControlPanel(UsersGroupsControlPanelView):
10
11 def update(self):
12 self.groupname = getattr(self.request, 'groupname')
13 self.gtool = getToolByName(self, 'portal_groups')
14 self.mtool = getToolByName(self, 'portal_membership')
15 self.group = self.gtool.getGroupById(self.groupname)
16 self.grouptitle = self.group.getGroupTitleOrName() or self.groupname
17
18 self.request.set('grouproles', self.group.getRoles() if self.group else [])
19 self.canAddUsers = True
20 if 'Manager' in self.request.get('grouproles') and not self.is_zope_manager:
21 self.canAddUsers = False
22
23 self.groupquery = self.makeQuery(groupname=self.groupname)
24 self.groupkeyquery = self.makeQuery(key=self.groupname)
25
26 form = self.request.form
27 submitted = form.get('form.submitted', False)
28
29 self.searchResults = []
30 self.searchString = ''
31 self.newSearch = False
32
33 if submitted:
34 # add/delete before we search so we don't show stale results
35 toAdd = form.get('add', [])
36 if toAdd:
37 if not self.canAddUsers:
38 raise Forbidden
39
40 for u in toAdd:
41 self.gtool.addPrincipalToGroup(u, self.groupname, self.request)
42 self.context.plone_utils.addPortalMessage(_(u'Changes made.'))
43
44 toDelete = form.get('delete', [])
45 if toDelete:
46 for u in toDelete:
47 self.gtool.removePrincipalFromGroup(u, self.groupname, self.request)
48 self.context.plone_utils.addPortalMessage(_(u'Changes made.'))
49
50 search = form.get('form.button.Search', None) is not None
51 edit = form.get('form.button.Edit', None) is not None and toDelete
52 add = form.get('form.button.Add', None) is not None and toAdd
53 findAll = form.get('form.button.FindAll', None) is not None and \
54 not self.many_users
55 # The search string should be cleared when one of the
56 # non-search buttons has been clicked.
57 if findAll or edit or add:
58 form['searchstring'] = ''
59 self.searchString = form.get('searchstring', '')
60 if findAll or bool(self.searchString):
61 self.searchResults = self.getPotentialMembers(self.searchString)
62
63 if search or findAll:
64 self.newSearch = True
65
66 self.groupMembers = self.getMembers()
67
68 def __call__(self):
69 self.update()
70 return self.index()
71
72 def isGroup(self, itemName):
73 return self.gtool.isGroup(itemName)
74
75 def getMembers(self):
76 searchResults = self.gtool.getGroupMembers(self.groupname)
77
78 groupResults = [self.gtool.getGroupById(m) for m in searchResults]
79 groupResults.sort(key=lambda x: x is not None and normalizeString(x.getGroupTitleOrName()))
80
81 userResults = [self.mtool.getMemberById(m) for m in searchResults]
82 userResults.sort(key=lambda x: x is not None and x.getProperty('fullname') is not None and normalizeString(x.getProperty('fullname')) or '')
83
84 mergedResults = groupResults + userResults
85 return filter(None, mergedResults)
86
87 def getPotentialMembers(self, searchString):
88 ignoredUsersGroups = [x.id for x in self.getMembers() + [self.group,] if x is not None]
89 return self.membershipSearch(searchString, ignore=ignoredUsersGroups)
90
[end of Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py
--- a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py
+++ b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py
@@ -13,6 +13,9 @@
self.gtool = getToolByName(self, 'portal_groups')
self.mtool = getToolByName(self, 'portal_membership')
self.group = self.gtool.getGroupById(self.groupname)
+ if self.group is None:
+ return
+
self.grouptitle = self.group.getGroupTitleOrName() or self.groupname
self.request.set('grouproles', self.group.getRoles() if self.group else [])
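The added `if self.group is None: return` is a guard clause: `getGroupById` yields `None` for an unknown `groupname` (including the literal string `'None'` carried by the reported URL), so `update()` must bail out before dereferencing the group. A self-contained sketch of the same lookup-then-guard pattern, with a plain dict standing in for the `portal_groups` tool (names here are illustrative, not Plone APIs):
```python
# GROUPS stands in for the portal_groups tool, which returns None for
# unknown group ids.
GROUPS = {'editors': {'title': 'Editors'}}

def update(groupname):
    group = GROUPS.get(groupname)   # may be None, e.g. groupname='None'
    if group is None:
        return None                 # bail out before touching attributes
    return group['title'] or groupname

assert update('editors') == 'Editors'
assert update('None') is None       # previously: AttributeError on None
```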
| {"golden_diff": "diff --git a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py\n--- a/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py\n+++ b/Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py\n@@ -13,6 +13,9 @@\n self.gtool = getToolByName(self, 'portal_groups')\n self.mtool = getToolByName(self, 'portal_membership')\n self.group = self.gtool.getGroupById(self.groupname)\n+ if self.group is None:\n+ return\n+\n self.grouptitle = self.group.getGroupTitleOrName() or self.groupname\n \n self.request.set('grouproles', self.group.getRoles() if self.group else [])\n", "issue": "Group administration: AttributeError: 'NoneType' object has no attribute 'getGroupTitleOrName'\n/Plone2/@@usergroup-groupmembership?groupname=None\n\ngives me\n\nHere is the full error message:\n\nDisplay traceback as text\n\nTraceback (innermost last):\n\nModule ZPublisher.Publish, line 138, in publish\nModule ZPublisher.mapply, line 77, in mapply\nModule ZPublisher.Publish, line 48, in call_object\nModule Products.CMFPlone.controlpanel.browser.usergroups_groupmembership, line 69, in **call**\nModule Products.CMFPlone.controlpanel.browser.usergroups_groupmembership, line 16, in update\nAttributeError: 'NoneType' object has no attribute 'getGroupTitleOrName'\n\nThis happens when you click on \"new group\" and then on the \"group members\" tab.\n\n", "before_files": [{"content": "from Products.CMFPlone import PloneMessageFactory as _\nfrom zExceptions import Forbidden\nfrom Products.CMFCore.utils import getToolByName\nfrom Products.CMFPlone.controlpanel.browser.usergroups import \\\n UsersGroupsControlPanelView\nfrom Products.CMFPlone.utils import normalizeString\n\n\nclass GroupMembershipControlPanel(UsersGroupsControlPanelView):\n\n def update(self):\n self.groupname = getattr(self.request, 'groupname')\n self.gtool = getToolByName(self, 'portal_groups')\n self.mtool = getToolByName(self, 'portal_membership')\n self.group = self.gtool.getGroupById(self.groupname)\n self.grouptitle = self.group.getGroupTitleOrName() or self.groupname\n\n self.request.set('grouproles', self.group.getRoles() if self.group else [])\n self.canAddUsers = True\n if 'Manager' in self.request.get('grouproles') and not self.is_zope_manager:\n self.canAddUsers = False\n\n self.groupquery = self.makeQuery(groupname=self.groupname)\n self.groupkeyquery = self.makeQuery(key=self.groupname)\n\n form = self.request.form\n submitted = form.get('form.submitted', False)\n\n self.searchResults = []\n self.searchString = ''\n self.newSearch = False\n\n if submitted:\n # add/delete before we search so we don't show stale results\n toAdd = form.get('add', [])\n if toAdd:\n if not self.canAddUsers:\n raise Forbidden\n\n for u in toAdd:\n self.gtool.addPrincipalToGroup(u, self.groupname, self.request)\n self.context.plone_utils.addPortalMessage(_(u'Changes made.'))\n\n toDelete = form.get('delete', [])\n if toDelete:\n for u in toDelete:\n self.gtool.removePrincipalFromGroup(u, self.groupname, self.request)\n self.context.plone_utils.addPortalMessage(_(u'Changes made.'))\n\n search = form.get('form.button.Search', None) is not None\n edit = form.get('form.button.Edit', None) is not None and toDelete\n add = form.get('form.button.Add', None) is not None and toAdd\n findAll = form.get('form.button.FindAll', None) is not None and \\\n not self.many_users\n # The search string should be cleared when one of the\n # non-search buttons has 
been clicked.\n if findAll or edit or add:\n form['searchstring'] = ''\n self.searchString = form.get('searchstring', '')\n if findAll or bool(self.searchString):\n self.searchResults = self.getPotentialMembers(self.searchString)\n\n if search or findAll:\n self.newSearch = True\n\n self.groupMembers = self.getMembers()\n\n def __call__(self):\n self.update()\n return self.index()\n\n def isGroup(self, itemName):\n return self.gtool.isGroup(itemName)\n\n def getMembers(self):\n searchResults = self.gtool.getGroupMembers(self.groupname)\n\n groupResults = [self.gtool.getGroupById(m) for m in searchResults]\n groupResults.sort(key=lambda x: x is not None and normalizeString(x.getGroupTitleOrName()))\n\n userResults = [self.mtool.getMemberById(m) for m in searchResults]\n userResults.sort(key=lambda x: x is not None and x.getProperty('fullname') is not None and normalizeString(x.getProperty('fullname')) or '')\n\n mergedResults = groupResults + userResults\n return filter(None, mergedResults)\n\n def getPotentialMembers(self, searchString):\n ignoredUsersGroups = [x.id for x in self.getMembers() + [self.group,] if x is not None]\n return self.membershipSearch(searchString, ignore=ignoredUsersGroups)\n", "path": "Products/CMFPlone/controlpanel/browser/usergroups_groupmembership.py"}]} | 1,737 | 180 |
gh_patches_debug_16347 | rasdani/github-patches | git_diff | liqd__a4-meinberlin-506 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Organisations listed in filter on project overview unsorted
The list of organisations listed in filter on the project overview page is unsorted and determined by the order of creating the organisations. I think it would be best to sort them alphabetically.
</issue>
<code>
[start of apps/projects/views.py]
1 from datetime import datetime
2 import django_filters
3 from django.apps import apps
4 from django.conf import settings
5 from django.utils.translation import ugettext_lazy as _
6
7 from adhocracy4.filters import views as filter_views
8 from adhocracy4.filters.filters import DefaultsFilterSet
9 from adhocracy4.projects import models as project_models
10
11 from apps.contrib.widgets import DropdownLinkWidget
12 from apps.dashboard import blueprints
13
14
15 class OrderingWidget(DropdownLinkWidget):
16 label = _('Ordering')
17 right = True
18
19
20 class OrganisationWidget(DropdownLinkWidget):
21 label = _('Organisation')
22
23
24 class ArchivedWidget(DropdownLinkWidget):
25 label = _('Archived')
26
27 def __init__(self, attrs=None):
28 choices = (
29 ('', _('All')),
30 ('false', _('No')),
31 ('true', _('Yes')),
32 )
33 super().__init__(attrs, choices)
34
35
36 class YearWidget(DropdownLinkWidget):
37 label = _('Year')
38
39 def __init__(self, attrs=None):
40 choices = (('', _('Any')),)
41 now = datetime.now().year
42 try:
43 first_year = project_models.Project.objects.earliest('created').\
44 created.year
45 except project_models.Project.DoesNotExist:
46 first_year = now
47 for year in range(now, first_year - 1, -1):
48 choices += (year, year),
49 super().__init__(attrs, choices)
50
51
52 class TypeWidget(DropdownLinkWidget):
53 label = _('Project Type')
54
55 def __init__(self, attrs=None):
56 choices = (('', _('Any')),)
57 for blueprint_key, blueprint in blueprints.blueprints:
58 choices += (blueprint_key, blueprint.title),
59 super().__init__(attrs, choices)
60
61
62 class ProjectFilterSet(DefaultsFilterSet):
63
64 defaults = {
65 'is_archived': 'false'
66 }
67
68 ordering = django_filters.OrderingFilter(
69 choices=(
70 ('-created', _('Most recent')),
71 ),
72 empty_label=None,
73 widget=OrderingWidget,
74 )
75
76 organisation = django_filters.ModelChoiceFilter(
77 queryset=apps.get_model(settings.A4_ORGANISATIONS_MODEL).objects.all(),
78 widget=OrganisationWidget,
79 )
80
81 is_archived = django_filters.BooleanFilter(
82 widget=ArchivedWidget
83 )
84
85 created = django_filters.NumberFilter(
86 name='created',
87 lookup_expr='year',
88 widget=YearWidget,
89 )
90
91 typ = django_filters.CharFilter(
92 widget=TypeWidget,
93 )
94
95 class Meta:
96 model = project_models.Project
97 fields = ['organisation', 'is_archived', 'created', 'typ']
98
99
100 class ProjectListView(filter_views.FilteredListView):
101 model = project_models.Project
102 paginate_by = 16
103 filter_set = ProjectFilterSet
104
105 def get_queryset(self):
106 return super().get_queryset().filter(is_draft=False)
107
[end of apps/projects/views.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/projects/views.py b/apps/projects/views.py
--- a/apps/projects/views.py
+++ b/apps/projects/views.py
@@ -54,7 +54,8 @@
def __init__(self, attrs=None):
choices = (('', _('Any')),)
- for blueprint_key, blueprint in blueprints.blueprints:
+ sorted_blueprints = sorted(blueprints.blueprints, key=lambda a: a[1])
+ for blueprint_key, blueprint in sorted_blueprints:
choices += (blueprint_key, blueprint.title),
super().__init__(attrs, choices)
@@ -74,7 +75,8 @@
)
organisation = django_filters.ModelChoiceFilter(
- queryset=apps.get_model(settings.A4_ORGANISATIONS_MODEL).objects.all(),
+ queryset=apps.get_model(settings.A4_ORGANISATIONS_MODEL).objects
+ .order_by('name'),
widget=OrganisationWidget,
)
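The two hunks apply the same idea at different layers: the blueprint tuples are sorted in Python, while the organisation ordering is pushed into SQL via `order_by('name')` so the widget receives rows already alphabetized. A small illustration of the in-memory half; plain strings stand in for the blueprint objects, which this fix assumes compare by title:
```python
# (key, blueprint) pairs sorted by the second tuple element, mirroring
# sorted(blueprints.blueprints, key=lambda a: a[1]) in the patch.
blueprints = [('b', 'Poll'), ('a', 'Idea Challenge')]
assert sorted(blueprints, key=lambda a: a[1]) == [
    ('a', 'Idea Challenge'), ('b', 'Poll')]

# The queryset half translates to "... ORDER BY name" in SQL; sketch only,
# assuming a Django model with a 'name' field:
# queryset = Organisation.objects.order_by('name')
```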
| {"golden_diff": "diff --git a/apps/projects/views.py b/apps/projects/views.py\n--- a/apps/projects/views.py\n+++ b/apps/projects/views.py\n@@ -54,7 +54,8 @@\n \n def __init__(self, attrs=None):\n choices = (('', _('Any')),)\n- for blueprint_key, blueprint in blueprints.blueprints:\n+ sorted_blueprints = sorted(blueprints.blueprints, key=lambda a: a[1])\n+ for blueprint_key, blueprint in sorted_blueprints:\n choices += (blueprint_key, blueprint.title),\n super().__init__(attrs, choices)\n \n@@ -74,7 +75,8 @@\n )\n \n organisation = django_filters.ModelChoiceFilter(\n- queryset=apps.get_model(settings.A4_ORGANISATIONS_MODEL).objects.all(),\n+ queryset=apps.get_model(settings.A4_ORGANISATIONS_MODEL).objects\n+ .order_by('name'),\n widget=OrganisationWidget,\n )\n", "issue": "Organisations listed in filter on project overview unsorted\nThe list of organisations listed in filter on the project overview page is unsorted and determined by the order of creating the organisations. I think it would be best to sort them alphabetically.\n", "before_files": [{"content": "from datetime import datetime\nimport django_filters\nfrom django.apps import apps\nfrom django.conf import settings\nfrom django.utils.translation import ugettext_lazy as _\n\nfrom adhocracy4.filters import views as filter_views\nfrom adhocracy4.filters.filters import DefaultsFilterSet\nfrom adhocracy4.projects import models as project_models\n\nfrom apps.contrib.widgets import DropdownLinkWidget\nfrom apps.dashboard import blueprints\n\n\nclass OrderingWidget(DropdownLinkWidget):\n label = _('Ordering')\n right = True\n\n\nclass OrganisationWidget(DropdownLinkWidget):\n label = _('Organisation')\n\n\nclass ArchivedWidget(DropdownLinkWidget):\n label = _('Archived')\n\n def __init__(self, attrs=None):\n choices = (\n ('', _('All')),\n ('false', _('No')),\n ('true', _('Yes')),\n )\n super().__init__(attrs, choices)\n\n\nclass YearWidget(DropdownLinkWidget):\n label = _('Year')\n\n def __init__(self, attrs=None):\n choices = (('', _('Any')),)\n now = datetime.now().year\n try:\n first_year = project_models.Project.objects.earliest('created').\\\n created.year\n except project_models.Project.DoesNotExist:\n first_year = now\n for year in range(now, first_year - 1, -1):\n choices += (year, year),\n super().__init__(attrs, choices)\n\n\nclass TypeWidget(DropdownLinkWidget):\n label = _('Project Type')\n\n def __init__(self, attrs=None):\n choices = (('', _('Any')),)\n for blueprint_key, blueprint in blueprints.blueprints:\n choices += (blueprint_key, blueprint.title),\n super().__init__(attrs, choices)\n\n\nclass ProjectFilterSet(DefaultsFilterSet):\n\n defaults = {\n 'is_archived': 'false'\n }\n\n ordering = django_filters.OrderingFilter(\n choices=(\n ('-created', _('Most recent')),\n ),\n empty_label=None,\n widget=OrderingWidget,\n )\n\n organisation = django_filters.ModelChoiceFilter(\n queryset=apps.get_model(settings.A4_ORGANISATIONS_MODEL).objects.all(),\n widget=OrganisationWidget,\n )\n\n is_archived = django_filters.BooleanFilter(\n widget=ArchivedWidget\n )\n\n created = django_filters.NumberFilter(\n name='created',\n lookup_expr='year',\n widget=YearWidget,\n )\n\n typ = django_filters.CharFilter(\n widget=TypeWidget,\n )\n\n class Meta:\n model = project_models.Project\n fields = ['organisation', 'is_archived', 'created', 'typ']\n\n\nclass ProjectListView(filter_views.FilteredListView):\n model = project_models.Project\n paginate_by = 16\n filter_set = ProjectFilterSet\n\n def get_queryset(self):\n return 
super().get_queryset().filter(is_draft=False)\n", "path": "apps/projects/views.py"}]} | 1,412 | 206 |
gh_patches_debug_13266 | rasdani/github-patches | git_diff | liqd__a4-opin-1158 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dashboard styling: spacing when entering the first information of project
We also need more space between texts, boxes and bottons on the page where I add the first information for a project.

</issue>
<code>
[start of euth/projects/forms.py]
1 from django import forms
2
3 from adhocracy4.projects.models import Project
4 from euth.users.fields import UserSearchField
5
6
7 class AddModeratorForm(forms.ModelForm):
8 user = UserSearchField(required=False, identifier='moderators',)
9
10 class Meta:
11 model = Project
12 fields = ('user',)
13
[end of euth/projects/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/euth/projects/forms.py b/euth/projects/forms.py
--- a/euth/projects/forms.py
+++ b/euth/projects/forms.py
@@ -1,11 +1,17 @@
from django import forms
+from django.utils.translation import ugettext_lazy as _
from adhocracy4.projects.models import Project
from euth.users.fields import UserSearchField
class AddModeratorForm(forms.ModelForm):
- user = UserSearchField(required=False, identifier='moderators',)
+ user = UserSearchField(required=False,
+ identifier='moderators',
+ help_text=_('Type in the username '
+ 'of a user you would '
+ 'like to add as moderator.'),
+ label=_('Search for username'))
class Meta:
model = Project
| {"golden_diff": "diff --git a/euth/projects/forms.py b/euth/projects/forms.py\n--- a/euth/projects/forms.py\n+++ b/euth/projects/forms.py\n@@ -1,11 +1,17 @@\n from django import forms\n+from django.utils.translation import ugettext_lazy as _\n \n from adhocracy4.projects.models import Project\n from euth.users.fields import UserSearchField\n \n \n class AddModeratorForm(forms.ModelForm):\n- user = UserSearchField(required=False, identifier='moderators',)\n+ user = UserSearchField(required=False,\n+ identifier='moderators',\n+ help_text=_('Type in the username '\n+ 'of a user you would '\n+ 'like to add as moderator.'),\n+ label=_('Search for username'))\n \n class Meta:\n model = Project\n", "issue": "dashboard styling: spacing when entering the first information of project\nWe also need more space between texts, boxes and bottons on the page where I add the first information for a project.\r\n\r\n\r\n\n", "before_files": [{"content": "from django import forms\n\nfrom adhocracy4.projects.models import Project\nfrom euth.users.fields import UserSearchField\n\n\nclass AddModeratorForm(forms.ModelForm):\n user = UserSearchField(required=False, identifier='moderators',)\n\n class Meta:\n model = Project\n fields = ('user',)\n", "path": "euth/projects/forms.py"}]} | 743 | 170 |
gh_patches_debug_2517 | rasdani/github-patches | git_diff | encode__uvicorn-436 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
--limit-max-requests not working
Hi! I'm trying to figure out why my workers are not restarting as expected when using the `--limit-max-requests` flag. I ran `uvicorn` in debug mode and noticed that the `self.server_state.total_requests` count is not increasing (stays at 0 after each request) so `self.server_state.total_requests >= self.config.limit_max_requests` never returns `True`.
When looking into where the `total_requests` was used, I noticed that the `protocols.http.[auto/h11/httptools]` were never getting called. I tried forcing the `--http h11` and `--http httptools` parameters, without any change in behavior. Any help would be appreciated!
</issue>
<code>
[start of uvicorn/workers.py]
1 import asyncio
2
3 from gunicorn.workers.base import Worker
4
5 from uvicorn.config import Config
6 from uvicorn.main import Server
7
8
9 class UvicornWorker(Worker):
10 """
11 A worker class for Gunicorn that interfaces with an ASGI consumer callable,
12 rather than a WSGI callable.
13 """
14
15 CONFIG_KWARGS = {"loop": "uvloop", "http": "httptools"}
16
17 def __init__(self, *args, **kwargs):
18 super(UvicornWorker, self).__init__(*args, **kwargs)
19
20 self.log.level = self.log.loglevel
21
22 config_kwargs = {
23 "app": None,
24 "logger": self.log,
25 "timeout_keep_alive": self.cfg.keepalive,
26 "timeout_notify": self.timeout,
27 "callback_notify": self.callback_notify,
28 }
29
30 if self.cfg.is_ssl:
31 ssl_kwargs = {
32 "ssl_keyfile": self.cfg.ssl_options.get("keyfile"),
33 "ssl_certfile": self.cfg.ssl_options.get("certfile"),
34 "ssl_version": self.cfg.ssl_options.get("ssl_version"),
35 "ssl_cert_reqs": self.cfg.ssl_options.get("cert_reqs"),
36 "ssl_ca_certs": self.cfg.ssl_options.get("ca_certs"),
37 "ssl_ciphers": self.cfg.ssl_options.get("ciphers"),
38 }
39 config_kwargs.update(ssl_kwargs)
40
41 config_kwargs.update(self.CONFIG_KWARGS)
42
43 self.config = Config(**config_kwargs)
44
45 def init_process(self):
46 self.config.setup_event_loop()
47 super(UvicornWorker, self).init_process()
48
49 def init_signals(self):
50 pass
51
52 def run(self):
53 self.config.app = self.wsgi
54 server = Server(config=self.config)
55 loop = asyncio.get_event_loop()
56 loop.run_until_complete(
57 server.serve(sockets=self.sockets, shutdown_servers=False)
58 )
59
60 async def callback_notify(self):
61 self.notify()
62
63
64 class UvicornH11Worker(UvicornWorker):
65 CONFIG_KWARGS = {"loop": "asyncio", "http": "h11"}
66
[end of uvicorn/workers.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/uvicorn/workers.py b/uvicorn/workers.py
--- a/uvicorn/workers.py
+++ b/uvicorn/workers.py
@@ -25,6 +25,7 @@
"timeout_keep_alive": self.cfg.keepalive,
"timeout_notify": self.timeout,
"callback_notify": self.callback_notify,
+ "limit_max_requests": self.max_requests,
}
if self.cfg.is_ssl:
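Gunicorn's base `Worker` already parses `--max-requests` into `self.max_requests`; the one-line fix simply forwards that value into uvicorn's `Config` as `limit_max_requests`, so the `total_requests >= limit_max_requests` comparison quoted in the issue can ever fire. A rough, simplified illustration of such a restart policy; class and attribute names mirror the issue's description, but this is not uvicorn's actual code:
```python
class ServerState:
    def __init__(self):
        self.total_requests = 0

class Server:
    def __init__(self, limit_max_requests=None):
        self.server_state = ServerState()
        self.limit_max_requests = limit_max_requests
        self.should_exit = False

    def handle_request(self):
        # The counter must actually be incremented per request, otherwise
        # the limit check below is dead code (the bug's symptom).
        self.server_state.total_requests += 1
        if (self.limit_max_requests is not None and
                self.server_state.total_requests >= self.limit_max_requests):
            self.should_exit = True  # worker exits; gunicorn respawns it

s = Server(limit_max_requests=2)
s.handle_request()
s.handle_request()
assert s.should_exit
```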
| {"golden_diff": "diff --git a/uvicorn/workers.py b/uvicorn/workers.py\n--- a/uvicorn/workers.py\n+++ b/uvicorn/workers.py\n@@ -25,6 +25,7 @@\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n+ \"limit_max_requests\": self.max_requests,\n }\n \n if self.cfg.is_ssl:\n", "issue": "--limit-max-requests not working\nHi! I'm trying to figure out why my workers are not restarting as expected when using the `--limit-max-requests` flag. I ran `uvicorn` in debug mode and noticed that the `self.server_state.total_requests` count is not increasing (stays at 0 after each request) so `self.server_state.total_requests >= self.config.limit_max_requests` never returns `True`. \r\nWhen looking into where the `total_requests` was used, I noticed that the `protocols.http.[auto/h11/httptools]` were never getting called. I tried forcing the `--http h11` and `--http httptools` parameters, without any change in behavior. Any help would be appreciated!\n", "before_files": [{"content": "import asyncio\n\nfrom gunicorn.workers.base import Worker\n\nfrom uvicorn.config import Config\nfrom uvicorn.main import Server\n\n\nclass UvicornWorker(Worker):\n \"\"\"\n A worker class for Gunicorn that interfaces with an ASGI consumer callable,\n rather than a WSGI callable.\n \"\"\"\n\n CONFIG_KWARGS = {\"loop\": \"uvloop\", \"http\": \"httptools\"}\n\n def __init__(self, *args, **kwargs):\n super(UvicornWorker, self).__init__(*args, **kwargs)\n\n self.log.level = self.log.loglevel\n\n config_kwargs = {\n \"app\": None,\n \"logger\": self.log,\n \"timeout_keep_alive\": self.cfg.keepalive,\n \"timeout_notify\": self.timeout,\n \"callback_notify\": self.callback_notify,\n }\n\n if self.cfg.is_ssl:\n ssl_kwargs = {\n \"ssl_keyfile\": self.cfg.ssl_options.get(\"keyfile\"),\n \"ssl_certfile\": self.cfg.ssl_options.get(\"certfile\"),\n \"ssl_version\": self.cfg.ssl_options.get(\"ssl_version\"),\n \"ssl_cert_reqs\": self.cfg.ssl_options.get(\"cert_reqs\"),\n \"ssl_ca_certs\": self.cfg.ssl_options.get(\"ca_certs\"),\n \"ssl_ciphers\": self.cfg.ssl_options.get(\"ciphers\"),\n }\n config_kwargs.update(ssl_kwargs)\n\n config_kwargs.update(self.CONFIG_KWARGS)\n\n self.config = Config(**config_kwargs)\n\n def init_process(self):\n self.config.setup_event_loop()\n super(UvicornWorker, self).init_process()\n\n def init_signals(self):\n pass\n\n def run(self):\n self.config.app = self.wsgi\n server = Server(config=self.config)\n loop = asyncio.get_event_loop()\n loop.run_until_complete(\n server.serve(sockets=self.sockets, shutdown_servers=False)\n )\n\n async def callback_notify(self):\n self.notify()\n\n\nclass UvicornH11Worker(UvicornWorker):\n CONFIG_KWARGS = {\"loop\": \"asyncio\", \"http\": \"h11\"}\n", "path": "uvicorn/workers.py"}]} | 1,270 | 98 |
gh_patches_debug_9521 | rasdani/github-patches | git_diff | mkdocs__mkdocs-272 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Show installed version in command-line
I'd expect `mkdocs help` to display the currently installed version, would be nice to have for the next versions
</issue>
<code>
[start of mkdocs/main.py]
1 #!/usr/bin/env python
2 # coding: utf-8
3 from __future__ import print_function
4
5 import sys
6
7 from mkdocs.build import build
8 from mkdocs.config import load_config
9 from mkdocs.exceptions import ConfigurationError
10 from mkdocs.gh_deploy import gh_deploy
11 from mkdocs.new import new
12 from mkdocs.serve import serve
13
14
15 def arg_to_option(arg):
16 """
17 Convert command line arguments into two-tuples of config key/value pairs.
18 """
19 arg = arg.lstrip('--')
20 option = True
21 if '=' in arg:
22 arg, option = arg.split('=', 1)
23 return (arg.replace('-', '_'), option)
24
25
26 def main(cmd, args, options=None):
27 """
28 Build the documentation, and optionally start the devserver.
29 """
30 clean_site_dir = 'clean' in options
31 if cmd == 'serve':
32 config = load_config(options=options)
33 serve(config, options=options)
34 elif cmd == 'build':
35 config = load_config(options=options)
36 build(config, clean_site_dir=clean_site_dir)
37 elif cmd == 'json':
38 config = load_config(options=options)
39 build(config, dump_json=True, clean_site_dir=clean_site_dir)
40 elif cmd == 'gh-deploy':
41 config = load_config(options=options)
42 build(config, clean_site_dir=clean_site_dir)
43 gh_deploy(config)
44 elif cmd == 'new':
45 new(args, options)
46 else:
47 print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')
48
49
50 def run_main():
51 """
52 Invokes main() with the contents of sys.argv
53
54 This is a separate function so it can be invoked
55 by a setuptools console_script.
56 """
57 cmd = sys.argv[1] if len(sys.argv) >= 2 else None
58 opts = [arg_to_option(arg) for arg in sys.argv[2:] if arg.startswith('--')]
59 try:
60 main(cmd, args=sys.argv[2:], options=dict(opts))
61 except ConfigurationError as e:
62 print(e.args[0], file=sys.stderr)
63
64
65 if __name__ == '__main__':
66 run_main()
67
[end of mkdocs/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mkdocs/main.py b/mkdocs/main.py
--- a/mkdocs/main.py
+++ b/mkdocs/main.py
@@ -4,6 +4,7 @@
import sys
+from mkdocs import __version__
from mkdocs.build import build
from mkdocs.config import load_config
from mkdocs.exceptions import ConfigurationError
@@ -44,6 +45,7 @@
elif cmd == 'new':
new(args, options)
else:
+ print('MkDocs (version {0})'.format(__version__))
print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')
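The patch follows the usual single-source-of-truth convention for version strings: define `__version__` once at the package root and have the CLI import it. A minimal sketch with a hypothetical package name:
```python
# In a real layout, __version__ lives in mypkg/__init__.py and the CLI
# does `from mypkg import __version__`; collapsed here into one file.
__version__ = '0.1.0'

def print_help():
    print('MyPkg (version {0})'.format(__version__))
    print('mypkg [help|build|serve] {options}')

print_help()
```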
| {"golden_diff": "diff --git a/mkdocs/main.py b/mkdocs/main.py\n--- a/mkdocs/main.py\n+++ b/mkdocs/main.py\n@@ -4,6 +4,7 @@\n \n import sys\n \n+from mkdocs import __version__\n from mkdocs.build import build\n from mkdocs.config import load_config\n from mkdocs.exceptions import ConfigurationError\n@@ -44,6 +45,7 @@\n elif cmd == 'new':\n new(args, options)\n else:\n+ print('MkDocs (version {0})'.format(__version__))\n print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')\n", "issue": "Show installed version in command-line\nI'd expect `mkdocs help` to display the currently installed version, would be nice to have for the next versions\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# coding: utf-8\nfrom __future__ import print_function\n\nimport sys\n\nfrom mkdocs.build import build\nfrom mkdocs.config import load_config\nfrom mkdocs.exceptions import ConfigurationError\nfrom mkdocs.gh_deploy import gh_deploy\nfrom mkdocs.new import new\nfrom mkdocs.serve import serve\n\n\ndef arg_to_option(arg):\n \"\"\"\n Convert command line arguments into two-tuples of config key/value pairs.\n \"\"\"\n arg = arg.lstrip('--')\n option = True\n if '=' in arg:\n arg, option = arg.split('=', 1)\n return (arg.replace('-', '_'), option)\n\n\ndef main(cmd, args, options=None):\n \"\"\"\n Build the documentation, and optionally start the devserver.\n \"\"\"\n clean_site_dir = 'clean' in options\n if cmd == 'serve':\n config = load_config(options=options)\n serve(config, options=options)\n elif cmd == 'build':\n config = load_config(options=options)\n build(config, clean_site_dir=clean_site_dir)\n elif cmd == 'json':\n config = load_config(options=options)\n build(config, dump_json=True, clean_site_dir=clean_site_dir)\n elif cmd == 'gh-deploy':\n config = load_config(options=options)\n build(config, clean_site_dir=clean_site_dir)\n gh_deploy(config)\n elif cmd == 'new':\n new(args, options)\n else:\n print('mkdocs [help|new|build|serve|gh-deploy|json] {options}')\n\n\ndef run_main():\n \"\"\"\n Invokes main() with the contents of sys.argv\n\n This is a separate function so it can be invoked\n by a setuptools console_script.\n \"\"\"\n cmd = sys.argv[1] if len(sys.argv) >= 2 else None\n opts = [arg_to_option(arg) for arg in sys.argv[2:] if arg.startswith('--')]\n try:\n main(cmd, args=sys.argv[2:], options=dict(opts))\n except ConfigurationError as e:\n print(e.args[0], file=sys.stderr)\n\n\nif __name__ == '__main__':\n run_main()\n", "path": "mkdocs/main.py"}]} | 1,157 | 146 |
gh_patches_debug_30007 | rasdani/github-patches | git_diff | vispy__vispy-335 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding `elapsed` property to Timer event?
`event.elapsed` would be a shortcut to ~~`event.dt * event.iteration`~~. Actually it's a bit more complicated because `event.dt` is not constant, so it should rather be the sum of all `event.dt`s.
</issue>
<code>
[start of vispy/app/timer.py]
1 # -*- coding: utf-8 -*-
2 # Copyright (c) 2014, Vispy Development Team.
3 # Distributed under the (new) BSD License. See LICENSE.txt for more info.
4
5 from __future__ import division
6
7 from ..util.event import Event, EmitterGroup
8 from ..util.ptime import time as precision_time
9 from ..ext.six import string_types
10 from .base import BaseTimerBackend as TimerBackend # noqa
11 from . import use_app, Application
12
13
14 class Timer(object):
15
16 """Timer used to schedule events in the future or on a repeating schedule
17
18 Parameters
19 ----------
20 interval : float
21 Time between events.
22 connect : function | None
23 The function to call.
24 iterations : int
25 Number of iterations. Can be -1 for infinite.
26 start : bool
27 Whether to start the timer.
28 app : instance of vispy.app.Application
29 The application to attach the timer to.
30 """
31
32 def __init__(self, interval=0.0, connect=None, iterations=-1, start=False,
33 app=None):
34 self.events = EmitterGroup(source=self,
35 start=Event,
36 stop=Event,
37 timeout=Event)
38 #self.connect = self.events.timeout.connect
39 #self.disconnect = self.events.timeout.disconnect
40
41 # Get app instance
42 if app is None:
43 self._app = use_app()
44 elif isinstance(app, Application):
45 self._app = app
46 elif isinstance(app, string_types):
47 self._app = Application(app)
48 else:
49 raise ValueError('Invalid value for app %r' % app)
50
51 # Ensure app has backend app object
52 self._app.native
53
54 # Instantiate the backed with the right class
55 self._backend = self._app.backend_module.TimerBackend(self)
56
57 self._interval = interval
58 self._running = False
59 self._last_emit_time = None
60 self.iter_count = 0
61 self.max_iterations = iterations
62 if connect is not None:
63 self.connect(connect)
64 if start:
65 self.start()
66
67 @property
68 def app(self):
69 """ The vispy Application instance on which this Timer is based.
70 """
71 return self._app
72
73 @property
74 def interval(self):
75 return self._interval
76
77 @interval.setter
78 def interval(self, val):
79 self._interval = val
80 if self.running:
81 self.stop()
82 self.start()
83
84 @property
85 def running(self):
86 return self._running
87
88 def start(self, interval=None, iterations=None):
89 """Start the timer.
90
91 A timeout event will be generated every *interval* seconds.
92 If *interval* is None, then self.interval will be used.
93
94 If *iterations* is specified, the timer will stop after
95 emitting that number of events. If unspecified, then
96 the previous value of self.iterations will be used. If the value is
97 negative, then the timer will continue running until stop() is called.
98 """
99 self.iter_count = 0
100 if interval is not None:
101 self.interval = interval
102 if iterations is not None:
103 self.max_iterations = iterations
104 self._backend._vispy_start(self.interval)
105 self._running = True
106 self._last_emit_time = precision_time()
107 self.events.start(type='timer_start')
108
109 def stop(self):
110 """Stop the timer."""
111 self._backend._vispy_stop()
112 self._running = False
113 self.events.stop(type='timer_stop')
114
115 # use timer.app.run() and .quit() instead.
116 # def run_event_loop(self):
117 #"""Execute the event loop for this Timer's backend.
118 #"""
119 # return self._backend._vispy_run()
120
121 # def quit_event_loop(self):
122 #"""Exit the event loop for this Timer's backend.
123 #"""
124 # return self._backend._vispy_quit()
125
126 @property
127 def native(self):
128 """ The native timer on which this Timer is based.
129 """
130 return self._backend._vispy_get_native_timer()
131
132 def _timeout(self, *args):
133 # called when the backend timer has triggered.
134 if not self.running:
135 return
136 if self.max_iterations >= 0 and self.iter_count >= self.max_iterations:
137 self.stop()
138 return
139
140 # compute dt since last event
141 now = precision_time()
142 dt = now - self._last_emit_time
143 self._last_emit_time = now
144
145 self.events.timeout(
146 type='timer_timeout',
147 iteration=self.iter_count,
148 dt=dt)
149 self.iter_count += 1
150
151 def connect(self, callback):
152 """ Alias for self.events.timeout.connect() """
153 return self.events.timeout.connect(callback)
154
155 def disconnect(self, callback=None):
156 """ Alias for self.events.timeout.disconnect() """
157 return self.events.timeout.disconnect(callback)
158
[end of vispy/app/timer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/vispy/app/timer.py b/vispy/app/timer.py
--- a/vispy/app/timer.py
+++ b/vispy/app/timer.py
@@ -56,6 +56,7 @@
self._interval = interval
self._running = False
+ self._first_emit_time = None
self._last_emit_time = None
self.iter_count = 0
self.max_iterations = iterations
@@ -81,6 +82,10 @@
self.stop()
self.start()
+ @property
+ def elapsed(self):
+ return precision_time() - self._first_emit_time
+
@property
def running(self):
return self._running
@@ -103,6 +108,7 @@
self.max_iterations = iterations
self._backend._vispy_start(self.interval)
self._running = True
+ self._first_emit_time = precision_time()
self._last_emit_time = precision_time()
self.events.start(type='timer_start')
@@ -140,11 +146,13 @@
# compute dt since last event
now = precision_time()
dt = now - self._last_emit_time
+ elapsed = now - self._first_emit_time
self._last_emit_time = now
self.events.timeout(
type='timer_timeout',
iteration=self.iter_count,
+ elapsed=elapsed,
dt=dt)
self.iter_count += 1
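The issue's reasoning (elapsed should be the sum of all `dt`s, not `dt * iteration`) and the patch's implementation agree because the per-event deltas telescope: summing `now_i - now_{i-1}` collapses to `now_n - first_emit_time`, which is exactly what the new `_first_emit_time` bookkeeping computes. A tiny demonstration of that equivalence:
```python
import time

first = time.perf_counter()
last = first
dts = []
for _ in range(3):
    time.sleep(0.01)
    now = time.perf_counter()
    dts.append(now - last)   # per-event dt, as carried on each timeout event
    last = now

# The sum of dts telescopes to now - first_emit_time, matching
# elapsed = precision_time() - self._first_emit_time in the diff.
assert abs(sum(dts) - (last - first)) < 1e-9
```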
| {"golden_diff": "diff --git a/vispy/app/timer.py b/vispy/app/timer.py\n--- a/vispy/app/timer.py\n+++ b/vispy/app/timer.py\n@@ -56,6 +56,7 @@\n \n self._interval = interval\n self._running = False\n+ self._first_emit_time = None\n self._last_emit_time = None\n self.iter_count = 0\n self.max_iterations = iterations\n@@ -81,6 +82,10 @@\n self.stop()\n self.start()\n \n+ @property\n+ def elapsed(self):\n+ return precision_time() - self._first_emit_time\n+\n @property\n def running(self):\n return self._running\n@@ -103,6 +108,7 @@\n self.max_iterations = iterations\n self._backend._vispy_start(self.interval)\n self._running = True\n+ self._first_emit_time = precision_time()\n self._last_emit_time = precision_time()\n self.events.start(type='timer_start')\n \n@@ -140,11 +146,13 @@\n # compute dt since last event\n now = precision_time()\n dt = now - self._last_emit_time\n+ elapsed = now - self._first_emit_time\n self._last_emit_time = now\n \n self.events.timeout(\n type='timer_timeout',\n iteration=self.iter_count,\n+ elapsed=elapsed,\n dt=dt)\n self.iter_count += 1\n", "issue": "Adding `elapsed` property to Timer event?\n`event.elapsed` would be a shortcut to ~~`event.dt * event.iteration`~~. Actually it's a bit more complicated because `event.dt` is not constant, so it should rather be the sum of all `event.dt`s.\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n# Copyright (c) 2014, Vispy Development Team.\n# Distributed under the (new) BSD License. See LICENSE.txt for more info.\n\nfrom __future__ import division\n\nfrom ..util.event import Event, EmitterGroup\nfrom ..util.ptime import time as precision_time\nfrom ..ext.six import string_types\nfrom .base import BaseTimerBackend as TimerBackend # noqa\nfrom . import use_app, Application\n\n\nclass Timer(object):\n\n \"\"\"Timer used to schedule events in the future or on a repeating schedule\n\n Parameters\n ----------\n interval : float\n Time between events.\n connect : function | None\n The function to call.\n iterations : int\n Number of iterations. 
Can be -1 for infinite.\n start : bool\n Whether to start the timer.\n app : instance of vispy.app.Application\n The application to attach the timer to.\n \"\"\"\n\n def __init__(self, interval=0.0, connect=None, iterations=-1, start=False,\n app=None):\n self.events = EmitterGroup(source=self,\n start=Event,\n stop=Event,\n timeout=Event)\n #self.connect = self.events.timeout.connect\n #self.disconnect = self.events.timeout.disconnect\n\n # Get app instance\n if app is None:\n self._app = use_app()\n elif isinstance(app, Application):\n self._app = app\n elif isinstance(app, string_types):\n self._app = Application(app)\n else:\n raise ValueError('Invalid value for app %r' % app)\n \n # Ensure app has backend app object\n self._app.native\n \n # Instantiate the backed with the right class\n self._backend = self._app.backend_module.TimerBackend(self)\n\n self._interval = interval\n self._running = False\n self._last_emit_time = None\n self.iter_count = 0\n self.max_iterations = iterations\n if connect is not None:\n self.connect(connect)\n if start:\n self.start()\n\n @property\n def app(self):\n \"\"\" The vispy Application instance on which this Timer is based.\n \"\"\"\n return self._app\n\n @property\n def interval(self):\n return self._interval\n\n @interval.setter\n def interval(self, val):\n self._interval = val\n if self.running:\n self.stop()\n self.start()\n\n @property\n def running(self):\n return self._running\n\n def start(self, interval=None, iterations=None):\n \"\"\"Start the timer.\n\n A timeout event will be generated every *interval* seconds.\n If *interval* is None, then self.interval will be used.\n\n If *iterations* is specified, the timer will stop after\n emitting that number of events. If unspecified, then\n the previous value of self.iterations will be used. 
If the value is\n negative, then the timer will continue running until stop() is called.\n \"\"\"\n self.iter_count = 0\n if interval is not None:\n self.interval = interval\n if iterations is not None:\n self.max_iterations = iterations\n self._backend._vispy_start(self.interval)\n self._running = True\n self._last_emit_time = precision_time()\n self.events.start(type='timer_start')\n\n def stop(self):\n \"\"\"Stop the timer.\"\"\"\n self._backend._vispy_stop()\n self._running = False\n self.events.stop(type='timer_stop')\n\n # use timer.app.run() and .quit() instead.\n # def run_event_loop(self):\n #\"\"\"Execute the event loop for this Timer's backend.\n #\"\"\"\n # return self._backend._vispy_run()\n\n # def quit_event_loop(self):\n #\"\"\"Exit the event loop for this Timer's backend.\n #\"\"\"\n # return self._backend._vispy_quit()\n\n @property\n def native(self):\n \"\"\" The native timer on which this Timer is based.\n \"\"\"\n return self._backend._vispy_get_native_timer()\n\n def _timeout(self, *args):\n # called when the backend timer has triggered.\n if not self.running:\n return\n if self.max_iterations >= 0 and self.iter_count >= self.max_iterations:\n self.stop()\n return\n\n # compute dt since last event\n now = precision_time()\n dt = now - self._last_emit_time\n self._last_emit_time = now\n\n self.events.timeout(\n type='timer_timeout',\n iteration=self.iter_count,\n dt=dt)\n self.iter_count += 1\n\n def connect(self, callback):\n \"\"\" Alias for self.events.timeout.connect() \"\"\"\n return self.events.timeout.connect(callback)\n\n def disconnect(self, callback=None):\n \"\"\" Alias for self.events.timeout.disconnect() \"\"\"\n return self.events.timeout.disconnect(callback)\n", "path": "vispy/app/timer.py"}]} | 2,016 | 332 |
gh_patches_debug_15129 | rasdani/github-patches | git_diff | fossasia__open-event-server-5346 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Sort message-settings by id
**Describe the bug**
The message settings returned by the API are in no particular order; they should be sorted by id.
**I want to work on this issue.**
</issue>
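
The change this asks for fits flask-rest-jsonapi's data-layer `methods` hook, which lets a resource supply its own base query; a sketch, kept close to the shape of the fix shown later in this record:

```python
from flask_rest_jsonapi import ResourceList

from app.api.bootstrap import api
from app.api.schema.message_settings import MessageSettingSchema
from app.models import db
from app.models.message_setting import MessageSettings


class MessageSettingsList(ResourceList):
    """List message settings, ordered by id."""

    def query(self, view_kwargs):
        # Give the list endpoint a deterministic order on the primary key.
        return db.session.query(MessageSettings).order_by(MessageSettings.id)

    decorators = (api.has_permission('is_admin', methods="GET"),)
    methods = ['GET']
    schema = MessageSettingSchema
    data_layer = {'session': db.session,
                  'model': MessageSettings,
                  'methods': {'query': query}}
```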
<code>
[start of app/api/message_settings.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList
2
3 from app.api.bootstrap import api
4 from app.api.schema.message_settings import MessageSettingSchema
5 from app.models import db
6 from app.models.message_setting import MessageSettings
7
8
9 class MessageSettingsList(ResourceList):
10 """
11 List Events Role Permission
12 """
13 decorators = (api.has_permission('is_admin', methods="GET"),)
14 methods = ['GET']
15 schema = MessageSettingSchema
16 data_layer = {'session': db.session,
17 'model': MessageSettings}
18
19
20 class MessageSettingsDetail(ResourceDetail):
21 """
22 Events Role Permission detail by id
23 """
24 schema = MessageSettingSchema
25 decorators = (api.has_permission('is_admin', methods="PATCH"),)
26 methods = ['GET', 'PATCH']
27 data_layer = {'session': db.session,
28 'model': MessageSettings}
29
[end of app/api/message_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/message_settings.py b/app/api/message_settings.py
--- a/app/api/message_settings.py
+++ b/app/api/message_settings.py
@@ -10,11 +10,23 @@
"""
List Events Role Permission
"""
+ def query(self, view_kwargs):
+ """
+ query method for Message Setting List
+ :param view_kwargs:
+ :return:
+ """
+ query_ = db.session.query(MessageSettings).order_by(MessageSettings.id)
+ return query_
+
decorators = (api.has_permission('is_admin', methods="GET"),)
methods = ['GET']
schema = MessageSettingSchema
data_layer = {'session': db.session,
- 'model': MessageSettings}
+ 'model': MessageSettings,
+ 'methods': {
+ 'query': query
+ }}
class MessageSettingsDetail(ResourceDetail):
| {"golden_diff": "diff --git a/app/api/message_settings.py b/app/api/message_settings.py\n--- a/app/api/message_settings.py\n+++ b/app/api/message_settings.py\n@@ -10,11 +10,23 @@\n \"\"\"\n List Events Role Permission\n \"\"\"\n+ def query(self, view_kwargs):\n+ \"\"\"\n+ query method for Message Setting List\n+ :param view_kwargs:\n+ :return:\n+ \"\"\"\n+ query_ = db.session.query(MessageSettings).order_by(MessageSettings.id)\n+ return query_\n+\n decorators = (api.has_permission('is_admin', methods=\"GET\"),)\n methods = ['GET']\n schema = MessageSettingSchema\n data_layer = {'session': db.session,\n- 'model': MessageSettings}\n+ 'model': MessageSettings,\n+ 'methods': {\n+ 'query': query\n+ }}\n \n \n class MessageSettingsDetail(ResourceDetail):\n", "issue": "Sort message-settings by id\n**Describe the bug**\r\n<!-- A clear and concise description of what the bug is. -->\r\nSort message-settings by id\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Go to '...'\r\n2. Click on '....'\r\n3. Scroll down to '....'\r\n4. See error\r\n\r\n**Expected behavior**\r\n<!-- A clear and concise description of what you expected to happen. -->\r\n\r\n**Stacktrace**\r\n<!-- If applicable, add stacktrace to help explain your problem. -->\r\n\r\n**Additional details (please complete the following information):**\r\n - OS: [e.g. MacOS, Ubuntu, CentOS]\r\n - Python Version [e.g. `3.5`, `3.6`]\r\n - `HEAD` Commit hash [e.g. `4629c62`]\r\n\r\n**Additional context**\r\n<!-- Add any other context about the problem here. -->\r\n**Wanna work on this issue**\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList\n\nfrom app.api.bootstrap import api\nfrom app.api.schema.message_settings import MessageSettingSchema\nfrom app.models import db\nfrom app.models.message_setting import MessageSettings\n\n\nclass MessageSettingsList(ResourceList):\n \"\"\"\n List Events Role Permission\n \"\"\"\n decorators = (api.has_permission('is_admin', methods=\"GET\"),)\n methods = ['GET']\n schema = MessageSettingSchema\n data_layer = {'session': db.session,\n 'model': MessageSettings}\n\n\nclass MessageSettingsDetail(ResourceDetail):\n \"\"\"\n Events Role Permission detail by id\n \"\"\"\n schema = MessageSettingSchema\n decorators = (api.has_permission('is_admin', methods=\"PATCH\"),)\n methods = ['GET', 'PATCH']\n data_layer = {'session': db.session,\n 'model': MessageSettings}\n", "path": "app/api/message_settings.py"}]} | 955 | 195 |
gh_patches_debug_67410 | rasdani/github-patches | git_diff | mitmproxy__mitmproxy-4066 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Mitmweb fails with addons/options-configure.py example.
I am new to mitmproxy, but I followed the official [demo][1] and it doesn't work:
```python
Proxy server listening at http://*:8888
ERROR:tornado.application:Uncaught exception GET /options.json (127.0.0.1)
HTTPServerRequest(protocol='http', host='127.0.0.1:8081', method='GET', uri='/options.json', version='HTTP/1.1', remote_ip='127.0.0.1')
Traceback (most recent call last):
File "c:\users\jekoie\appdata\local\programs\python\python37-32\lib\site-packages\tornado\web.py", line 1697, in _execute
result = method(*self.path_args, **self.path_kwargs)
File "c:\users\jekoie\appdata\local\programs\python\python37-32\lib\site-packages\mitmproxy\tools\web\app.py", line 453, in get
self.write(optmanager.dump_dicts(self.master.options))
File "c:\users\jekoie\appdata\local\programs\python\python37-32\lib\site-packages\mitmproxy\optmanager.py", line 469, in dump_dicts
t = typecheck.typespec_to_str(o.typespec)
File "c:\users\jekoie\appdata\local\programs\python\python37-32\lib\site-packages\mitmproxy\utils\typecheck.py", line 85, in typespec_to_str
raise NotImplementedError
NotImplementedError
ERROR:tornado.access:500 GET /options.json (127.0.0.1) 3.91ms
```
[1]: https://docs.mitmproxy.org/stable/addons-options/#handling-configuration-updates
</issue>
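
Reproducing the failure in isolation makes the gap clear: `typespec_to_str` only knows a hard-coded handful of typespecs, and at least one registered option evidently carries `typing.Optional[int]`, which falls through to the `raise NotImplementedError` branch. A minimal probe (assuming mitmproxy is importable):

```python
import typing
from mitmproxy.utils import typecheck

# Handled cases work as expected:
print(typecheck.typespec_to_str(int))                   # 'int'
print(typecheck.typespec_to_str(typing.Optional[str]))  # 'optional str'

# This is the case the traceback above hits: not in the hard-coded
# mapping, so the function raises NotImplementedError.
print(typecheck.typespec_to_str(typing.Optional[int]))
```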
<code>
[start of mitmproxy/utils/typecheck.py]
1 import typing
2
3 Type = typing.Union[
4 typing.Any # anything more elaborate really fails with mypy at the moment.
5 ]
6
7
8 def sequence_type(typeinfo: typing.Type[typing.List]) -> Type:
9 """Return the type of a sequence, e.g. typing.List"""
10 return typeinfo.__args__[0] # type: ignore
11
12
13 def tuple_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:
14 """Return the types of a typing.Tuple"""
15 return typeinfo.__args__ # type: ignore
16
17
18 def union_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:
19 """return the types of a typing.Union"""
20 return typeinfo.__args__ # type: ignore
21
22
23 def mapping_types(typeinfo: typing.Type[typing.Mapping]) -> typing.Tuple[Type, Type]:
24 """return the types of a mapping, e.g. typing.Dict"""
25 return typeinfo.__args__ # type: ignore
26
27
28 def check_option_type(name: str, value: typing.Any, typeinfo: Type) -> None:
29 """
30 Check if the provided value is an instance of typeinfo and raises a
31 TypeError otherwise. This function supports only those types required for
32 options.
33 """
34 e = TypeError("Expected {} for {}, but got {}.".format(
35 typeinfo,
36 name,
37 type(value)
38 ))
39
40 typename = str(typeinfo)
41
42 if typename.startswith("typing.Union"):
43 for T in union_types(typeinfo):
44 try:
45 check_option_type(name, value, T)
46 except TypeError:
47 pass
48 else:
49 return
50 raise e
51 elif typename.startswith("typing.Tuple"):
52 types = tuple_types(typeinfo)
53 if not isinstance(value, (tuple, list)):
54 raise e
55 if len(types) != len(value):
56 raise e
57 for i, (x, T) in enumerate(zip(value, types)):
58 check_option_type("{}[{}]".format(name, i), x, T)
59 return
60 elif typename.startswith("typing.Sequence"):
61 T = sequence_type(typeinfo)
62 if not isinstance(value, (tuple, list)):
63 raise e
64 for v in value:
65 check_option_type(name, v, T)
66 elif typename.startswith("typing.IO"):
67 if hasattr(value, "read"):
68 return
69 else:
70 raise e
71 elif typename.startswith("typing.Any"):
72 return
73 elif not isinstance(value, typeinfo):
74 raise e
75
76
77 def typespec_to_str(typespec: typing.Any) -> str:
78 if typespec in (str, int, bool):
79 t = typespec.__name__
80 elif typespec == typing.Optional[str]:
81 t = 'optional str'
82 elif typespec == typing.Sequence[str]:
83 t = 'sequence of str'
84 else:
85 raise NotImplementedError
86 return t
87
[end of mitmproxy/utils/typecheck.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/mitmproxy/utils/typecheck.py b/mitmproxy/utils/typecheck.py
--- a/mitmproxy/utils/typecheck.py
+++ b/mitmproxy/utils/typecheck.py
@@ -81,6 +81,8 @@
t = 'optional str'
elif typespec == typing.Sequence[str]:
t = 'sequence of str'
+ elif typespec == typing.Optional[int]:
+ t = 'optional int'
else:
raise NotImplementedError
return t
| {"golden_diff": "diff --git a/mitmproxy/utils/typecheck.py b/mitmproxy/utils/typecheck.py\n--- a/mitmproxy/utils/typecheck.py\n+++ b/mitmproxy/utils/typecheck.py\n@@ -81,6 +81,8 @@\n t = 'optional str'\n elif typespec == typing.Sequence[str]:\n t = 'sequence of str'\n+ elif typespec == typing.Optional[int]:\n+ t = 'optional int'\n else:\n raise NotImplementedError\n return t\n", "issue": "Mitmweb fails with addons/options-configure.py example.\nI am new to learn it, but i follow official [demo][1], it can't working?\r\n```python\r\nProxy server listening at http://*:8888\r\nERROR:tornado.application:Uncaught exception GET /options.json (127.0.0.1)\r\nHTTPServerRequest(protocol='http', host='127.0.0.1:8081', method='GET', uri='/options.json', version='HTTP/1.1', remote_ip='127.0.0.1')\r\nTraceback (most recent call last):\r\n File \"c:\\users\\jekoie\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages\\tornado\\web.py\", line 1697, in _execute\r\n result = method(*self.path_args, **self.path_kwargs)\r\n File \"c:\\users\\jekoie\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages\\mitmproxy\\tools\\web\\app.py\", line 453, in get\r\n self.write(optmanager.dump_dicts(self.master.options))\r\n File \"c:\\users\\jekoie\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages\\mitmproxy\\optmanager.py\", line 469, in dump_dicts\r\n t = typecheck.typespec_to_str(o.typespec)\r\n File \"c:\\users\\jekoie\\appdata\\local\\programs\\python\\python37-32\\lib\\site-packages\\mitmproxy\\utils\\typecheck.py\", line 85, in typespec_to_str\r\n raise NotImplementedError\r\nNotImplementedError\r\nERROR:tornado.access:500 GET /options.json (127.0.0.1) 3.91ms\r\n````\r\n\r\n[1]: https://docs.mitmproxy.org/stable/addons-options/#handling-configuration-updates\n", "before_files": [{"content": "import typing\n\nType = typing.Union[\n typing.Any # anything more elaborate really fails with mypy at the moment.\n]\n\n\ndef sequence_type(typeinfo: typing.Type[typing.List]) -> Type:\n \"\"\"Return the type of a sequence, e.g. typing.List\"\"\"\n return typeinfo.__args__[0] # type: ignore\n\n\ndef tuple_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:\n \"\"\"Return the types of a typing.Tuple\"\"\"\n return typeinfo.__args__ # type: ignore\n\n\ndef union_types(typeinfo: typing.Type[typing.Tuple]) -> typing.Sequence[Type]:\n \"\"\"return the types of a typing.Union\"\"\"\n return typeinfo.__args__ # type: ignore\n\n\ndef mapping_types(typeinfo: typing.Type[typing.Mapping]) -> typing.Tuple[Type, Type]:\n \"\"\"return the types of a mapping, e.g. typing.Dict\"\"\"\n return typeinfo.__args__ # type: ignore\n\n\ndef check_option_type(name: str, value: typing.Any, typeinfo: Type) -> None:\n \"\"\"\n Check if the provided value is an instance of typeinfo and raises a\n TypeError otherwise. 
This function supports only those types required for\n options.\n \"\"\"\n e = TypeError(\"Expected {} for {}, but got {}.\".format(\n typeinfo,\n name,\n type(value)\n ))\n\n typename = str(typeinfo)\n\n if typename.startswith(\"typing.Union\"):\n for T in union_types(typeinfo):\n try:\n check_option_type(name, value, T)\n except TypeError:\n pass\n else:\n return\n raise e\n elif typename.startswith(\"typing.Tuple\"):\n types = tuple_types(typeinfo)\n if not isinstance(value, (tuple, list)):\n raise e\n if len(types) != len(value):\n raise e\n for i, (x, T) in enumerate(zip(value, types)):\n check_option_type(\"{}[{}]\".format(name, i), x, T)\n return\n elif typename.startswith(\"typing.Sequence\"):\n T = sequence_type(typeinfo)\n if not isinstance(value, (tuple, list)):\n raise e\n for v in value:\n check_option_type(name, v, T)\n elif typename.startswith(\"typing.IO\"):\n if hasattr(value, \"read\"):\n return\n else:\n raise e\n elif typename.startswith(\"typing.Any\"):\n return\n elif not isinstance(value, typeinfo):\n raise e\n\n\ndef typespec_to_str(typespec: typing.Any) -> str:\n if typespec in (str, int, bool):\n t = typespec.__name__\n elif typespec == typing.Optional[str]:\n t = 'optional str'\n elif typespec == typing.Sequence[str]:\n t = 'sequence of str'\n else:\n raise NotImplementedError\n return t\n", "path": "mitmproxy/utils/typecheck.py"}]} | 1,758 | 106 |
gh_patches_debug_53877 | rasdani/github-patches | git_diff | pyca__cryptography-7406 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
release.py should link to GH create PAT page
We can pre-fill what permissions are needed to improve the UX of doing a release. Example URL: https://github.com/settings/tokens/new?description=foo&scopes=repo,workflow
@reaperhulk do you know what scopes are required?
</issue>
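
Building the pre-filled URL is a one-liner with `urllib.parse.urlencode`; a sketch (the `repo` scope matches the fix eventually applied here, while the issue's example URL also floats `workflow`):

```python
from urllib.parse import urlencode

def pat_creation_url(version: str, scopes=("repo",)) -> str:
    # GitHub pre-fills the new-token form from these query parameters.
    query = urlencode({"description": version, "scopes": ",".join(scopes)})
    return f"https://github.com/settings/tokens/new?{query}"

print(pat_creation_url("1.0"))
# -> https://github.com/settings/tokens/new?description=1.0&scopes=repo
```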
<code>
[start of release.py]
1 # This file is dual licensed under the terms of the Apache License, Version
2 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
3 # for complete details.
4
5 import getpass
6 import glob
7 import io
8 import os
9 import subprocess
10 import time
11 import zipfile
12
13 import click
14
15 import requests
16
17
18 def run(*args, **kwargs):
19 print("[running] {0}".format(list(args)))
20 subprocess.check_call(list(args), **kwargs)
21
22
23 def wait_for_build_complete_github_actions(session, token, run_url):
24 while True:
25 response = session.get(
26 run_url,
27 headers={
28 "Content-Type": "application/json",
29 "Authorization": "token {}".format(token),
30 },
31 )
32 response.raise_for_status()
33 if response.json()["conclusion"] is not None:
34 break
35 time.sleep(3)
36
37
38 def download_artifacts_github_actions(session, token, run_url):
39 response = session.get(
40 run_url,
41 headers={
42 "Content-Type": "application/json",
43 "Authorization": "token {}".format(token),
44 },
45 )
46 response.raise_for_status()
47
48 response = session.get(
49 response.json()["artifacts_url"],
50 headers={
51 "Content-Type": "application/json",
52 "Authorization": "token {}".format(token),
53 },
54 )
55 response.raise_for_status()
56 paths = []
57 for artifact in response.json()["artifacts"]:
58 response = session.get(
59 artifact["archive_download_url"],
60 headers={
61 "Content-Type": "application/json",
62 "Authorization": "token {}".format(token),
63 },
64 )
65 with zipfile.ZipFile(io.BytesIO(response.content)) as z:
66 for name in z.namelist():
67 if not name.endswith(".whl"):
68 continue
69 p = z.open(name)
70 out_path = os.path.join(
71 os.path.dirname(__file__),
72 "dist",
73 os.path.basename(name),
74 )
75 with open(out_path, "wb") as f:
76 f.write(p.read())
77 paths.append(out_path)
78 return paths
79
80
81 def fetch_github_actions_wheels(token, version):
82 session = requests.Session()
83
84 response = session.get(
85 (
86 "https://api.github.com/repos/pyca/cryptography/actions/workflows/"
87 "wheel-builder.yml/runs?event=push"
88 ),
89 headers={
90 "Content-Type": "application/json",
91 "Authorization": "token {}".format(token),
92 },
93 )
94 response.raise_for_status()
95 run_url = response.json()["workflow_runs"][0]["url"]
96 wait_for_build_complete_github_actions(session, token, run_url)
97 return download_artifacts_github_actions(session, token, run_url)
98
99
100 @click.command()
101 @click.argument("version")
102 def release(version):
103 """
104 ``version`` should be a string like '0.4' or '1.0'.
105 """
106 github_token = getpass.getpass("Github person access token: ")
107
108 # Tag and push the tag (this will trigger the wheel builder in Actions)
109 run("git", "tag", "-s", version, "-m", "{0} release".format(version))
110 run("git", "push", "--tags")
111
112 # Generate and upload vector packages
113 run("python", "setup.py", "sdist", "bdist_wheel", cwd="vectors/")
114 packages = glob.glob(
115 "vectors/dist/cryptography_vectors-{0}*".format(version)
116 )
117 run("twine", "upload", "-s", *packages)
118
119 # Generate sdist for upload
120 run("python", "setup.py", "sdist")
121 sdist = glob.glob("dist/cryptography-{0}*".format(version))
122
123 # Wait for Actions to complete and download the wheels
124 github_actions_wheel_paths = fetch_github_actions_wheels(
125 github_token, version
126 )
127
128 # Upload sdist and wheels
129 run("twine", "upload", "-s", *sdist)
130 run("twine", "upload", *github_actions_wheel_paths)
131
132
133 if __name__ == "__main__":
134 release()
135
[end of release.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/release.py b/release.py
--- a/release.py
+++ b/release.py
@@ -103,6 +103,11 @@
"""
``version`` should be a string like '0.4' or '1.0'.
"""
+ print(
+ f"Create a new GH PAT at: "
+ f"https://github.com/settings/tokens/new?"
+ f"description={version}&scopes=repo"
+ )
github_token = getpass.getpass("Github person access token: ")
# Tag and push the tag (this will trigger the wheel builder in Actions)
| {"golden_diff": "diff --git a/release.py b/release.py\n--- a/release.py\n+++ b/release.py\n@@ -103,6 +103,11 @@\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n+ print(\n+ f\"Create a new GH PAT at: \"\n+ f\"https://github.com/settings/tokens/new?\"\n+ f\"description={version}&scopes=repo\"\n+ )\n github_token = getpass.getpass(\"Github person access token: \")\n \n # Tag and push the tag (this will trigger the wheel builder in Actions)\n", "issue": "release.py should link to GH create PAT page\nWe can pre-fill what permissions are needed to improve the UX of doing a release. Example URL: https://github.com/settings/tokens/new?description=foo&scopes=repo,workflow\r\n\r\n@reaperhulk do you know what scopes are required?\n", "before_files": [{"content": "# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nimport getpass\nimport glob\nimport io\nimport os\nimport subprocess\nimport time\nimport zipfile\n\nimport click\n\nimport requests\n\n\ndef run(*args, **kwargs):\n print(\"[running] {0}\".format(list(args)))\n subprocess.check_call(list(args), **kwargs)\n\n\ndef wait_for_build_complete_github_actions(session, token, run_url):\n while True:\n response = session.get(\n run_url,\n headers={\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"token {}\".format(token),\n },\n )\n response.raise_for_status()\n if response.json()[\"conclusion\"] is not None:\n break\n time.sleep(3)\n\n\ndef download_artifacts_github_actions(session, token, run_url):\n response = session.get(\n run_url,\n headers={\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"token {}\".format(token),\n },\n )\n response.raise_for_status()\n\n response = session.get(\n response.json()[\"artifacts_url\"],\n headers={\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"token {}\".format(token),\n },\n )\n response.raise_for_status()\n paths = []\n for artifact in response.json()[\"artifacts\"]:\n response = session.get(\n artifact[\"archive_download_url\"],\n headers={\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"token {}\".format(token),\n },\n )\n with zipfile.ZipFile(io.BytesIO(response.content)) as z:\n for name in z.namelist():\n if not name.endswith(\".whl\"):\n continue\n p = z.open(name)\n out_path = os.path.join(\n os.path.dirname(__file__),\n \"dist\",\n os.path.basename(name),\n )\n with open(out_path, \"wb\") as f:\n f.write(p.read())\n paths.append(out_path)\n return paths\n\n\ndef fetch_github_actions_wheels(token, version):\n session = requests.Session()\n\n response = session.get(\n (\n \"https://api.github.com/repos/pyca/cryptography/actions/workflows/\"\n \"wheel-builder.yml/runs?event=push\"\n ),\n headers={\n \"Content-Type\": \"application/json\",\n \"Authorization\": \"token {}\".format(token),\n },\n )\n response.raise_for_status()\n run_url = response.json()[\"workflow_runs\"][0][\"url\"]\n wait_for_build_complete_github_actions(session, token, run_url)\n return download_artifacts_github_actions(session, token, run_url)\n\n\[email protected]()\[email protected](\"version\")\ndef release(version):\n \"\"\"\n ``version`` should be a string like '0.4' or '1.0'.\n \"\"\"\n github_token = getpass.getpass(\"Github person access token: \")\n\n # Tag and push the tag (this will trigger the wheel builder in Actions)\n run(\"git\", \"tag\", \"-s\", version, \"-m\", \"{0} 
release\".format(version))\n run(\"git\", \"push\", \"--tags\")\n\n # Generate and upload vector packages\n run(\"python\", \"setup.py\", \"sdist\", \"bdist_wheel\", cwd=\"vectors/\")\n packages = glob.glob(\n \"vectors/dist/cryptography_vectors-{0}*\".format(version)\n )\n run(\"twine\", \"upload\", \"-s\", *packages)\n\n # Generate sdist for upload\n run(\"python\", \"setup.py\", \"sdist\")\n sdist = glob.glob(\"dist/cryptography-{0}*\".format(version))\n\n # Wait for Actions to complete and download the wheels\n github_actions_wheel_paths = fetch_github_actions_wheels(\n github_token, version\n )\n\n # Upload sdist and wheels\n run(\"twine\", \"upload\", \"-s\", *sdist)\n run(\"twine\", \"upload\", *github_actions_wheel_paths)\n\n\nif __name__ == \"__main__\":\n release()\n", "path": "release.py"}]} | 1,773 | 135 |
gh_patches_debug_22074 | rasdani/github-patches | git_diff | GPflow__GPflow-165 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
tests fail with tensorflow 0.10.0rc0
One test fails with the new TensorFlow pre-release; it's related to the custom-op test:
``` python
ImportError: No module named 'tensorflow.python.kernel_tests'
```
In addition, there seems to be an issue with one of the notebooks (#161); I'm looking into this.
</issue>
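
A quick probe to confirm the breakage outside the test suite; the import path is taken straight from the error above, and whether `kernel_tests` ships at all in the 0.10 pre-release wheels is the open question, so treat this as a sketch:

```python
import tensorflow as tf
print(tf.__version__)  # e.g. '0.10.0rc0'

try:
    import tensorflow.python.kernel_tests  # path from the ImportError above
except ImportError as exc:
    print("kernel_tests not importable in this build:", exc)
```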
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 from __future__ import print_function
4 from setuptools import setup
5 import re
6 import os
7 import sys
8
9 # load version form _version.py
10 VERSIONFILE = "GPflow/_version.py"
11 verstrline = open(VERSIONFILE, "rt").read()
12 VSRE = r"^__version__ = ['\"]([^'\"]*)['\"]"
13 mo = re.search(VSRE, verstrline, re.M)
14 if mo:
15 verstr = mo.group(1)
16 else:
17 raise RuntimeError("Unable to find version string in %s." % (VERSIONFILE,))
18
19 # Compile the bespoke TensorFlow ops in-place. Not sure how this would work if this script wasn't executed as `develop`.
20 compile_command = "g++ -std=c++11 -shared ./GPflow/tfops/vec_to_tri.cc " \
21 "GPflow/tfops/tri_to_vec.cc -o GPflow/tfops/matpackops.so " \
22 "-fPIC -I $(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')"
23 if sys.platform == "darwin":
24 # Additional command for Macs, as instructed by the TensorFlow docs
25 compile_command += " -undefined dynamic_lookup"
26 os.system(compile_command)
27
28 setup(name='GPflow',
29 version=verstr,
30 author="James Hensman, Alex Matthews",
31 author_email="[email protected]",
32 description=("Gaussian process methods in tensorflow"),
33 license="BSD 3-clause",
34 keywords="machine-learning gaussian-processes kernels tensorflow",
35 url="http://github.com/gpflow/gpflow",
36 ext_modules=[],
37 packages=["GPflow"],
38 package_dir={'GPflow': 'GPflow'},
39 py_modules=['GPflow.__init__'],
40 test_suite='testing',
41 install_requires=['numpy>=1.9', 'scipy>=0.16', 'tensorflow>=0.7.1'],
42 classifiers=['License :: OSI Approved :: BSD License',
43 'Natural Language :: English',
44 'Operating System :: MacOS :: MacOS X',
45 'Operating System :: Microsoft :: Windows',
46 'Operating System :: POSIX :: Linux',
47 'Programming Language :: Python :: 2.7',
48 'Topic :: Scientific/Engineering :: Artificial Intelligence']
49 )
50
[end of setup.py]
[start of GPflow/_version.py]
1 # Copyright 2016 James Hensman
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15
16 __version__ = "0.2.1"
17
[end of GPflow/_version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/GPflow/_version.py b/GPflow/_version.py
--- a/GPflow/_version.py
+++ b/GPflow/_version.py
@@ -13,4 +13,4 @@
# limitations under the License.
-__version__ = "0.2.1"
+__version__ = "0.3.0"
diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -33,12 +33,14 @@
license="BSD 3-clause",
keywords="machine-learning gaussian-processes kernels tensorflow",
url="http://github.com/gpflow/gpflow",
+ package_data={'GPflow': ['GPflow/tfops/*.so']},
+ include_package_data=True,
ext_modules=[],
packages=["GPflow"],
package_dir={'GPflow': 'GPflow'},
py_modules=['GPflow.__init__'],
test_suite='testing',
- install_requires=['numpy>=1.9', 'scipy>=0.16', 'tensorflow>=0.7.1'],
+ install_requires=['numpy>=1.9', 'scipy>=0.16', 'tensorflow>=0.10.0rc0'],
classifiers=['License :: OSI Approved :: BSD License',
'Natural Language :: English',
'Operating System :: MacOS :: MacOS X',
| {"golden_diff": "diff --git a/GPflow/_version.py b/GPflow/_version.py\n--- a/GPflow/_version.py\n+++ b/GPflow/_version.py\n@@ -13,4 +13,4 @@\n # limitations under the License.\n \n \n-__version__ = \"0.2.1\"\n+__version__ = \"0.3.0\"\ndiff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -33,12 +33,14 @@\n license=\"BSD 3-clause\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/gpflow/gpflow\",\n+ package_data={'GPflow': ['GPflow/tfops/*.so']},\n+ include_package_data=True,\n ext_modules=[],\n packages=[\"GPflow\"],\n package_dir={'GPflow': 'GPflow'},\n py_modules=['GPflow.__init__'],\n test_suite='testing',\n- install_requires=['numpy>=1.9', 'scipy>=0.16', 'tensorflow>=0.7.1'],\n+ install_requires=['numpy>=1.9', 'scipy>=0.16', 'tensorflow>=0.10.0rc0'],\n classifiers=['License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n", "issue": "tests fail with tensorflow 10.0rc0\nOne test fails with the new tensorflow pre-release, it's related to the custom-op test:\n\n``` python\nImportError: No module named 'tensorflow.python.kernel_tests'\n```\n\nIn addition, there sees to be an issue with one of the notebooks ( #161 ), I'm looking into this.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\nfrom __future__ import print_function\nfrom setuptools import setup\nimport re\nimport os\nimport sys\n\n# load version form _version.py\nVERSIONFILE = \"GPflow/_version.py\"\nverstrline = open(VERSIONFILE, \"rt\").read()\nVSRE = r\"^__version__ = ['\\\"]([^'\\\"]*)['\\\"]\"\nmo = re.search(VSRE, verstrline, re.M)\nif mo:\n verstr = mo.group(1)\nelse:\n raise RuntimeError(\"Unable to find version string in %s.\" % (VERSIONFILE,))\n\n# Compile the bespoke TensorFlow ops in-place. 
Not sure how this would work if this script wasn't executed as `develop`.\ncompile_command = \"g++ -std=c++11 -shared ./GPflow/tfops/vec_to_tri.cc \" \\\n \"GPflow/tfops/tri_to_vec.cc -o GPflow/tfops/matpackops.so \" \\\n \"-fPIC -I $(python -c 'import tensorflow as tf; print(tf.sysconfig.get_include())')\"\nif sys.platform == \"darwin\":\n # Additional command for Macs, as instructed by the TensorFlow docs\n compile_command += \" -undefined dynamic_lookup\"\nos.system(compile_command)\n\nsetup(name='GPflow',\n version=verstr,\n author=\"James Hensman, Alex Matthews\",\n author_email=\"[email protected]\",\n description=(\"Gaussian process methods in tensorflow\"),\n license=\"BSD 3-clause\",\n keywords=\"machine-learning gaussian-processes kernels tensorflow\",\n url=\"http://github.com/gpflow/gpflow\",\n ext_modules=[],\n packages=[\"GPflow\"],\n package_dir={'GPflow': 'GPflow'},\n py_modules=['GPflow.__init__'],\n test_suite='testing',\n install_requires=['numpy>=1.9', 'scipy>=0.16', 'tensorflow>=0.7.1'],\n classifiers=['License :: OSI Approved :: BSD License',\n 'Natural Language :: English',\n 'Operating System :: MacOS :: MacOS X',\n 'Operating System :: Microsoft :: Windows',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python :: 2.7',\n 'Topic :: Scientific/Engineering :: Artificial Intelligence']\n )\n", "path": "setup.py"}, {"content": "# Copyright 2016 James Hensman\n# \n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n# \n# http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\n__version__ = \"0.2.1\"\n", "path": "GPflow/_version.py"}]} | 1,376 | 299 |
gh_patches_debug_13731 | rasdani/github-patches | git_diff | readthedocs__readthedocs.org-4801 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Locally hosted RTD instance doesn't allow git file:/// URLs
## Details
I installed a local RTD instance according to the Installation guide and imported a test project. Now I want to import my own git project manually.
## Expected Result
I expected the instance to accept all valid Git URLs.
## Actual Result
When I enter a file:///.../../x.git URL, the manual import page shows an "Invalid scheme for URL" error. I checked that I can clone this URL from a terminal.
</issue>
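
The rejection is easy to trace: `urlparse` reports the scheme as `file`, which is in neither the public nor the private list, so execution falls through to the final `ValidationError`. A standalone probe (stdlib `urllib.parse` behaves the same here as the `future` backport used in the module; the path is illustrative):

```python
from urllib.parse import urlparse

url = urlparse("file:///home/user/projects/x.git")
print(url.scheme)  # 'file'

public_schemes = ['https', 'http', 'git', 'ftps', 'ftp']
private_schemes = ['ssh', 'ssh+git']
print(url.scheme in public_schemes + private_schemes)  # False
# -> RepositoryURLValidator raises "Invalid scheme for URL"
```

The fix later in this record whitelists `file` only when `settings.DEBUG` is true, so local clones work in development without loosening production validation.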
<code>
[start of readthedocs/projects/validators.py]
1 """Validators for projects app."""
2
3 # From https://github.com/django/django/pull/3477/files
4 from __future__ import absolute_import
5 import re
6
7 from django.conf import settings
8 from django.core.exceptions import ValidationError
9 from django.utils.deconstruct import deconstructible
10 from django.utils.translation import ugettext_lazy as _
11 from django.core.validators import RegexValidator
12 from future.backports.urllib.parse import urlparse
13
14
15 domain_regex = (
16 r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\.)+(?:[A-Z]{2,6}\.?|[A-Z0-9-]{2,}(?<!-)\.?)|'
17 r'localhost|' # localhost...
18 r'\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}|' # ...or ipv4
19 r'\[?[A-F0-9]*:[A-F0-9:]+\]?)' # ...or ipv6
20 )
21
22
23 @deconstructible
24 class DomainNameValidator(RegexValidator):
25 message = _('Enter a valid plain or internationalized domain name value')
26 regex = re.compile(domain_regex, re.IGNORECASE)
27
28 def __init__(self, accept_idna=True, **kwargs):
29 message = kwargs.get('message')
30 self.accept_idna = accept_idna
31 super(DomainNameValidator, self).__init__(**kwargs)
32 if not self.accept_idna and message is None:
33 self.message = _('Enter a valid domain name value')
34
35 def __call__(self, value):
36 try:
37 super(DomainNameValidator, self).__call__(value)
38 except ValidationError as exc:
39 if not self.accept_idna:
40 raise
41 if not value:
42 raise
43 try:
44 idnavalue = value.encode('idna')
45 except UnicodeError:
46 raise exc
47 super(DomainNameValidator, self).__call__(idnavalue)
48
49
50 validate_domain_name = DomainNameValidator()
51
52
53 @deconstructible
54 class RepositoryURLValidator(object):
55
56 disallow_relative_url = True
57
58 # Pattern for ``[email protected]:user/repo`` pattern
59 re_git_user = re.compile(r'^[\w]+@.+')
60
61 def __call__(self, value):
62 allow_private_repos = getattr(settings, 'ALLOW_PRIVATE_REPOS', False)
63 public_schemes = ['https', 'http', 'git', 'ftps', 'ftp']
64 private_schemes = ['ssh', 'ssh+git']
65 valid_schemes = public_schemes
66 if allow_private_repos:
67 valid_schemes += private_schemes
68 url = urlparse(value)
69
70 # Malicious characters go first
71 if '&&' in value or '|' in value:
72 raise ValidationError(_('Invalid character in the URL'))
73 elif url.scheme in valid_schemes:
74 return value
75
76 # Repo URL is not a supported scheme at this point, but there are
77 # several cases where we might support it
78 # Launchpad
79 elif value.startswith('lp:'):
80 return value
81 # Relative paths are conditionally supported
82 elif value.startswith('.') and not self.disallow_relative_url:
83 return value
84 # SSH cloning and ``[email protected]:user/project.git``
85 elif self.re_git_user.search(value) or url.scheme in private_schemes:
86 if allow_private_repos:
87 return value
88
89 # Throw a more helpful error message
90 raise ValidationError('Manual cloning via SSH is not supported')
91
92 # No more valid URLs without supported URL schemes
93 raise ValidationError(_('Invalid scheme for URL'))
94
95
96 class SubmoduleURLValidator(RepositoryURLValidator):
97
98 """
99 A URL validator for repository submodules
100
101 If a repository has a relative submodule, the URL path is effectively the
102 supermodule's remote ``origin`` URL with the relative path applied.
103
104 From the git docs::
105
106 ``<repository>`` is the URL of the new submodule's origin repository.
107 This may be either an absolute URL, or (if it begins with ``./`` or
108 ``../``), the location relative to the superproject's default remote
109 repository
110 """
111
112 disallow_relative_url = False
113
114
115 validate_repository_url = RepositoryURLValidator()
116 validate_submodule_url = SubmoduleURLValidator()
117
[end of readthedocs/projects/validators.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/readthedocs/projects/validators.py b/readthedocs/projects/validators.py
--- a/readthedocs/projects/validators.py
+++ b/readthedocs/projects/validators.py
@@ -62,9 +62,12 @@
allow_private_repos = getattr(settings, 'ALLOW_PRIVATE_REPOS', False)
public_schemes = ['https', 'http', 'git', 'ftps', 'ftp']
private_schemes = ['ssh', 'ssh+git']
+ local_schemes = ['file']
valid_schemes = public_schemes
if allow_private_repos:
valid_schemes += private_schemes
+ if getattr(settings, 'DEBUG'): # allow `file://` urls in dev
+ valid_schemes += local_schemes
url = urlparse(value)
# Malicious characters go first
| {"golden_diff": "diff --git a/readthedocs/projects/validators.py b/readthedocs/projects/validators.py\n--- a/readthedocs/projects/validators.py\n+++ b/readthedocs/projects/validators.py\n@@ -62,9 +62,12 @@\n allow_private_repos = getattr(settings, 'ALLOW_PRIVATE_REPOS', False)\n public_schemes = ['https', 'http', 'git', 'ftps', 'ftp']\n private_schemes = ['ssh', 'ssh+git']\n+ local_schemes = ['file']\n valid_schemes = public_schemes\n if allow_private_repos:\n valid_schemes += private_schemes\n+ if getattr(settings, 'DEBUG'): # allow `file://` urls in dev\n+ valid_schemes += local_schemes\n url = urlparse(value)\n \n # Malicious characters go first\n", "issue": "Locally hosted RTD instance doesn't allow git file:/// URLs\n## Details\r\nI installed a local RTD instance according to the Installation guide and imported test project. Now, I want to import my git project manually, \r\n## Expected Result\r\n\r\nI expected that the instance should accept all valid Git URLs.\r\n\r\n## Actual Result\r\n\r\nWhen I enter file:///.../../x.git URL, the manual import page shows \"Invalid scheme for URL\" error. I checked that I can clone this URL from a terminal.\r\n\n", "before_files": [{"content": "\"\"\"Validators for projects app.\"\"\"\n\n# From https://github.com/django/django/pull/3477/files\nfrom __future__ import absolute_import\nimport re\n\nfrom django.conf import settings\nfrom django.core.exceptions import ValidationError\nfrom django.utils.deconstruct import deconstructible\nfrom django.utils.translation import ugettext_lazy as _\nfrom django.core.validators import RegexValidator\nfrom future.backports.urllib.parse import urlparse\n\n\ndomain_regex = (\n r'(?:(?:[A-Z0-9](?:[A-Z0-9-]{0,61}[A-Z0-9])?\\.)+(?:[A-Z]{2,6}\\.?|[A-Z0-9-]{2,}(?<!-)\\.?)|'\n r'localhost|' # localhost...\n r'\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}|' # ...or ipv4\n r'\\[?[A-F0-9]*:[A-F0-9:]+\\]?)' # ...or ipv6\n)\n\n\n@deconstructible\nclass DomainNameValidator(RegexValidator):\n message = _('Enter a valid plain or internationalized domain name value')\n regex = re.compile(domain_regex, re.IGNORECASE)\n\n def __init__(self, accept_idna=True, **kwargs):\n message = kwargs.get('message')\n self.accept_idna = accept_idna\n super(DomainNameValidator, self).__init__(**kwargs)\n if not self.accept_idna and message is None:\n self.message = _('Enter a valid domain name value')\n\n def __call__(self, value):\n try:\n super(DomainNameValidator, self).__call__(value)\n except ValidationError as exc:\n if not self.accept_idna:\n raise\n if not value:\n raise\n try:\n idnavalue = value.encode('idna')\n except UnicodeError:\n raise exc\n super(DomainNameValidator, self).__call__(idnavalue)\n\n\nvalidate_domain_name = DomainNameValidator()\n\n\n@deconstructible\nclass RepositoryURLValidator(object):\n\n disallow_relative_url = True\n\n # Pattern for ``[email protected]:user/repo`` pattern\n re_git_user = re.compile(r'^[\\w]+@.+')\n\n def __call__(self, value):\n allow_private_repos = getattr(settings, 'ALLOW_PRIVATE_REPOS', False)\n public_schemes = ['https', 'http', 'git', 'ftps', 'ftp']\n private_schemes = ['ssh', 'ssh+git']\n valid_schemes = public_schemes\n if allow_private_repos:\n valid_schemes += private_schemes\n url = urlparse(value)\n\n # Malicious characters go first\n if '&&' in value or '|' in value:\n raise ValidationError(_('Invalid character in the URL'))\n elif url.scheme in valid_schemes:\n return value\n\n # Repo URL is not a supported scheme at this point, but there are\n # several cases where we might 
support it\n # Launchpad\n elif value.startswith('lp:'):\n return value\n # Relative paths are conditionally supported\n elif value.startswith('.') and not self.disallow_relative_url:\n return value\n # SSH cloning and ``[email protected]:user/project.git``\n elif self.re_git_user.search(value) or url.scheme in private_schemes:\n if allow_private_repos:\n return value\n\n # Throw a more helpful error message\n raise ValidationError('Manual cloning via SSH is not supported')\n\n # No more valid URLs without supported URL schemes\n raise ValidationError(_('Invalid scheme for URL'))\n\n\nclass SubmoduleURLValidator(RepositoryURLValidator):\n\n \"\"\"\n A URL validator for repository submodules\n\n If a repository has a relative submodule, the URL path is effectively the\n supermodule's remote ``origin`` URL with the relative path applied.\n\n From the git docs::\n\n ``<repository>`` is the URL of the new submodule's origin repository.\n This may be either an absolute URL, or (if it begins with ``./`` or\n ``../``), the location relative to the superproject's default remote\n repository\n \"\"\"\n\n disallow_relative_url = False\n\n\nvalidate_repository_url = RepositoryURLValidator()\nvalidate_submodule_url = SubmoduleURLValidator()\n", "path": "readthedocs/projects/validators.py"}]} | 1,834 | 184 |
gh_patches_debug_14591 | rasdani/github-patches | git_diff | mathesar-foundation__mathesar-144 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Function to order table by set of columns
**Problem**
We should be able to get the records of a table ordered by any set of its columns. The records should be paginated.
**Proposed solution**
We need a function at the data layer (i.e., in the `db` library) that performs this query.
**Additional context**
The interesting bit will be figuring out how to paginate the results, but without having to reperform the (costly) ordering query each time.
</issue>
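
At the SQLAlchemy level the query shape is straightforward; a sketch against a reflected `table` and `engine` (the column names are placeholders):

```python
from sqlalchemy import select

def get_records_ordered(table, engine, order_by, limit=None, offset=None):
    # order_by: list of columns (or column names) to sort on.
    query = select(table).order_by(*order_by).limit(limit).offset(offset)
    with engine.begin() as conn:
        return conn.execute(query).fetchall()

# e.g. get_records_ordered(table, engine,
#                          [table.c.last_name, table.c.first_name],
#                          limit=50, offset=0)
```

Plain LIMIT/OFFSET re-executes the ordered query for every page, which is exactly the cost the last paragraph worries about; keyset (seek) pagination on the ordering columns is the usual way around that.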
<code>
[start of db/records.py]
1 from sqlalchemy import delete, select
2 from sqlalchemy.inspection import inspect
3
4
5 def _get_primary_key_column(table):
6 primary_key_list = list(inspect(table).primary_key)
7 # We do not support getting by composite primary keys
8 assert len(primary_key_list) == 1
9 return primary_key_list[0]
10
11
12 def get_record(table, engine, id_value):
13 primary_key_column = _get_primary_key_column(table)
14 query = select(table).where(primary_key_column == id_value)
15 with engine.begin() as conn:
16 result = conn.execute(query).fetchall()
17 assert len(result) <= 1
18 return result[0] if result else None
19
20
21 def get_records(table, engine, limit=None, offset=None):
22 query = select(table).limit(limit).offset(offset)
23 with engine.begin() as conn:
24 return conn.execute(query).fetchall()
25
26
27 def create_record_or_records(table, engine, record_data):
28 """
29 record_data can be a dictionary, tuple, or list of dictionaries or tuples.
30 if record_data is a list, it creates multiple records.
31 """
32 id_value = None
33 with engine.begin() as connection:
34 result = connection.execute(table.insert(), record_data)
35 # If there was only a single record created, return the record.
36 if result.rowcount == 1:
37 # We need to manually commit insertion so that we can retrieve the record.
38 connection.commit()
39 id_value = result.inserted_primary_key[0]
40 if id_value is not None:
41 return get_record(table, engine, id_value)
42 # Do not return any records if multiple rows were added.
43 return None
44
45
46 def update_record(table, engine, id_value, record_data):
47 primary_key_column = _get_primary_key_column(table)
48 with engine.begin() as connection:
49 connection.execute(
50 table.update().where(primary_key_column == id_value).values(record_data)
51 )
52 return get_record(table, engine, id_value)
53
54
55 def delete_record(table, engine, id_value):
56 primary_key_column = _get_primary_key_column(table)
57 query = delete(table).where(primary_key_column == id_value)
58 with engine.begin() as conn:
59 return conn.execute(query)
60
[end of db/records.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/db/records.py b/db/records.py
--- a/db/records.py
+++ b/db/records.py
@@ -18,8 +18,20 @@
return result[0] if result else None
-def get_records(table, engine, limit=None, offset=None):
- query = select(table).limit(limit).offset(offset)
+def get_records(table, engine, limit=None, offset=None, order_by=[]):
+ """
+ Returns records from a table.
+
+ Args:
+ table: SQLAlchemy table object
+ engine: SQLAlchemy engine object
+ limit: int, gives number of rows to return
+ offset: int, gives number of rows to skip
+ order_by: list of SQLAlchemy ColumnElements to order by. Should
+ usually be either a list of string column names, or a
+ list of columns from the given table.
+ """
+ query = select(table).order_by(*order_by).limit(limit).offset(offset)
with engine.begin() as conn:
return conn.execute(query).fetchall()
| {"golden_diff": "diff --git a/db/records.py b/db/records.py\n--- a/db/records.py\n+++ b/db/records.py\n@@ -18,8 +18,20 @@\n return result[0] if result else None\n \n \n-def get_records(table, engine, limit=None, offset=None):\n- query = select(table).limit(limit).offset(offset)\n+def get_records(table, engine, limit=None, offset=None, order_by=[]):\n+ \"\"\"\n+ Returns records from a table.\n+\n+ Args:\n+ table: SQLAlchemy table object\n+ engine: SQLAlchemy engine object\n+ limit: int, gives number of rows to return\n+ offset: int, gives number of rows to skip\n+ order_by: list of SQLAlchemy ColumnElements to order by. Should\n+ usually be either a list of string column names, or a\n+ list of columns from the given table.\n+ \"\"\"\n+ query = select(table).order_by(*order_by).limit(limit).offset(offset)\n with engine.begin() as conn:\n return conn.execute(query).fetchall()\n", "issue": "Function to order table by set of columns\n**Problem**\r\n<!-- Please provide a clear and concise description of the problem that this feature request is designed to solve.-->\r\n\r\nWe should be able to get the records of a table in an ordering by any set of its columns. The records should be paginated.\r\n\r\n**Proposed solution**\r\n<!-- A clear and concise description of your proposed solution or feature. -->\r\n\r\nWe need a function at the data layer (i.e., in the `db` library) that performs this query.\r\n\r\n**Additional context**\r\n<!-- Add any other context or screenshots about the feature request here.-->\r\n\r\nThe interesting bit will be figuring out how to paginate the results, but without having to reperform the (costly) ordering query each time.\n", "before_files": [{"content": "from sqlalchemy import delete, select\nfrom sqlalchemy.inspection import inspect\n\n\ndef _get_primary_key_column(table):\n primary_key_list = list(inspect(table).primary_key)\n # We do not support getting by composite primary keys\n assert len(primary_key_list) == 1\n return primary_key_list[0]\n\n\ndef get_record(table, engine, id_value):\n primary_key_column = _get_primary_key_column(table)\n query = select(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n result = conn.execute(query).fetchall()\n assert len(result) <= 1\n return result[0] if result else None\n\n\ndef get_records(table, engine, limit=None, offset=None):\n query = select(table).limit(limit).offset(offset)\n with engine.begin() as conn:\n return conn.execute(query).fetchall()\n\n\ndef create_record_or_records(table, engine, record_data):\n \"\"\"\n record_data can be a dictionary, tuple, or list of dictionaries or tuples.\n if record_data is a list, it creates multiple records.\n \"\"\"\n id_value = None\n with engine.begin() as connection:\n result = connection.execute(table.insert(), record_data)\n # If there was only a single record created, return the record.\n if result.rowcount == 1:\n # We need to manually commit insertion so that we can retrieve the record.\n connection.commit()\n id_value = result.inserted_primary_key[0]\n if id_value is not None:\n return get_record(table, engine, id_value)\n # Do not return any records if multiple rows were added.\n return None\n\n\ndef update_record(table, engine, id_value, record_data):\n primary_key_column = _get_primary_key_column(table)\n with engine.begin() as connection:\n connection.execute(\n table.update().where(primary_key_column == id_value).values(record_data)\n )\n return get_record(table, engine, id_value)\n\n\ndef delete_record(table, engine, id_value):\n 
primary_key_column = _get_primary_key_column(table)\n query = delete(table).where(primary_key_column == id_value)\n with engine.begin() as conn:\n return conn.execute(query)\n", "path": "db/records.py"}]} | 1,270 | 241 |
gh_patches_debug_37987 | rasdani/github-patches | git_diff | scikit-image__scikit-image-4348 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
regionprops_table is not mentioned in the doc
We should use it in an existing example, or add a new example.
</issue>
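
For reference, the function the issue wants surfaced returns a plain dict of column arrays that pandas consumes directly; a minimal usage sketch:

```python
import numpy as np
import pandas as pd
from skimage.measure import label, regionprops_table

image = np.zeros((8, 8), dtype=int)
image[1:3, 1:5] = 1   # one rectangular region
image[5:7, 5:8] = 1   # another
label_img = label(image)

props = regionprops_table(label_img,
                          properties=('label', 'centroid', 'area'))
print(pd.DataFrame(props))  # one row per region, one column per property
```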
<code>
[start of doc/examples/segmentation/plot_regionprops.py]
1 """
2 =========================
3 Measure region properties
4 =========================
5
6 This example shows how to measure properties of labelled image regions.
7
8 """
9 import math
10 import matplotlib.pyplot as plt
11 import numpy as np
12
13 from skimage.draw import ellipse
14 from skimage.measure import label, regionprops
15 from skimage.transform import rotate
16
17
18 image = np.zeros((600, 600))
19
20 rr, cc = ellipse(300, 350, 100, 220)
21 image[rr, cc] = 1
22
23 image = rotate(image, angle=15, order=0)
24
25 label_img = label(image)
26 regions = regionprops(label_img)
27
28 fig, ax = plt.subplots()
29 ax.imshow(image, cmap=plt.cm.gray)
30
31 for props in regions:
32 y0, x0 = props.centroid
33 orientation = props.orientation
34 x1 = x0 + math.cos(orientation) * 0.5 * props.major_axis_length
35 y1 = y0 - math.sin(orientation) * 0.5 * props.major_axis_length
36 x2 = x0 - math.sin(orientation) * 0.5 * props.minor_axis_length
37 y2 = y0 - math.cos(orientation) * 0.5 * props.minor_axis_length
38
39 ax.plot((x0, x1), (y0, y1), '-r', linewidth=2.5)
40 ax.plot((x0, x2), (y0, y2), '-r', linewidth=2.5)
41 ax.plot(x0, y0, '.g', markersize=15)
42
43 minr, minc, maxr, maxc = props.bbox
44 bx = (minc, maxc, maxc, minc, minc)
45 by = (minr, minr, maxr, maxr, minr)
46 ax.plot(bx, by, '-b', linewidth=2.5)
47
48 ax.axis((0, 600, 600, 0))
49 plt.show()
50
[end of doc/examples/segmentation/plot_regionprops.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/doc/examples/segmentation/plot_regionprops.py b/doc/examples/segmentation/plot_regionprops.py
--- a/doc/examples/segmentation/plot_regionprops.py
+++ b/doc/examples/segmentation/plot_regionprops.py
@@ -3,15 +3,17 @@
Measure region properties
=========================
-This example shows how to measure properties of labelled image regions.
+This example shows how to measure properties of labelled image regions. We
+analyze an image with two ellipses.
"""
import math
import matplotlib.pyplot as plt
import numpy as np
+import pandas as pd
from skimage.draw import ellipse
-from skimage.measure import label, regionprops
+from skimage.measure import label, regionprops, regionprops_table
from skimage.transform import rotate
@@ -22,19 +24,27 @@
image = rotate(image, angle=15, order=0)
+rr, cc = ellipse(100, 100, 60, 50)
+image[rr, cc] = 1
+
label_img = label(image)
regions = regionprops(label_img)
+#####################################################################
+# We use the :py:func:`skimage.measure.regionprops` result to draw certain
+# properties on each region. For example, in red, we plot the major and minor
+# axes of each ellipse.
+
fig, ax = plt.subplots()
ax.imshow(image, cmap=plt.cm.gray)
for props in regions:
y0, x0 = props.centroid
orientation = props.orientation
- x1 = x0 + math.cos(orientation) * 0.5 * props.major_axis_length
- y1 = y0 - math.sin(orientation) * 0.5 * props.major_axis_length
- x2 = x0 - math.sin(orientation) * 0.5 * props.minor_axis_length
- y2 = y0 - math.cos(orientation) * 0.5 * props.minor_axis_length
+ x1 = x0 + math.cos(orientation) * 0.5 * props.minor_axis_length
+ y1 = y0 - math.sin(orientation) * 0.5 * props.minor_axis_length
+ x2 = x0 - math.sin(orientation) * 0.5 * props.major_axis_length
+ y2 = y0 - math.cos(orientation) * 0.5 * props.major_axis_length
ax.plot((x0, x1), (y0, y1), '-r', linewidth=2.5)
ax.plot((x0, x2), (y0, y2), '-r', linewidth=2.5)
@@ -47,3 +57,22 @@
ax.axis((0, 600, 600, 0))
plt.show()
+
+#####################################################################
+# We use the :py:func:`skimage.measure.regionprops_table` to compute
+# (selected) properties for each region. Note that
+# ``skimage.measure.regionprops_table`` actually computes the properties,
+# whereas ``skimage.measure.regionprops`` computes them when they come in use
+# (lazy evaluation).
+
+props = regionprops_table(label_img, properties=('centroid',
+ 'orientation',
+ 'major_axis_length',
+ 'minor_axis_length'))
+
+#####################################################################
+# We now display a table of these selected properties (one region per row),
+# the ``skimage.measure.regionprops_table`` result being a pandas-compatible
+# dict.
+
+pd.DataFrame(props)
| {"golden_diff": "diff --git a/doc/examples/segmentation/plot_regionprops.py b/doc/examples/segmentation/plot_regionprops.py\n--- a/doc/examples/segmentation/plot_regionprops.py\n+++ b/doc/examples/segmentation/plot_regionprops.py\n@@ -3,15 +3,17 @@\n Measure region properties\n =========================\n \n-This example shows how to measure properties of labelled image regions.\n+This example shows how to measure properties of labelled image regions. We\n+analyze an image with two ellipses.\n \n \"\"\"\n import math\n import matplotlib.pyplot as plt\n import numpy as np\n+import pandas as pd\n \n from skimage.draw import ellipse\n-from skimage.measure import label, regionprops\n+from skimage.measure import label, regionprops, regionprops_table\n from skimage.transform import rotate\n \n \n@@ -22,19 +24,27 @@\n \n image = rotate(image, angle=15, order=0)\n \n+rr, cc = ellipse(100, 100, 60, 50)\n+image[rr, cc] = 1\n+\n label_img = label(image)\n regions = regionprops(label_img)\n \n+#####################################################################\n+# We use the :py:func:`skimage.measure.regionprops` result to draw certain\n+# properties on each region. For example, in red, we plot the major and minor\n+# axes of each ellipse.\n+\n fig, ax = plt.subplots()\n ax.imshow(image, cmap=plt.cm.gray)\n \n for props in regions:\n y0, x0 = props.centroid\n orientation = props.orientation\n- x1 = x0 + math.cos(orientation) * 0.5 * props.major_axis_length\n- y1 = y0 - math.sin(orientation) * 0.5 * props.major_axis_length\n- x2 = x0 - math.sin(orientation) * 0.5 * props.minor_axis_length\n- y2 = y0 - math.cos(orientation) * 0.5 * props.minor_axis_length\n+ x1 = x0 + math.cos(orientation) * 0.5 * props.minor_axis_length\n+ y1 = y0 - math.sin(orientation) * 0.5 * props.minor_axis_length\n+ x2 = x0 - math.sin(orientation) * 0.5 * props.major_axis_length\n+ y2 = y0 - math.cos(orientation) * 0.5 * props.major_axis_length\n \n ax.plot((x0, x1), (y0, y1), '-r', linewidth=2.5)\n ax.plot((x0, x2), (y0, y2), '-r', linewidth=2.5)\n@@ -47,3 +57,22 @@\n \n ax.axis((0, 600, 600, 0))\n plt.show()\n+\n+#####################################################################\n+# We use the :py:func:`skimage.measure.regionprops_table` to compute\n+# (selected) properties for each region. 
Note that\n+# ``skimage.measure.regionprops_table`` actually computes the properties,\n+# whereas ``skimage.measure.regionprops`` computes them when they come in use\n+# (lazy evaluation).\n+\n+props = regionprops_table(label_img, properties=('centroid',\n+ 'orientation',\n+ 'major_axis_length',\n+ 'minor_axis_length'))\n+\n+#####################################################################\n+# We now display a table of these selected properties (one region per row),\n+# the ``skimage.measure.regionprops_table`` result being a pandas-compatible\n+# dict.\n+\n+pd.DataFrame(props)\n", "issue": "regionsprops_table is not mentioned in the doc\nWe should use it in an existing example, or add a new example.\n", "before_files": [{"content": "\"\"\"\n=========================\nMeasure region properties\n=========================\n\nThis example shows how to measure properties of labelled image regions.\n\n\"\"\"\nimport math\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom skimage.draw import ellipse\nfrom skimage.measure import label, regionprops\nfrom skimage.transform import rotate\n\n\nimage = np.zeros((600, 600))\n\nrr, cc = ellipse(300, 350, 100, 220)\nimage[rr, cc] = 1\n\nimage = rotate(image, angle=15, order=0)\n\nlabel_img = label(image)\nregions = regionprops(label_img)\n\nfig, ax = plt.subplots()\nax.imshow(image, cmap=plt.cm.gray)\n\nfor props in regions:\n y0, x0 = props.centroid\n orientation = props.orientation\n x1 = x0 + math.cos(orientation) * 0.5 * props.major_axis_length\n y1 = y0 - math.sin(orientation) * 0.5 * props.major_axis_length\n x2 = x0 - math.sin(orientation) * 0.5 * props.minor_axis_length\n y2 = y0 - math.cos(orientation) * 0.5 * props.minor_axis_length\n\n ax.plot((x0, x1), (y0, y1), '-r', linewidth=2.5)\n ax.plot((x0, x2), (y0, y2), '-r', linewidth=2.5)\n ax.plot(x0, y0, '.g', markersize=15)\n\n minr, minc, maxr, maxc = props.bbox\n bx = (minc, maxc, maxc, minc, minc)\n by = (minr, minr, maxr, maxr, minr)\n ax.plot(bx, by, '-b', linewidth=2.5)\n\nax.axis((0, 600, 600, 0))\nplt.show()\n", "path": "doc/examples/segmentation/plot_regionprops.py"}]} | 1,103 | 768 |
gh_patches_debug_49618 | rasdani/github-patches | git_diff | bridgecrewio__checkov-5936 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_GCP_79 SQL Server latest version is 2022 instead of 2019
**Describe the issue**
The `CKV_GCP_79` check for SQL Server is pinned at 2019, but 2022 is the latest version:
https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates
**Examples**
Related to these files:
https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py
https://github.com/bridgecrewio/checkov/blob/d07fdc994015772a9fa0dc1a12d1391b5765916c/tests/terraform/checks/resource/gcp/example_CloudSqlMajorVersion/main.tf#L213
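For illustration, a minimal sketch of what the updated expected-values list could look like (the `SQLSERVER_2022_*` names are an assumption that mirrors the existing 2019 naming scheme and should be checked against the Cloud SQL `database_version` documentation):

```python
def get_expected_values(self):
    # Hypothetical 2022 identifiers, mirroring SQLSERVER_2019_*; verify the
    # exact names against the google_sql_database_instance provider docs.
    return ["POSTGRES_15", "MYSQL_8_0",
            "SQLSERVER_2022_STANDARD", "SQLSERVER_2022_WEB",
            "SQLSERVER_2022_ENTERPRISE", "SQLSERVER_2022_EXPRESS"]
```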
</issue>
<code>
[start of checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py]
1 from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck
2 from checkov.common.models.enums import CheckCategories
3
4
5 class CloudSqlMajorVersion(BaseResourceValueCheck):
6 def __init__(self):
7 name = "Ensure SQL database is using latest Major version"
8 id = "CKV_GCP_79"
9 supported_resources = ['google_sql_database_instance']
10 categories = [CheckCategories.GENERAL_SECURITY]
11 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
12
13 def get_inspected_key(self):
14 return 'database_version'
15
16 def get_expected_values(self):
17 return ["POSTGRES_15", "MYSQL_8_0", "SQLSERVER_2019_STANDARD", "SQLSERVER_2019_WEB",
18 "SQLSERVER_2019_ENTERPRISE", "SQLSERVER_2019_EXPRESS"]
19
20
21 check = CloudSqlMajorVersion()
22
[end of checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py b/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py
--- a/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py
+++ b/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py
@@ -14,8 +14,8 @@
return 'database_version'
def get_expected_values(self):
- return ["POSTGRES_15", "MYSQL_8_0", "SQLSERVER_2019_STANDARD", "SQLSERVER_2019_WEB",
- "SQLSERVER_2019_ENTERPRISE", "SQLSERVER_2019_EXPRESS"]
+ return ["POSTGRES_15", "MYSQL_8_0", "SQLSERVER_2022_STANDARD", "SQLSERVER_2022_WEB",
+ "SQLSERVER_2022_ENTERPRISE", "SQLSERVER_2022_EXPRESS"]
check = CloudSqlMajorVersion()
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py b/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py\n--- a/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py\n+++ b/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py\n@@ -14,8 +14,8 @@\n return 'database_version'\n \n def get_expected_values(self):\n- return [\"POSTGRES_15\", \"MYSQL_8_0\", \"SQLSERVER_2019_STANDARD\", \"SQLSERVER_2019_WEB\",\n- \"SQLSERVER_2019_ENTERPRISE\", \"SQLSERVER_2019_EXPRESS\"]\n+ return [\"POSTGRES_15\", \"MYSQL_8_0\", \"SQLSERVER_2022_STANDARD\", \"SQLSERVER_2022_WEB\",\n+ \"SQLSERVER_2022_ENTERPRISE\", \"SQLSERVER_2022_EXPRESS\"]\n \n \n check = CloudSqlMajorVersion()\n", "issue": "CKV_GCP_79 SQL Server latest version is 2022 instead of 2019\n**Describe the issue**\r\nThe `CKV_GCP_79` about SQL server is pinned at 2019 but 2022 is the latest version : \r\nhttps://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates\r\n\r\n**Examples**\r\nRelated to this files : \r\n\r\nhttps://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py\r\n\r\nhttps://github.com/bridgecrewio/checkov/blob/d07fdc994015772a9fa0dc1a12d1391b5765916c/tests/terraform/checks/resource/gcp/example_CloudSqlMajorVersion/main.tf#L213\n", "before_files": [{"content": "from checkov.terraform.checks.resource.base_resource_value_check import BaseResourceValueCheck\nfrom checkov.common.models.enums import CheckCategories\n\n\nclass CloudSqlMajorVersion(BaseResourceValueCheck):\n def __init__(self):\n name = \"Ensure SQL database is using latest Major version\"\n id = \"CKV_GCP_79\"\n supported_resources = ['google_sql_database_instance']\n categories = [CheckCategories.GENERAL_SECURITY]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def get_inspected_key(self):\n return 'database_version'\n\n def get_expected_values(self):\n return [\"POSTGRES_15\", \"MYSQL_8_0\", \"SQLSERVER_2019_STANDARD\", \"SQLSERVER_2019_WEB\",\n \"SQLSERVER_2019_ENTERPRISE\", \"SQLSERVER_2019_EXPRESS\"]\n\n\ncheck = CloudSqlMajorVersion()\n", "path": "checkov/terraform/checks/resource/gcp/CloudSqlMajorVersion.py"}]} | 986 | 231 |
gh_patches_debug_20647 | rasdani/github-patches | git_diff | Lightning-AI__torchmetrics-1352 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
bug in pairwise_euclidean_distance
https://github.com/Lightning-AI/metrics/blob/e1c3fda24f90367803c2b04315ad7c8bced719db/torchmetrics/functional/pairwise/euclidean.py#L34
the result of this line can become negative due to floating-point round-off, so the subsequent `sqrt` call fails and returns `nan`.
You can reproduce this easily with the following code:
`pairwise_euclidean_distance(torch.tensor([[772., 112.], [772.20001, 112.], [772.20001, 112.], [772., 112.00000], [772.2, 112.00000], [772.0, 112.00000], [772.01, 112.00000], [772.00000000000001, 112.00000], [772.000001, 112.00000], [772.00001, 112.00000], [772.0001, 112.00000], [772.001, 112.00000], [772.01, 112.00000], [772.99, 112.00000]], dtype=torch.float32))`
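A minimal, self-contained illustration of the failure mode, plus one possible mitigation (clamping the squared distances before the square root — shown only as a sketch; computing in a wider dtype is another option):

```python
import torch

x = torch.tensor([[772.0, 112.0], [772.2, 112.0]], dtype=torch.float32)

x_norm = x.norm(dim=1, keepdim=True)
y_norm = x.norm(dim=1)
squared = x_norm * x_norm + y_norm * y_norm - 2 * x.mm(x.T)
print(squared)  # float32 round-off can push entries slightly below zero

# Clamping removes the negative round-off residue before the square root.
print(squared.clamp(min=0.0).sqrt())
```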
</issue>
<code>
[start of src/torchmetrics/functional/pairwise/euclidean.py]
1 # Copyright The PyTorch Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 from typing import Optional
15
16 from torch import Tensor
17 from typing_extensions import Literal
18
19 from torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix
20
21
22 def _pairwise_euclidean_distance_update(
23 x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None
24 ) -> Tensor:
25 """Calculates the pairwise euclidean distance matrix.
26
27 Args:
28 x: tensor of shape ``[N,d]``
29 y: tensor of shape ``[M,d]``
30 zero_diagonal: determines if the diagonal of the distance matrix should be set to zero
31 """
32 x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
33 x_norm = x.norm(dim=1, keepdim=True)
34 y_norm = y.norm(dim=1).T
35 distance = x_norm * x_norm + y_norm * y_norm - 2 * x.mm(y.T)
36 if zero_diagonal:
37 distance.fill_diagonal_(0)
38 return distance.sqrt()
39
40
41 def pairwise_euclidean_distance(
42 x: Tensor,
43 y: Optional[Tensor] = None,
44 reduction: Literal["mean", "sum", "none", None] = None,
45 zero_diagonal: Optional[bool] = None,
46 ) -> Tensor:
47 r"""Calculates pairwise euclidean distances:
48
49 .. math::
50 d_{euc}(x,y) = ||x - y||_2 = \sqrt{\sum_{d=1}^D (x_d - y_d)^2}
51
52 If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise between
53 the rows of :math:`x` and :math:`y`.
54 If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.
55
56 Args:
57 x: Tensor with shape ``[N, d]``
58 y: Tensor with shape ``[M, d]``, optional
59 reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`
60 (applied along column dimension) or `'none'`, `None` for no reduction
61 zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only `x` is given
62 this defaults to `True` else if `y` is also given it defaults to `False`
63
64 Returns:
65 A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix
66
67 Example:
68 >>> import torch
69 >>> from torchmetrics.functional import pairwise_euclidean_distance
70 >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)
71 >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)
72 >>> pairwise_euclidean_distance(x, y)
73 tensor([[3.1623, 2.0000],
74 [5.3852, 4.1231],
75 [8.9443, 7.6158]])
76 >>> pairwise_euclidean_distance(x)
77 tensor([[0.0000, 2.2361, 5.8310],
78 [2.2361, 0.0000, 3.6056],
79 [5.8310, 3.6056, 0.0000]])
80 """
81 distance = _pairwise_euclidean_distance_update(x, y, zero_diagonal)
82 return _reduce_distance_matrix(distance, reduction)
83
[end of src/torchmetrics/functional/pairwise/euclidean.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/torchmetrics/functional/pairwise/euclidean.py b/src/torchmetrics/functional/pairwise/euclidean.py
--- a/src/torchmetrics/functional/pairwise/euclidean.py
+++ b/src/torchmetrics/functional/pairwise/euclidean.py
@@ -13,6 +13,7 @@
# limitations under the License.
from typing import Optional
+import torch
from torch import Tensor
from typing_extensions import Literal
@@ -30,9 +31,13 @@
zero_diagonal: determines if the diagonal of the distance matrix should be set to zero
"""
x, y, zero_diagonal = _check_input(x, y, zero_diagonal)
- x_norm = x.norm(dim=1, keepdim=True)
- y_norm = y.norm(dim=1).T
- distance = x_norm * x_norm + y_norm * y_norm - 2 * x.mm(y.T)
+ # upcast to float64 to prevent precision issues
+ _orig_dtype = x.dtype
+ x = x.to(torch.float64)
+ y = y.to(torch.float64)
+ x_norm = (x * x).sum(dim=1, keepdim=True)
+ y_norm = (y * y).sum(dim=1)
+ distance = (x_norm + y_norm - 2 * x.mm(y.T)).to(_orig_dtype)
if zero_diagonal:
distance.fill_diagonal_(0)
return distance.sqrt()
| {"golden_diff": "diff --git a/src/torchmetrics/functional/pairwise/euclidean.py b/src/torchmetrics/functional/pairwise/euclidean.py\n--- a/src/torchmetrics/functional/pairwise/euclidean.py\n+++ b/src/torchmetrics/functional/pairwise/euclidean.py\n@@ -13,6 +13,7 @@\n # limitations under the License.\n from typing import Optional\n \n+import torch\n from torch import Tensor\n from typing_extensions import Literal\n \n@@ -30,9 +31,13 @@\n zero_diagonal: determines if the diagonal of the distance matrix should be set to zero\n \"\"\"\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n- x_norm = x.norm(dim=1, keepdim=True)\n- y_norm = y.norm(dim=1).T\n- distance = x_norm * x_norm + y_norm * y_norm - 2 * x.mm(y.T)\n+ # upcast to float64 to prevent precision issues\n+ _orig_dtype = x.dtype\n+ x = x.to(torch.float64)\n+ y = y.to(torch.float64)\n+ x_norm = (x * x).sum(dim=1, keepdim=True)\n+ y_norm = (y * y).sum(dim=1)\n+ distance = (x_norm + y_norm - 2 * x.mm(y.T)).to(_orig_dtype)\n if zero_diagonal:\n distance.fill_diagonal_(0)\n return distance.sqrt()\n", "issue": "bug in pairwise_euclidean_distance\nhttps://github.com/Lightning-AI/metrics/blob/e1c3fda24f90367803c2b04315ad7c8bced719db/torchmetrics/functional/pairwise/euclidean.py#L34\r\nthis line can become negative, resulting in a failure with the sqrt function and thus return \"nan\"\r\n\r\nyou can test this easily by checking this code:\r\n\r\n`pairwise_euclidean_distance(torch.tensor([[772., 112.], [772.20001, 112.], [772.20001, 112.], [772., 112.00000], [772.2, 112.00000], [772.0, 112.00000], [772.01, 112.00000], [772.00000000000001, 112.00000], [772.000001, 112.00000], [772.00001, 112.00000], [772.0001, 112.00000], [772.001, 112.00000], [772.01, 112.00000], [772.99, 112.00000]], dtype=torch.float32))`\n", "before_files": [{"content": "# Copyright The PyTorch Lightning team.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\nfrom typing import Optional\n\nfrom torch import Tensor\nfrom typing_extensions import Literal\n\nfrom torchmetrics.functional.pairwise.helpers import _check_input, _reduce_distance_matrix\n\n\ndef _pairwise_euclidean_distance_update(\n x: Tensor, y: Optional[Tensor] = None, zero_diagonal: Optional[bool] = None\n) -> Tensor:\n \"\"\"Calculates the pairwise euclidean distance matrix.\n\n Args:\n x: tensor of shape ``[N,d]``\n y: tensor of shape ``[M,d]``\n zero_diagonal: determines if the diagonal of the distance matrix should be set to zero\n \"\"\"\n x, y, zero_diagonal = _check_input(x, y, zero_diagonal)\n x_norm = x.norm(dim=1, keepdim=True)\n y_norm = y.norm(dim=1).T\n distance = x_norm * x_norm + y_norm * y_norm - 2 * x.mm(y.T)\n if zero_diagonal:\n distance.fill_diagonal_(0)\n return distance.sqrt()\n\n\ndef pairwise_euclidean_distance(\n x: Tensor,\n y: Optional[Tensor] = None,\n reduction: Literal[\"mean\", \"sum\", \"none\", None] = None,\n zero_diagonal: Optional[bool] = None,\n) -> Tensor:\n r\"\"\"Calculates pairwise euclidean distances:\n\n .. 
math::\n d_{euc}(x,y) = ||x - y||_2 = \\sqrt{\\sum_{d=1}^D (x_d - y_d)^2}\n\n If both :math:`x` and :math:`y` are passed in, the calculation will be performed pairwise between\n the rows of :math:`x` and :math:`y`.\n If only :math:`x` is passed in, the calculation will be performed between the rows of :math:`x`.\n\n Args:\n x: Tensor with shape ``[N, d]``\n y: Tensor with shape ``[M, d]``, optional\n reduction: reduction to apply along the last dimension. Choose between `'mean'`, `'sum'`\n (applied along column dimension) or `'none'`, `None` for no reduction\n zero_diagonal: if the diagonal of the distance matrix should be set to 0. If only `x` is given\n this defaults to `True` else if `y` is also given it defaults to `False`\n\n Returns:\n A ``[N,N]`` matrix of distances if only ``x`` is given, else a ``[N,M]`` matrix\n\n Example:\n >>> import torch\n >>> from torchmetrics.functional import pairwise_euclidean_distance\n >>> x = torch.tensor([[2, 3], [3, 5], [5, 8]], dtype=torch.float32)\n >>> y = torch.tensor([[1, 0], [2, 1]], dtype=torch.float32)\n >>> pairwise_euclidean_distance(x, y)\n tensor([[3.1623, 2.0000],\n [5.3852, 4.1231],\n [8.9443, 7.6158]])\n >>> pairwise_euclidean_distance(x)\n tensor([[0.0000, 2.2361, 5.8310],\n [2.2361, 0.0000, 3.6056],\n [5.8310, 3.6056, 0.0000]])\n \"\"\"\n distance = _pairwise_euclidean_distance_update(x, y, zero_diagonal)\n return _reduce_distance_matrix(distance, reduction)\n", "path": "src/torchmetrics/functional/pairwise/euclidean.py"}]} | 2,040 | 331 |
gh_patches_debug_17552 | rasdani/github-patches | git_diff | modin-project__modin-6959 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Remove `DataFrame.to_pickle_distributed` in favour of `DataFrame.modin.to_pickle_distributed`
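For context, a quick sketch of the two spellings (this assumes the `.modin` accessor form named in the title is available in the installed version; the `"df_*.pkl"` glob-style path is illustrative):

```python
import modin.experimental.pandas as pd

df = pd.DataFrame({"a": [1, 2, 3]})

# Deprecated spelling, slated for removal:
# df.to_pickle_distributed("df_*.pkl")

# Preferred spelling going forward:
df.modin.to_pickle_distributed("df_*.pkl")
```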
</issue>
<code>
[start of modin/experimental/pandas/__init__.py]
1 # Licensed to Modin Development Team under one or more contributor license agreements.
2 # See the NOTICE file distributed with this work for additional information regarding
3 # copyright ownership. The Modin Development Team licenses this file to you under the
4 # Apache License, Version 2.0 (the "License"); you may not use this file except in
5 # compliance with the License. You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software distributed under
10 # the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF
11 # ANY KIND, either express or implied. See the License for the specific language
12 # governing permissions and limitations under the License.
13
14 """
15 The main module through which interaction with the experimental API takes place.
16
17 See `Experimental API Reference` for details.
18
19 Notes
20 -----
21 * Some of experimental APIs deviate from pandas in order to provide improved
22 performance.
23
24 * Although the use of experimental storage formats and engines is available through the
25 `modin.pandas` module when defining environment variable `MODIN_EXPERIMENTAL=true`,
26 the use of experimental I/O functions is available only through the
27 `modin.experimental.pandas` module.
28
29 Examples
30 --------
31 >>> import modin.experimental.pandas as pd
32 >>> df = pd.read_csv_glob("data*.csv")
33 """
34
35 import functools
36 import warnings
37
38 from modin.pandas import * # noqa F401, F403
39
40 from .io import ( # noqa F401
41 read_csv_glob,
42 read_custom_text,
43 read_json_glob,
44 read_parquet_glob,
45 read_pickle_distributed,
46 read_sql,
47 read_xml_glob,
48 to_pickle_distributed,
49 )
50
51 old_to_pickle_distributed = to_pickle_distributed
52
53
54 @functools.wraps(to_pickle_distributed)
55 def to_pickle_distributed(*args, **kwargs):
56 warnings.warn(
57 "`DataFrame.to_pickle_distributed` is deprecated and will be removed in a future version. "
58 + "Please use `DataFrame.modin.to_pickle_distributed` instead.",
59 category=FutureWarning,
60 )
61 return old_to_pickle_distributed(*args, **kwargs)
62
63
64 setattr(DataFrame, "to_pickle_distributed", to_pickle_distributed) # noqa: F405
65
[end of modin/experimental/pandas/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/modin/experimental/pandas/__init__.py b/modin/experimental/pandas/__init__.py
--- a/modin/experimental/pandas/__init__.py
+++ b/modin/experimental/pandas/__init__.py
@@ -32,9 +32,6 @@
>>> df = pd.read_csv_glob("data*.csv")
"""
-import functools
-import warnings
-
from modin.pandas import * # noqa F401, F403
from .io import ( # noqa F401
@@ -45,20 +42,4 @@
read_pickle_distributed,
read_sql,
read_xml_glob,
- to_pickle_distributed,
)
-
-old_to_pickle_distributed = to_pickle_distributed
-
-
[email protected](to_pickle_distributed)
-def to_pickle_distributed(*args, **kwargs):
- warnings.warn(
- "`DataFrame.to_pickle_distributed` is deprecated and will be removed in a future version. "
- + "Please use `DataFrame.modin.to_pickle_distributed` instead.",
- category=FutureWarning,
- )
- return old_to_pickle_distributed(*args, **kwargs)
-
-
-setattr(DataFrame, "to_pickle_distributed", to_pickle_distributed) # noqa: F405
| {"golden_diff": "diff --git a/modin/experimental/pandas/__init__.py b/modin/experimental/pandas/__init__.py\n--- a/modin/experimental/pandas/__init__.py\n+++ b/modin/experimental/pandas/__init__.py\n@@ -32,9 +32,6 @@\n >>> df = pd.read_csv_glob(\"data*.csv\")\n \"\"\"\n \n-import functools\n-import warnings\n-\n from modin.pandas import * # noqa F401, F403\n \n from .io import ( # noqa F401\n@@ -45,20 +42,4 @@\n read_pickle_distributed,\n read_sql,\n read_xml_glob,\n- to_pickle_distributed,\n )\n-\n-old_to_pickle_distributed = to_pickle_distributed\n-\n-\[email protected](to_pickle_distributed)\n-def to_pickle_distributed(*args, **kwargs):\n- warnings.warn(\n- \"`DataFrame.to_pickle_distributed` is deprecated and will be removed in a future version. \"\n- + \"Please use `DataFrame.modin.to_pickle_distributed` instead.\",\n- category=FutureWarning,\n- )\n- return old_to_pickle_distributed(*args, **kwargs)\n-\n-\n-setattr(DataFrame, \"to_pickle_distributed\", to_pickle_distributed) # noqa: F405\n", "issue": "Remove `DataFrame.to_pickle_distributed` in favour of `DataFrame.modin.to_pickle_distributed`\n\n", "before_files": [{"content": "# Licensed to Modin Development Team under one or more contributor license agreements.\n# See the NOTICE file distributed with this work for additional information regarding\n# copyright ownership. The Modin Development Team licenses this file to you under the\n# Apache License, Version 2.0 (the \"License\"); you may not use this file except in\n# compliance with the License. You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software distributed under\n# the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF\n# ANY KIND, either express or implied. See the License for the specific language\n# governing permissions and limitations under the License.\n\n\"\"\"\nThe main module through which interaction with the experimental API takes place.\n\nSee `Experimental API Reference` for details.\n\nNotes\n-----\n* Some of experimental APIs deviate from pandas in order to provide improved\n performance.\n\n* Although the use of experimental storage formats and engines is available through the\n `modin.pandas` module when defining environment variable `MODIN_EXPERIMENTAL=true`,\n the use of experimental I/O functions is available only through the\n `modin.experimental.pandas` module.\n\nExamples\n--------\n>>> import modin.experimental.pandas as pd\n>>> df = pd.read_csv_glob(\"data*.csv\")\n\"\"\"\n\nimport functools\nimport warnings\n\nfrom modin.pandas import * # noqa F401, F403\n\nfrom .io import ( # noqa F401\n read_csv_glob,\n read_custom_text,\n read_json_glob,\n read_parquet_glob,\n read_pickle_distributed,\n read_sql,\n read_xml_glob,\n to_pickle_distributed,\n)\n\nold_to_pickle_distributed = to_pickle_distributed\n\n\[email protected](to_pickle_distributed)\ndef to_pickle_distributed(*args, **kwargs):\n warnings.warn(\n \"`DataFrame.to_pickle_distributed` is deprecated and will be removed in a future version. \"\n + \"Please use `DataFrame.modin.to_pickle_distributed` instead.\",\n category=FutureWarning,\n )\n return old_to_pickle_distributed(*args, **kwargs)\n\n\nsetattr(DataFrame, \"to_pickle_distributed\", to_pickle_distributed) # noqa: F405\n", "path": "modin/experimental/pandas/__init__.py"}]} | 1,195 | 286 |
gh_patches_debug_44533 | rasdani/github-patches | git_diff | biolab__orange3-text-240 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
OWCorpus: save Text Features in Settings
<!--
This is an issue template. Please fill in the relevant details in the
sections below.
-->
##### Text version
<!-- From menu _Options→Add-ons→Orange3-Text_ or code `orangecontrib.text.version.full_version` -->
0.2.3
##### Orange version
<!-- From menu _Help→About→Version_ or code `Orange.version.full_version` -->
3.5.dev
##### Expected behavior
The Corpus widget remembers the Text Features the user has set.
##### Actual behavior
A saved workflow doesn't store the Text Features the user set in the Corpus widget.
##### Steps to reproduce the behavior
In Corpus, load a data set with several string attributes and set the Text Feature to one of them (not the default).
Save and reload the workflow.
The Text Feature is reset to the default instead of the selected one.
##### Additional info (worksheets, data, screenshots, ...)
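A minimal sketch of the likely fix direction, with hypothetical names — Orange widgets typically persist per-data-set choices through context settings rather than plain `Setting`s:

```python
from Orange.widgets.settings import ContextSetting, PerfectDomainContextHandler
from Orange.widgets.widget import OWWidget
from orangecontrib.text.corpus import Corpus


class OWLoadCorpus(OWWidget):
    name = "Corpus"
    # Match saved contexts against the domain of the loaded corpus so the
    # remembered text features are restored for the same data set.
    settingsHandler = PerfectDomainContextHandler(
        match_values=PerfectDomainContextHandler.MATCH_VALUES_ALL)
    used_attrs = ContextSetting([])

    def open_file(self, path):
        self.closeContext()                   # store the previous corpus' context
        self.corpus = Corpus.from_file(path)
        self.used_attrs = list(self.corpus.text_features)
        self.openContext(self.corpus)         # restores used_attrs on a match
```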
</issue>
<code>
[start of orangecontrib/text/widgets/owloadcorpus.py]
1 import os
2
3 from Orange.data.io import FileFormat
4 from Orange.widgets import gui
5 from Orange.widgets.utils.itemmodels import VariableListModel
6 from Orange.widgets.data.owselectcolumns import VariablesListItemView
7 from Orange.widgets.settings import Setting
8 from Orange.widgets.widget import OWWidget, Msg
9 from orangecontrib.text.corpus import Corpus, get_sample_corpora_dir
10 from orangecontrib.text.widgets.utils import widgets
11
12
13 class Output:
14 CORPUS = "Corpus"
15
16
17 class OWLoadCorpus(OWWidget):
18 name = "Corpus"
19 description = "Load a corpus of text documents, (optionally) tagged with categories."
20 icon = "icons/TextFile.svg"
21 priority = 10
22
23 outputs = [(Output.CORPUS, Corpus)]
24 want_main_area = False
25 resizing_enabled = False
26
27 dlgFormats = (
28 "All readable files ({});;".format(
29 '*' + ' *'.join(FileFormat.readers.keys())) +
30 ";;".join("{} (*{})".format(f.DESCRIPTION, ' *'.join(f.EXTENSIONS))
31 for f in sorted(set(FileFormat.readers.values()),
32 key=list(FileFormat.readers.values()).index)))
33
34 recent_files = Setting([])
35
36 class Error(OWWidget.Error):
37 read_file = Msg("Can't read file {} ({})")
38
39 def __init__(self):
40 super().__init__()
41
42 self.corpus = None
43
44 # Browse file box
45 fbox = gui.widgetBox(self.controlArea, "Corpus file", orientation=0)
46 widget = widgets.FileWidget(recent_files=self.recent_files, icon_size=(16, 16), on_open=self.open_file,
47 directory_aliases={"Browse documentation corpora ...": get_sample_corpora_dir()},
48 dialog_format=self.dlgFormats, dialog_title='Open Orange Document Corpus',
49 allow_empty=False, reload_label='Reload', browse_label='Browse')
50 fbox.layout().addWidget(widget)
51
52 # Corpus info
53 ibox = gui.widgetBox(self.controlArea, "Corpus info", addSpace=True)
54 corp_info = "Corpus of 0 documents."
55 self.info_label = gui.label(ibox, self, corp_info)
56
57 # Used Text Features
58 fbox = gui.widgetBox(self.controlArea, orientation=0)
59 ubox = gui.widgetBox(fbox, "Used text features", addSpace=True)
60 self.used_attrs = VariableListModel(enable_dnd=True)
61 self.used_attrs_view = VariablesListItemView()
62 self.used_attrs_view.setModel(self.used_attrs)
63 ubox.layout().addWidget(self.used_attrs_view)
64
65 aa = self.used_attrs
66 aa.dataChanged.connect(self.update_feature_selection)
67 aa.rowsInserted.connect(self.update_feature_selection)
68 aa.rowsRemoved.connect(self.update_feature_selection)
69
70 # Ignored Text Features
71 ibox = gui.widgetBox(fbox, "Ignored text features", addSpace=True)
72 self.unused_attrs = VariableListModel(enable_dnd=True)
73 self.unused_attrs_view = VariablesListItemView()
74 self.unused_attrs_view.setModel(self.unused_attrs)
75 ibox.layout().addWidget(self.unused_attrs_view)
76
77 # load first file
78 widget.select(0)
79
80 def open_file(self, path):
81 self.Error.read_file.clear()
82 self.used_attrs[:] = []
83 self.unused_attrs[:] = []
84 if path:
85 try:
86 self.corpus = Corpus.from_file(path)
87 self.corpus.name = os.path.splitext(os.path.basename(path))[0]
88 self.info_label.setText("Corpus of {} documents.".format(len(self.corpus)))
89 self.used_attrs.extend(self.corpus.text_features)
90 self.unused_attrs.extend([f for f in self.corpus.domain.metas
91 if f.is_string and f not in self.corpus.text_features])
92 except BaseException as err:
93 self.Error.read_file(path, str(err))
94
95 def update_feature_selection(self):
96 # TODO fix VariablesListItemView so it does not emit
97 # duplicated data when reordering inside a single window
98 def remove_duplicates(l):
99 unique = []
100 for i in l:
101 if i not in unique:
102 unique.append(i)
103 return unique
104
105 if self.corpus is not None:
106 self.corpus.set_text_features(remove_duplicates(self.used_attrs))
107 self.send(Output.CORPUS, self.corpus)
108
109
110 if __name__ == '__main__':
111 from AnyQt.QtWidgets import QApplication
112 app = QApplication([])
113 widget = OWLoadCorpus()
114 widget.show()
115 app.exec()
116 widget.saveSettings()
117
[end of orangecontrib/text/widgets/owloadcorpus.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/orangecontrib/text/widgets/owloadcorpus.py b/orangecontrib/text/widgets/owloadcorpus.py
--- a/orangecontrib/text/widgets/owloadcorpus.py
+++ b/orangecontrib/text/widgets/owloadcorpus.py
@@ -4,7 +4,7 @@
from Orange.widgets import gui
from Orange.widgets.utils.itemmodels import VariableListModel
from Orange.widgets.data.owselectcolumns import VariablesListItemView
-from Orange.widgets.settings import Setting
+from Orange.widgets.settings import Setting, ContextSetting, PerfectDomainContextHandler
from Orange.widgets.widget import OWWidget, Msg
from orangecontrib.text.corpus import Corpus, get_sample_corpora_dir
from orangecontrib.text.widgets.utils import widgets
@@ -31,7 +31,12 @@
for f in sorted(set(FileFormat.readers.values()),
key=list(FileFormat.readers.values()).index)))
+ settingsHandler = PerfectDomainContextHandler(
+ match_values=PerfectDomainContextHandler.MATCH_VALUES_ALL
+ )
+
recent_files = Setting([])
+ used_attrs = ContextSetting([])
class Error(OWWidget.Error):
read_file = Msg("Can't read file {} ({})")
@@ -57,38 +62,41 @@
# Used Text Features
fbox = gui.widgetBox(self.controlArea, orientation=0)
ubox = gui.widgetBox(fbox, "Used text features", addSpace=True)
- self.used_attrs = VariableListModel(enable_dnd=True)
+ self.used_attrs_model = VariableListModel(enable_dnd=True)
self.used_attrs_view = VariablesListItemView()
- self.used_attrs_view.setModel(self.used_attrs)
+ self.used_attrs_view.setModel(self.used_attrs_model)
ubox.layout().addWidget(self.used_attrs_view)
- aa = self.used_attrs
+ aa = self.used_attrs_model
aa.dataChanged.connect(self.update_feature_selection)
aa.rowsInserted.connect(self.update_feature_selection)
aa.rowsRemoved.connect(self.update_feature_selection)
# Ignored Text Features
ibox = gui.widgetBox(fbox, "Ignored text features", addSpace=True)
- self.unused_attrs = VariableListModel(enable_dnd=True)
+ self.unused_attrs_model = VariableListModel(enable_dnd=True)
self.unused_attrs_view = VariablesListItemView()
- self.unused_attrs_view.setModel(self.unused_attrs)
+ self.unused_attrs_view.setModel(self.unused_attrs_model)
ibox.layout().addWidget(self.unused_attrs_view)
# load first file
widget.select(0)
def open_file(self, path):
+ self.closeContext()
self.Error.read_file.clear()
- self.used_attrs[:] = []
- self.unused_attrs[:] = []
+ self.used_attrs_model[:] = []
+ self.unused_attrs_model[:] = []
if path:
try:
self.corpus = Corpus.from_file(path)
self.corpus.name = os.path.splitext(os.path.basename(path))[0]
self.info_label.setText("Corpus of {} documents.".format(len(self.corpus)))
- self.used_attrs.extend(self.corpus.text_features)
- self.unused_attrs.extend([f for f in self.corpus.domain.metas
- if f.is_string and f not in self.corpus.text_features])
+ self.used_attrs = list(self.corpus.text_features)
+ self.openContext(self.corpus)
+ self.used_attrs_model.extend(self.used_attrs)
+ self.unused_attrs_model.extend([f for f in self.corpus.domain.metas
+ if f.is_string and f not in self.used_attrs_model])
except BaseException as err:
self.Error.read_file(path, str(err))
@@ -103,8 +111,9 @@
return unique
if self.corpus is not None:
- self.corpus.set_text_features(remove_duplicates(self.used_attrs))
+ self.corpus.set_text_features(remove_duplicates(self.used_attrs_model))
self.send(Output.CORPUS, self.corpus)
+ self.used_attrs = list(self.used_attrs_model)
if __name__ == '__main__':
| {"golden_diff": "diff --git a/orangecontrib/text/widgets/owloadcorpus.py b/orangecontrib/text/widgets/owloadcorpus.py\n--- a/orangecontrib/text/widgets/owloadcorpus.py\n+++ b/orangecontrib/text/widgets/owloadcorpus.py\n@@ -4,7 +4,7 @@\n from Orange.widgets import gui\n from Orange.widgets.utils.itemmodels import VariableListModel\n from Orange.widgets.data.owselectcolumns import VariablesListItemView\n-from Orange.widgets.settings import Setting\n+from Orange.widgets.settings import Setting, ContextSetting, PerfectDomainContextHandler\n from Orange.widgets.widget import OWWidget, Msg\n from orangecontrib.text.corpus import Corpus, get_sample_corpora_dir\n from orangecontrib.text.widgets.utils import widgets\n@@ -31,7 +31,12 @@\n for f in sorted(set(FileFormat.readers.values()),\n key=list(FileFormat.readers.values()).index)))\n \n+ settingsHandler = PerfectDomainContextHandler(\n+ match_values=PerfectDomainContextHandler.MATCH_VALUES_ALL\n+ )\n+\n recent_files = Setting([])\n+ used_attrs = ContextSetting([])\n \n class Error(OWWidget.Error):\n read_file = Msg(\"Can't read file {} ({})\")\n@@ -57,38 +62,41 @@\n # Used Text Features\n fbox = gui.widgetBox(self.controlArea, orientation=0)\n ubox = gui.widgetBox(fbox, \"Used text features\", addSpace=True)\n- self.used_attrs = VariableListModel(enable_dnd=True)\n+ self.used_attrs_model = VariableListModel(enable_dnd=True)\n self.used_attrs_view = VariablesListItemView()\n- self.used_attrs_view.setModel(self.used_attrs)\n+ self.used_attrs_view.setModel(self.used_attrs_model)\n ubox.layout().addWidget(self.used_attrs_view)\n \n- aa = self.used_attrs\n+ aa = self.used_attrs_model\n aa.dataChanged.connect(self.update_feature_selection)\n aa.rowsInserted.connect(self.update_feature_selection)\n aa.rowsRemoved.connect(self.update_feature_selection)\n \n # Ignored Text Features\n ibox = gui.widgetBox(fbox, \"Ignored text features\", addSpace=True)\n- self.unused_attrs = VariableListModel(enable_dnd=True)\n+ self.unused_attrs_model = VariableListModel(enable_dnd=True)\n self.unused_attrs_view = VariablesListItemView()\n- self.unused_attrs_view.setModel(self.unused_attrs)\n+ self.unused_attrs_view.setModel(self.unused_attrs_model)\n ibox.layout().addWidget(self.unused_attrs_view)\n \n # load first file\n widget.select(0)\n \n def open_file(self, path):\n+ self.closeContext()\n self.Error.read_file.clear()\n- self.used_attrs[:] = []\n- self.unused_attrs[:] = []\n+ self.used_attrs_model[:] = []\n+ self.unused_attrs_model[:] = []\n if path:\n try:\n self.corpus = Corpus.from_file(path)\n self.corpus.name = os.path.splitext(os.path.basename(path))[0]\n self.info_label.setText(\"Corpus of {} documents.\".format(len(self.corpus)))\n- self.used_attrs.extend(self.corpus.text_features)\n- self.unused_attrs.extend([f for f in self.corpus.domain.metas\n- if f.is_string and f not in self.corpus.text_features])\n+ self.used_attrs = list(self.corpus.text_features)\n+ self.openContext(self.corpus)\n+ self.used_attrs_model.extend(self.used_attrs)\n+ self.unused_attrs_model.extend([f for f in self.corpus.domain.metas\n+ if f.is_string and f not in self.used_attrs_model])\n except BaseException as err:\n self.Error.read_file(path, str(err))\n \n@@ -103,8 +111,9 @@\n return unique\n \n if self.corpus is not None:\n- self.corpus.set_text_features(remove_duplicates(self.used_attrs))\n+ self.corpus.set_text_features(remove_duplicates(self.used_attrs_model))\n self.send(Output.CORPUS, self.corpus)\n+ self.used_attrs = list(self.used_attrs_model)\n \n \n if __name__ == 
'__main__':\n", "issue": "OWCorpus: save Text Features in Settings\n<!--\r\nThis is an issue template. Please fill in the relevant details in the\r\nsections below.\r\n-->\r\n\r\n##### Text version\r\n<!-- From menu _Options\u2192Add-ons\u2192Orange3-Text_ or code `orangecontrib.text.version.full_version` -->\r\n0.2.3\r\n\r\n##### Orange version\r\n<!-- From menu _Help\u2192About\u2192Version_ or code `Orange.version.full_version` -->\r\n3.5.dev\r\n\r\n##### Expected behavior\r\nCorpus widget remembers set Text Features.\r\n\r\n\r\n##### Actual behavior\r\nSaved workflow (i.e. Corpus) doesn't store Text Features the user has set.\r\n\r\n\r\n##### Steps to reproduce the behavior\r\nCorpus (load a data set with several string attributes, set the Text Feature to be one of them, but not default).\r\nSave and reload workflow.\r\nText Feature is reloaded to default instead of the selected one.\r\n\r\n\r\n##### Additional info (worksheets, data, screenshots, ...)\r\n\r\n\r\n\n", "before_files": [{"content": "import os\n\nfrom Orange.data.io import FileFormat\nfrom Orange.widgets import gui\nfrom Orange.widgets.utils.itemmodels import VariableListModel\nfrom Orange.widgets.data.owselectcolumns import VariablesListItemView\nfrom Orange.widgets.settings import Setting\nfrom Orange.widgets.widget import OWWidget, Msg\nfrom orangecontrib.text.corpus import Corpus, get_sample_corpora_dir\nfrom orangecontrib.text.widgets.utils import widgets\n\n\nclass Output:\n CORPUS = \"Corpus\"\n\n\nclass OWLoadCorpus(OWWidget):\n name = \"Corpus\"\n description = \"Load a corpus of text documents, (optionally) tagged with categories.\"\n icon = \"icons/TextFile.svg\"\n priority = 10\n\n outputs = [(Output.CORPUS, Corpus)]\n want_main_area = False\n resizing_enabled = False\n\n dlgFormats = (\n \"All readable files ({});;\".format(\n '*' + ' *'.join(FileFormat.readers.keys())) +\n \";;\".join(\"{} (*{})\".format(f.DESCRIPTION, ' *'.join(f.EXTENSIONS))\n for f in sorted(set(FileFormat.readers.values()),\n key=list(FileFormat.readers.values()).index)))\n\n recent_files = Setting([])\n\n class Error(OWWidget.Error):\n read_file = Msg(\"Can't read file {} ({})\")\n\n def __init__(self):\n super().__init__()\n\n self.corpus = None\n\n # Browse file box\n fbox = gui.widgetBox(self.controlArea, \"Corpus file\", orientation=0)\n widget = widgets.FileWidget(recent_files=self.recent_files, icon_size=(16, 16), on_open=self.open_file,\n directory_aliases={\"Browse documentation corpora ...\": get_sample_corpora_dir()},\n dialog_format=self.dlgFormats, dialog_title='Open Orange Document Corpus',\n allow_empty=False, reload_label='Reload', browse_label='Browse')\n fbox.layout().addWidget(widget)\n\n # Corpus info\n ibox = gui.widgetBox(self.controlArea, \"Corpus info\", addSpace=True)\n corp_info = \"Corpus of 0 documents.\"\n self.info_label = gui.label(ibox, self, corp_info)\n\n # Used Text Features\n fbox = gui.widgetBox(self.controlArea, orientation=0)\n ubox = gui.widgetBox(fbox, \"Used text features\", addSpace=True)\n self.used_attrs = VariableListModel(enable_dnd=True)\n self.used_attrs_view = VariablesListItemView()\n self.used_attrs_view.setModel(self.used_attrs)\n ubox.layout().addWidget(self.used_attrs_view)\n\n aa = self.used_attrs\n aa.dataChanged.connect(self.update_feature_selection)\n aa.rowsInserted.connect(self.update_feature_selection)\n aa.rowsRemoved.connect(self.update_feature_selection)\n\n # Ignored Text Features\n ibox = gui.widgetBox(fbox, \"Ignored text features\", addSpace=True)\n self.unused_attrs = 
VariableListModel(enable_dnd=True)\n self.unused_attrs_view = VariablesListItemView()\n self.unused_attrs_view.setModel(self.unused_attrs)\n ibox.layout().addWidget(self.unused_attrs_view)\n\n # load first file\n widget.select(0)\n\n def open_file(self, path):\n self.Error.read_file.clear()\n self.used_attrs[:] = []\n self.unused_attrs[:] = []\n if path:\n try:\n self.corpus = Corpus.from_file(path)\n self.corpus.name = os.path.splitext(os.path.basename(path))[0]\n self.info_label.setText(\"Corpus of {} documents.\".format(len(self.corpus)))\n self.used_attrs.extend(self.corpus.text_features)\n self.unused_attrs.extend([f for f in self.corpus.domain.metas\n if f.is_string and f not in self.corpus.text_features])\n except BaseException as err:\n self.Error.read_file(path, str(err))\n\n def update_feature_selection(self):\n # TODO fix VariablesListItemView so it does not emit\n # duplicated data when reordering inside a single window\n def remove_duplicates(l):\n unique = []\n for i in l:\n if i not in unique:\n unique.append(i)\n return unique\n\n if self.corpus is not None:\n self.corpus.set_text_features(remove_duplicates(self.used_attrs))\n self.send(Output.CORPUS, self.corpus)\n\n\nif __name__ == '__main__':\n from AnyQt.QtWidgets import QApplication\n app = QApplication([])\n widget = OWLoadCorpus()\n widget.show()\n app.exec()\n widget.saveSettings()\n", "path": "orangecontrib/text/widgets/owloadcorpus.py"}]} | 1,957 | 879 |
gh_patches_debug_15637 | rasdani/github-patches | git_diff | zestedesavoir__zds-site-2216 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
A user can write a private message to themselves
Tested in pre-production.
Scenario:
- I go to the messaging interface.
- In the recipients field, I enter the following string (content between the quotes): " , ".
- The private message is sent to me alone (a validation sketch follows this list).
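A minimal sketch of a validation that would close this hole (hypothetical helper, not the project's actual API — the real fix belongs in `PrivateTopicForm.clean`):

```python
def clean_participants(raw, username):
    # Split on commas and drop whitespace-only entries so " , " cannot
    # slip through as an empty recipient list.
    receivers = [part.strip() for part in raw.split(',') if part.strip()]
    if not receivers:
        raise ValueError("The participants field cannot be empty")
    if any(r.lower() == username.lower() for r in receivers):
        raise ValueError("You cannot write to yourself!")
    return receivers


print(clean_participants("alice, bob", "carol"))  # ['alice', 'bob']
try:
    clean_participants(" , ", "carol")
except ValueError as err:
    print(err)  # the ' , ' input is now rejected instead of self-addressing
```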
</issue>
<code>
[start of zds/mp/forms.py]
1 # coding: utf-8
2
3 from crispy_forms.helper import FormHelper
4 from crispy_forms.layout import Layout, Field, Hidden
5 from django import forms
6 from django.contrib.auth.models import User
7 from django.core.urlresolvers import reverse
8
9 from zds.mp.models import PrivateTopic
10 from zds.utils.forms import CommonLayoutEditor
11 from django.utils.translation import ugettext_lazy as _
12
13
14 class PrivateTopicForm(forms.Form):
15 participants = forms.CharField(
16 label=_('Participants'),
17 widget=forms.TextInput(
18 attrs={
19 'placeholder': _(u'Les participants doivent '
20 u'être séparés par une virgule.'),
21 'required': 'required',
22 'data-autocomplete': '{ "type": "multiple" }'}))
23
24 title = forms.CharField(
25 label=_('Titre'),
26 max_length=PrivateTopic._meta.get_field('title').max_length,
27 widget=forms.TextInput(
28 attrs={
29 'required': 'required'
30 }
31 )
32 )
33
34 subtitle = forms.CharField(
35 label=_('Sous-titre'),
36 max_length=PrivateTopic._meta.get_field('subtitle').max_length,
37 required=False
38 )
39
40 text = forms.CharField(
41 label='Texte',
42 required=False,
43 widget=forms.Textarea(
44 attrs={
45 'placeholder': _('Votre message au format Markdown.'),
46 'required': 'required'
47 }
48 )
49 )
50
51 def __init__(self, username, *args, **kwargs):
52 super(PrivateTopicForm, self).__init__(*args, **kwargs)
53 self.helper = FormHelper()
54 self.helper.form_class = 'content-wrapper'
55 self.helper.form_method = 'post'
56 self.username = username
57
58 self.helper.layout = Layout(
59 Field('participants', autocomplete='off'),
60 Field('title', autocomplete='off'),
61 Field('subtitle', autocomplete='off'),
62 CommonLayoutEditor(),
63 )
64
65 def clean(self):
66 cleaned_data = super(PrivateTopicForm, self).clean()
67
68 participants = cleaned_data.get('participants')
69 title = cleaned_data.get('title')
70 text = cleaned_data.get('text')
71
72 if participants is not None and participants.strip() == '':
73 self._errors['participants'] = self.error_class(
74 [_(u'Le champ participants ne peut être vide')])
75
76 if participants is not None and participants.strip() != '':
77 receivers = participants.strip().split(',')
78 for receiver in receivers:
79 if User.objects.filter(username__exact=receiver.strip()).count() == 0 and receiver.strip() != '':
80 self._errors['participants'] = self.error_class(
81 [_(u'Un des participants saisi est introuvable')])
82 elif receiver.strip().lower() == self.username.lower():
83 self._errors['participants'] = self.error_class(
84 [_(u'Vous ne pouvez pas vous écrire à vous-même !')])
85
86 if title is not None and title.strip() == '':
87 self._errors['title'] = self.error_class(
88 [_(u'Le champ titre ne peut être vide')])
89
90 if text is not None and text.strip() == '':
91 self._errors['text'] = self.error_class(
92 [_(u'Le champ text ne peut être vide')])
93
94 return cleaned_data
95
96
97 class PrivatePostForm(forms.Form):
98 text = forms.CharField(
99 label='',
100 widget=forms.Textarea(
101 attrs={
102 'placeholder': _('Votre message au format Markdown.'),
103 'required': 'required'
104 }
105 )
106 )
107
108 def __init__(self, topic, user, *args, **kwargs):
109 super(PrivatePostForm, self).__init__(*args, **kwargs)
110 self.helper = FormHelper()
111 self.helper.form_action = reverse(
112 'zds.mp.views.answer') + '?sujet=' + str(topic.pk)
113 self.helper.form_method = 'post'
114
115 self.helper.layout = Layout(
116 CommonLayoutEditor(),
117 Hidden('last_post', '{{ last_post_pk }}'),
118 )
119
120 if topic.alone():
121 self.helper['text'].wrap(
122 Field,
123 placeholder=_(u'Vous êtes seul dans cette conversation, '
124 u'vous ne pouvez plus y écrire.'),
125 disabled=True)
126
127 def clean(self):
128 cleaned_data = super(PrivatePostForm, self).clean()
129
130 text = cleaned_data.get('text')
131
132 if text is not None and text.strip() == '':
133 self._errors['text'] = self.error_class(
134 [_(u'Le champ text ne peut être vide')])
135
136 return cleaned_data
137
[end of zds/mp/forms.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/zds/mp/forms.py b/zds/mp/forms.py
--- a/zds/mp/forms.py
+++ b/zds/mp/forms.py
@@ -76,7 +76,8 @@
if participants is not None and participants.strip() != '':
receivers = participants.strip().split(',')
for receiver in receivers:
- if User.objects.filter(username__exact=receiver.strip()).count() == 0 and receiver.strip() != '':
+ if User.objects.filter(username__exact=receiver.strip()).count() == 0 and receiver.strip() != '' \
+ or receiver.strip() == '':
self._errors['participants'] = self.error_class(
[_(u'Un des participants saisi est introuvable')])
elif receiver.strip().lower() == self.username.lower():
| {"golden_diff": "diff --git a/zds/mp/forms.py b/zds/mp/forms.py\n--- a/zds/mp/forms.py\n+++ b/zds/mp/forms.py\n@@ -76,7 +76,8 @@\n if participants is not None and participants.strip() != '':\n receivers = participants.strip().split(',')\n for receiver in receivers:\n- if User.objects.filter(username__exact=receiver.strip()).count() == 0 and receiver.strip() != '':\n+ if User.objects.filter(username__exact=receiver.strip()).count() == 0 and receiver.strip() != '' \\\n+ or receiver.strip() == '':\n self._errors['participants'] = self.error_class(\n [_(u'Un des participants saisi est introuvable')])\n elif receiver.strip().lower() == self.username.lower():\n", "issue": "Un utilisateur peut s'\u00e9crire \u00e0 lui m\u00eame\nTest\u00e9 en pr\u00e9prod.\n\nSc\u00e9nario : \n- Je vais dans l'interface des messages\n- Dans les destinaites, je renseigne la chaine suivante (contenu entre guillemets) \" , \".\n- Le MP m'est envoy\u00e9 \u00e0 moi tout seul.\n\n", "before_files": [{"content": "# coding: utf-8\n\nfrom crispy_forms.helper import FormHelper\nfrom crispy_forms.layout import Layout, Field, Hidden\nfrom django import forms\nfrom django.contrib.auth.models import User\nfrom django.core.urlresolvers import reverse\n\nfrom zds.mp.models import PrivateTopic\nfrom zds.utils.forms import CommonLayoutEditor\nfrom django.utils.translation import ugettext_lazy as _\n\n\nclass PrivateTopicForm(forms.Form):\n participants = forms.CharField(\n label=_('Participants'),\n widget=forms.TextInput(\n attrs={\n 'placeholder': _(u'Les participants doivent '\n u'\u00eatre s\u00e9par\u00e9s par une virgule.'),\n 'required': 'required',\n 'data-autocomplete': '{ \"type\": \"multiple\" }'}))\n\n title = forms.CharField(\n label=_('Titre'),\n max_length=PrivateTopic._meta.get_field('title').max_length,\n widget=forms.TextInput(\n attrs={\n 'required': 'required'\n }\n )\n )\n\n subtitle = forms.CharField(\n label=_('Sous-titre'),\n max_length=PrivateTopic._meta.get_field('subtitle').max_length,\n required=False\n )\n\n text = forms.CharField(\n label='Texte',\n required=False,\n widget=forms.Textarea(\n attrs={\n 'placeholder': _('Votre message au format Markdown.'),\n 'required': 'required'\n }\n )\n )\n\n def __init__(self, username, *args, **kwargs):\n super(PrivateTopicForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_class = 'content-wrapper'\n self.helper.form_method = 'post'\n self.username = username\n\n self.helper.layout = Layout(\n Field('participants', autocomplete='off'),\n Field('title', autocomplete='off'),\n Field('subtitle', autocomplete='off'),\n CommonLayoutEditor(),\n )\n\n def clean(self):\n cleaned_data = super(PrivateTopicForm, self).clean()\n\n participants = cleaned_data.get('participants')\n title = cleaned_data.get('title')\n text = cleaned_data.get('text')\n\n if participants is not None and participants.strip() == '':\n self._errors['participants'] = self.error_class(\n [_(u'Le champ participants ne peut \u00eatre vide')])\n\n if participants is not None and participants.strip() != '':\n receivers = participants.strip().split(',')\n for receiver in receivers:\n if User.objects.filter(username__exact=receiver.strip()).count() == 0 and receiver.strip() != '':\n self._errors['participants'] = self.error_class(\n [_(u'Un des participants saisi est introuvable')])\n elif receiver.strip().lower() == self.username.lower():\n self._errors['participants'] = self.error_class(\n [_(u'Vous ne pouvez pas vous \u00e9crire \u00e0 vous-m\u00eame !')])\n\n if 
title is not None and title.strip() == '':\n self._errors['title'] = self.error_class(\n [_(u'Le champ titre ne peut \u00eatre vide')])\n\n if text is not None and text.strip() == '':\n self._errors['text'] = self.error_class(\n [_(u'Le champ text ne peut \u00eatre vide')])\n\n return cleaned_data\n\n\nclass PrivatePostForm(forms.Form):\n text = forms.CharField(\n label='',\n widget=forms.Textarea(\n attrs={\n 'placeholder': _('Votre message au format Markdown.'),\n 'required': 'required'\n }\n )\n )\n\n def __init__(self, topic, user, *args, **kwargs):\n super(PrivatePostForm, self).__init__(*args, **kwargs)\n self.helper = FormHelper()\n self.helper.form_action = reverse(\n 'zds.mp.views.answer') + '?sujet=' + str(topic.pk)\n self.helper.form_method = 'post'\n\n self.helper.layout = Layout(\n CommonLayoutEditor(),\n Hidden('last_post', '{{ last_post_pk }}'),\n )\n\n if topic.alone():\n self.helper['text'].wrap(\n Field,\n placeholder=_(u'Vous \u00eates seul dans cette conversation, '\n u'vous ne pouvez plus y \u00e9crire.'),\n disabled=True)\n\n def clean(self):\n cleaned_data = super(PrivatePostForm, self).clean()\n\n text = cleaned_data.get('text')\n\n if text is not None and text.strip() == '':\n self._errors['text'] = self.error_class(\n [_(u'Le champ text ne peut \u00eatre vide')])\n\n return cleaned_data\n", "path": "zds/mp/forms.py"}]} | 1,867 | 167 |
gh_patches_debug_1822 | rasdani/github-patches | git_diff | zigpy__zha-device-handlers-342 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Update zigpy version to use the new (old module) name for zigpy?
@dmulcahey Ready to update zigpy version to use new (old) module name without -homeassistant suffix?
@Adminiuga in the PR https://github.com/zigpy/zigpy/pull/363 changed the zigpy module name back to just "zigpy" (from "zigpy-homeassistant")
https://github.com/zigpy/zigpy/pull/363/commits/6c9e0e9412a322d4b9558977decf50ca4dfb5ffd
From https://pypi.org/project/zigpy-homeassistant/ back to https://pypi.org/project/zigpy/
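For downstream packages, the switch amounts to a dependency rename — a minimal sketch (the `0.20.0` floor is an assumption tied to the first release published under the restored name):

```python
# setup.py (sketch)
install_requires = ["zigpy>=0.20.0"]  # previously: ["zigpy-homeassistant>=0.18.1"]
```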
</issue>
<code>
[start of setup.py]
1 """Setup module for ZHAQuirks."""
2
3 from setuptools import find_packages, setup
4
5 VERSION = "0.0.38"
6
7
8 def readme():
9 """Print long description."""
10 with open("README.md") as f:
11 return f.read()
12
13
14 setup(
15 name="zha-quirks",
16 version=VERSION,
17 description="Library implementing Zigpy quirks for ZHA in Home Assistant",
18 long_description=readme(),
19 long_description_content_type="text/markdown",
20 url="https://github.com/dmulcahey/zha-device-handlers",
21 author="David F. Mulcahey",
22 author_email="[email protected]",
23 license="Apache License Version 2.0",
24 keywords="zha quirks homeassistant hass",
25 packages=find_packages(exclude=["*.tests"]),
26 python_requires=">=3",
27 install_requires=["zigpy-homeassistant>=0.18.1"],
28 tests_require=["pytest"],
29 )
30
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -24,6 +24,6 @@
keywords="zha quirks homeassistant hass",
packages=find_packages(exclude=["*.tests"]),
python_requires=">=3",
- install_requires=["zigpy-homeassistant>=0.18.1"],
+ install_requires=["zigpy>=0.20.0"],
tests_require=["pytest"],
)
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -24,6 +24,6 @@\n keywords=\"zha quirks homeassistant hass\",\n packages=find_packages(exclude=[\"*.tests\"]),\n python_requires=\">=3\",\n- install_requires=[\"zigpy-homeassistant>=0.18.1\"],\n+ install_requires=[\"zigpy>=0.20.0\"],\n tests_require=[\"pytest\"],\n )\n", "issue": "Update zigpy version to use the new (old module) name for zigpy?\n@dmulcahey Ready to update zigpy version to use new (old) module name without -homeassistant suffix?\r\n\r\n@Adminiuga in the PR https://github.com/zigpy/zigpy/pull/363 changed the zigpy module name back to just \"zigpy\" (from \"zigpy-homeassistant\")\r\n\r\nhttps://github.com/zigpy/zigpy/pull/363/commits/6c9e0e9412a322d4b9558977decf50ca4dfb5ffd\r\n\r\nFrom https://pypi.org/project/zigpy-homeassistant/ back to https://pypi.org/project/zigpy/\n", "before_files": [{"content": "\"\"\"Setup module for ZHAQuirks.\"\"\"\n\nfrom setuptools import find_packages, setup\n\nVERSION = \"0.0.38\"\n\n\ndef readme():\n \"\"\"Print long description.\"\"\"\n with open(\"README.md\") as f:\n return f.read()\n\n\nsetup(\n name=\"zha-quirks\",\n version=VERSION,\n description=\"Library implementing Zigpy quirks for ZHA in Home Assistant\",\n long_description=readme(),\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/dmulcahey/zha-device-handlers\",\n author=\"David F. Mulcahey\",\n author_email=\"[email protected]\",\n license=\"Apache License Version 2.0\",\n keywords=\"zha quirks homeassistant hass\",\n packages=find_packages(exclude=[\"*.tests\"]),\n python_requires=\">=3\",\n install_requires=[\"zigpy-homeassistant>=0.18.1\"],\n tests_require=[\"pytest\"],\n)\n", "path": "setup.py"}]} | 957 | 100 |
gh_patches_debug_10358 | rasdani/github-patches | git_diff | crytic__slither-971 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Broken link in Slither recommendation due to typo in Wiki
Hi, there is a typo (oarameters instead of parameters) in the first-level header "Unindexed ERC20 event oarameters" of the wiki:
https://github.com/crytic/slither/wiki/Detector-Documentation#unindexed-erc20-event-oarameters
On [L. 19](https://github.com/crytic/slither/blob/3bc22a9b143828edec956f170bdef7234d6707d6/slither/detectors/erc/unindexed_event_parameters.py#L19) of the detector there is also the same typo on `WIKI_TITLE`.
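For reference, the corrected constant would simply read:

```python
WIKI_TITLE = "Unindexed ERC20 event parameters"
```

The wiki header itself needs the matching one-word fix so the detector's recommendation link resolves.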
</issue>
<code>
[start of slither/detectors/erc/unindexed_event_parameters.py]
1 """
2 Detect mistakenly un-indexed ERC20 event parameters
3 """
4 from slither.detectors.abstract_detector import AbstractDetector, DetectorClassification
5
6
7 class UnindexedERC20EventParameters(AbstractDetector):
8 """
9 Un-indexed ERC20 event parameters
10 """
11
12 ARGUMENT = "erc20-indexed"
13 HELP = "Un-indexed ERC20 event parameters"
14 IMPACT = DetectorClassification.INFORMATIONAL
15 CONFIDENCE = DetectorClassification.HIGH
16
17 WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#unindexed-erc20-event-parameters"
18
19 WIKI_TITLE = "Unindexed ERC20 event oarameters"
20 WIKI_DESCRIPTION = "Detects whether events defined by the `ERC20` specification that should have some parameters as `indexed` are missing the `indexed` keyword."
21
22 # region wiki_exploit_scenario
23 WIKI_EXPLOIT_SCENARIO = """
24 ```solidity
25 contract ERC20Bad {
26 // ...
27 event Transfer(address from, address to, uint value);
28 event Approval(address owner, address spender, uint value);
29
30 // ...
31 }
32 ```
33 `Transfer` and `Approval` events should have the 'indexed' keyword on their two first parameters, as defined by the `ERC20` specification.
34 Failure to include these keywords will exclude the parameter data in the transaction/block's bloom filter, so external tooling searching for these parameters may overlook them and fail to index logs from this token contract."""
35 # endregion wiki_exploit_scenario
36
37 WIKI_RECOMMENDATION = "Add the `indexed` keyword to event parameters that should include it, according to the `ERC20` specification."
38
39 STANDARD_JSON = False
40
41 @staticmethod
42 def detect_erc20_unindexed_event_params(contract):
43 """
44 Detect un-indexed ERC20 event parameters in a given contract.
45 :param contract: The contract to check ERC20 events for un-indexed parameters in.
46 :return: A list of tuple(event, parameter) of parameters which should be indexed.
47 """
48 # Create our result array
49 results = []
50
51 # If this contract isn't an ERC20 token, we return our empty results.
52 if not contract.is_erc20():
53 return results
54
55 # Loop through all events to look for poor form.
56 for event in contract.events_declared:
57
58 # If this is transfer/approval events, expect the first two parameters to be indexed.
59 if event.full_name in [
60 "Transfer(address,address,uint256)",
61 "Approval(address,address,uint256)",
62 ]:
63 if not event.elems[0].indexed:
64 results.append((event, event.elems[0]))
65 if not event.elems[1].indexed:
66 results.append((event, event.elems[1]))
67
68 # Return the results.
69 return results
70
71 def _detect(self):
72 """
73 Detect un-indexed ERC20 event parameters in all contracts.
74 """
75 results = []
76 for c in self.contracts:
77 unindexed_params = self.detect_erc20_unindexed_event_params(c)
78 if unindexed_params:
79 # Add each problematic event definition to our result list
80 for (event, parameter) in unindexed_params:
81
82 info = [
83 "ERC20 event ",
84 event,
85 f"does not index parameter {parameter}\n",
86 ]
87
88 # Add the events to the JSON (note: we do not add the params/vars as they have no source mapping).
89 res = self.generate_result(info)
90
91 res.add(event, {"parameter_name": parameter.name})
92 results.append(res)
93
94 return results
95
[end of slither/detectors/erc/unindexed_event_parameters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/slither/detectors/erc/unindexed_event_parameters.py b/slither/detectors/erc/unindexed_event_parameters.py
--- a/slither/detectors/erc/unindexed_event_parameters.py
+++ b/slither/detectors/erc/unindexed_event_parameters.py
@@ -16,7 +16,7 @@
WIKI = "https://github.com/crytic/slither/wiki/Detector-Documentation#unindexed-erc20-event-parameters"
- WIKI_TITLE = "Unindexed ERC20 event oarameters"
+ WIKI_TITLE = "Unindexed ERC20 event parameters"
WIKI_DESCRIPTION = "Detects whether events defined by the `ERC20` specification that should have some parameters as `indexed` are missing the `indexed` keyword."
# region wiki_exploit_scenario
| {"golden_diff": "diff --git a/slither/detectors/erc/unindexed_event_parameters.py b/slither/detectors/erc/unindexed_event_parameters.py\n--- a/slither/detectors/erc/unindexed_event_parameters.py\n+++ b/slither/detectors/erc/unindexed_event_parameters.py\n@@ -16,7 +16,7 @@\n \n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#unindexed-erc20-event-parameters\"\n \n- WIKI_TITLE = \"Unindexed ERC20 event oarameters\"\n+ WIKI_TITLE = \"Unindexed ERC20 event parameters\"\n WIKI_DESCRIPTION = \"Detects whether events defined by the `ERC20` specification that should have some parameters as `indexed` are missing the `indexed` keyword.\"\n \n # region wiki_exploit_scenario\n", "issue": "Broken link in Slither recommendation due to typo in Wiki\nHi, there is a typo (oarameters instead of parameters) in the first-level header \"Unindexed ERC20 event oarameters\" of the wiki: \r\nhttps://github.com/crytic/slither/wiki/Detector-Documentation#unindexed-erc20-event-oarameters\r\n\r\nOn [L. 19](https://github.com/crytic/slither/blob/3bc22a9b143828edec956f170bdef7234d6707d6/slither/detectors/erc/unindexed_event_parameters.py#L19) of the detector there is also the same typo on `WIKI_TITLE`.\r\n\n", "before_files": [{"content": "\"\"\"\nDetect mistakenly un-indexed ERC20 event parameters\n\"\"\"\nfrom slither.detectors.abstract_detector import AbstractDetector, DetectorClassification\n\n\nclass UnindexedERC20EventParameters(AbstractDetector):\n \"\"\"\n Un-indexed ERC20 event parameters\n \"\"\"\n\n ARGUMENT = \"erc20-indexed\"\n HELP = \"Un-indexed ERC20 event parameters\"\n IMPACT = DetectorClassification.INFORMATIONAL\n CONFIDENCE = DetectorClassification.HIGH\n\n WIKI = \"https://github.com/crytic/slither/wiki/Detector-Documentation#unindexed-erc20-event-parameters\"\n\n WIKI_TITLE = \"Unindexed ERC20 event oarameters\"\n WIKI_DESCRIPTION = \"Detects whether events defined by the `ERC20` specification that should have some parameters as `indexed` are missing the `indexed` keyword.\"\n\n # region wiki_exploit_scenario\n WIKI_EXPLOIT_SCENARIO = \"\"\"\n```solidity\ncontract ERC20Bad {\n // ...\n event Transfer(address from, address to, uint value);\n event Approval(address owner, address spender, uint value);\n\n // ...\n}\n```\n`Transfer` and `Approval` events should have the 'indexed' keyword on their two first parameters, as defined by the `ERC20` specification.\nFailure to include these keywords will exclude the parameter data in the transaction/block's bloom filter, so external tooling searching for these parameters may overlook them and fail to index logs from this token contract.\"\"\"\n # endregion wiki_exploit_scenario\n\n WIKI_RECOMMENDATION = \"Add the `indexed` keyword to event parameters that should include it, according to the `ERC20` specification.\"\n\n STANDARD_JSON = False\n\n @staticmethod\n def detect_erc20_unindexed_event_params(contract):\n \"\"\"\n Detect un-indexed ERC20 event parameters in a given contract.\n :param contract: The contract to check ERC20 events for un-indexed parameters in.\n :return: A list of tuple(event, parameter) of parameters which should be indexed.\n \"\"\"\n # Create our result array\n results = []\n\n # If this contract isn't an ERC20 token, we return our empty results.\n if not contract.is_erc20():\n return results\n\n # Loop through all events to look for poor form.\n for event in contract.events_declared:\n\n # If this is transfer/approval events, expect the first two parameters to be indexed.\n if event.full_name in [\n 
\"Transfer(address,address,uint256)\",\n \"Approval(address,address,uint256)\",\n ]:\n if not event.elems[0].indexed:\n results.append((event, event.elems[0]))\n if not event.elems[1].indexed:\n results.append((event, event.elems[1]))\n\n # Return the results.\n return results\n\n def _detect(self):\n \"\"\"\n Detect un-indexed ERC20 event parameters in all contracts.\n \"\"\"\n results = []\n for c in self.contracts:\n unindexed_params = self.detect_erc20_unindexed_event_params(c)\n if unindexed_params:\n # Add each problematic event definition to our result list\n for (event, parameter) in unindexed_params:\n\n info = [\n \"ERC20 event \",\n event,\n f\"does not index parameter {parameter}\\n\",\n ]\n\n # Add the events to the JSON (note: we do not add the params/vars as they have no source mapping).\n res = self.generate_result(info)\n\n res.add(event, {\"parameter_name\": parameter.name})\n results.append(res)\n\n return results\n", "path": "slither/detectors/erc/unindexed_event_parameters.py"}]} | 1,687 | 187 |
gh_patches_debug_27309 | rasdani/github-patches | git_diff | getpelican__pelican-1778 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
No tests in bdist
Discussion from IRC:
```
18:38 <ionelmc> can you stop including the tests in the installed package?
18:39 <+justmay> Fine with me. Care to submit a PR that addresses that?
18:39 <ionelmc> sure
18:47 <ionelmc> justmay: is `.mailmap` a stray file?
18:47 <ionelmc> look like your sdist is incomplete
18:48 <ionelmc> doesn't include all the docs and test conf
18:50 <ionelmc> justmay: oh snap. i have to move out the tests to fix this :-)
18:51 <ionelmc> because include_package_data is used any included data file overlaying the package is going into the bdist
18:51 <+justmay> ionelmc: .mailmap is there by design. See "Mapping Authors": https://www.kernel.org/pub/software/scm/git/docs/git-shortlog.html
18:52 <ionelmc> mkay
18:52 <ionelmc> justmay: you're not going to have a problem with the tests dir at the same level as pelican package right?
18:53 → e-Flex joined ([email protected])
18:54 <+justmay> There's no other way to prevent inclusion of tests in the installed package?
18:55 <ionelmc> justmay: there are two horrible ways
18:55 <ionelmc> don't include it in the sdist (highly undesirable)
18:55 <ionelmc> or
18:55 <ionelmc> manually specify package_data
18:55 <ionelmc> which i can do it correctly for you know, but it will be error prone to maintain
18:56 <ionelmc> s/know/now/
18:56 <ionelmc> i think that's also not desirable
18:56 <ionelmc> that's why i think moving them out is ok
18:57 <ionelmc> i'll fix the test configuration to work that way
18:57 <ionelmc> justmay: agree? :-)
18:59 <+justmay> ionelmc: Quite honestly, I don't have the bandwidth this morning to dig deeply enough into this topic. Would you submit an issue so we (i.e., the community) can discuss this and come to a consensus?
19:00 <ionelmc> justmay: there's already https://github.com/getpelican/pelican/issues/1409 - i seriously doubt a new issue will help in any way
19:01 <winlu> ionelmc: justs prune tests and be done with it
19:01 <ionelmc> justmay: it's either the relocation or manual package_data, make a choice :-)
19:01 <ionelmc> winlu: pruning the tests will remove them from sdist
```
Closes #1609. Closes #1545. Closes #1409.
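A minimal sketch of the manual `package_data` route discussed above — enumerate only the data trees that should ship (paths assume Pelican's `themes` and `tools/templates` layout):

```python
from os import walk
from os.path import join, relpath

def collect(package_root, data_dir):
    """Relative paths of every file under package_root/data_dir."""
    return [
        relpath(join(root, name), package_root)
        for root, _, names in walk(join(package_root, data_dir))
        for name in names
    ]

package_data = {
    "pelican": collect("pelican", "themes"),
    "pelican.tools": collect(join("pelican", "tools"), "templates"),
}
```

Passing this mapping to `setup()` instead of `include_package_data=True` keeps `tests` out of the bdist while the sdist (via MANIFEST.in) still carries them.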
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from setuptools import setup
3
4 requires = ['feedgenerator >= 1.6', 'jinja2 >= 2.7', 'pygments', 'docutils',
5 'pytz >= 0a', 'blinker', 'unidecode', 'six >= 1.4',
6 'python-dateutil']
7
8 entry_points = {
9 'console_scripts': [
10 'pelican = pelican:main',
11 'pelican-import = pelican.tools.pelican_import:main',
12 'pelican-quickstart = pelican.tools.pelican_quickstart:main',
13 'pelican-themes = pelican.tools.pelican_themes:main'
14 ]
15 }
16
17
18 README = open('README.rst').read()
19 CHANGELOG = open('docs/changelog.rst').read()
20
21
22 setup(
23 name="pelican",
24 version="3.6.1.dev",
25 url='http://getpelican.com/',
26 author='Alexis Metaireau',
27 author_email='[email protected]',
28 description="A tool to generate a static blog from reStructuredText or "
29 "Markdown input files.",
30 long_description=README + '\n' + CHANGELOG,
31 packages=['pelican', 'pelican.tools'],
32 include_package_data=True,
33 install_requires=requires,
34 entry_points=entry_points,
35 classifiers=[
36 'Development Status :: 5 - Production/Stable',
37 'Environment :: Console',
38 'License :: OSI Approved :: GNU Affero General Public License v3',
39 'Operating System :: OS Independent',
40 'Programming Language :: Python :: 2',
41 'Programming Language :: Python :: 2.7',
42 'Programming Language :: Python :: 3',
43 'Programming Language :: Python :: 3.3',
44 'Programming Language :: Python :: 3.4',
45 'Topic :: Internet :: WWW/HTTP',
46 'Topic :: Software Development :: Libraries :: Python Modules',
47 ],
48 test_suite='pelican.tests',
49 )
50
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -1,4 +1,7 @@
#!/usr/bin/env python
+from os import walk
+from os.path import join, relpath, dirname
+
from setuptools import setup
requires = ['feedgenerator >= 1.6', 'jinja2 >= 2.7', 'pygments', 'docutils',
@@ -14,11 +17,9 @@
]
}
-
README = open('README.rst').read()
CHANGELOG = open('docs/changelog.rst').read()
-
setup(
name="pelican",
version="3.6.1.dev",
@@ -29,7 +30,19 @@
"Markdown input files.",
long_description=README + '\n' + CHANGELOG,
packages=['pelican', 'pelican.tools'],
- include_package_data=True,
+ package_data={
+ # we manually collect the package data, as opposed to using include_package_data=True
+ # because we don't want the tests to be included automatically as package data
+ # (MANIFEST.in is too greedy)
+ 'pelican': [
+ relpath(join(root, name), 'pelican')
+ for root, _, names in walk(join('pelican', 'themes')) for name in names
+ ],
+ 'pelican.tools': [
+ relpath(join(root, name), join('pelican', 'tools'))
+ for root, _, names in walk(join('pelican', 'tools', 'templates')) for name in names
+ ],
+ },
install_requires=requires,
entry_points=entry_points,
classifiers=[
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -1,4 +1,7 @@\n #!/usr/bin/env python\n+from os import walk\n+from os.path import join, relpath, dirname\n+\n from setuptools import setup\n \n requires = ['feedgenerator >= 1.6', 'jinja2 >= 2.7', 'pygments', 'docutils',\n@@ -14,11 +17,9 @@\n ]\n }\n \n-\n README = open('README.rst').read()\n CHANGELOG = open('docs/changelog.rst').read()\n \n-\n setup(\n name=\"pelican\",\n version=\"3.6.1.dev\",\n@@ -29,7 +30,19 @@\n \"Markdown input files.\",\n long_description=README + '\\n' + CHANGELOG,\n packages=['pelican', 'pelican.tools'],\n- include_package_data=True,\n+ package_data={\n+ # we manually collect the package data, as opposed to using include_package_data=True\n+ # because we don't want the tests to be included automatically as package data\n+ # (MANIFEST.in is too greedy)\n+ 'pelican': [\n+ relpath(join(root, name), 'pelican')\n+ for root, _, names in walk(join('pelican', 'themes')) for name in names\n+ ],\n+ 'pelican.tools': [\n+ relpath(join(root, name), join('pelican', 'tools'))\n+ for root, _, names in walk(join('pelican', 'tools', 'templates')) for name in names\n+ ],\n+ },\n install_requires=requires,\n entry_points=entry_points,\n classifiers=[\n", "issue": "No tests in bdist\nDiscussion from IRC:\n\n```\n18:38 <ionelmc> can you stop including the tests in the installed package?\n18:39 <+justmay> Fine with me. Care to submit a PR that addresses that?\n18:39 <ionelmc> sure\n18:47 <ionelmc> justmay: is `.mailmap` a stray file?\n18:47 <ionelmc> look like your sdist is incomplete\n18:48 <ionelmc> doesn't include all the docs and test conf\n18:50 <ionelmc> justmay: oh snap. i have to move out the tests to fix this :-)\n18:51 <ionelmc> because include_package_data is used any included data file overlaying the package is going into the bdist\n18:51 <+justmay> ionelmc: .mailmap is there by design. See \"Mapping Authors\": https://www.kernel.org/pub/software/scm/git/docs/git-shortlog.html\n18:52 <ionelmc> mkay\n18:52 <ionelmc> justmay: you're not going to have a problem with the tests dir at the same level as pelican package right?\n18:53 \u2192 e-Flex joined ([email protected])\n18:54 <+justmay> There's no other way to prevent inclusion of tests in the installed package?\n18:55 <ionelmc> justmay: there are two horrible ways\n18:55 <ionelmc> don't include it in the sdist (highly undesirable)\n18:55 <ionelmc> or\n18:55 <ionelmc> manually specify package_data \n18:55 <ionelmc> which i can do it correctly for you know, but it will be error prone to maintain\n18:56 <ionelmc> s/know/now/\n18:56 <ionelmc> i think that's also not desirable\n18:56 <ionelmc> that's why i think moving them out is ok\n18:57 <ionelmc> i'll fix the test configuration to work that way\n18:57 <ionelmc> justmay: agree? :-)\n18:59 <+justmay> ionelmc: Quite honestly, I don't have the bandwidth this morning to dig deeply enough into this topic. Would you submit an issue so we (i.e., the community) can discuss this and come to a consensus?\n19:00 <ionelmc> justmay: there's already https://github.com/getpelican/pelican/issues/1409 - i seriously doubt a new issue will help in any way\n19:01 <winlu> ionelmc: justs prune tests and be done with it\n19:01 <ionelmc> justmay: it's either the relocation or manual package_data, make a choice :-)\n19:01 <ionelmc> winlu: pruning the tests will remove them from sdist\n```\n\nCloses #1609. Closes #1545. 
Closes #1409.\n\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom setuptools import setup\n\nrequires = ['feedgenerator >= 1.6', 'jinja2 >= 2.7', 'pygments', 'docutils',\n 'pytz >= 0a', 'blinker', 'unidecode', 'six >= 1.4',\n 'python-dateutil']\n\nentry_points = {\n 'console_scripts': [\n 'pelican = pelican:main',\n 'pelican-import = pelican.tools.pelican_import:main',\n 'pelican-quickstart = pelican.tools.pelican_quickstart:main',\n 'pelican-themes = pelican.tools.pelican_themes:main'\n ]\n}\n\n\nREADME = open('README.rst').read()\nCHANGELOG = open('docs/changelog.rst').read()\n\n\nsetup(\n name=\"pelican\",\n version=\"3.6.1.dev\",\n url='http://getpelican.com/',\n author='Alexis Metaireau',\n author_email='[email protected]',\n description=\"A tool to generate a static blog from reStructuredText or \"\n \"Markdown input files.\",\n long_description=README + '\\n' + CHANGELOG,\n packages=['pelican', 'pelican.tools'],\n include_package_data=True,\n install_requires=requires,\n entry_points=entry_points,\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Console',\n 'License :: OSI Approved :: GNU Affero General Public License v3',\n 'Operating System :: OS Independent',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.3',\n 'Programming Language :: Python :: 3.4',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Software Development :: Libraries :: Python Modules',\n ],\n test_suite='pelican.tests',\n)\n", "path": "setup.py"}]} | 1,770 | 372 |
gh_patches_debug_17293 | rasdani/github-patches | git_diff | mampfes__hacs_waste_collection_schedule-1637 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[Feature]: Add support for Christchurch (NZ) special date overrides
### I propose a feature for:
Sources
### Describe your wanted feature
The API for collection dates for Christchurch City Council does not automatically apply any special date overrides (for example, when your collection day falls near a public holiday and is moved).
A separate URL provides a list of these potential overrides; it needs to be called separately and the results merged into the collection schedule.
The URL is [https://ccc.govt.nz/api/kerbsidedateoverrides](https://ccc.govt.nz/api/kerbsidedateoverrides)
It responds to HTTP GET with no authentication requirements and will return an array of overrides dates in this format:
```
{
ID: 32,
Title: "New Year Friday 2024",
OriginalDate: "2024-01-05",
NewDate: "2024-01-06",
Expired: 0
}
```
If your collection date falls on `OriginalDate` it needs to be moved to `NewDate`.
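A minimal sketch of the merge step, assuming the per-bin dates come back as `YYYY-MM-DD` strings (the helper name is illustrative, not the source's actual implementation):

```python
import requests

OVERRIDES_URL = "https://ccc.govt.nz/api/kerbsidedateoverrides"

def apply_overrides(collection_date: str) -> str:
    """Return the override's NewDate when collection_date matches an OriginalDate."""
    for override in requests.get(OVERRIDES_URL).json():
        if override["OriginalDate"] == collection_date:
            return override["NewDate"]
    return collection_date
```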
</issue>
<code>
[start of custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py]
1 import datetime
2
3 import requests
4 from waste_collection_schedule import Collection
5
6 # Include work around for SSL UNSAFE_LEGACY_RENEGOTIATION_DISABLED error
7 from waste_collection_schedule.service.SSLError import get_legacy_session
8
9
10 TITLE = "Christchurch City Council"
11 DESCRIPTION = "Source for Christchurch City Council."
12 URL = "https://ccc.govt.nz"
13 TEST_CASES = {"53 Hereford Street": {"address": "53 Hereford Street"}}
14
15
16 class Source:
17 def __init__(self, address):
18 self._address = address
19
20 def fetch(self):
21
22 s = get_legacy_session()
23
24 entries = []
25
26 # Find the Rating Unit ID by the physical address
27 # While a property may have more than one address, bins are allocated by each Rating Unit
28 addressQuery = {
29 "q": self._address,
30 "status": "current",
31 "crs": "epsg:4326",
32 "limit": 1,
33 }
34
35 r = s.get("https://opendata.ccc.govt.nz/CCCSearch/rest/address/suggest",
36 params=addressQuery,
37 # verify=False,
38 )
39 address = r.json()
40
41 # Find the Bin service by Rating Unit ID
42 binsHeaders = {
43 "client_id": "69f433c880c74c349b0128e9fa1b6a93",
44 "client_secret": "139F3D2A83E34AdF98c80566f2eb7212"
45 }
46
47 # Updated request using SSL code snippet
48 r = s.get("https://ccc-data-citizen-api-v1-prod.au-s1.cloudhub.io/api/v1/properties/" + str(address[0]["RatingUnitID"]),
49 headers=binsHeaders
50 # verify=False,
51 )
52 bins = r.json()
53
54 # Deduplicate the Bins in case the Rating Unit has more than one of the same Bin type
55 bins = {each["material"]: each for each in bins["bins"]["collections"]}.values()
56
57 # Process each Bin
58 for bin in bins:
59 entries.append(
60 Collection(
61 datetime.datetime.strptime(
62 bin["next_planned_date_app"], "%Y-%m-%d"
63 ).date(),
64 bin["material"],
65 )
66 )
67
68 return entries
69
[end of custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py
--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py
+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py
@@ -54,6 +54,16 @@
# Deduplicate the Bins in case the Rating Unit has more than one of the same Bin type
bins = {each["material"]: each for each in bins["bins"]["collections"]}.values()
+ # Get the list of Overrides for any special dates
+ # It will be an array of these: { ID: 32, Title: "New Year Friday 2024", OriginalDate: "2024-01-05", NewDate: "2024-01-06", Expired: 0 }
+ overrides = requests.get("https://ccc.govt.nz/api/kerbsidedateoverrides").json()
+
+ # Process each Override
+ for bin in bins:
+ for override in overrides:
+ if override["OriginalDate"] == bin["next_planned_date_app"]:
+ bin["next_planned_date_app"] = override["NewDate"]
+
# Process each Bin
for bin in bins:
entries.append(
| {"golden_diff": "diff --git a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py\n--- a/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py\n+++ b/custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py\n@@ -54,6 +54,16 @@\n # Deduplicate the Bins in case the Rating Unit has more than one of the same Bin type\n bins = {each[\"material\"]: each for each in bins[\"bins\"][\"collections\"]}.values()\n \n+ # Get the list of Overrides for any special dates\n+ # It will be an array of these: { ID: 32, Title: \"New Year Friday 2024\", OriginalDate: \"2024-01-05\", NewDate: \"2024-01-06\", Expired: 0 }\n+ overrides = requests.get(\"https://ccc.govt.nz/api/kerbsidedateoverrides\").json()\n+\n+ # Process each Override\n+ for bin in bins:\n+ for override in overrides:\n+ if override[\"OriginalDate\"] == bin[\"next_planned_date_app\"]:\n+ bin[\"next_planned_date_app\"] = override[\"NewDate\"]\n+\n # Process each Bin\n for bin in bins:\n entries.append(\n", "issue": "[Feature]: Add support for Christchurch (NZ) special date overrides\n### I propose a feature for:\n\nSources\n\n### Describe your wanted feature\n\nThe API for collection dates for Christchurch City Council does not automatically apply any special date overrides (for example, when your collection day falls near a public holiday and is moved).\r\nA separate URL provides a list of these potential overrides, it needs to be called separately and the results merged.\r\n\r\nThe URL is [https://ccc.govt.nz/api/kerbsidedateoverrides](https://ccc.govt.nz/api/kerbsidedateoverrides)\r\nIt responds to HTTP GET with no authentication requirements and will return an array of overrides dates in this format:\r\n```\r\n{\r\n ID: 32,\r\n Title: \"New Year Friday 2024\",\r\n OriginalDate: \"2024-01-05\",\r\n NewDate: \"2024-01-06\",\r\n Expired: 0\r\n}\r\n```\r\nIf your collection date falls on `OriginalDate` it needs to be moved to `NewDate`.\n", "before_files": [{"content": "import datetime\n\nimport requests\nfrom waste_collection_schedule import Collection\n\n# Include work around for SSL UNSAFE_LEGACY_RENEGOTIATION_DISABLED error\nfrom waste_collection_schedule.service.SSLError import get_legacy_session\n\n\nTITLE = \"Christchurch City Council\"\nDESCRIPTION = \"Source for Christchurch City Council.\"\nURL = \"https://ccc.govt.nz\"\nTEST_CASES = {\"53 Hereford Street\": {\"address\": \"53 Hereford Street\"}}\n\n\nclass Source:\n def __init__(self, address):\n self._address = address\n\n def fetch(self):\n\n s = get_legacy_session()\n\n entries = []\n\n # Find the Rating Unit ID by the physical address\n # While a property may have more than one address, bins are allocated by each Rating Unit\n addressQuery = {\n \"q\": self._address,\n \"status\": \"current\",\n \"crs\": \"epsg:4326\",\n \"limit\": 1,\n }\n\n r = s.get(\"https://opendata.ccc.govt.nz/CCCSearch/rest/address/suggest\",\n params=addressQuery,\n # verify=False,\n )\n address = r.json()\n\n # Find the Bin service by Rating Unit ID\n binsHeaders = {\n \"client_id\": \"69f433c880c74c349b0128e9fa1b6a93\",\n \"client_secret\": \"139F3D2A83E34AdF98c80566f2eb7212\"\n }\n\n # Updated request using SSL code snippet\n r = s.get(\"https://ccc-data-citizen-api-v1-prod.au-s1.cloudhub.io/api/v1/properties/\" + str(address[0][\"RatingUnitID\"]),\n headers=binsHeaders\n # verify=False,\n )\n bins = 
r.json()\n \n # Deduplicate the Bins in case the Rating Unit has more than one of the same Bin type\n bins = {each[\"material\"]: each for each in bins[\"bins\"][\"collections\"]}.values()\n\n # Process each Bin\n for bin in bins:\n entries.append(\n Collection(\n datetime.datetime.strptime(\n bin[\"next_planned_date_app\"], \"%Y-%m-%d\"\n ).date(),\n bin[\"material\"],\n )\n )\n\n return entries\n", "path": "custom_components/waste_collection_schedule/waste_collection_schedule/source/ccc_govt_nz.py"}]} | 1,440 | 320 |
gh_patches_debug_1623 | rasdani/github-patches | git_diff | googleapis__google-auth-library-python-671 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use extra for asyncio dependencies
Hello! The latest release for this library pulls in aiohttp and its dependencies unconditionally, which adds non-trivial burden to projects that don’t need it. Would you consider using a packaging extra so that people can opt-in?
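A minimal sketch of the opt-in packaging, assuming a setuptools-based build (the extra name and version pin are illustrative):

```python
from setuptools import setup

setup(
    name="google-auth",
    # ...core metadata and unconditional dependencies unchanged...
    extras_require={
        # opt-in async transport: pip install google-auth[aiohttp]
        "aiohttp": ["aiohttp >= 3.6.2, < 4.0.0"],
    },
)
```

Projects that want the async transport then install `google-auth[aiohttp]`; everyone else skips aiohttp and its dependency tree.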
TODO: undo pin of 'aiohttp' once 'aioresponses' releases a fix
#### Environment details
- OS: $ sw_vers
ProductName: Mac OS X
ProductVersion: 10.14.6
BuildVersion: 18G6020
- Python version: 3.6, 3.7, 3.8
- pip version: pip 20.2.4
- `google-auth` version: 5906c8583ca351b5385a079a30521a9a8a0c7c59
#### Steps to reproduce
1. nox -s unit
There are 9 tests that fail, all with the same error:
`TypeError: __init__() missing 1 required positional argument: 'limit'`
```
====================================================== short test summary info =======================================================
FAILED tests_async/transport/test_aiohttp_requests.py::TestCombinedResponse::test_content_compressed - TypeError: __init__() missin...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_headers_prop - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_status_prop - TypeError: __init__() missing 1 required po...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request - TypeError: __init__() missing 1 requir...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_ctx - TypeError: __init__() missing 1 required p...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_http_headers - TypeError: __init__() missing 1 r...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_regexp_example - TypeError: __init__() missing 1...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_no_refresh - TypeError: __init__() missi...
FAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_refresh - TypeError: __init__() missing ...
============================================ 9 failed, 609 passed, 12 warnings in 33.41s =============================================
```
Here is the traceback for one of the failing tests:
```
____________________________________________ TestCombinedResponse.test_content_compressed ____________________________________________
self = <tests_async.transport.test_aiohttp_requests.TestCombinedResponse object at 0x108803160>
urllib3_mock = <function decompress at 0x10880a820>
@mock.patch(
"google.auth.transport._aiohttp_requests.urllib3.response.MultiDecoder.decompress",
return_value="decompressed",
autospec=True,
)
@pytest.mark.asyncio
async def test_content_compressed(self, urllib3_mock):
rm = core.RequestMatch(
"url", headers={"Content-Encoding": "gzip"}, payload="compressed"
)
> response = await rm.build_response(core.URL("url"))
tests_async/transport/test_aiohttp_requests.py:72:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:192: in build_response
resp = self._build_response(
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:173: in _build_response
resp.content = stream_reader_factory(loop)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
loop = <Mock id='4437587472'>
def stream_reader_factory( # noqa
loop: 'Optional[asyncio.AbstractEventLoop]' = None
):
protocol = ResponseHandler(loop=loop)
> return StreamReader(protocol, loop=loop)
E TypeError: __init__() missing 1 required positional argument: 'limit'
../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/compat.py:48: TypeError
========================================================== warnings summary ==========================================================
```
The root cause is a change in aiohttp version 3.7.0 which was released a few hours ago. The signature for StreamReader has changed, making the optional argument `limit` a required argument.
https://github.com/aio-libs/aiohttp/blob/56e78836aa7c67292ace9e256711699d51d57285/aiohttp/streams.py#L106
This change breaks aioresponses:
https://github.com/pnuckowski/aioresponses/blob/e61977f42a0164e0c572031dfb18ae95ba198df0/aioresponses/compat.py#L44
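Until aioresponses ships a compatible release, one stopgap (matching the TODO above; the session layout below is a sketch, not the project's actual noxfile) is to pin aiohttp below 3.7 for the async unit tests:

```python
import nox

@nox.session(python=["3.6", "3.7", "3.8"])
def unit(session):
    # TODO: unpin once aioresponses handles StreamReader's now-required `limit` argument.
    session.install("pytest", "pytest-asyncio", "aioresponses", "aiohttp >= 3.6.2, < 3.7.0")
    session.run("pytest", "tests_async/")
```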
Add support for Python 3.9
</issue>
<code>
[start of synth.py]
1 import synthtool as s
2 from synthtool import gcp
3
4 common = gcp.CommonTemplates()
5
6 # ----------------------------------------------------------------------------
7 # Add templated files
8 # ----------------------------------------------------------------------------
9 templated_files = common.py_library(unit_cov_level=100, cov_level=100)
10 s.move(
11 templated_files / ".kokoro",
12 excludes=[
13 ".kokoro/continuous/common.cfg",
14 ".kokoro/presubmit/common.cfg",
15 ".kokoro/build.sh",
16 ],
17 ) # just move kokoro configs
18
[end of synth.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/synth.py b/synth.py
--- a/synth.py
+++ b/synth.py
@@ -10,8 +10,8 @@
s.move(
templated_files / ".kokoro",
excludes=[
- ".kokoro/continuous/common.cfg",
- ".kokoro/presubmit/common.cfg",
- ".kokoro/build.sh",
+ "continuous/common.cfg",
+ "presubmit/common.cfg",
+ "build.sh",
],
) # just move kokoro configs
| {"golden_diff": "diff --git a/synth.py b/synth.py\n--- a/synth.py\n+++ b/synth.py\n@@ -10,8 +10,8 @@\n s.move(\n templated_files / \".kokoro\",\n excludes=[\n- \".kokoro/continuous/common.cfg\",\n- \".kokoro/presubmit/common.cfg\",\n- \".kokoro/build.sh\",\n+ \"continuous/common.cfg\",\n+ \"presubmit/common.cfg\",\n+ \"build.sh\",\n ],\n ) # just move kokoro configs\n", "issue": "Use extra for asyncio dependencies\nHello! The latest release for this library pulls in aiohttp and its dependencies unconditionally, which adds non-trivial burden to projects that don\u2019t need it. Would you consider using a packaging extra so that people can opt-in?\nTODO: undo pin of 'aiohttp' once 'aioresponses' releases a fix\nEnvironment details\r\n\r\n - OS: $ sw_vers\r\nProductName: Mac OS X\r\nProductVersion: 10.14.6\r\nBuildVersion: 18G6020\r\n\r\n - Python version: 3.6, 3.7, 3.8\r\n - pip version: pip 20.2.4\r\n - `google-auth` version: 5906c8583ca351b5385a079a30521a9a8a0c7c59\r\n\r\n#### Steps to reproduce\r\n\r\n 1. nox -s unit\r\n\r\n\r\nThere are 9 tests that fail, all with the same error:\r\n\r\n`TypeError: __init__() missing 1 required positional argument: 'limit'`\r\n\r\n\r\n```\r\n====================================================== short test summary info =======================================================\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestCombinedResponse::test_content_compressed - TypeError: __init__() missin...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_headers_prop - TypeError: __init__() missing 1 required p...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestResponse::test_status_prop - TypeError: __init__() missing 1 required po...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request - TypeError: __init__() missing 1 requir...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_ctx - TypeError: __init__() missing 1 required p...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_http_headers - TypeError: __init__() missing 1 r...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_regexp_example - TypeError: __init__() missing 1...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_no_refresh - TypeError: __init__() missi...\r\nFAILED tests_async/transport/test_aiohttp_requests.py::TestAuthorizedSession::test_request_refresh - TypeError: __init__() missing ...\r\n============================================ 9 failed, 609 passed, 12 warnings in 33.41s =============================================\r\n```\r\n\r\nHere is the traceback for one of the failing tests:\r\n\r\n\r\n```\r\n____________________________________________ TestCombinedResponse.test_content_compressed ____________________________________________\r\n\r\nself = <tests_async.transport.test_aiohttp_requests.TestCombinedResponse object at 0x108803160>\r\nurllib3_mock = <function decompress at 0x10880a820>\r\n\r\n @mock.patch(\r\n \"google.auth.transport._aiohttp_requests.urllib3.response.MultiDecoder.decompress\",\r\n return_value=\"decompressed\",\r\n autospec=True,\r\n )\r\n @pytest.mark.asyncio\r\n async def test_content_compressed(self, urllib3_mock):\r\n rm = core.RequestMatch(\r\n \"url\", headers={\"Content-Encoding\": \"gzip\"}, payload=\"compressed\"\r\n )\r\n> response = await 
rm.build_response(core.URL(\"url\"))\r\n\r\ntests_async/transport/test_aiohttp_requests.py:72: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:192: in build_response\r\n resp = self._build_response(\r\n../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/core.py:173: in _build_response\r\n resp.content = stream_reader_factory(loop)\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ \r\n\r\nloop = <Mock id='4437587472'>\r\n\r\n def stream_reader_factory( # noqa\r\n loop: 'Optional[asyncio.AbstractEventLoop]' = None\r\n ):\r\n protocol = ResponseHandler(loop=loop)\r\n> return StreamReader(protocol, loop=loop)\r\nE TypeError: __init__() missing 1 required positional argument: 'limit'\r\n\r\n../../../.virtualenv/google-auth-library-python/lib/python3.8/site-packages/aioresponses/compat.py:48: TypeError\r\n========================================================== warnings summary ==========================================================\r\n```\r\n\r\nThe root cause is a change in aiohttp version 3.7.0 which was released a few hours ago. The signature for StreamReader has changed, making the optional argument `limit` a required argument.\r\n\r\nhttps://github.com/aio-libs/aiohttp/blob/56e78836aa7c67292ace9e256711699d51d57285/aiohttp/streams.py#L106\r\n\r\nThis change breaks aioresponses:\r\n\r\nhttps://github.com/pnuckowski/aioresponses/blob/e61977f42a0164e0c572031dfb18ae95ba198df0/aioresponses/compat.py#L44\r\n\r\n\nAdd support for Python 3.9\n\n", "before_files": [{"content": "import synthtool as s\nfrom synthtool import gcp\n\ncommon = gcp.CommonTemplates()\n\n# ----------------------------------------------------------------------------\n# Add templated files\n# ----------------------------------------------------------------------------\ntemplated_files = common.py_library(unit_cov_level=100, cov_level=100)\ns.move(\n templated_files / \".kokoro\",\n excludes=[\n \".kokoro/continuous/common.cfg\",\n \".kokoro/presubmit/common.cfg\",\n \".kokoro/build.sh\",\n ],\n) # just move kokoro configs\n", "path": "synth.py"}]} | 1,918 | 115 |
gh_patches_debug_582 | rasdani/github-patches | git_diff | pex-tool__pex-777 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 1.6.12
On the docket:
+ [x] PythonInterpreter: support python binary names with single letter suffixes #769
+ [x] Pex should support some form of verifiably reproducible resolve. #772
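On the first item: CPython builds have shipped interpreter binaries with single-letter ABI suffixes (e.g. `python3.6m`), so name matching has to tolerate a trailing letter — an illustrative pattern, not Pex's actual matcher:

```python
import re

# Optional major[.minor] version, optionally followed by one ABI letter (m, d, u, ...).
PYTHON_BINARY = re.compile(r"^python(\d+(\.\d+)?)?[a-z]?$")

assert PYTHON_BINARY.match("python3.6m")
assert PYTHON_BINARY.match("python2.7")
```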
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = '1.6.11'
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = '1.6.11'
+__version__ = '1.6.12'
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = '1.6.11'\n+__version__ = '1.6.12'\n", "issue": "Release 1.6.12\nOn the docket:\r\n+ [x] PythonInterpreter: support python binary names with single letter suffixes #769\r\n+ [x] Pex should support some form of verifiably reproducible resolve. #772\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = '1.6.11'\n", "path": "pex/version.py"}]} | 641 | 97 |
gh_patches_debug_23322 | rasdani/github-patches | git_diff | fossasia__open-event-server-9034 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
allow multiple stations for the same location
<!--
(Thanks for sending a pull request! Please make sure you click the link above to view the contribution guidelines, then fill out the blanks below.)
-->
<!-- Add the issue number that is fixed by this PR (In the form Fixes #123) -->
Fixes #8958
#### Short description of what this resolves:
- fix the issue so that multiple stations can share the same location
#### Changes proposed in this pull request:
- allow multiple stations for the same location
#### Checklist
- [x] I have read the [Contribution & Best practices Guide](https://blog.fossasia.org/open-source-developer-guide-and-best-practices-at-fossasia) and my PR follows them.
- [x] My branch is up-to-date with the Upstream `development` branch.
- [ ] The unit tests pass locally with my changes <!-- use `nosetests tests/` to run all the tests -->
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] I have added necessary documentation (if appropriate)
<!-- If an existing function does not have a docstring, please add one -->
- [ ] All the functions created/modified in this PR contain relevant docstrings.
</issue>
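The behavioral change behind this fix is small enough to restate as a standalone rule: only physical check-in and check-out stations must carry a microlocation, so several "daily" stations can share one location. The sketch below is a hypothetical restatement, not the project's actual code — `validate_station` and `REQUIRES_MICROLOCATION` are illustrative names standing in for the `before_patch`/`before_post` hooks.

```python
REQUIRES_MICROLOCATION = ("check in", "check out")


def validate_station(data: dict) -> None:
    # 'daily' stations may share a location, so only physical
    # check-in/check-out stations must carry a microlocation.
    if data.get("microlocation"):
        return
    if data["station_type"] in REQUIRES_MICROLOCATION:
        raise ValueError("microlocation is required for this station type")


validate_station({"station_type": "daily"})                          # passes after the fix
validate_station({"station_type": "check in", "microlocation": 7})   # passes
```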
<code>
[start of app/api/station.py]
1 from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship
2 from flask_rest_jsonapi.exceptions import ObjectNotFound
3
4 from app.api.helpers.db import safe_query_kwargs
5 from app.api.helpers.permission_manager import has_access
6 from app.api.helpers.permissions import jwt_required
7 from app.api.helpers.utilities import require_relationship
8 from app.api.schema.station import StationSchema
9 from app.models import db
10 from app.models.event import Event
11 from app.models.microlocation import Microlocation
12 from app.models.station import Station
13
14
15 class StationList(ResourceList):
16 """Create and List Station"""
17
18 def query(self, view_kwargs):
19 """
20 query method for different view_kwargs
21 :param view_kwargs:
22 :return:
23 """
24 query_ = self.session.query(Station)
25 if view_kwargs.get('event_id'):
26 event = safe_query_kwargs(Event, view_kwargs, 'event_id')
27 query_ = query_.filter_by(event_id=event.id)
28
29 elif view_kwargs.get('microlocation_id'):
30 event = safe_query_kwargs(Microlocation, view_kwargs, 'microlocation_id')
31 query_ = query_.filter_by(microlocation_id=event.id)
32
33 return query_
34
35 view_kwargs = True
36 schema = StationSchema
37 data_layer = {
38 'session': db.session,
39 'model': Station,
40 'methods': {'query': query},
41 }
42
43
44 class StationDetail(ResourceDetail):
45 """Station detail by id"""
46
47 @staticmethod
48 def before_patch(args, kwargs, data):
49 """
50 before patch method
51 :param args:
52 :param kwargs:
53 :param data:
54 :return:
55 """
56 require_relationship(['event'], data)
57 if not has_access('is_coorganizer', event_id=data['event']):
58 raise ObjectNotFound(
59 {'parameter': 'event'},
60 f"Event: {data['event']} not found {args} {kwargs}",
61 )
62
63 if data.get('microlocation'):
64 require_relationship(['microlocation'], data)
65 else:
66 if data['station_type'] in ('check in', 'check out', 'daily'):
67 raise ObjectNotFound(
68 {'parameter': 'microlocation'},
69 "Microlocation: microlocation_id is missing from your request.",
70 )
71
72 schema = StationSchema
73 data_layer = {
74 'session': db.session,
75 'model': Station,
76 }
77
78
79 class StationRelationship(ResourceRelationship):
80 """Station Relationship (Required)"""
81
82 decorators = (jwt_required,)
83 methods = ['GET', 'PATCH']
84 schema = StationSchema
85 data_layer = {'session': db.session, 'model': Station}
86
87
88 class StationListPost(ResourceList):
89 """Create and List Station"""
90
91 @staticmethod
92 def before_post(args, kwargs, data):
93 """
94 method to check for required relationship with event and microlocation
95 :param data:
96 :param args:
97 :param kwargs:
98 :return:
99 """
100 require_relationship(['event'], data)
101 if not has_access('is_coorganizer', event_id=data['event']):
102 raise ObjectNotFound(
103 {'parameter': 'event'},
104 f"Event: {data['event']} not found {args} {kwargs}",
105 )
106
107 if data.get('microlocation'):
108 require_relationship(['microlocation'], data)
109 else:
110 if data['station_type'] in ('check in', 'check out', 'daily'):
111 raise ObjectNotFound(
112 {'parameter': 'microlocation'},
113 "Microlocation: missing from your request.",
114 )
115
116 schema = StationSchema
117 methods = [
118 'POST',
119 ]
120 data_layer = {
121 'session': db.session,
122 'model': Station,
123 }
124
[end of app/api/station.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/app/api/station.py b/app/api/station.py
--- a/app/api/station.py
+++ b/app/api/station.py
@@ -63,7 +63,7 @@
if data.get('microlocation'):
require_relationship(['microlocation'], data)
else:
- if data['station_type'] in ('check in', 'check out', 'daily'):
+ if data['station_type'] in ('check in', 'check out'):
raise ObjectNotFound(
{'parameter': 'microlocation'},
"Microlocation: microlocation_id is missing from your request.",
@@ -107,7 +107,7 @@
if data.get('microlocation'):
require_relationship(['microlocation'], data)
else:
- if data['station_type'] in ('check in', 'check out', 'daily'):
+ if data['station_type'] in ('check in', 'check out'):
raise ObjectNotFound(
{'parameter': 'microlocation'},
"Microlocation: missing from your request.",
| {"golden_diff": "diff --git a/app/api/station.py b/app/api/station.py\n--- a/app/api/station.py\n+++ b/app/api/station.py\n@@ -63,7 +63,7 @@\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n- if data['station_type'] in ('check in', 'check out', 'daily'):\n+ if data['station_type'] in ('check in', 'check out'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: microlocation_id is missing from your request.\",\n@@ -107,7 +107,7 @@\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n- if data['station_type'] in ('check in', 'check out', 'daily'):\n+ if data['station_type'] in ('check in', 'check out'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: missing from your request.\",\n", "issue": "allow mutiple station for same location\n<!--\r\n(Thanks for sending a pull request! Please make sure you click the link above to view the contribution guidelines, then fill out the blanks below.)\r\n-->\r\n<!-- Add the issue number that is fixed by this PR (In the form Fixes #123) -->\r\n\r\nFixes #8958 \r\n\r\n#### Short description of what this resolves:\r\n- fix issue to allow mutiple station for same location\r\n\r\n#### Changes proposed in this pull request:\r\n\r\n- allow mutiple station for same location\r\n\r\n#### Checklist\r\n\r\n- [x] I have read the [Contribution & Best practices Guide](https://blog.fossasia.org/open-source-developer-guide-and-best-practices-at-fossasia) and my PR follows them.\r\n- [x] My branch is up-to-date with the Upstream `development` branch.\r\n- [ ] The unit tests pass locally with my changes <!-- use `nosetests tests/` to run all the tests -->\r\n- [ ] I have added tests that prove my fix is effective or that my feature works\r\n- [ ] I have added necessary documentation (if appropriate)\r\n<!-- If an existing function does not have a docstring, please add one -->\r\n- [ ] All the functions created/modified in this PR contain relevant docstrings.\r\n\n", "before_files": [{"content": "from flask_rest_jsonapi import ResourceDetail, ResourceList, ResourceRelationship\nfrom flask_rest_jsonapi.exceptions import ObjectNotFound\n\nfrom app.api.helpers.db import safe_query_kwargs\nfrom app.api.helpers.permission_manager import has_access\nfrom app.api.helpers.permissions import jwt_required\nfrom app.api.helpers.utilities import require_relationship\nfrom app.api.schema.station import StationSchema\nfrom app.models import db\nfrom app.models.event import Event\nfrom app.models.microlocation import Microlocation\nfrom app.models.station import Station\n\n\nclass StationList(ResourceList):\n \"\"\"Create and List Station\"\"\"\n\n def query(self, view_kwargs):\n \"\"\"\n query method for different view_kwargs\n :param view_kwargs:\n :return:\n \"\"\"\n query_ = self.session.query(Station)\n if view_kwargs.get('event_id'):\n event = safe_query_kwargs(Event, view_kwargs, 'event_id')\n query_ = query_.filter_by(event_id=event.id)\n\n elif view_kwargs.get('microlocation_id'):\n event = safe_query_kwargs(Microlocation, view_kwargs, 'microlocation_id')\n query_ = query_.filter_by(microlocation_id=event.id)\n\n return query_\n\n view_kwargs = True\n schema = StationSchema\n data_layer = {\n 'session': db.session,\n 'model': Station,\n 'methods': {'query': query},\n }\n\n\nclass StationDetail(ResourceDetail):\n \"\"\"Station detail by id\"\"\"\n\n @staticmethod\n def before_patch(args, kwargs, data):\n \"\"\"\n before patch method\n :param args:\n 
:param kwargs:\n :param data:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound(\n {'parameter': 'event'},\n f\"Event: {data['event']} not found {args} {kwargs}\",\n )\n\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n if data['station_type'] in ('check in', 'check out', 'daily'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: microlocation_id is missing from your request.\",\n )\n\n schema = StationSchema\n data_layer = {\n 'session': db.session,\n 'model': Station,\n }\n\n\nclass StationRelationship(ResourceRelationship):\n \"\"\"Station Relationship (Required)\"\"\"\n\n decorators = (jwt_required,)\n methods = ['GET', 'PATCH']\n schema = StationSchema\n data_layer = {'session': db.session, 'model': Station}\n\n\nclass StationListPost(ResourceList):\n \"\"\"Create and List Station\"\"\"\n\n @staticmethod\n def before_post(args, kwargs, data):\n \"\"\"\n method to check for required relationship with event and microlocation\n :param data:\n :param args:\n :param kwargs:\n :return:\n \"\"\"\n require_relationship(['event'], data)\n if not has_access('is_coorganizer', event_id=data['event']):\n raise ObjectNotFound(\n {'parameter': 'event'},\n f\"Event: {data['event']} not found {args} {kwargs}\",\n )\n\n if data.get('microlocation'):\n require_relationship(['microlocation'], data)\n else:\n if data['station_type'] in ('check in', 'check out', 'daily'):\n raise ObjectNotFound(\n {'parameter': 'microlocation'},\n \"Microlocation: missing from your request.\",\n )\n\n schema = StationSchema\n methods = [\n 'POST',\n ]\n data_layer = {\n 'session': db.session,\n 'model': Station,\n }\n", "path": "app/api/station.py"}]} | 1,842 | 223 |
gh_patches_debug_41468 | rasdani/github-patches | git_diff | strawberry-graphql__strawberry-323 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use sentinel value for input parameters that aren't sent by the clients
When using input types with optional fields we cannot differentiate between fields that have been sent as null and fields that haven't been sent at all.
So I think we should use a sentinel value that marks the field as unset, and also behaves as falsy:
```python
class _Unset:
def __bool__(self): return False
UNSET = _Unset()
# this utility might be useful, so we don't have to use an internal representation
def is_unset(value: Any):
return value is UNSET
```
then we can use this class when instantiating the input types for a resolver:)
</issue>
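A runnable version of the sketch above is shown below. The class and helper names follow the issue's own proposal; the `update_user` resolver is a made-up stand-in for any mutation taking optional inputs, added here only to demonstrate the three distinguishable cases.

```python
from typing import Any


class _Unset:
    """Falsy singleton marking 'this argument was never sent by the client'."""

    def __bool__(self) -> bool:
        return False

    def __repr__(self) -> str:
        return "UNSET"


UNSET: Any = _Unset()


def is_unset(value: Any) -> bool:
    return value is UNSET


def update_user(name: Any = UNSET) -> str:
    # Three distinguishable cases instead of two:
    if is_unset(name):
        return "name untouched"  # field absent from the request
    if name is None:
        return "name cleared"    # client explicitly sent null
    return f"name set to {name}"


assert update_user() == "name untouched"
assert update_user(name=None) == "name cleared"
assert update_user(name="Ada") == "name set to Ada"
```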
<code>
[start of strawberry/utils/arguments.py]
1 import enum
2 from dataclasses import is_dataclass
3 from datetime import date, datetime, time
4
5 from ..exceptions import UnsupportedTypeError
6 from .str_converters import to_camel_case, to_snake_case
7 from .typing import get_list_annotation, get_optional_annotation, is_list, is_optional
8
9
10 SCALAR_TYPES = [int, str, float, bytes, bool, datetime, date, time]
11
12
13 def _to_type(value, annotation):
14 if value is None:
15 return None
16
17 if is_optional(annotation):
18 annotation = get_optional_annotation(annotation)
19
20 # TODO: change this to be a is_scalar util and make sure it works with any scalar
21 if getattr(annotation, "__supertype__", annotation) in SCALAR_TYPES:
22 return value
23
24 # Convert Enum fields to instances using the value. This is safe
25 # because graphql-core has already validated the input.
26 if isinstance(annotation, enum.EnumMeta):
27 return annotation(value)
28
29 if is_list(annotation):
30 annotation = get_list_annotation(annotation)
31
32 return [_to_type(x, annotation) for x in value]
33
34 if is_dataclass(annotation):
35 fields = annotation.__dataclass_fields__
36
37 kwargs = {}
38
39 for name, field in fields.items():
40 dict_name = name
41
42 if hasattr(field, "field_name") and field.field_name:
43 dict_name = field.field_name
44 else:
45 dict_name = to_camel_case(name)
46
47 kwargs[name] = _to_type(value.get(dict_name), field.type)
48
49 return annotation(**kwargs)
50
51 raise UnsupportedTypeError(annotation)
52
53
54 def convert_args(args, annotations):
55 """Converts a nested dictionary to a dictionary of strawberry input types."""
56
57 converted_args = {}
58
59 for key, value in args.items():
60 key = to_snake_case(key)
61 annotation = annotations[key]
62
63 converted_args[key] = _to_type(value, annotation)
64
65 return converted_args
66
[end of strawberry/utils/arguments.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/strawberry/utils/arguments.py b/strawberry/utils/arguments.py
--- a/strawberry/utils/arguments.py
+++ b/strawberry/utils/arguments.py
@@ -1,19 +1,49 @@
import enum
+import typing
from dataclasses import is_dataclass
from datetime import date, datetime, time
from ..exceptions import UnsupportedTypeError
-from .str_converters import to_camel_case, to_snake_case
+from .str_converters import to_camel_case
from .typing import get_list_annotation, get_optional_annotation, is_list, is_optional
SCALAR_TYPES = [int, str, float, bytes, bool, datetime, date, time]
-def _to_type(value, annotation):
+class _Unset:
+ def __str__(self):
+ return ""
+
+ def __bool__(self):
+ return False
+
+
+UNSET = _Unset()
+
+
+def is_unset(value: typing.Any) -> bool:
+ return value is UNSET
+
+
+def convert_args(
+ value: typing.Union[typing.Dict[str, typing.Any], typing.Any],
+ annotation: typing.Union[typing.Dict[str, typing.Type], typing.Type],
+):
+ """Converts a nested dictionary to a dictionary of actual types.
+
+ It deals with conversion of input types to proper dataclasses and
+ also uses a sentinel value for unset values."""
+
+ if annotation == {}:
+ return value
+
if value is None:
return None
+ if is_unset(value):
+ return value
+
if is_optional(annotation):
annotation = get_optional_annotation(annotation)
@@ -24,19 +54,27 @@
# Convert Enum fields to instances using the value. This is safe
# because graphql-core has already validated the input.
if isinstance(annotation, enum.EnumMeta):
- return annotation(value)
+ return annotation(value) # type: ignore
if is_list(annotation):
annotation = get_list_annotation(annotation)
- return [_to_type(x, annotation) for x in value]
+ return [convert_args(x, annotation) for x in value]
+
+ fields = None
- if is_dataclass(annotation):
- fields = annotation.__dataclass_fields__
+ # we receive dicts when converting resolvers arguments to
+ # actual types
+ if isinstance(annotation, dict):
+ fields = annotation.items()
+ elif is_dataclass(annotation):
+ fields = annotation.__dataclass_fields__.items()
+
+ if fields:
kwargs = {}
- for name, field in fields.items():
+ for name, field in fields:
dict_name = name
if hasattr(field, "field_name") and field.field_name:
@@ -44,22 +82,19 @@
else:
dict_name = to_camel_case(name)
- kwargs[name] = _to_type(value.get(dict_name), field.type)
-
- return annotation(**kwargs)
-
- raise UnsupportedTypeError(annotation)
-
-
-def convert_args(args, annotations):
- """Converts a nested dictionary to a dictionary of strawberry input types."""
+ # dataclasses field have a .type attribute
+ if hasattr(field, "type"):
+ field_type = field.type
+ # meanwhile when using dicts the value of the field is
+ # the actual type, for example in: { 'name': str }
+ else:
+ field_type = field
- converted_args = {}
+ kwargs[name] = convert_args(value.get(dict_name, UNSET), field_type)
- for key, value in args.items():
- key = to_snake_case(key)
- annotation = annotations[key]
+ if is_dataclass(annotation):
+ return annotation(**kwargs) # type: ignore
- converted_args[key] = _to_type(value, annotation)
+ return kwargs
- return converted_args
+ raise UnsupportedTypeError(annotation)
| {"golden_diff": "diff --git a/strawberry/utils/arguments.py b/strawberry/utils/arguments.py\n--- a/strawberry/utils/arguments.py\n+++ b/strawberry/utils/arguments.py\n@@ -1,19 +1,49 @@\n import enum\n+import typing\n from dataclasses import is_dataclass\n from datetime import date, datetime, time\n \n from ..exceptions import UnsupportedTypeError\n-from .str_converters import to_camel_case, to_snake_case\n+from .str_converters import to_camel_case\n from .typing import get_list_annotation, get_optional_annotation, is_list, is_optional\n \n \n SCALAR_TYPES = [int, str, float, bytes, bool, datetime, date, time]\n \n \n-def _to_type(value, annotation):\n+class _Unset:\n+ def __str__(self):\n+ return \"\"\n+\n+ def __bool__(self):\n+ return False\n+\n+\n+UNSET = _Unset()\n+\n+\n+def is_unset(value: typing.Any) -> bool:\n+ return value is UNSET\n+\n+\n+def convert_args(\n+ value: typing.Union[typing.Dict[str, typing.Any], typing.Any],\n+ annotation: typing.Union[typing.Dict[str, typing.Type], typing.Type],\n+):\n+ \"\"\"Converts a nested dictionary to a dictionary of actual types.\n+\n+ It deals with conversion of input types to proper dataclasses and\n+ also uses a sentinel value for unset values.\"\"\"\n+\n+ if annotation == {}:\n+ return value\n+\n if value is None:\n return None\n \n+ if is_unset(value):\n+ return value\n+\n if is_optional(annotation):\n annotation = get_optional_annotation(annotation)\n \n@@ -24,19 +54,27 @@\n # Convert Enum fields to instances using the value. This is safe\n # because graphql-core has already validated the input.\n if isinstance(annotation, enum.EnumMeta):\n- return annotation(value)\n+ return annotation(value) # type: ignore\n \n if is_list(annotation):\n annotation = get_list_annotation(annotation)\n \n- return [_to_type(x, annotation) for x in value]\n+ return [convert_args(x, annotation) for x in value]\n+\n+ fields = None\n \n- if is_dataclass(annotation):\n- fields = annotation.__dataclass_fields__\n+ # we receive dicts when converting resolvers arguments to\n+ # actual types\n+ if isinstance(annotation, dict):\n+ fields = annotation.items()\n \n+ elif is_dataclass(annotation):\n+ fields = annotation.__dataclass_fields__.items()\n+\n+ if fields:\n kwargs = {}\n \n- for name, field in fields.items():\n+ for name, field in fields:\n dict_name = name\n \n if hasattr(field, \"field_name\") and field.field_name:\n@@ -44,22 +82,19 @@\n else:\n dict_name = to_camel_case(name)\n \n- kwargs[name] = _to_type(value.get(dict_name), field.type)\n-\n- return annotation(**kwargs)\n-\n- raise UnsupportedTypeError(annotation)\n-\n-\n-def convert_args(args, annotations):\n- \"\"\"Converts a nested dictionary to a dictionary of strawberry input types.\"\"\"\n+ # dataclasses field have a .type attribute\n+ if hasattr(field, \"type\"):\n+ field_type = field.type\n+ # meanwhile when using dicts the value of the field is\n+ # the actual type, for example in: { 'name': str }\n+ else:\n+ field_type = field\n \n- converted_args = {}\n+ kwargs[name] = convert_args(value.get(dict_name, UNSET), field_type)\n \n- for key, value in args.items():\n- key = to_snake_case(key)\n- annotation = annotations[key]\n+ if is_dataclass(annotation):\n+ return annotation(**kwargs) # type: ignore\n \n- converted_args[key] = _to_type(value, annotation)\n+ return kwargs\n \n- return converted_args\n+ raise UnsupportedTypeError(annotation)\n", "issue": "Use sentinel value for input parameters that aren't sent by the clients\nWhen using input types with optional fields we cannot differentiate by 
fields that have sent as null and fields that haven't been sent at all.\r\n\r\nSo I think we should use a sentinel value that tells the field is unset, and also behaves as falsy:\r\n\r\n```python\r\nclass _Unset:\r\n def __bool__(self): return False\r\n\r\nUNSET = _Unset()\r\n\r\n# this utility might be useful, so we don't have to use an internal representation\r\ndef is_unset(value: Any):\r\n return value is UNSET\r\n```\r\n\r\nthen we can use this class when instantiating the input types for a resolver:)\n", "before_files": [{"content": "import enum\nfrom dataclasses import is_dataclass\nfrom datetime import date, datetime, time\n\nfrom ..exceptions import UnsupportedTypeError\nfrom .str_converters import to_camel_case, to_snake_case\nfrom .typing import get_list_annotation, get_optional_annotation, is_list, is_optional\n\n\nSCALAR_TYPES = [int, str, float, bytes, bool, datetime, date, time]\n\n\ndef _to_type(value, annotation):\n if value is None:\n return None\n\n if is_optional(annotation):\n annotation = get_optional_annotation(annotation)\n\n # TODO: change this to be a is_scalar util and make sure it works with any scalar\n if getattr(annotation, \"__supertype__\", annotation) in SCALAR_TYPES:\n return value\n\n # Convert Enum fields to instances using the value. This is safe\n # because graphql-core has already validated the input.\n if isinstance(annotation, enum.EnumMeta):\n return annotation(value)\n\n if is_list(annotation):\n annotation = get_list_annotation(annotation)\n\n return [_to_type(x, annotation) for x in value]\n\n if is_dataclass(annotation):\n fields = annotation.__dataclass_fields__\n\n kwargs = {}\n\n for name, field in fields.items():\n dict_name = name\n\n if hasattr(field, \"field_name\") and field.field_name:\n dict_name = field.field_name\n else:\n dict_name = to_camel_case(name)\n\n kwargs[name] = _to_type(value.get(dict_name), field.type)\n\n return annotation(**kwargs)\n\n raise UnsupportedTypeError(annotation)\n\n\ndef convert_args(args, annotations):\n \"\"\"Converts a nested dictionary to a dictionary of strawberry input types.\"\"\"\n\n converted_args = {}\n\n for key, value in args.items():\n key = to_snake_case(key)\n annotation = annotations[key]\n\n converted_args[key] = _to_type(value, annotation)\n\n return converted_args\n", "path": "strawberry/utils/arguments.py"}]} | 1,213 | 873 |
gh_patches_debug_34000 | rasdani/github-patches | git_diff | AlexsLemonade__refinebio-2280 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Engagement bot thinks every user is a returning user
### Context
https://alexslemonade.slack.com/archives/CRK42AL1Y/p1587988808265500
### Problem or idea
@dvenprasad says 6 of those are new users. There must be a bug in the queries it uses or something.
### Solution or next step
Fix the engagement bot so it reports new users as new users.
</issue>
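The inversion is visible in the golden diff: the queryset of annotations created *before* the reporting window is non-empty (truthy) exactly when the user has prior activity, i.e. is returning, yet the original code names it `is_new_user` and files truthy users under "new users". A minimal sketch of the corrected classification, using plain tuples as hypothetical stand-ins for `DatasetAnnotation` rows rather than the real Django models:

```python
from datetime import datetime

# Toy stand-ins for DatasetAnnotation rows: (email, created_at).
annotations = [
    ("[email protected]", datetime(2020, 4, 1)),   # active before the window
    ("[email protected]", datetime(2020, 4, 25)),  # ...and inside it
    ("[email protected]", datetime(2020, 4, 26)),  # first ever download
]

start_time = datetime(2020, 4, 20)  # beginning of the reporting window


def is_returning_user(email: str) -> bool:
    # Returning == at least one annotation *before* the window opened.
    return any(e == email and ts < start_time for e, ts in annotations)


for email in sorted({e for e, _ in annotations}):
    print(email, "->", "returning" if is_returning_user(email) else "new")
# [email protected] -> new
# [email protected] -> returning
```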
<code>
[start of api/data_refinery_api/management/commands/post_downloads_summary.py]
1 import datetime
2
3 from django.conf import settings
4 from django.core.management.base import BaseCommand
5 from django.utils import timezone
6
7 import requests
8
9 from data_refinery_common.models import DatasetAnnotation
10
11
12 class Command(BaseCommand):
13 help = "Post downloads summary to slack"
14
15 def add_arguments(self, parser):
16 parser.add_argument(
17 "--days",
18 type=int,
19 default=7, # default to a week
20 help=("Number of days in the past for which to build the stats"),
21 )
22 parser.add_argument(
23 "--channel",
24 type=str,
25 default="ccdl-general",
26 help=("Optional parameter to choose the channel where the message will be posted."),
27 )
28
29 def handle(self, *args, **options):
30 days = options["days"]
31 start_time = timezone.now() - datetime.timedelta(days=days)
32
33 annotation_queryset = DatasetAnnotation.objects.filter(
34 created_at__gt=start_time
35 ).prefetch_related("dataset")
36 annotations = [
37 annotation
38 for annotation in annotation_queryset
39 if annotation.data["start"] and should_display_email(annotation.dataset.email_address)
40 ]
41
42 unique_users = list(set(annotation.dataset.email_address for annotation in annotations))
43 unique_ips = list(set(annotation.data["ip"] for annotation in annotations))
44
45 if unique_users:
46 fallback_text = "In the last {0} days, {1} users downloaded datasets from {2} locations.".format(
47 days, len(unique_users), len(unique_ips)
48 )
49 else:
50 fallback_text = "There were no downloads in the last {0} days.".format(days)
51
52 new_users = ""
53 returning_users = ""
54 for email in unique_users:
55 user_annotations = annotation_queryset.filter(dataset__email_address=email)
56 total_downloads = user_annotations.count()
57 unique_locations = list(set(annotation.data["ip"] for annotation in user_annotations))
58 locations = ", ".join(get_ip_location(ip) for ip in unique_locations)
59 is_new_user = DatasetAnnotation.objects.filter(
60 created_at__lt=start_time, dataset__email_address=email
61 )
62 text = "{0} | {1} downloads from {2}\n".format(email, total_downloads, locations)
63 if is_new_user:
64 new_users += text
65 else:
66 returning_users += text
67
68 blocks = [
69 {
70 "type": "section",
71 "text": {"type": "plain_text", "emoji": True, "text": fallback_text},
72 }
73 ]
74 if new_users:
75 blocks.append(
76 {
77 "type": "section",
78 "text": {"type": "mrkdwn", "text": "*New users* \n" + new_users,},
79 }
80 )
81 if returning_users:
82 blocks.append(
83 {
84 "type": "section",
85 "text": {"type": "mrkdwn", "text": "*Returning users* \n" + returning_users,},
86 }
87 )
88
89 # Post to slack
90 requests.post(
91 settings.ENGAGEMENTBOT_WEBHOOK,
92 json={
93 "username": "EngagementBot",
94 "icon_emoji": ":halal:",
95 "channel": "#" + options["channel"],
96 "text": fallback_text,
97 "blocks": blocks,
98 },
99 headers={"Content-Type": "application/json"},
100 timeout=10,
101 )
102
103
104 def should_display_email(email: str) -> bool:
105 """ Returns true if the given email is not associated with the CCDL suers """
106 if not email:
107 return False
108 return not (
109 email.startswith("cansav09")
110 or email.startswith("arielsvn")
111 or email.startswith("jaclyn.n.taroni")
112 or email.startswith("kurt.wheeler")
113 or email.startswith("greenescientist")
114 or email.startswith("miserlou")
115 or email.startswith("d.prasad")
116 or email.endswith("@alexslemonade.org")
117 or email is ("[email protected]")
118 or email is ("[email protected]")
119 )
120
121
122 def get_ip_location(remote_ip):
123 try:
124 data = requests.get("https://ipapi.co/" + remote_ip + "/json/", timeout=10).json()
125 return "{0}, {1}".format(data["city"], data["country_name"])
126 except Exception:
127 return remote_ip
128
[end of api/data_refinery_api/management/commands/post_downloads_summary.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/api/data_refinery_api/management/commands/post_downloads_summary.py b/api/data_refinery_api/management/commands/post_downloads_summary.py
--- a/api/data_refinery_api/management/commands/post_downloads_summary.py
+++ b/api/data_refinery_api/management/commands/post_downloads_summary.py
@@ -42,28 +42,30 @@
unique_users = list(set(annotation.dataset.email_address for annotation in annotations))
unique_ips = list(set(annotation.data["ip"] for annotation in annotations))
- if unique_users:
- fallback_text = "In the last {0} days, {1} users downloaded datasets from {2} locations.".format(
- days, len(unique_users), len(unique_ips)
- )
- else:
- fallback_text = "There were no downloads in the last {0} days.".format(days)
-
new_users = ""
returning_users = ""
+ total_downloads = 0
for email in unique_users:
user_annotations = annotation_queryset.filter(dataset__email_address=email)
- total_downloads = user_annotations.count()
+ downloads = user_annotations.count()
+ total_downloads += downloads
unique_locations = list(set(annotation.data["ip"] for annotation in user_annotations))
locations = ", ".join(get_ip_location(ip) for ip in unique_locations)
- is_new_user = DatasetAnnotation.objects.filter(
+ is_returning_user = DatasetAnnotation.objects.filter(
created_at__lt=start_time, dataset__email_address=email
)
- text = "{0} | {1} downloads from {2}\n".format(email, total_downloads, locations)
- if is_new_user:
- new_users += text
- else:
+ text = "{0} | {1} downloads from {2}\n".format(email, downloads, locations)
+ if is_returning_user:
returning_users += text
+ else:
+ new_users += text
+
+ if total_downloads > 0:
+ fallback_text = "In the last {0} days, {1} users downloaded {2} datasets from {3} locations.".format(
+ days, len(unique_users), total_downloads, len(unique_ips)
+ )
+ else:
+ fallback_text = "There were no downloads in the last {0} days.".format(days)
blocks = [
{
| {"golden_diff": "diff --git a/api/data_refinery_api/management/commands/post_downloads_summary.py b/api/data_refinery_api/management/commands/post_downloads_summary.py\n--- a/api/data_refinery_api/management/commands/post_downloads_summary.py\n+++ b/api/data_refinery_api/management/commands/post_downloads_summary.py\n@@ -42,28 +42,30 @@\n unique_users = list(set(annotation.dataset.email_address for annotation in annotations))\n unique_ips = list(set(annotation.data[\"ip\"] for annotation in annotations))\n \n- if unique_users:\n- fallback_text = \"In the last {0} days, {1} users downloaded datasets from {2} locations.\".format(\n- days, len(unique_users), len(unique_ips)\n- )\n- else:\n- fallback_text = \"There were no downloads in the last {0} days.\".format(days)\n-\n new_users = \"\"\n returning_users = \"\"\n+ total_downloads = 0\n for email in unique_users:\n user_annotations = annotation_queryset.filter(dataset__email_address=email)\n- total_downloads = user_annotations.count()\n+ downloads = user_annotations.count()\n+ total_downloads += downloads\n unique_locations = list(set(annotation.data[\"ip\"] for annotation in user_annotations))\n locations = \", \".join(get_ip_location(ip) for ip in unique_locations)\n- is_new_user = DatasetAnnotation.objects.filter(\n+ is_returning_user = DatasetAnnotation.objects.filter(\n created_at__lt=start_time, dataset__email_address=email\n )\n- text = \"{0} | {1} downloads from {2}\\n\".format(email, total_downloads, locations)\n- if is_new_user:\n- new_users += text\n- else:\n+ text = \"{0} | {1} downloads from {2}\\n\".format(email, downloads, locations)\n+ if is_returning_user:\n returning_users += text\n+ else:\n+ new_users += text\n+\n+ if total_downloads > 0:\n+ fallback_text = \"In the last {0} days, {1} users downloaded {2} datasets from {3} locations.\".format(\n+ days, len(unique_users), total_downloads, len(unique_ips)\n+ )\n+ else:\n+ fallback_text = \"There were no downloads in the last {0} days.\".format(days)\n \n blocks = [\n {\n", "issue": "Engagement bot thinks every user is a returning user\n### Context\r\n\r\nhttps://alexslemonade.slack.com/archives/CRK42AL1Y/p1587988808265500\r\n\r\n### Problem or idea\r\n\r\n@dvenprasad says 6 of those are new users. 
There must be a bug in the queries it uses or something.\r\n\r\n### Solution or next step\r\n\r\nFix the engagement bot so it reports new users as new users.\n", "before_files": [{"content": "import datetime\n\nfrom django.conf import settings\nfrom django.core.management.base import BaseCommand\nfrom django.utils import timezone\n\nimport requests\n\nfrom data_refinery_common.models import DatasetAnnotation\n\n\nclass Command(BaseCommand):\n help = \"Post downloads summary to slack\"\n\n def add_arguments(self, parser):\n parser.add_argument(\n \"--days\",\n type=int,\n default=7, # default to a week\n help=(\"Number of days in the past for which to build the stats\"),\n )\n parser.add_argument(\n \"--channel\",\n type=str,\n default=\"ccdl-general\",\n help=(\"Optional parameter to choose the channel where the message will be posted.\"),\n )\n\n def handle(self, *args, **options):\n days = options[\"days\"]\n start_time = timezone.now() - datetime.timedelta(days=days)\n\n annotation_queryset = DatasetAnnotation.objects.filter(\n created_at__gt=start_time\n ).prefetch_related(\"dataset\")\n annotations = [\n annotation\n for annotation in annotation_queryset\n if annotation.data[\"start\"] and should_display_email(annotation.dataset.email_address)\n ]\n\n unique_users = list(set(annotation.dataset.email_address for annotation in annotations))\n unique_ips = list(set(annotation.data[\"ip\"] for annotation in annotations))\n\n if unique_users:\n fallback_text = \"In the last {0} days, {1} users downloaded datasets from {2} locations.\".format(\n days, len(unique_users), len(unique_ips)\n )\n else:\n fallback_text = \"There were no downloads in the last {0} days.\".format(days)\n\n new_users = \"\"\n returning_users = \"\"\n for email in unique_users:\n user_annotations = annotation_queryset.filter(dataset__email_address=email)\n total_downloads = user_annotations.count()\n unique_locations = list(set(annotation.data[\"ip\"] for annotation in user_annotations))\n locations = \", \".join(get_ip_location(ip) for ip in unique_locations)\n is_new_user = DatasetAnnotation.objects.filter(\n created_at__lt=start_time, dataset__email_address=email\n )\n text = \"{0} | {1} downloads from {2}\\n\".format(email, total_downloads, locations)\n if is_new_user:\n new_users += text\n else:\n returning_users += text\n\n blocks = [\n {\n \"type\": \"section\",\n \"text\": {\"type\": \"plain_text\", \"emoji\": True, \"text\": fallback_text},\n }\n ]\n if new_users:\n blocks.append(\n {\n \"type\": \"section\",\n \"text\": {\"type\": \"mrkdwn\", \"text\": \"*New users* \\n\" + new_users,},\n }\n )\n if returning_users:\n blocks.append(\n {\n \"type\": \"section\",\n \"text\": {\"type\": \"mrkdwn\", \"text\": \"*Returning users* \\n\" + returning_users,},\n }\n )\n\n # Post to slack\n requests.post(\n settings.ENGAGEMENTBOT_WEBHOOK,\n json={\n \"username\": \"EngagementBot\",\n \"icon_emoji\": \":halal:\",\n \"channel\": \"#\" + options[\"channel\"],\n \"text\": fallback_text,\n \"blocks\": blocks,\n },\n headers={\"Content-Type\": \"application/json\"},\n timeout=10,\n )\n\n\ndef should_display_email(email: str) -> bool:\n \"\"\" Returns true if the given email is not associated with the CCDL suers \"\"\"\n if not email:\n return False\n return not (\n email.startswith(\"cansav09\")\n or email.startswith(\"arielsvn\")\n or email.startswith(\"jaclyn.n.taroni\")\n or email.startswith(\"kurt.wheeler\")\n or email.startswith(\"greenescientist\")\n or email.startswith(\"miserlou\")\n or 
email.startswith(\"d.prasad\")\n or email.endswith(\"@alexslemonade.org\")\n or email is (\"[email protected]\")\n or email is (\"[email protected]\")\n )\n\n\ndef get_ip_location(remote_ip):\n try:\n data = requests.get(\"https://ipapi.co/\" + remote_ip + \"/json/\", timeout=10).json()\n return \"{0}, {1}\".format(data[\"city\"], data[\"country_name\"])\n except Exception:\n return remote_ip\n", "path": "api/data_refinery_api/management/commands/post_downloads_summary.py"}]} | 1,860 | 516 |
gh_patches_debug_26690 | rasdani/github-patches | git_diff | getsentry__sentry-python-2755 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Make EventScrubber recursive
### Problem Statement
We have a custom `before_send` implementation that scrubs data recursively. I was hoping to replace the custom implementation with the built-in EventScrubber but I found out that it doesn't scrub `vars` recursively.
As far as I can tell this was a consistency/performance trade-off, but it would be nice to have a built-in option to make it recursive.
Thank you!
### Solution Brainstorm
`EventScrubber(recursive=True)`
</issue>
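The core of the requested behavior is a walk over nested dicts and lists that redacts denylisted keys at any depth. A minimal standalone sketch follows — it is not sentry-sdk's API; `DENYLIST`, `REDACTED`, and `scrub` are illustrative names, and the denylist is trimmed down for the example.

```python
DENYLIST = {"password", "token", "secret"}  # trimmed-down illustrative list
REDACTED = "[Filtered]"


def scrub(value, denylist=DENYLIST):
    """Redact denylisted keys at any depth, walking dicts and lists in place."""
    if isinstance(value, dict):
        for key, child in value.items():
            if isinstance(key, str) and key.lower() in denylist:
                value[key] = REDACTED
            else:
                scrub(child, denylist)
    elif isinstance(value, list):
        for child in value:
            scrub(child, denylist)


event = {"extra": {"ctx": [{"auth": {"token": "abc123"}, "user": "ada"}]}}
scrub(event)
assert event["extra"]["ctx"][0]["auth"]["token"] == REDACTED
```

Keeping the recursion opt-in (as the issue's `EventScrubber(recursive=True)` suggests) preserves the existing flat, fast behavior by default.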
<code>
[start of sentry_sdk/scrubber.py]
1 from sentry_sdk.utils import (
2 capture_internal_exceptions,
3 AnnotatedValue,
4 iter_event_frames,
5 )
6 from sentry_sdk._compat import string_types
7 from sentry_sdk._types import TYPE_CHECKING
8
9 if TYPE_CHECKING:
10 from sentry_sdk._types import Event
11 from typing import Any
12 from typing import Dict
13 from typing import List
14 from typing import Optional
15
16
17 DEFAULT_DENYLIST = [
18 # stolen from relay
19 "password",
20 "passwd",
21 "secret",
22 "api_key",
23 "apikey",
24 "auth",
25 "credentials",
26 "mysql_pwd",
27 "privatekey",
28 "private_key",
29 "token",
30 "ip_address",
31 "session",
32 # django
33 "csrftoken",
34 "sessionid",
35 # wsgi
36 "remote_addr",
37 "x_csrftoken",
38 "x_forwarded_for",
39 "set_cookie",
40 "cookie",
41 "authorization",
42 "x_api_key",
43 "x_forwarded_for",
44 "x_real_ip",
45 # other common names used in the wild
46 "aiohttp_session", # aiohttp
47 "connect.sid", # Express
48 "csrf_token", # Pyramid
49 "csrf", # (this is a cookie name used in accepted answers on stack overflow)
50 "_csrf", # Express
51 "_csrf_token", # Bottle
52 "PHPSESSID", # PHP
53 "_session", # Sanic
54 "symfony", # Symfony
55 "user_session", # Vue
56 "_xsrf", # Tornado
57 "XSRF-TOKEN", # Angular, Laravel
58 ]
59
60
61 class EventScrubber(object):
62 def __init__(self, denylist=None):
63 # type: (Optional[List[str]]) -> None
64 self.denylist = DEFAULT_DENYLIST if denylist is None else denylist
65 self.denylist = [x.lower() for x in self.denylist]
66
67 def scrub_dict(self, d):
68 # type: (Dict[str, Any]) -> None
69 if not isinstance(d, dict):
70 return
71
72 for k in d.keys():
73 if isinstance(k, string_types) and k.lower() in self.denylist:
74 d[k] = AnnotatedValue.substituted_because_contains_sensitive_data()
75
76 def scrub_request(self, event):
77 # type: (Event) -> None
78 with capture_internal_exceptions():
79 if "request" in event:
80 if "headers" in event["request"]:
81 self.scrub_dict(event["request"]["headers"])
82 if "cookies" in event["request"]:
83 self.scrub_dict(event["request"]["cookies"])
84 if "data" in event["request"]:
85 self.scrub_dict(event["request"]["data"])
86
87 def scrub_extra(self, event):
88 # type: (Event) -> None
89 with capture_internal_exceptions():
90 if "extra" in event:
91 self.scrub_dict(event["extra"])
92
93 def scrub_user(self, event):
94 # type: (Event) -> None
95 with capture_internal_exceptions():
96 if "user" in event:
97 self.scrub_dict(event["user"])
98
99 def scrub_breadcrumbs(self, event):
100 # type: (Event) -> None
101 with capture_internal_exceptions():
102 if "breadcrumbs" in event:
103 if "values" in event["breadcrumbs"]:
104 for value in event["breadcrumbs"]["values"]:
105 if "data" in value:
106 self.scrub_dict(value["data"])
107
108 def scrub_frames(self, event):
109 # type: (Event) -> None
110 with capture_internal_exceptions():
111 for frame in iter_event_frames(event):
112 if "vars" in frame:
113 self.scrub_dict(frame["vars"])
114
115 def scrub_spans(self, event):
116 # type: (Event) -> None
117 with capture_internal_exceptions():
118 if "spans" in event:
119 for span in event["spans"]:
120 if "data" in span:
121 self.scrub_dict(span["data"])
122
123 def scrub_event(self, event):
124 # type: (Event) -> None
125 self.scrub_request(event)
126 self.scrub_extra(event)
127 self.scrub_user(event)
128 self.scrub_breadcrumbs(event)
129 self.scrub_frames(event)
130 self.scrub_spans(event)
131
[end of sentry_sdk/scrubber.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/sentry_sdk/scrubber.py b/sentry_sdk/scrubber.py
--- a/sentry_sdk/scrubber.py
+++ b/sentry_sdk/scrubber.py
@@ -59,19 +59,36 @@
class EventScrubber(object):
- def __init__(self, denylist=None):
- # type: (Optional[List[str]]) -> None
+ def __init__(self, denylist=None, recursive=False):
+ # type: (Optional[List[str]], bool) -> None
self.denylist = DEFAULT_DENYLIST if denylist is None else denylist
self.denylist = [x.lower() for x in self.denylist]
+ self.recursive = recursive
+
+ def scrub_list(self, lst):
+ # type: (List[Any]) -> None
+ if not isinstance(lst, list):
+ return
+
+ for v in lst:
+ if isinstance(v, dict):
+ self.scrub_dict(v)
+ elif isinstance(v, list):
+ self.scrub_list(v)
def scrub_dict(self, d):
# type: (Dict[str, Any]) -> None
if not isinstance(d, dict):
return
- for k in d.keys():
+ for k, v in d.items():
if isinstance(k, string_types) and k.lower() in self.denylist:
d[k] = AnnotatedValue.substituted_because_contains_sensitive_data()
+ elif self.recursive:
+ if isinstance(v, dict):
+ self.scrub_dict(v)
+ elif isinstance(v, list):
+ self.scrub_list(v)
def scrub_request(self, event):
# type: (Event) -> None
| {"golden_diff": "diff --git a/sentry_sdk/scrubber.py b/sentry_sdk/scrubber.py\n--- a/sentry_sdk/scrubber.py\n+++ b/sentry_sdk/scrubber.py\n@@ -59,19 +59,36 @@\n \n \n class EventScrubber(object):\n- def __init__(self, denylist=None):\n- # type: (Optional[List[str]]) -> None\n+ def __init__(self, denylist=None, recursive=False):\n+ # type: (Optional[List[str]], bool) -> None\n self.denylist = DEFAULT_DENYLIST if denylist is None else denylist\n self.denylist = [x.lower() for x in self.denylist]\n+ self.recursive = recursive\n+\n+ def scrub_list(self, lst):\n+ # type: (List[Any]) -> None\n+ if not isinstance(lst, list):\n+ return\n+\n+ for v in lst:\n+ if isinstance(v, dict):\n+ self.scrub_dict(v)\n+ elif isinstance(v, list):\n+ self.scrub_list(v)\n \n def scrub_dict(self, d):\n # type: (Dict[str, Any]) -> None\n if not isinstance(d, dict):\n return\n \n- for k in d.keys():\n+ for k, v in d.items():\n if isinstance(k, string_types) and k.lower() in self.denylist:\n d[k] = AnnotatedValue.substituted_because_contains_sensitive_data()\n+ elif self.recursive:\n+ if isinstance(v, dict):\n+ self.scrub_dict(v)\n+ elif isinstance(v, list):\n+ self.scrub_list(v)\n \n def scrub_request(self, event):\n # type: (Event) -> None\n", "issue": "Make EventScrubber recursive\n### Problem Statement\r\n\r\nWe have a custom `before_send` implementation that scrubs data recursively. I was hopping to replace the custom implementation with the built-in EventScrubber but I found out that it doesn't scrub `vars` recursively.\r\n\r\nAs far as I can tell this was a consistency, perf trade-off thing but it would be nice to have a built-in option to make it recursive.\r\n\r\nThank you!\r\n\r\n### Solution Brainstorm\r\n\r\n`EventScrubber(recursive=True)`\n", "before_files": [{"content": "from sentry_sdk.utils import (\n capture_internal_exceptions,\n AnnotatedValue,\n iter_event_frames,\n)\nfrom sentry_sdk._compat import string_types\nfrom sentry_sdk._types import TYPE_CHECKING\n\nif TYPE_CHECKING:\n from sentry_sdk._types import Event\n from typing import Any\n from typing import Dict\n from typing import List\n from typing import Optional\n\n\nDEFAULT_DENYLIST = [\n # stolen from relay\n \"password\",\n \"passwd\",\n \"secret\",\n \"api_key\",\n \"apikey\",\n \"auth\",\n \"credentials\",\n \"mysql_pwd\",\n \"privatekey\",\n \"private_key\",\n \"token\",\n \"ip_address\",\n \"session\",\n # django\n \"csrftoken\",\n \"sessionid\",\n # wsgi\n \"remote_addr\",\n \"x_csrftoken\",\n \"x_forwarded_for\",\n \"set_cookie\",\n \"cookie\",\n \"authorization\",\n \"x_api_key\",\n \"x_forwarded_for\",\n \"x_real_ip\",\n # other common names used in the wild\n \"aiohttp_session\", # aiohttp\n \"connect.sid\", # Express\n \"csrf_token\", # Pyramid\n \"csrf\", # (this is a cookie name used in accepted answers on stack overflow)\n \"_csrf\", # Express\n \"_csrf_token\", # Bottle\n \"PHPSESSID\", # PHP\n \"_session\", # Sanic\n \"symfony\", # Symfony\n \"user_session\", # Vue\n \"_xsrf\", # Tornado\n \"XSRF-TOKEN\", # Angular, Laravel\n]\n\n\nclass EventScrubber(object):\n def __init__(self, denylist=None):\n # type: (Optional[List[str]]) -> None\n self.denylist = DEFAULT_DENYLIST if denylist is None else denylist\n self.denylist = [x.lower() for x in self.denylist]\n\n def scrub_dict(self, d):\n # type: (Dict[str, Any]) -> None\n if not isinstance(d, dict):\n return\n\n for k in d.keys():\n if isinstance(k, string_types) and k.lower() in self.denylist:\n d[k] = AnnotatedValue.substituted_because_contains_sensitive_data()\n\n def 
scrub_request(self, event):\n # type: (Event) -> None\n with capture_internal_exceptions():\n if \"request\" in event:\n if \"headers\" in event[\"request\"]:\n self.scrub_dict(event[\"request\"][\"headers\"])\n if \"cookies\" in event[\"request\"]:\n self.scrub_dict(event[\"request\"][\"cookies\"])\n if \"data\" in event[\"request\"]:\n self.scrub_dict(event[\"request\"][\"data\"])\n\n def scrub_extra(self, event):\n # type: (Event) -> None\n with capture_internal_exceptions():\n if \"extra\" in event:\n self.scrub_dict(event[\"extra\"])\n\n def scrub_user(self, event):\n # type: (Event) -> None\n with capture_internal_exceptions():\n if \"user\" in event:\n self.scrub_dict(event[\"user\"])\n\n def scrub_breadcrumbs(self, event):\n # type: (Event) -> None\n with capture_internal_exceptions():\n if \"breadcrumbs\" in event:\n if \"values\" in event[\"breadcrumbs\"]:\n for value in event[\"breadcrumbs\"][\"values\"]:\n if \"data\" in value:\n self.scrub_dict(value[\"data\"])\n\n def scrub_frames(self, event):\n # type: (Event) -> None\n with capture_internal_exceptions():\n for frame in iter_event_frames(event):\n if \"vars\" in frame:\n self.scrub_dict(frame[\"vars\"])\n\n def scrub_spans(self, event):\n # type: (Event) -> None\n with capture_internal_exceptions():\n if \"spans\" in event:\n for span in event[\"spans\"]:\n if \"data\" in span:\n self.scrub_dict(span[\"data\"])\n\n def scrub_event(self, event):\n # type: (Event) -> None\n self.scrub_request(event)\n self.scrub_extra(event)\n self.scrub_user(event)\n self.scrub_breadcrumbs(event)\n self.scrub_frames(event)\n self.scrub_spans(event)\n", "path": "sentry_sdk/scrubber.py"}]} | 1,882 | 386 |
gh_patches_debug_19340 | rasdani/github-patches | git_diff | pyca__cryptography-7382 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Include Rust version in DEBUG ASSISTANCE message?
I'm not sure what the best way to do this is, but it seems like it would be helpful to include the output of `rustc -V` in the DEBUG ASSISTANCE.
</issue>
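The delicate part is collecting the toolchain version without ever letting the diagnostic code itself raise inside an error handler. One defensive way to do that is sketched below — this is an illustrative helper, not the project's actual code: guard with `shutil.which`, bound the call with a timeout, and swallow any `subprocess.SubprocessError`.

```python
import re
import shutil
import subprocess


def rustc_version() -> str:
    """Best-effort `rustc --version`; must never raise inside an error handler."""
    if shutil.which("rustc") is None:
        return "n/a"
    try:
        out = subprocess.run(
            ["rustc", "--version"],
            capture_output=True,
            encoding="utf8",
            timeout=0.5,
            check=True,
        ).stdout
    except subprocess.SubprocessError:
        return "n/a"
    return re.sub(r"^rustc ", "", out.strip())


print(f"    rustc: {rustc_version()}")
```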
<code>
[start of setup.py]
1 #!/usr/bin/env python
2
3 # This file is dual licensed under the terms of the Apache License, Version
4 # 2.0, and the BSD License. See the LICENSE file in the root of this repository
5 # for complete details.
6
7 import os
8 import platform
9 import sys
10
11 from setuptools import setup
12
13 try:
14 from setuptools_rust import RustExtension
15 except ImportError:
16 print(
17 """
18 =============================DEBUG ASSISTANCE==========================
19 If you are seeing an error here please try the following to
20 successfully install cryptography:
21
22 Upgrade to the latest pip and try again. This will fix errors for most
23 users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
24 =============================DEBUG ASSISTANCE==========================
25 """
26 )
27 raise
28
29
30 base_dir = os.path.dirname(__file__)
31 src_dir = os.path.join(base_dir, "src")
32
33 # When executing the setup.py, we need to be able to import ourselves, this
34 # means that we need to add the src/ directory to the sys.path.
35 sys.path.insert(0, src_dir)
36
37 try:
38 # See setup.cfg for most of the config metadata.
39 setup(
40 cffi_modules=[
41 "src/_cffi_src/build_openssl.py:ffi",
42 ],
43 rust_extensions=[
44 RustExtension(
45 "_rust",
46 "src/rust/Cargo.toml",
47 py_limited_api=True,
48 # Enable abi3 mode if we're not using PyPy.
49 features=(
50 []
51 if platform.python_implementation() == "PyPy"
52 else ["pyo3/abi3-py36"]
53 ),
54 rust_version=">=1.48.0",
55 )
56 ],
57 )
58 except: # noqa: E722
59 # Note: This is a bare exception that re-raises so that we don't interfere
60 # with anything the installation machinery might want to do. Because we
61 # print this for any exception this msg can appear (e.g. in verbose logs)
62 # even if there's no failure. For example, SetupRequirementsError is raised
63 # during PEP517 building and prints this text. setuptools raises SystemExit
64 # when compilation fails right now, but it's possible this isn't stable
65 # or a public API commitment so we'll remain ultra conservative.
66
67 import pkg_resources
68
69 print(
70 """
71 =============================DEBUG ASSISTANCE=============================
72 If you are seeing a compilation error please try the following steps to
73 successfully install cryptography:
74 1) Upgrade to the latest pip and try again. This will fix errors for most
75 users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip
76 2) Read https://cryptography.io/en/latest/installation/ for specific
77 instructions for your platform.
78 3) Check our frequently asked questions for more information:
79 https://cryptography.io/en/latest/faq/
80 4) Ensure you have a recent Rust toolchain installed:
81 https://cryptography.io/en/latest/installation/#rust
82 """
83 )
84 print(f" Python: {'.'.join(str(v) for v in sys.version_info[:3])}")
85 print(f" platform: {platform.platform()}")
86 for dist in ["pip", "setuptools", "setuptools_rust"]:
87 try:
88 version = pkg_resources.get_distribution(dist).version
89 except pkg_resources.DistributionNotFound:
90 version = "n/a"
91 print(f" {dist}: {version}")
92 print(
93 """\
94 =============================DEBUG ASSISTANCE=============================
95 """
96 )
97 raise
98
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -6,6 +6,9 @@
import os
import platform
+import re
+import shutil
+import subprocess
import sys
from setuptools import setup
@@ -89,6 +92,22 @@
except pkg_resources.DistributionNotFound:
version = "n/a"
print(f" {dist}: {version}")
+ version = "n/a"
+ if shutil.which("rustc") is not None:
+ try:
+ # If for any reason `rustc --version` fails, silently ignore it
+ rustc_output = subprocess.run(
+ ["rustc", "--version"],
+ capture_output=True,
+ timeout=0.5,
+ encoding="utf8",
+ check=True,
+ ).stdout
+ version = re.sub("^rustc ", "", rustc_output.strip())
+ except subprocess.SubprocessError:
+ pass
+ print(f" rustc: {version}")
+
print(
"""\
=============================DEBUG ASSISTANCE=============================
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -6,6 +6,9 @@\n \n import os\n import platform\n+import re\n+import shutil\n+import subprocess\n import sys\n \n from setuptools import setup\n@@ -89,6 +92,22 @@\n except pkg_resources.DistributionNotFound:\n version = \"n/a\"\n print(f\" {dist}: {version}\")\n+ version = \"n/a\"\n+ if shutil.which(\"rustc\") is not None:\n+ try:\n+ # If for any reason `rustc --version` fails, silently ignore it\n+ rustc_output = subprocess.run(\n+ [\"rustc\", \"--version\"],\n+ capture_output=True,\n+ timeout=0.5,\n+ encoding=\"utf8\",\n+ check=True,\n+ ).stdout\n+ version = re.sub(\"^rustc \", \"\", rustc_output.strip())\n+ except subprocess.SubprocessError:\n+ pass\n+ print(f\" rustc: {version}\")\n+\n print(\n \"\"\"\\\n =============================DEBUG ASSISTANCE=============================\n", "issue": "Include Rust version in DEBUG ASSISTENCE message?\nI'm not sure what the best way to do this is but it seems like it would be helpful to include the output of `rustc -V` in the DEBUG ASSISTENCE.\n", "before_files": [{"content": "#!/usr/bin/env python\n\n# This file is dual licensed under the terms of the Apache License, Version\n# 2.0, and the BSD License. See the LICENSE file in the root of this repository\n# for complete details.\n\nimport os\nimport platform\nimport sys\n\nfrom setuptools import setup\n\ntry:\n from setuptools_rust import RustExtension\nexcept ImportError:\n print(\n \"\"\"\n =============================DEBUG ASSISTANCE==========================\n If you are seeing an error here please try the following to\n successfully install cryptography:\n\n Upgrade to the latest pip and try again. This will fix errors for most\n users. See: https://pip.pypa.io/en/stable/installing/#upgrading-pip\n =============================DEBUG ASSISTANCE==========================\n \"\"\"\n )\n raise\n\n\nbase_dir = os.path.dirname(__file__)\nsrc_dir = os.path.join(base_dir, \"src\")\n\n# When executing the setup.py, we need to be able to import ourselves, this\n# means that we need to add the src/ directory to the sys.path.\nsys.path.insert(0, src_dir)\n\ntry:\n # See setup.cfg for most of the config metadata.\n setup(\n cffi_modules=[\n \"src/_cffi_src/build_openssl.py:ffi\",\n ],\n rust_extensions=[\n RustExtension(\n \"_rust\",\n \"src/rust/Cargo.toml\",\n py_limited_api=True,\n # Enable abi3 mode if we're not using PyPy.\n features=(\n []\n if platform.python_implementation() == \"PyPy\"\n else [\"pyo3/abi3-py36\"]\n ),\n rust_version=\">=1.48.0\",\n )\n ],\n )\nexcept: # noqa: E722\n # Note: This is a bare exception that re-raises so that we don't interfere\n # with anything the installation machinery might want to do. Because we\n # print this for any exception this msg can appear (e.g. in verbose logs)\n # even if there's no failure. For example, SetupRequirementsError is raised\n # during PEP517 building and prints this text. setuptools raises SystemExit\n # when compilation fails right now, but it's possible this isn't stable\n # or a public API commitment so we'll remain ultra conservative.\n\n import pkg_resources\n\n print(\n \"\"\"\n =============================DEBUG ASSISTANCE=============================\n If you are seeing a compilation error please try the following steps to\n successfully install cryptography:\n 1) Upgrade to the latest pip and try again. This will fix errors for most\n users. 
See: https://pip.pypa.io/en/stable/installing/#upgrading-pip\n 2) Read https://cryptography.io/en/latest/installation/ for specific\n instructions for your platform.\n 3) Check our frequently asked questions for more information:\n https://cryptography.io/en/latest/faq/\n 4) Ensure you have a recent Rust toolchain installed:\n https://cryptography.io/en/latest/installation/#rust\n \"\"\"\n )\n print(f\" Python: {'.'.join(str(v) for v in sys.version_info[:3])}\")\n print(f\" platform: {platform.platform()}\")\n for dist in [\"pip\", \"setuptools\", \"setuptools_rust\"]:\n try:\n version = pkg_resources.get_distribution(dist).version\n except pkg_resources.DistributionNotFound:\n version = \"n/a\"\n print(f\" {dist}: {version}\")\n print(\n \"\"\"\\\n =============================DEBUG ASSISTANCE=============================\n \"\"\"\n )\n raise\n", "path": "setup.py"}]} | 1,548 | 246 |
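The golden diff above reduces to one defensive pattern: probe for an optional tool with `shutil.which`, run it with a short timeout, and fall back quietly on any subprocess failure. A standalone sketch of the same pattern (the function name is illustrative, not part of the project):

```python
import re
import shutil
import subprocess


def rustc_version() -> str:
    """Return the local rustc version string, or "n/a" if unavailable."""
    if shutil.which("rustc") is None:
        return "n/a"  # toolchain not installed; nothing to report
    try:
        # check=True raises CalledProcessError (a SubprocessError) on a
        # non-zero exit; the timeout bounds a hung toolchain.
        output = subprocess.run(
            ["rustc", "--version"],
            capture_output=True,
            timeout=0.5,
            encoding="utf8",
            check=True,
        ).stdout
    except subprocess.SubprocessError:
        return "n/a"
    # "rustc 1.56.1 (59eed8a2a 2021-11-01)" -> "1.56.1 (59eed8a2a 2021-11-01)"
    return re.sub("^rustc ", "", output.strip())
```

Printing `f"    rustc: {rustc_version()}"` then reproduces the debug-assistance line the patch adds.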
gh_patches_debug_42599 | rasdani/github-patches | git_diff | StackStorm__st2-5467 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix multiple file support in linux.file_watch.line + black + fstring
When multiple file_watch rules are defined, the last defined file reference is used for all files being watched. This causes trigger-instances to fail rule enforcement.
Adding the reference to the logging shows `test1.log` has the reference ending with `8c505`
```
2021-11-30 18:50:40,434 140243179888112 INFO file_watch_sensor [-] Added file "/var/log/test1.log" with reference linux.7e55ad75-b10c-44db-b53e-95164a18c505
2021-11-30 18:50:41,459 140243179888112 INFO file_watch_sensor [-] Added file "/var/log/test2.log" with reference linux.590de8c1-c578-4125-9082-2cee03b030a9
```
When the file contents are updated a trigger is emitted by the sensor using the reference of `test2.log` ending in `b030a9`
```
root@u1804:~# st2 trigger-instance get 61a6649f164625c2d94dccb8 -y
id: 61a6649f164625c2d94dccb8
occurrence_time: '2021-11-30T17:51:27.294000Z'
payload:
file_name: test1.log
file_path: /var/log/test1.log
line: Tue Nov 30 18:51:27 CET 2021 dhcp
status: processed
trigger: linux.590de8c1-c578-4125-9082-2cee03b030a9
```
This PR consists of adding a dictionary that is used to track the `path_name` and `reference` pair and looks up the reference for the file that was altered when creating the trigger.
The code is formatted with black and updated to use f-strings, since all instances will be running Python 3.6+.
</issue>
<code>
[start of contrib/linux/sensors/file_watch_sensor.py]
1 # Copyright 2020 The StackStorm Authors.
2 # Copyright 2019 Extreme Networks, Inc.
3 #
4 # Licensed under the Apache License, Version 2.0 (the "License");
5 # you may not use this file except in compliance with the License.
6 # You may obtain a copy of the License at
7 #
8 # http://www.apache.org/licenses/LICENSE-2.0
9 #
10 # Unless required by applicable law or agreed to in writing, software
11 # distributed under the License is distributed on an "AS IS" BASIS,
12 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
15
16 import os
17
18 import eventlet
19
20 from logshipper.tail import Tail
21
22 from st2reactor.sensor.base import Sensor
23
24
25 class FileWatchSensor(Sensor):
26 def __init__(self, sensor_service, config=None):
27 super(FileWatchSensor, self).__init__(
28 sensor_service=sensor_service, config=config
29 )
30 self._trigger = None
31 self._logger = self._sensor_service.get_logger(__name__)
32 self._tail = None
33
34 def setup(self):
35 self._tail = Tail(filenames=[])
36 self._tail.handler = self._handle_line
37 self._tail.should_run = True
38
39 def run(self):
40 self._tail.run()
41
42 def cleanup(self):
43 if self._tail:
44 self._tail.should_run = False
45
46 try:
47 self._tail.notifier.stop()
48 except Exception:
49 self._logger.exception("Unable to stop the tail notifier")
50
51 def add_trigger(self, trigger):
52 file_path = trigger["parameters"].get("file_path", None)
53
54 if not file_path:
55 self._logger.error('Received trigger type without "file_path" field.')
56 return
57
58 self._trigger = trigger.get("ref", None)
59
60 if not self._trigger:
61 raise Exception("Trigger %s did not contain a ref." % trigger)
62
63 # Wait a bit to avoid initialization race in logshipper library
64 eventlet.sleep(1.0)
65
66 self._tail.add_file(filename=file_path)
67 self._logger.info('Added file "%s"' % (file_path))
68
69 def update_trigger(self, trigger):
70 pass
71
72 def remove_trigger(self, trigger):
73 file_path = trigger["parameters"].get("file_path", None)
74
75 if not file_path:
76 self._logger.error('Received trigger type without "file_path" field.')
77 return
78
79 self._tail.remove_file(filename=file_path)
80 self._trigger = None
81
82 self._logger.info('Removed file "%s"' % (file_path))
83
84 def _handle_line(self, file_path, line):
85 trigger = self._trigger
86 payload = {
87 "file_path": file_path,
88 "file_name": os.path.basename(file_path),
89 "line": line,
90 }
91 self._logger.debug(
92 "Sending payload %s for trigger %s to sensor_service.", payload, trigger
93 )
94 self.sensor_service.dispatch(trigger=trigger, payload=payload)
95
[end of contrib/linux/sensors/file_watch_sensor.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/contrib/linux/sensors/file_watch_sensor.py b/contrib/linux/sensors/file_watch_sensor.py
--- a/contrib/linux/sensors/file_watch_sensor.py
+++ b/contrib/linux/sensors/file_watch_sensor.py
@@ -14,7 +14,6 @@
# limitations under the License.
import os
-
import eventlet
from logshipper.tail import Tail
@@ -27,44 +26,46 @@
super(FileWatchSensor, self).__init__(
sensor_service=sensor_service, config=config
)
- self._trigger = None
- self._logger = self._sensor_service.get_logger(__name__)
- self._tail = None
+ self.log = self._sensor_service.get_logger(__name__)
+ self.tail = None
+ self.file_ref = {}
def setup(self):
- self._tail = Tail(filenames=[])
- self._tail.handler = self._handle_line
- self._tail.should_run = True
+ self.tail = Tail(filenames=[])
+ self.tail.handler = self._handle_line
+ self.tail.should_run = True
def run(self):
- self._tail.run()
+ self.tail.run()
def cleanup(self):
- if self._tail:
- self._tail.should_run = False
+ if self.tail:
+ self.tail.should_run = False
try:
- self._tail.notifier.stop()
+ self.tail.notifier.stop()
except Exception:
- self._logger.exception("Unable to stop the tail notifier")
+ self.log.exception("Unable to stop the tail notifier")
def add_trigger(self, trigger):
file_path = trigger["parameters"].get("file_path", None)
if not file_path:
- self._logger.error('Received trigger type without "file_path" field.')
+ self.log.error('Received trigger type without "file_path" field.')
return
- self._trigger = trigger.get("ref", None)
+ trigger = trigger.get("ref", None)
- if not self._trigger:
- raise Exception("Trigger %s did not contain a ref." % trigger)
+ if not trigger:
+ raise Exception(f"Trigger {trigger} did not contain a ref.")
# Wait a bit to avoid initialization race in logshipper library
eventlet.sleep(1.0)
- self._tail.add_file(filename=file_path)
- self._logger.info('Added file "%s"' % (file_path))
+ self.tail.add_file(filename=file_path)
+ self.file_ref[file_path] = trigger
+
+ self.log.info(f"Added file '{file_path}' ({trigger}) to watch list.")
def update_trigger(self, trigger):
pass
@@ -73,22 +74,28 @@
file_path = trigger["parameters"].get("file_path", None)
if not file_path:
- self._logger.error('Received trigger type without "file_path" field.')
+ self.log.error("Received trigger type without 'file_path' field.")
return
- self._tail.remove_file(filename=file_path)
- self._trigger = None
+ self.tail.remove_file(filename=file_path)
+ self.file_ref.pop(file_path)
- self._logger.info('Removed file "%s"' % (file_path))
+ self.log.info(f"Removed file '{file_path}' ({trigger}) from watch list.")
def _handle_line(self, file_path, line):
- trigger = self._trigger
+ if file_path not in self.file_ref:
+ self.log.error(
+ f"No reference found for {file_path}, unable to emit trigger!"
+ )
+ return
+
+ trigger = self.file_ref[file_path]
payload = {
"file_path": file_path,
"file_name": os.path.basename(file_path),
"line": line,
}
- self._logger.debug(
- "Sending payload %s for trigger %s to sensor_service.", payload, trigger
+ self.log.debug(
+ f"Sending payload {payload} for trigger {trigger} to sensor_service."
)
self.sensor_service.dispatch(trigger=trigger, payload=payload)
| {"golden_diff": "diff --git a/contrib/linux/sensors/file_watch_sensor.py b/contrib/linux/sensors/file_watch_sensor.py\n--- a/contrib/linux/sensors/file_watch_sensor.py\n+++ b/contrib/linux/sensors/file_watch_sensor.py\n@@ -14,7 +14,6 @@\n # limitations under the License.\n \n import os\n-\n import eventlet\n \n from logshipper.tail import Tail\n@@ -27,44 +26,46 @@\n super(FileWatchSensor, self).__init__(\n sensor_service=sensor_service, config=config\n )\n- self._trigger = None\n- self._logger = self._sensor_service.get_logger(__name__)\n- self._tail = None\n+ self.log = self._sensor_service.get_logger(__name__)\n+ self.tail = None\n+ self.file_ref = {}\n \n def setup(self):\n- self._tail = Tail(filenames=[])\n- self._tail.handler = self._handle_line\n- self._tail.should_run = True\n+ self.tail = Tail(filenames=[])\n+ self.tail.handler = self._handle_line\n+ self.tail.should_run = True\n \n def run(self):\n- self._tail.run()\n+ self.tail.run()\n \n def cleanup(self):\n- if self._tail:\n- self._tail.should_run = False\n+ if self.tail:\n+ self.tail.should_run = False\n \n try:\n- self._tail.notifier.stop()\n+ self.tail.notifier.stop()\n except Exception:\n- self._logger.exception(\"Unable to stop the tail notifier\")\n+ self.log.exception(\"Unable to stop the tail notifier\")\n \n def add_trigger(self, trigger):\n file_path = trigger[\"parameters\"].get(\"file_path\", None)\n \n if not file_path:\n- self._logger.error('Received trigger type without \"file_path\" field.')\n+ self.log.error('Received trigger type without \"file_path\" field.')\n return\n \n- self._trigger = trigger.get(\"ref\", None)\n+ trigger = trigger.get(\"ref\", None)\n \n- if not self._trigger:\n- raise Exception(\"Trigger %s did not contain a ref.\" % trigger)\n+ if not trigger:\n+ raise Exception(f\"Trigger {trigger} did not contain a ref.\")\n \n # Wait a bit to avoid initialization race in logshipper library\n eventlet.sleep(1.0)\n \n- self._tail.add_file(filename=file_path)\n- self._logger.info('Added file \"%s\"' % (file_path))\n+ self.tail.add_file(filename=file_path)\n+ self.file_ref[file_path] = trigger\n+\n+ self.log.info(f\"Added file '{file_path}' ({trigger}) to watch list.\")\n \n def update_trigger(self, trigger):\n pass\n@@ -73,22 +74,28 @@\n file_path = trigger[\"parameters\"].get(\"file_path\", None)\n \n if not file_path:\n- self._logger.error('Received trigger type without \"file_path\" field.')\n+ self.log.error(\"Received trigger type without 'file_path' field.\")\n return\n \n- self._tail.remove_file(filename=file_path)\n- self._trigger = None\n+ self.tail.remove_file(filename=file_path)\n+ self.file_ref.pop(file_path)\n \n- self._logger.info('Removed file \"%s\"' % (file_path))\n+ self.log.info(f\"Removed file '{file_path}' ({trigger}) from watch list.\")\n \n def _handle_line(self, file_path, line):\n- trigger = self._trigger\n+ if file_path not in self.file_ref:\n+ self.log.error(\n+ f\"No reference found for {file_path}, unable to emit trigger!\"\n+ )\n+ return\n+\n+ trigger = self.file_ref[file_path]\n payload = {\n \"file_path\": file_path,\n \"file_name\": os.path.basename(file_path),\n \"line\": line,\n }\n- self._logger.debug(\n- \"Sending payload %s for trigger %s to sensor_service.\", payload, trigger\n+ self.log.debug(\n+ f\"Sending payload {payload} for trigger {trigger} to sensor_service.\"\n )\n self.sensor_service.dispatch(trigger=trigger, payload=payload)\n", "issue": "Fix multiple file support in linux.file_watch.line + black + fstring\nWhen multiple file_watch rules are defined, 
the last defined file reference is used for all files being watched. This causes trigger-instances to fail rule enforcement.\r\n\r\nAdding the reference to the logging shows `test1.log` has the reference ending with `8c505`\r\n```\r\n2021-11-30 18:50:40,434 140243179888112 INFO file_watch_sensor [-] Added file \"/var/log/test1.log\" with reference linux.7e55ad75-b10c-44db-b53e-95164a18c505\r\n2021-11-30 18:50:41,459 140243179888112 INFO file_watch_sensor [-] Added file \"/var/log/test2.log\" with reference linux.590de8c1-c578-4125-9082-2cee03b030a9\r\n```\r\n\r\nWhen the file contents are updated a trigger is emitted by the sensor using the reference of `test2.log` ending in `b030a9`\r\n```\r\nroot@u1804:~# st2 trigger-instance get 61a6649f164625c2d94dccb8 -y\r\nid: 61a6649f164625c2d94dccb8\r\noccurrence_time: '2021-11-30T17:51:27.294000Z'\r\npayload:\r\n file_name: test1.log\r\n file_path: /var/log/test1.log\r\n line: Tue Nov 30 18:51:27 CET 2021 dhcp\r\nstatus: processed\r\ntrigger: linux.590de8c1-c578-4125-9082-2cee03b030a9\r\n```\r\n\r\nThis PR consists of adding a dictionary that is used to track the `path_name` and `reference` pair and looks up the reference for the file that was altered when creating the trigger.\r\n\r\nThe code is formatted with black and updated to use fstrings since all instances will be using Python 3.6+\n", "before_files": [{"content": "# Copyright 2020 The StackStorm Authors.\n# Copyright 2019 Extreme Networks, Inc.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport os\n\nimport eventlet\n\nfrom logshipper.tail import Tail\n\nfrom st2reactor.sensor.base import Sensor\n\n\nclass FileWatchSensor(Sensor):\n def __init__(self, sensor_service, config=None):\n super(FileWatchSensor, self).__init__(\n sensor_service=sensor_service, config=config\n )\n self._trigger = None\n self._logger = self._sensor_service.get_logger(__name__)\n self._tail = None\n\n def setup(self):\n self._tail = Tail(filenames=[])\n self._tail.handler = self._handle_line\n self._tail.should_run = True\n\n def run(self):\n self._tail.run()\n\n def cleanup(self):\n if self._tail:\n self._tail.should_run = False\n\n try:\n self._tail.notifier.stop()\n except Exception:\n self._logger.exception(\"Unable to stop the tail notifier\")\n\n def add_trigger(self, trigger):\n file_path = trigger[\"parameters\"].get(\"file_path\", None)\n\n if not file_path:\n self._logger.error('Received trigger type without \"file_path\" field.')\n return\n\n self._trigger = trigger.get(\"ref\", None)\n\n if not self._trigger:\n raise Exception(\"Trigger %s did not contain a ref.\" % trigger)\n\n # Wait a bit to avoid initialization race in logshipper library\n eventlet.sleep(1.0)\n\n self._tail.add_file(filename=file_path)\n self._logger.info('Added file \"%s\"' % (file_path))\n\n def update_trigger(self, trigger):\n pass\n\n def remove_trigger(self, trigger):\n file_path = trigger[\"parameters\"].get(\"file_path\", None)\n\n if not file_path:\n self._logger.error('Received trigger type without \"file_path\" 
field.')\n return\n\n self._tail.remove_file(filename=file_path)\n self._trigger = None\n\n self._logger.info('Removed file \"%s\"' % (file_path))\n\n def _handle_line(self, file_path, line):\n trigger = self._trigger\n payload = {\n \"file_path\": file_path,\n \"file_name\": os.path.basename(file_path),\n \"line\": line,\n }\n self._logger.debug(\n \"Sending payload %s for trigger %s to sensor_service.\", payload, trigger\n )\n self.sensor_service.dispatch(trigger=trigger, payload=payload)\n", "path": "contrib/linux/sensors/file_watch_sensor.py"}]} | 1,934 | 918 |
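The essence of the fix above is replacing the sensor's single `self._trigger` slot with a `file_path -> trigger` mapping, so every watched file dispatches with its own reference. The bookkeeping in isolation (a toy class; the real patch keeps the dict directly on the sensor as `self.file_ref`):

```python
class FileTriggerMap:
    """Per-file trigger references, mirroring the patched sensor's dict."""

    def __init__(self):
        self.file_ref = {}  # file_path -> trigger reference

    def add(self, file_path, trigger_ref):
        self.file_ref[file_path] = trigger_ref

    def remove(self, file_path):
        self.file_ref.pop(file_path, None)

    def ref_for(self, file_path):
        # The patch logs an error and drops the event when no reference
        # exists; returning None models that guard.
        return self.file_ref.get(file_path)


refs = FileTriggerMap()
refs.add("/var/log/test1.log", "linux.7e55ad75-...-95164a18c505")
refs.add("/var/log/test2.log", "linux.590de8c1-...-2cee03b030a9")
assert refs.ref_for("/var/log/test1.log").endswith("8c505")  # not test2's ref
```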
gh_patches_debug_2998 | rasdani/github-patches | git_diff | archlinux__archinstall-763 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
In XFCE, the xarchiver package is needed for the "create archive" and "extract here" context-menu actions.
</issue>
<code>
[start of profiles/xfce4.py]
1 # A desktop environment using "Xfce4"
2
3 import archinstall
4
5 is_top_level_profile = False
6
7 __packages__ = [
8 "xfce4",
9 "xfce4-goodies",
10 "pavucontrol",
11 "lightdm",
12 "lightdm-gtk-greeter",
13 "gvfs",
14 "network-manager-applet",
15 ]
16
17
18 def _prep_function(*args, **kwargs):
19 """
20 Magic function called by the importing installer
21 before continuing any further. It also avoids executing any
22 other code in this stage. So it's a safe way to ask the user
23 for more input before any other installer steps start.
24 """
25
26 # XFCE requires a functional xorg installation.
27 profile = archinstall.Profile(None, 'xorg')
28 with profile.load_instructions(namespace='xorg.py') as imported:
29 if hasattr(imported, '_prep_function'):
30 return imported._prep_function()
31 else:
32 print('Deprecated (??): xorg profile has no _prep_function() anymore')
33
34
35 # Ensures that this code only gets executed if executed
36 # through importlib.util.spec_from_file_location("xfce4", "/somewhere/xfce4.py")
37 # or through conventional import xfce4
38 if __name__ == 'xfce4':
39 # Install dependency profiles
40 archinstall.storage['installation_session'].install_profile('xorg')
41
42 # Install the XFCE4 packages
43 archinstall.storage['installation_session'].add_additional_packages(__packages__)
44
45 archinstall.storage['installation_session'].enable_service('lightdm') # Light Display Manager
46
[end of profiles/xfce4.py]
[start of profiles/kde.py]
1 # A desktop environment using "KDE".
2
3 import archinstall
4
5 is_top_level_profile = False
6
7 __packages__ = [
8 "plasma-meta",
9 "konsole",
10 "kate",
11 "dolphin",
12 "sddm",
13 "plasma-wayland-session",
14 "egl-wayland",
15 ]
16
17
18 # TODO: Remove hard dependency of bash (due to .bash_profile)
19
20
21 def _prep_function(*args, **kwargs):
22 """
23 Magic function called by the importing installer
24 before continuing any further. It also avoids executing any
25 other code in this stage. So it's a safe way to ask the user
26 for more input before any other installer steps start.
27 """
28
29 # KDE requires a functioning Xorg installation.
30 profile = archinstall.Profile(None, 'xorg')
31 with profile.load_instructions(namespace='xorg.py') as imported:
32 if hasattr(imported, '_prep_function'):
33 return imported._prep_function()
34 else:
35 print('Deprecated (??): xorg profile has no _prep_function() anymore')
36
37
38 """
39 def _post_install(*args, **kwargs):
40 if "nvidia" in _gfx_driver_packages:
41 print("Plasma Wayland has known compatibility issues with the proprietary Nvidia driver")
42 print("After booting, you can choose between Wayland and Xorg using the drop-down menu")
43 return True
44 """
45
46 # Ensures that this code only gets executed if executed
47 # through importlib.util.spec_from_file_location("kde", "/somewhere/kde.py")
48 # or through conventional import kde
49 if __name__ == 'kde':
50 # Install dependency profiles
51 archinstall.storage['installation_session'].install_profile('xorg')
52
53 # Install the KDE packages
54 archinstall.storage['installation_session'].add_additional_packages(__packages__)
55
56 # Enable autostart of KDE for all users
57 archinstall.storage['installation_session'].enable_service('sddm')
58
[end of profiles/kde.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/profiles/kde.py b/profiles/kde.py
--- a/profiles/kde.py
+++ b/profiles/kde.py
@@ -9,6 +9,7 @@
"konsole",
"kate",
"dolphin",
+ "ark",
"sddm",
"plasma-wayland-session",
"egl-wayland",
diff --git a/profiles/xfce4.py b/profiles/xfce4.py
--- a/profiles/xfce4.py
+++ b/profiles/xfce4.py
@@ -12,6 +12,7 @@
"lightdm-gtk-greeter",
"gvfs",
"network-manager-applet",
+ "xarchiver"
]
| {"golden_diff": "diff --git a/profiles/kde.py b/profiles/kde.py\n--- a/profiles/kde.py\n+++ b/profiles/kde.py\n@@ -9,6 +9,7 @@\n \t\"konsole\",\n \t\"kate\",\n \t\"dolphin\",\n+\t\"ark\",\n \t\"sddm\",\n \t\"plasma-wayland-session\",\n \t\"egl-wayland\",\ndiff --git a/profiles/xfce4.py b/profiles/xfce4.py\n--- a/profiles/xfce4.py\n+++ b/profiles/xfce4.py\n@@ -12,6 +12,7 @@\n \t\"lightdm-gtk-greeter\",\n \t\"gvfs\",\n \t\"network-manager-applet\",\n+\t\"xarchiver\"\n ]\n", "issue": "in xfce, it need xarchiver for create archive & extract here-to\nin xfce, it need xarchiver for create archive & extract here-to\nin xfce, it need xarchiver for create archive & extract here-to\nin xfce, it need xarchiver for create archive & extract here-to\n", "before_files": [{"content": "# A desktop environment using \"Xfce4\"\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"xfce4\",\n\t\"xfce4-goodies\",\n\t\"pavucontrol\",\n\t\"lightdm\",\n\t\"lightdm-gtk-greeter\",\n\t\"gvfs\",\n\t\"network-manager-applet\",\n]\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# XFCE requires a functional xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"xfce4\", \"/somewhere/xfce4.py\")\n# or through conventional import xfce4\nif __name__ == 'xfce4':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the XFCE4 packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\tarchinstall.storage['installation_session'].enable_service('lightdm') # Light Display Manager\n", "path": "profiles/xfce4.py"}, {"content": "# A desktop environment using \"KDE\".\n\nimport archinstall\n\nis_top_level_profile = False\n\n__packages__ = [\n\t\"plasma-meta\",\n\t\"konsole\",\n\t\"kate\",\n\t\"dolphin\",\n\t\"sddm\",\n\t\"plasma-wayland-session\",\n\t\"egl-wayland\",\n]\n\n\n# TODO: Remove hard dependency of bash (due to .bash_profile)\n\n\ndef _prep_function(*args, **kwargs):\n\t\"\"\"\n\tMagic function called by the importing installer\n\tbefore continuing any further. It also avoids executing any\n\tother code in this stage. 
So it's a safe way to ask the user\n\tfor more input before any other installer steps start.\n\t\"\"\"\n\n\t# KDE requires a functioning Xorg installation.\n\tprofile = archinstall.Profile(None, 'xorg')\n\twith profile.load_instructions(namespace='xorg.py') as imported:\n\t\tif hasattr(imported, '_prep_function'):\n\t\t\treturn imported._prep_function()\n\t\telse:\n\t\t\tprint('Deprecated (??): xorg profile has no _prep_function() anymore')\n\n\n\"\"\"\ndef _post_install(*args, **kwargs):\n\tif \"nvidia\" in _gfx_driver_packages:\n\t\tprint(\"Plasma Wayland has known compatibility issues with the proprietary Nvidia driver\")\n\tprint(\"After booting, you can choose between Wayland and Xorg using the drop-down menu\")\n\treturn True\n\"\"\"\n\n# Ensures that this code only gets executed if executed\n# through importlib.util.spec_from_file_location(\"kde\", \"/somewhere/kde.py\")\n# or through conventional import kde\nif __name__ == 'kde':\n\t# Install dependency profiles\n\tarchinstall.storage['installation_session'].install_profile('xorg')\n\n\t# Install the KDE packages\n\tarchinstall.storage['installation_session'].add_additional_packages(__packages__)\n\n\t# Enable autostart of KDE for all users\n\tarchinstall.storage['installation_session'].enable_service('sddm')\n", "path": "profiles/kde.py"}]} | 1,604 | 171 |
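Both profiles depend on archinstall's loading convention, noted in the listings' closing comments: profile modules are executed through `importlib` with the module name set to the profile name, so the `if __name__ == 'kde':` block runs when the profile is installed but not on a plain `import`. A minimal sketch of that mechanism (the path is illustrative):

```python
import importlib.util

spec = importlib.util.spec_from_file_location("kde", "/somewhere/profiles/kde.py")
module = importlib.util.module_from_spec(spec)  # module.__name__ == "kde"
spec.loader.exec_module(module)  # body runs, so the __name__ guard fires
```

The fix itself is then plain list membership: `ark` joins the KDE package list and `xarchiver` the XFCE one, and `add_additional_packages(__packages__)` installs whatever the lists contain.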
gh_patches_debug_37928 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3315 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider lenscrafters is broken
During the global build at 2021-08-25-14-42-15, spider **lenscrafters** failed with **0 features** and **1 error**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/logs/lenscrafters.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/lenscrafters.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/lenscrafters.geojson))
</issue>
<code>
[start of locations/spiders/lenscrafters.py]
1 # -*- coding: utf-8 -*-
2 import json
3 import re
4
5 import scrapy
6
7 from locations.items import GeojsonPointItem
8 from locations.hours import OpeningHours
9
10
11 class LensCraftersSpider(scrapy.Spider):
12 name = "lenscrafters"
13 item_attributes = { 'brand': "Lenscrafters" }
14 allowed_domains = ['local.lenscrafters.com']
15 start_urls = [
16 'https://local.lenscrafters.com/'
17 ]
18
19 def parse_hours(self, hours):
20 opening_hours = OpeningHours()
21 for group in hours:
22 if "Closed" in group:
23 pass
24 else:
25 days, open_time, close_time = re.search(r'([a-zA-Z,]+)\s([\d:]+)-([\d:]+)', group).groups()
26 days = days.split(',')
27 for day in days:
28 opening_hours.add_range(day=day, open_time=open_time, close_time=close_time, time_format='%H:%M')
29
30 return opening_hours.as_opening_hours()
31
32 def parse(self, response):
33 urls = response.xpath(
34 '//a[@class="c-directory-list-content-item-link" or @class="c-location-grid-item-link"]/@href').extract()
35 # If cannot find 'c-directory-list-content-item-link' or 'c-location-grid-item-link' then this is a store page
36 if len(urls) == 0:
37 properties = {
38 'name': response.xpath('//*[@class="location-name h1-normal"]/text()').extract_first(),
39 'addr_full': response.xpath('//*[@class="c-address-street-1"]/text()').extract_first(),
40 'city': response.xpath('//*[@class="c-address-city"]/text()').extract_first(),
41 'state': response.xpath('//*[@class="c-address-state"]/text()').extract_first(),
42 'postcode': response.xpath('//*[@class="c-address-postal-code"]/text()').extract_first(),
43 'phone': response.xpath('//*[@id="phone-main"]/text()').extract_first(),
44 'ref': "_".join(re.search(r".+/(.+?)/(.+?)/(.+?)/?(?:\.html|$)", response.url).groups()),
45 'website': response.url,
46 'lat': response.xpath('//*[@itemprop="latitude"]/@content').extract_first(),
47 'lon': response.xpath('//*[@itemprop="longitude"]/@content').extract_first(),
48 }
49
50 hours = self.parse_hours(response.xpath('//*[@itemprop="openingHours"]/@content').extract())
51 if hours:
52 properties["opening_hours"] = hours
53
54 yield GeojsonPointItem(**properties)
55 else:
56 for path in urls:
57 yield scrapy.Request(url=response.urljoin(path), callback=self.parse)
58
[end of locations/spiders/lenscrafters.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/locations/spiders/lenscrafters.py b/locations/spiders/lenscrafters.py
--- a/locations/spiders/lenscrafters.py
+++ b/locations/spiders/lenscrafters.py
@@ -10,7 +10,7 @@
class LensCraftersSpider(scrapy.Spider):
name = "lenscrafters"
- item_attributes = { 'brand': "Lenscrafters" }
+ item_attributes = {'brand': "Lenscrafters"}
allowed_domains = ['local.lenscrafters.com']
start_urls = [
'https://local.lenscrafters.com/'
@@ -30,21 +30,21 @@
return opening_hours.as_opening_hours()
def parse(self, response):
- urls = response.xpath(
- '//a[@class="c-directory-list-content-item-link" or @class="c-location-grid-item-link"]/@href').extract()
- # If cannot find 'c-directory-list-content-item-link' or 'c-location-grid-item-link' then this is a store page
+ urls = response.xpath('//a[@class="Directory-listLink Link--directory"]/@href').extract()
+
+ # If cannot find 'Directory-listLink Link--directory' then this is a store page
if len(urls) == 0:
properties = {
- 'name': response.xpath('//*[@class="location-name h1-normal"]/text()').extract_first(),
- 'addr_full': response.xpath('//*[@class="c-address-street-1"]/text()').extract_first(),
- 'city': response.xpath('//*[@class="c-address-city"]/text()').extract_first(),
- 'state': response.xpath('//*[@class="c-address-state"]/text()').extract_first(),
- 'postcode': response.xpath('//*[@class="c-address-postal-code"]/text()').extract_first(),
- 'phone': response.xpath('//*[@id="phone-main"]/text()').extract_first(),
- 'ref': "_".join(re.search(r".+/(.+?)/(.+?)/(.+?)/?(?:\.html|$)", response.url).groups()),
- 'website': response.url,
- 'lat': response.xpath('//*[@itemprop="latitude"]/@content').extract_first(),
- 'lon': response.xpath('//*[@itemprop="longitude"]/@content').extract_first(),
+ 'name': response.xpath('//h1[@id="location-name"]/text()').extract_first(),
+ 'addr_full': response.xpath('//span[@class="c-address-street-1"]/text()').extract_first(),
+ 'city': response.xpath('//span[@class="c-address-city"]/text()').extract_first(),
+ 'state': response.xpath('//abbr[@class="c-address-state"]/text()').extract_first(),
+ 'postcode': response.xpath('//span[@class="c-address-postal-code"]/text()').extract_first(),
+ 'phone': response.xpath('//div[@id="phone-main"]/text()').extract_first(),
+ 'ref': response.xpath('//link[@rel="canonical"]/@href').extract_first(),
+ 'website': response.xpath('//link[@rel="canonical"]/@href').extract_first(),
+ 'lat': response.xpath('//meta[@itemprop="latitude"]/@content').extract_first(),
+ 'lon': response.xpath('//meta[@itemprop="longitude"]/@content').extract_first(),
}
hours = self.parse_hours(response.xpath('//*[@itemprop="openingHours"]/@content').extract())
| {"golden_diff": "diff --git a/locations/spiders/lenscrafters.py b/locations/spiders/lenscrafters.py\n--- a/locations/spiders/lenscrafters.py\n+++ b/locations/spiders/lenscrafters.py\n@@ -10,7 +10,7 @@\n \n class LensCraftersSpider(scrapy.Spider):\n name = \"lenscrafters\"\n- item_attributes = { 'brand': \"Lenscrafters\" }\n+ item_attributes = {'brand': \"Lenscrafters\"}\n allowed_domains = ['local.lenscrafters.com']\n start_urls = [\n 'https://local.lenscrafters.com/'\n@@ -30,21 +30,21 @@\n return opening_hours.as_opening_hours()\n \n def parse(self, response):\n- urls = response.xpath(\n- '//a[@class=\"c-directory-list-content-item-link\" or @class=\"c-location-grid-item-link\"]/@href').extract()\n- # If cannot find 'c-directory-list-content-item-link' or 'c-location-grid-item-link' then this is a store page\n+ urls = response.xpath('//a[@class=\"Directory-listLink Link--directory\"]/@href').extract()\n+\n+ # If cannot find 'Directory-listLink Link--directory' then this is a store page\n if len(urls) == 0:\n properties = {\n- 'name': response.xpath('//*[@class=\"location-name h1-normal\"]/text()').extract_first(),\n- 'addr_full': response.xpath('//*[@class=\"c-address-street-1\"]/text()').extract_first(),\n- 'city': response.xpath('//*[@class=\"c-address-city\"]/text()').extract_first(),\n- 'state': response.xpath('//*[@class=\"c-address-state\"]/text()').extract_first(),\n- 'postcode': response.xpath('//*[@class=\"c-address-postal-code\"]/text()').extract_first(),\n- 'phone': response.xpath('//*[@id=\"phone-main\"]/text()').extract_first(),\n- 'ref': \"_\".join(re.search(r\".+/(.+?)/(.+?)/(.+?)/?(?:\\.html|$)\", response.url).groups()),\n- 'website': response.url,\n- 'lat': response.xpath('//*[@itemprop=\"latitude\"]/@content').extract_first(),\n- 'lon': response.xpath('//*[@itemprop=\"longitude\"]/@content').extract_first(),\n+ 'name': response.xpath('//h1[@id=\"location-name\"]/text()').extract_first(),\n+ 'addr_full': response.xpath('//span[@class=\"c-address-street-1\"]/text()').extract_first(),\n+ 'city': response.xpath('//span[@class=\"c-address-city\"]/text()').extract_first(),\n+ 'state': response.xpath('//abbr[@class=\"c-address-state\"]/text()').extract_first(),\n+ 'postcode': response.xpath('//span[@class=\"c-address-postal-code\"]/text()').extract_first(),\n+ 'phone': response.xpath('//div[@id=\"phone-main\"]/text()').extract_first(),\n+ 'ref': response.xpath('//link[@rel=\"canonical\"]/@href').extract_first(),\n+ 'website': response.xpath('//link[@rel=\"canonical\"]/@href').extract_first(),\n+ 'lat': response.xpath('//meta[@itemprop=\"latitude\"]/@content').extract_first(),\n+ 'lon': response.xpath('//meta[@itemprop=\"longitude\"]/@content').extract_first(),\n }\n \n hours = self.parse_hours(response.xpath('//*[@itemprop=\"openingHours\"]/@content').extract())\n", "issue": "Spider lenscrafters is broken\nDuring the global build at 2021-08-25-14-42-15, spider **lenscrafters** failed with **0 features** and **1 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/logs/lenscrafters.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/lenscrafters.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-08-25-14-42-15/output/lenscrafters.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport json\nimport re\n\nimport scrapy\n\nfrom locations.items import GeojsonPointItem\nfrom locations.hours import OpeningHours\n\n\nclass 
LensCraftersSpider(scrapy.Spider):\n name = \"lenscrafters\"\n item_attributes = { 'brand': \"Lenscrafters\" }\n allowed_domains = ['local.lenscrafters.com']\n start_urls = [\n 'https://local.lenscrafters.com/'\n ]\n\n def parse_hours(self, hours):\n opening_hours = OpeningHours()\n for group in hours:\n if \"Closed\" in group:\n pass\n else:\n days, open_time, close_time = re.search(r'([a-zA-Z,]+)\\s([\\d:]+)-([\\d:]+)', group).groups()\n days = days.split(',')\n for day in days:\n opening_hours.add_range(day=day, open_time=open_time, close_time=close_time, time_format='%H:%M')\n\n return opening_hours.as_opening_hours()\n\n def parse(self, response):\n urls = response.xpath(\n '//a[@class=\"c-directory-list-content-item-link\" or @class=\"c-location-grid-item-link\"]/@href').extract()\n # If cannot find 'c-directory-list-content-item-link' or 'c-location-grid-item-link' then this is a store page\n if len(urls) == 0:\n properties = {\n 'name': response.xpath('//*[@class=\"location-name h1-normal\"]/text()').extract_first(),\n 'addr_full': response.xpath('//*[@class=\"c-address-street-1\"]/text()').extract_first(),\n 'city': response.xpath('//*[@class=\"c-address-city\"]/text()').extract_first(),\n 'state': response.xpath('//*[@class=\"c-address-state\"]/text()').extract_first(),\n 'postcode': response.xpath('//*[@class=\"c-address-postal-code\"]/text()').extract_first(),\n 'phone': response.xpath('//*[@id=\"phone-main\"]/text()').extract_first(),\n 'ref': \"_\".join(re.search(r\".+/(.+?)/(.+?)/(.+?)/?(?:\\.html|$)\", response.url).groups()),\n 'website': response.url,\n 'lat': response.xpath('//*[@itemprop=\"latitude\"]/@content').extract_first(),\n 'lon': response.xpath('//*[@itemprop=\"longitude\"]/@content').extract_first(),\n }\n\n hours = self.parse_hours(response.xpath('//*[@itemprop=\"openingHours\"]/@content').extract())\n if hours:\n properties[\"opening_hours\"] = hours\n\n yield GeojsonPointItem(**properties)\n else:\n for path in urls:\n yield scrapy.Request(url=response.urljoin(path), callback=self.parse)\n", "path": "locations/spiders/lenscrafters.py"}]} | 1,409 | 742 |
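The diff is a selector refresh after a site redesign, but the unchanged `parse_hours` helper is where the structured data comes from: it splits strings such as `Mo,Tu,We 09:00-19:00` with a single regex. A worked illustration (the sample string is made up to match the pattern the code expects):

```python
import re

group = "Mo,Tu,We 09:00-19:00"  # hypothetical itemprop="openingHours" content
days, open_time, close_time = re.search(
    r"([a-zA-Z,]+)\s([\d:]+)-([\d:]+)", group
).groups()
print(days.split(","), open_time, close_time)
# ['Mo', 'Tu', 'We'] 09:00 19:00
```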
gh_patches_debug_19352 | rasdani/github-patches | git_diff | sublimelsp__LSP-339 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Scopes priorities while selecting configuration
## Bug:
When there are multiple language servers configured, all of which are for similar scopes (e.g. `source.json`, `source.json.sublime.settings`), the configuration with the most specific scope should be preferred; however, right now one or the other could "win", sometimes leading to erroneous configuration.
Example comes from configuring **vscode-json-languageserver** to work with both `json` and `jsonc` languageIds.
### Suggestion:
Give priority to the configuration with the most specific scope that matches.
</issue>
<code>
[start of plugin/core/configurations.py]
1 import sublime
2
3 from .settings import ClientConfig, client_configs
4 from .logging import debug
5 from .workspace import get_project_config
6
7 assert ClientConfig
8
9 try:
10 from typing import Any, List, Dict, Tuple, Callable, Optional
11 assert Any and List and Dict and Tuple and Callable and Optional
12 except ImportError:
13 pass
14
15
16 window_client_configs = dict() # type: Dict[int, List[ClientConfig]]
17
18
19 def get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':
20 for config in configs:
21 for scope in config.scopes:
22 if len(view.sel()) > 0:
23 if view.match_selector(view.sel()[0].begin(), scope):
24 return config
25
26 return None
27
28
29 def register_client_config(config: ClientConfig) -> None:
30 window_client_configs.clear()
31 client_configs.add_external_config(config)
32
33
34 def get_global_client_config(view: sublime.View) -> 'Optional[ClientConfig]':
35 return get_scope_client_config(view, client_configs.all)
36
37
38 def get_default_client_config(view: sublime.View) -> 'Optional[ClientConfig]':
39 return get_scope_client_config(view, client_configs.defaults)
40
41
42 def get_window_client_config(view: sublime.View) -> 'Optional[ClientConfig]':
43 window = view.window()
44 if window:
45 configs_for_window = window_client_configs.get(window.id(), [])
46 return get_scope_client_config(view, configs_for_window)
47 else:
48 return None
49
50
51 def config_for_scope(view: sublime.View) -> 'Optional[ClientConfig]':
52 # check window_client_config first
53 window_client_config = get_window_client_config(view)
54 if not window_client_config:
55 global_client_config = get_global_client_config(view)
56
57 if global_client_config:
58 window = view.window()
59 if window:
60 window_client_config = apply_window_settings(global_client_config, view)
61 add_window_client_config(window, window_client_config)
62 return window_client_config
63 else:
64 # always return a client config even if the view has no window anymore
65 return global_client_config
66
67 return window_client_config
68
69
70 def add_window_client_config(window: 'sublime.Window', config: 'ClientConfig'):
71 global window_client_configs
72 window_client_configs.setdefault(window.id(), []).append(config)
73
74
75 def clear_window_client_configs(window: 'sublime.Window'):
76 global window_client_configs
77 if window.id() in window_client_configs:
78 del window_client_configs[window.id()]
79
80
81 def apply_window_settings(client_config: 'ClientConfig', view: 'sublime.View') -> 'ClientConfig':
82 window = view.window()
83 if window:
84 window_config = get_project_config(window)
85
86 if client_config.name in window_config:
87 overrides = window_config[client_config.name]
88 debug('window has override for', client_config.name, overrides)
89 return ClientConfig(
90 client_config.name,
91 overrides.get("command", client_config.binary_args),
92 overrides.get("tcp_port", client_config.tcp_port),
93 overrides.get("scopes", client_config.scopes),
94 overrides.get("syntaxes", client_config.syntaxes),
95 overrides.get("languageId", client_config.languageId),
96 overrides.get("enabled", client_config.enabled),
97 overrides.get("initializationOptions", client_config.init_options),
98 overrides.get("settings", client_config.settings),
99 overrides.get("env", client_config.env)
100 )
101
102 return client_config
103
104
105 def is_supportable_syntax(syntax: str) -> bool:
106 # TODO: filter out configs disabled by the user.
107 for config in client_configs.defaults:
108 if syntax in config.syntaxes:
109 return True
110 return False
111
112
113 def is_supported_syntax(syntax: str) -> bool:
114 for config in client_configs.all:
115 if syntax in config.syntaxes:
116 return True
117 return False
118
119
120 def is_supported_view(view: sublime.View) -> bool:
121 # TODO: perhaps make this check for a client instead of a config
122 if config_for_scope(view):
123 return True
124 else:
125 return False
126
[end of plugin/core/configurations.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/plugin/core/configurations.py b/plugin/core/configurations.py
--- a/plugin/core/configurations.py
+++ b/plugin/core/configurations.py
@@ -17,13 +17,21 @@
def get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':
+ # When there are multiple server configurations, all of which are for
+ # similar scopes (e.g. 'source.json', 'source.json.sublime.settings') the
+ # configuration with the most specific scope (highest ranked selector)
+ # in the current position is preferred.
+ scope_score = 0
+ scope_client_config = None
for config in configs:
for scope in config.scopes:
- if len(view.sel()) > 0:
- if view.match_selector(view.sel()[0].begin(), scope):
- return config
-
- return None
+ sel = view.sel()
+ if len(sel) > 0:
+ score = view.score_selector(sel[0].begin(), scope)
+ if score > scope_score:
+ scope_score = score
+ scope_client_config = config
+ return scope_client_config
def register_client_config(config: ClientConfig) -> None:
| {"golden_diff": "diff --git a/plugin/core/configurations.py b/plugin/core/configurations.py\n--- a/plugin/core/configurations.py\n+++ b/plugin/core/configurations.py\n@@ -17,13 +17,21 @@\n \n \n def get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':\n+ # When there are multiple server configurations, all of which are for\n+ # similar scopes (e.g. 'source.json', 'source.json.sublime.settings') the\n+ # configuration with the most specific scope (highest ranked selector)\n+ # in the current position is preferred.\n+ scope_score = 0\n+ scope_client_config = None\n for config in configs:\n for scope in config.scopes:\n- if len(view.sel()) > 0:\n- if view.match_selector(view.sel()[0].begin(), scope):\n- return config\n-\n- return None\n+ sel = view.sel()\n+ if len(sel) > 0:\n+ score = view.score_selector(sel[0].begin(), scope)\n+ if score > scope_score:\n+ scope_score = score\n+ scope_client_config = config\n+ return scope_client_config\n \n \n def register_client_config(config: ClientConfig) -> None:\n", "issue": "Scopes priorities while selecting configuration\n## Bug:\r\n\r\nWhen there are multiple language servers configured, all of which are for similar scopes (Ex. `source.json`, `source.json.sublime.settings`) the configuration with the most specific scope should be preferred; however right now one or the other could \"win\", some times leading to erroneous configuration.\r\n\r\nExample comes from configuring **vscode-json-languageserver** to work with both `json` and `jsonc` languageIds.\r\n\r\n### Suggestion:\r\n\r\nGive priority to the configuration with the most specific scope that matches.\r\n\n", "before_files": [{"content": "import sublime\n\nfrom .settings import ClientConfig, client_configs\nfrom .logging import debug\nfrom .workspace import get_project_config\n\nassert ClientConfig\n\ntry:\n from typing import Any, List, Dict, Tuple, Callable, Optional\n assert Any and List and Dict and Tuple and Callable and Optional\nexcept ImportError:\n pass\n\n\nwindow_client_configs = dict() # type: Dict[int, List[ClientConfig]]\n\n\ndef get_scope_client_config(view: 'sublime.View', configs: 'List[ClientConfig]') -> 'Optional[ClientConfig]':\n for config in configs:\n for scope in config.scopes:\n if len(view.sel()) > 0:\n if view.match_selector(view.sel()[0].begin(), scope):\n return config\n\n return None\n\n\ndef register_client_config(config: ClientConfig) -> None:\n window_client_configs.clear()\n client_configs.add_external_config(config)\n\n\ndef get_global_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n return get_scope_client_config(view, client_configs.all)\n\n\ndef get_default_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n return get_scope_client_config(view, client_configs.defaults)\n\n\ndef get_window_client_config(view: sublime.View) -> 'Optional[ClientConfig]':\n window = view.window()\n if window:\n configs_for_window = window_client_configs.get(window.id(), [])\n return get_scope_client_config(view, configs_for_window)\n else:\n return None\n\n\ndef config_for_scope(view: sublime.View) -> 'Optional[ClientConfig]':\n # check window_client_config first\n window_client_config = get_window_client_config(view)\n if not window_client_config:\n global_client_config = get_global_client_config(view)\n\n if global_client_config:\n window = view.window()\n if window:\n window_client_config = apply_window_settings(global_client_config, view)\n add_window_client_config(window, 
window_client_config)\n return window_client_config\n else:\n # always return a client config even if the view has no window anymore\n return global_client_config\n\n return window_client_config\n\n\ndef add_window_client_config(window: 'sublime.Window', config: 'ClientConfig'):\n global window_client_configs\n window_client_configs.setdefault(window.id(), []).append(config)\n\n\ndef clear_window_client_configs(window: 'sublime.Window'):\n global window_client_configs\n if window.id() in window_client_configs:\n del window_client_configs[window.id()]\n\n\ndef apply_window_settings(client_config: 'ClientConfig', view: 'sublime.View') -> 'ClientConfig':\n window = view.window()\n if window:\n window_config = get_project_config(window)\n\n if client_config.name in window_config:\n overrides = window_config[client_config.name]\n debug('window has override for', client_config.name, overrides)\n return ClientConfig(\n client_config.name,\n overrides.get(\"command\", client_config.binary_args),\n overrides.get(\"tcp_port\", client_config.tcp_port),\n overrides.get(\"scopes\", client_config.scopes),\n overrides.get(\"syntaxes\", client_config.syntaxes),\n overrides.get(\"languageId\", client_config.languageId),\n overrides.get(\"enabled\", client_config.enabled),\n overrides.get(\"initializationOptions\", client_config.init_options),\n overrides.get(\"settings\", client_config.settings),\n overrides.get(\"env\", client_config.env)\n )\n\n return client_config\n\n\ndef is_supportable_syntax(syntax: str) -> bool:\n # TODO: filter out configs disabled by the user.\n for config in client_configs.defaults:\n if syntax in config.syntaxes:\n return True\n return False\n\n\ndef is_supported_syntax(syntax: str) -> bool:\n for config in client_configs.all:\n if syntax in config.syntaxes:\n return True\n return False\n\n\ndef is_supported_view(view: sublime.View) -> bool:\n # TODO: perhaps make this check for a client instead of a config\n if config_for_scope(view):\n return True\n else:\n return False\n", "path": "plugin/core/configurations.py"}]} | 1,791 | 281 |
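The patch realizes the suggestion by swapping the boolean `view.match_selector` for Sublime's ranked `view.score_selector` and keeping whichever config's scope scores highest at the first selection. The selection loop in isolation (the scoring function is passed in here because it is a Sublime Text API):

```python
def best_scope_config(configs, score_selector):
    """Pick the config whose scope selector ranks highest.

    score_selector(scope) -> int stands in for Sublime's
    view.score_selector(point, scope); a score of 0 means no match.
    """
    best_score = 0
    best_config = None
    for config in configs:
        for scope in config.scopes:
            score = score_selector(scope)
            if score > best_score:  # strict ">": earlier configs win ties
                best_score = score
                best_config = config
    return best_config
```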
gh_patches_debug_12787 | rasdani/github-patches | git_diff | numba__numba-672 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Wrong type coercion on input arguments
In the following snippet, it looks like calling the function with int arguments first causes any further float arguments to be coerced to int:
```
>>> @jit(nopython=True)
... def mpow(a, b):
... return math.pow(a, b)
...
>>>
>>> mpow(0, 1)
0.0
>>> mpow(0, 0.666)
1.0
>>> mpow(0, 1.666)
0.0
```
It doesn't happen if the function is called with float arguments first:
```
>>> @jit(nopython=True)
... def mpow2(a, b):
... return math.pow(a, b)
...
>>> mpow2(0, 0.666)
0.0
>>> mpow2(0, 1)
0.0
>>> mpow2(0, 0.666)
0.0
```
</issue>
<code>
[start of numba/typeconv/typeconv.py]
1 from __future__ import print_function, absolute_import
2 from . import _typeconv
3
4
5 class TypeManager(object):
6 def __init__(self):
7 self._ptr = _typeconv.new_type_manager()
8
9 def select_overload(self, sig, overloads):
10 sig = [t._code for t in sig]
11 overloads = [[t._code for t in s] for s in overloads ]
12 return _typeconv.select_overload(self._ptr, sig, overloads)
13
14 def check_compatible(self, fromty, toty):
15 return _typeconv.check_compatible(self._ptr, fromty._code, toty._code)
16
17 def set_compatible(self, fromty, toty, by):
18 _typeconv.set_compatible(self._ptr, fromty._code, toty._code, by)
19
20 def set_promote(self, fromty, toty):
21 self.set_compatible(fromty, toty, ord("p"))
22
23 def set_unsafe_convert(self, fromty, toty):
24 self.set_compatible(fromty, toty, ord("u"))
25
26 def set_safe_convert(self, fromty, toty):
27 self.set_compatible(fromty, toty, ord("s"))
28
29 def get_pointer(self):
30 return _typeconv.get_pointer(self._ptr)
31
[end of numba/typeconv/typeconv.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/numba/typeconv/typeconv.py b/numba/typeconv/typeconv.py
--- a/numba/typeconv/typeconv.py
+++ b/numba/typeconv/typeconv.py
@@ -6,10 +6,10 @@
def __init__(self):
self._ptr = _typeconv.new_type_manager()
- def select_overload(self, sig, overloads):
+ def select_overload(self, sig, overloads, allow_unsafe):
sig = [t._code for t in sig]
overloads = [[t._code for t in s] for s in overloads ]
- return _typeconv.select_overload(self._ptr, sig, overloads)
+ return _typeconv.select_overload(self._ptr, sig, overloads, allow_unsafe)
def check_compatible(self, fromty, toty):
return _typeconv.check_compatible(self._ptr, fromty._code, toty._code)
| {"golden_diff": "diff --git a/numba/typeconv/typeconv.py b/numba/typeconv/typeconv.py\n--- a/numba/typeconv/typeconv.py\n+++ b/numba/typeconv/typeconv.py\n@@ -6,10 +6,10 @@\n def __init__(self):\n self._ptr = _typeconv.new_type_manager()\n \n- def select_overload(self, sig, overloads):\n+ def select_overload(self, sig, overloads, allow_unsafe):\n sig = [t._code for t in sig]\n overloads = [[t._code for t in s] for s in overloads ]\n- return _typeconv.select_overload(self._ptr, sig, overloads)\n+ return _typeconv.select_overload(self._ptr, sig, overloads, allow_unsafe)\n \n def check_compatible(self, fromty, toty):\n return _typeconv.check_compatible(self._ptr, fromty._code, toty._code)\n", "issue": "Wrong type coercion on input arguments\nIf the following snippet, it looks like first calling the function with int arguments then coerces any further float arguments to int:\n\n```\n>>> @jit(nopython=True)\n... def mpow(a, b):\n... return math.pow(a, b)\n... \n>>> \n>>> mpow(0, 1)\n0.0\n>>> mpow(0, 0.666)\n1.0\n>>> mpow(0, 1.666)\n0.0\n```\n\nIt doesn't happen if the function is called with float arguments first:\n\n```\n>>> @jit(nopython=True)\n... def mpow2(a, b):\n... return math.pow(a, b)\n... \n>>> mpow2(0, 0.666)\n0.0\n>>> mpow2(0, 1)\n0.0\n>>> mpow2(0, 0.666)\n0.0\n```\n\n", "before_files": [{"content": "from __future__ import print_function, absolute_import\nfrom . import _typeconv\n\n\nclass TypeManager(object):\n def __init__(self):\n self._ptr = _typeconv.new_type_manager()\n\n def select_overload(self, sig, overloads):\n sig = [t._code for t in sig]\n overloads = [[t._code for t in s] for s in overloads ]\n return _typeconv.select_overload(self._ptr, sig, overloads)\n\n def check_compatible(self, fromty, toty):\n return _typeconv.check_compatible(self._ptr, fromty._code, toty._code)\n\n def set_compatible(self, fromty, toty, by):\n _typeconv.set_compatible(self._ptr, fromty._code, toty._code, by)\n\n def set_promote(self, fromty, toty):\n self.set_compatible(fromty, toty, ord(\"p\"))\n\n def set_unsafe_convert(self, fromty, toty):\n self.set_compatible(fromty, toty, ord(\"u\"))\n\n def set_safe_convert(self, fromty, toty):\n self.set_compatible(fromty, toty, ord(\"s\"))\n\n def get_pointer(self):\n return _typeconv.get_pointer(self._ptr)\n", "path": "numba/typeconv/typeconv.py"}]} | 1,087 | 214 |
gh_patches_debug_5376 | rasdani/github-patches | git_diff | great-expectations__great_expectations-4471 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Use cleaner solution for non-truncating division in python 2
Prefer `from __future__ import division` to `1.*x/y`
</issue>
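Editor's note — a minimal Python sketch (not part of the original issue) of what the requested change does, since the difference between the two idioms is easy to miss:

```python
from __future__ import division  # must appear at the top of the module

# On Python 2, plain `/` between ints truncates (7 / 2 == 3) unless this
# import is present; with it, `/` is true division on both 2 and 3.
x, y = 7, 2
print(x / y)       # 3.5 with the future import
print(x // y)      # 3  -- floor division stays available and explicit
print(1. * x / y)  # 3.5 -- the workaround the issue wants to replace
```

The import fixes every division in the module at once instead of sprinkling `1.*` coercions through the code.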
<code>
[start of great_expectations/rule_based_profiler/types/__init__.py]
1 from .attributes import Attributes # isort:skip
2 from .builder import Builder # isort:skip
3
4 from .domain import ( # isort:skip
5 Domain,
6 SemanticDomainTypes,
7 InferredSemanticDomainType,
8 )
9 from .parameter_container import ( # isort:skip
10 DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,
11 FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,
12 PARAMETER_KEY,
13 VARIABLES_KEY,
14 VARIABLES_PREFIX,
15 ParameterNode,
16 ParameterContainer,
17 build_parameter_container,
18 build_parameter_container_for_variables,
19 is_fully_qualified_parameter_name_literal_string_format,
20 get_parameter_value_by_fully_qualified_parameter_name,
21 get_parameter_values_for_fully_qualified_parameter_names,
22 get_fully_qualified_parameter_names,
23 )
24
[end of great_expectations/rule_based_profiler/types/__init__.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/great_expectations/rule_based_profiler/types/__init__.py b/great_expectations/rule_based_profiler/types/__init__.py
--- a/great_expectations/rule_based_profiler/types/__init__.py
+++ b/great_expectations/rule_based_profiler/types/__init__.py
@@ -9,6 +9,8 @@
from .parameter_container import ( # isort:skip
DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,
FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,
+ FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY,
+ FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY,
PARAMETER_KEY,
VARIABLES_KEY,
VARIABLES_PREFIX,
| {"golden_diff": "diff --git a/great_expectations/rule_based_profiler/types/__init__.py b/great_expectations/rule_based_profiler/types/__init__.py\n--- a/great_expectations/rule_based_profiler/types/__init__.py\n+++ b/great_expectations/rule_based_profiler/types/__init__.py\n@@ -9,6 +9,8 @@\n from .parameter_container import ( # isort:skip\n DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,\n+ FULLY_QUALIFIED_PARAMETER_NAME_VALUE_KEY,\n+ FULLY_QUALIFIED_PARAMETER_NAME_METADATA_KEY,\n PARAMETER_KEY,\n VARIABLES_KEY,\n VARIABLES_PREFIX,\n", "issue": "Use cleaner solution for non-truncating division in python 2\nPrefer `from __future__ import division` to `1.*x/y`\n", "before_files": [{"content": "from .attributes import Attributes # isort:skip\nfrom .builder import Builder # isort:skip\n\nfrom .domain import ( # isort:skip\n Domain,\n SemanticDomainTypes,\n InferredSemanticDomainType,\n)\nfrom .parameter_container import ( # isort:skip\n DOMAIN_KWARGS_PARAMETER_FULLY_QUALIFIED_NAME,\n FULLY_QUALIFIED_PARAMETER_NAME_SEPARATOR_CHARACTER,\n PARAMETER_KEY,\n VARIABLES_KEY,\n VARIABLES_PREFIX,\n ParameterNode,\n ParameterContainer,\n build_parameter_container,\n build_parameter_container_for_variables,\n is_fully_qualified_parameter_name_literal_string_format,\n get_parameter_value_by_fully_qualified_parameter_name,\n get_parameter_values_for_fully_qualified_parameter_names,\n get_fully_qualified_parameter_names,\n)\n", "path": "great_expectations/rule_based_profiler/types/__init__.py"}]} | 789 | 149 |
gh_patches_debug_8097 | rasdani/github-patches | git_diff | uccser__cs-unplugged-652 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Modify docker configuration to work on OSX
Docker for Mac does not properly support the `network_mode: host` option for containers. In order to run the system on OSX, it will be necessary to network the containers using a bridged network:
> "By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name."
Rather than accessing other containers via a port on localhost, containers will access each other using the instance name as the hostname. Port 80 will then be exposed from the nginx container to the host.
</issue>
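Editor's note — a minimal sketch (not from the original issue) of the name-based lookup the bridged network provides; the service name `nginx` is a hypothetical example:

```python
import socket

# Inside any container on Compose's default bridged network, another
# service is resolvable by its service name rather than via a port on
# localhost. "nginx" here is an assumed service name for illustration.
nginx_ip = socket.gethostbyname("nginx")
print("nginx resolves to %s inside the Compose network" % nginx_ip)
```

Port 80 then only needs an explicit port mapping on the nginx service to remain reachable from the host.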
<code>
[start of csunplugged/config/settings/database_proxy.py]
1 # -*- coding: utf-8 -*-
2 """Django settings for connecting via Google Cloud SQL Proxy."""
3
4 from .base import * # noqa: F403
5
6
7 # DATABASE CONFIGURATION
8 # ----------------------------------------------------------------------------
9 # See: https://docs.djangoproject.com/en/dev/ref/settings/#databases
10 DATABASES = {
11 "default": {
12 "ENGINE": "django.db.backends.postgresql",
13 "HOST": "localhost",
14 "PORT": "5433",
15 "NAME": "csunplugged",
16 "USER": env("GOOGLE_CLOUD_SQL_DATABASE_USERNAME"), # noqa: F405
17 "PASSWORD": env("GOOGLE_CLOUD_SQL_DATABASE_PASSWORD"), # noqa: F405
18 "ATOMIC_REQUESTS": True,
19 }
20 }
21
22 SECRET_KEY = env("DJANGO_SECRET_KEY") # noqa: F405
23
[end of csunplugged/config/settings/database_proxy.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/csunplugged/config/settings/database_proxy.py b/csunplugged/config/settings/database_proxy.py
--- a/csunplugged/config/settings/database_proxy.py
+++ b/csunplugged/config/settings/database_proxy.py
@@ -10,8 +10,8 @@
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
- "HOST": "localhost",
- "PORT": "5433",
+ "HOST": "cloud_sql_proxy",
+ "PORT": "5432",
"NAME": "csunplugged",
"USER": env("GOOGLE_CLOUD_SQL_DATABASE_USERNAME"), # noqa: F405
"PASSWORD": env("GOOGLE_CLOUD_SQL_DATABASE_PASSWORD"), # noqa: F405
| {"golden_diff": "diff --git a/csunplugged/config/settings/database_proxy.py b/csunplugged/config/settings/database_proxy.py\n--- a/csunplugged/config/settings/database_proxy.py\n+++ b/csunplugged/config/settings/database_proxy.py\n@@ -10,8 +10,8 @@\n DATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n- \"HOST\": \"localhost\",\n- \"PORT\": \"5433\",\n+ \"HOST\": \"cloud_sql_proxy\",\n+ \"PORT\": \"5432\",\n \"NAME\": \"csunplugged\",\n \"USER\": env(\"GOOGLE_CLOUD_SQL_DATABASE_USERNAME\"), # noqa: F405\n \"PASSWORD\": env(\"GOOGLE_CLOUD_SQL_DATABASE_PASSWORD\"), # noqa: F405\n", "issue": "Modify docker configuration to work on OSX\nDocker for Mac does not properly support the `network_mode: host` option for containers. In order to run the system on OSX, it will be necessary to network the containers using a bridged network:\r\n\r\n> By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.\"\r\n\r\nRather than accessing other containers via a port on localhost, containers will access each other using the instance name as the hostname. Port 80 will then be exposed from the nginx container to the host.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"Django settings for connecting via Google Cloud SQL Proxy.\"\"\"\n\nfrom .base import * # noqa: F403\n\n\n# DATABASE CONFIGURATION\n# ----------------------------------------------------------------------------\n# See: https://docs.djangoproject.com/en/dev/ref/settings/#databases\nDATABASES = {\n \"default\": {\n \"ENGINE\": \"django.db.backends.postgresql\",\n \"HOST\": \"localhost\",\n \"PORT\": \"5433\",\n \"NAME\": \"csunplugged\",\n \"USER\": env(\"GOOGLE_CLOUD_SQL_DATABASE_USERNAME\"), # noqa: F405\n \"PASSWORD\": env(\"GOOGLE_CLOUD_SQL_DATABASE_PASSWORD\"), # noqa: F405\n \"ATOMIC_REQUESTS\": True,\n }\n}\n\nSECRET_KEY = env(\"DJANGO_SECRET_KEY\") # noqa: F405\n", "path": "csunplugged/config/settings/database_proxy.py"}]} | 906 | 177 |
gh_patches_debug_2946 | rasdani/github-patches | git_diff | beetbox__beets-3703 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Minor documentation correction: correct id3.org url
https://github.com/beetbox/beets/blob/master/docs/faq.rst#L303
refers to:
http://www.id3.org/id3v2.4.0-structure
as a reference url for a copy of the ID3v2.4 standard documentation, but this returns a "Not found" error. I've found 2 possibilities for the replacement:
https://id3.org/id3v2.4.0-structure
(with adverts) or
https://github.com/id3/ID3v2.4/raw/master/id3v2.40-structure.txt
(without adverts)
</issue>
<code>
[start of docs/conf.py]
1 # -*- coding: utf-8 -*-
2
3 from __future__ import division, absolute_import, print_function
4
5 AUTHOR = u'Adrian Sampson'
6
7 # General configuration
8
9 extensions = ['sphinx.ext.autodoc', 'sphinx.ext.extlinks']
10
11 exclude_patterns = ['_build']
12 source_suffix = '.rst'
13 master_doc = 'index'
14
15 project = u'beets'
16 copyright = u'2016, Adrian Sampson'
17
18 version = '1.5'
19 release = '1.5.0'
20
21 pygments_style = 'sphinx'
22
23 # External links to the bug tracker and other sites.
24 extlinks = {
25 'bug': ('https://github.com/beetbox/beets/issues/%s', '#'),
26 'user': ('https://github.com/%s', ''),
27 'pypi': ('https://pypi.org/project/%s/', ''),
28 'stdlib': ('https://docs.python.org/3/library/%s.html', ''),
29 }
30
31 # Options for HTML output
32 htmlhelp_basename = 'beetsdoc'
33
34 # Options for LaTeX output
35 latex_documents = [
36 ('index', 'beets.tex', u'beets Documentation',
37 AUTHOR, 'manual'),
38 ]
39
40 # Options for manual page output
41 man_pages = [
42 ('reference/cli', 'beet', u'music tagger and library organizer',
43 [AUTHOR], 1),
44 ('reference/config', 'beetsconfig', u'beets configuration file',
45 [AUTHOR], 5),
46 ]
47
[end of docs/conf.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/docs/conf.py b/docs/conf.py
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -28,6 +28,13 @@
'stdlib': ('https://docs.python.org/3/library/%s.html', ''),
}
+linkcheck_ignore = [
+ r'https://github.com/beetbox/beets/issues/',
+ r'https://github.com/\w+$', # ignore user pages
+ r'.*localhost.*',
+ r'https://www.musixmatch.com/', # blocks requests
+]
+
# Options for HTML output
htmlhelp_basename = 'beetsdoc'
| {"golden_diff": "diff --git a/docs/conf.py b/docs/conf.py\n--- a/docs/conf.py\n+++ b/docs/conf.py\n@@ -28,6 +28,13 @@\n 'stdlib': ('https://docs.python.org/3/library/%s.html', ''),\n }\n \n+linkcheck_ignore = [\n+ r'https://github.com/beetbox/beets/issues/',\n+ r'https://github.com/\\w+$', # ignore user pages\n+ r'.*localhost.*',\n+ r'https://www.musixmatch.com/', # blocks requests\n+]\n+\n # Options for HTML output\n htmlhelp_basename = 'beetsdoc'\n", "issue": "Minor documentation correction: correct id3.org url\nhttps://github.com/beetbox/beets/blob/master/docs/faq.rst#L303\r\nrefers to:\r\nhttp://www.id3.org/id3v2.4.0-structure\r\nas a reference url for a copy of the ID3v2.4 standard documentation, but this returns a \"Not found\" error. I've found 2 possibilities for the replacement:\r\nhttps://id3.org/id3v2.4.0-structure\r\n(with adverts) or\r\nhttps://github.com/id3/ID3v2.4/raw/master/id3v2.40-structure.txt\r\n(without adverts)\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom __future__ import division, absolute_import, print_function\n\nAUTHOR = u'Adrian Sampson'\n\n# General configuration\n\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.extlinks']\n\nexclude_patterns = ['_build']\nsource_suffix = '.rst'\nmaster_doc = 'index'\n\nproject = u'beets'\ncopyright = u'2016, Adrian Sampson'\n\nversion = '1.5'\nrelease = '1.5.0'\n\npygments_style = 'sphinx'\n\n# External links to the bug tracker and other sites.\nextlinks = {\n 'bug': ('https://github.com/beetbox/beets/issues/%s', '#'),\n 'user': ('https://github.com/%s', ''),\n 'pypi': ('https://pypi.org/project/%s/', ''),\n 'stdlib': ('https://docs.python.org/3/library/%s.html', ''),\n}\n\n# Options for HTML output\nhtmlhelp_basename = 'beetsdoc'\n\n# Options for LaTeX output\nlatex_documents = [\n ('index', 'beets.tex', u'beets Documentation',\n AUTHOR, 'manual'),\n]\n\n# Options for manual page output\nman_pages = [\n ('reference/cli', 'beet', u'music tagger and library organizer',\n [AUTHOR], 1),\n ('reference/config', 'beetsconfig', u'beets configuration file',\n [AUTHOR], 5),\n]\n", "path": "docs/conf.py"}]} | 1,081 | 138 |
gh_patches_debug_27634 | rasdani/github-patches | git_diff | adap__flower-465 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve docstring for `start_server`
</issue>
<code>
[start of src/py/flwr/server/app.py]
1 # Copyright 2020 Adap GmbH. All Rights Reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14 # ==============================================================================
15 """Flower server app."""
16
17
18 from logging import INFO
19 from typing import Dict, Optional
20
21 from flwr.common import GRPC_MAX_MESSAGE_LENGTH
22 from flwr.common.logger import log
23 from flwr.server.client_manager import SimpleClientManager
24 from flwr.server.grpc_server.grpc_server import start_insecure_grpc_server
25 from flwr.server.server import Server
26 from flwr.server.strategy import FedAvg, Strategy
27
28 DEFAULT_SERVER_ADDRESS = "[::]:8080"
29
30
31 def start_server(
32 server_address: str = DEFAULT_SERVER_ADDRESS,
33 server: Optional[Server] = None,
34 config: Optional[Dict[str, int]] = None,
35 strategy: Optional[Strategy] = None,
36 grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
37 ) -> None:
38 """Start a Flower server using the gRPC transport layer."""
39
40 # Create server instance if none was given
41 if server is None:
42 client_manager = SimpleClientManager()
43 if strategy is None:
44 strategy = FedAvg()
45 server = Server(client_manager=client_manager, strategy=strategy)
46
47 # Set default config values
48 if config is None:
49 config = {}
50 if "num_rounds" not in config:
51 config["num_rounds"] = 1
52
53 # Start gRPC server
54 grpc_server = start_insecure_grpc_server(
55 client_manager=server.client_manager(),
56 server_address=server_address,
57 max_message_length=grpc_max_message_length,
58 )
59 log(INFO, "Flower server running (insecure, %s rounds)", config["num_rounds"])
60
61 # Fit model
62 hist = server.fit(num_rounds=config["num_rounds"])
63 log(INFO, "app_fit: losses_distributed %s", str(hist.losses_distributed))
64 log(INFO, "app_fit: accuracies_distributed %s", str(hist.accuracies_distributed))
65 log(INFO, "app_fit: losses_centralized %s", str(hist.losses_centralized))
66 log(INFO, "app_fit: accuracies_centralized %s", str(hist.accuracies_centralized))
67
68 # Temporary workaround to force distributed evaluation
69 server.strategy.eval_fn = None # type: ignore
70
71 # Evaluate the final trained model
72 res = server.evaluate(rnd=-1)
73 if res is not None:
74 loss, (results, failures) = res
75 log(INFO, "app_evaluate: federated loss: %s", str(loss))
76 log(
77 INFO,
78 "app_evaluate: results %s",
79 str([(res[0].cid, res[1]) for res in results]),
80 )
81 log(INFO, "app_evaluate: failures %s", str(failures))
82 else:
83 log(INFO, "app_evaluate: no evaluation result")
84
85 # Stop the gRPC server
86 grpc_server.stop(1)
87
[end of src/py/flwr/server/app.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/py/flwr/server/app.py b/src/py/flwr/server/app.py
--- a/src/py/flwr/server/app.py
+++ b/src/py/flwr/server/app.py
@@ -35,7 +35,33 @@
strategy: Optional[Strategy] = None,
grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,
) -> None:
- """Start a Flower server using the gRPC transport layer."""
+ """Start a Flower server using the gRPC transport layer.
+
+ Arguments:
+ server_address: Optional[str] (default: `"[::]:8080"`). The IPv6
+ address of the server.
+ server: Optional[flwr.server.Server] (default: None). An implementation
+ of the abstract base class `flwr.server.Server`. If no instance is
+ provided, then `start_server` will create one.
+ config: Optional[Dict[str, int]] (default: None). The only currently
+ supported values is `num_rounds`, so a full configuration object
+ instructing the server to perform three rounds of federated
+ learning looks like the following: `{"num_rounds": 3}`.
+ strategy: Optional[flwr.server.Strategy] (default: None). An
+ implementation of the abstract base class `flwr.server.Strategy`.
+ If no strategy is provided, then `start_server` will use
+ `flwr.server.strategy.FedAvg`.
+ grpc_max_message_length: int (default: 536_870_912, this equals 512MB).
+ The maximum length of gRPC messages that can be exchanged with the
+ Flower clients. The default should be sufficient for most models.
+ Users who train very large models might need to increase this
+ value. Note that the Flower clients needs to started with the same
+ value (see `flwr.client.start_client`), otherwise clients will not
+ know about the increased limit and block larger messages.
+
+ Returns:
+ None.
+ """
# Create server instance if none was given
if server is None:
| {"golden_diff": "diff --git a/src/py/flwr/server/app.py b/src/py/flwr/server/app.py\n--- a/src/py/flwr/server/app.py\n+++ b/src/py/flwr/server/app.py\n@@ -35,7 +35,33 @@\n strategy: Optional[Strategy] = None,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n ) -> None:\n- \"\"\"Start a Flower server using the gRPC transport layer.\"\"\"\n+ \"\"\"Start a Flower server using the gRPC transport layer.\n+\n+ Arguments:\n+ server_address: Optional[str] (default: `\"[::]:8080\"`). The IPv6\n+ address of the server.\n+ server: Optional[flwr.server.Server] (default: None). An implementation\n+ of the abstract base class `flwr.server.Server`. If no instance is\n+ provided, then `start_server` will create one.\n+ config: Optional[Dict[str, int]] (default: None). The only currently\n+ supported values is `num_rounds`, so a full configuration object\n+ instructing the server to perform three rounds of federated\n+ learning looks like the following: `{\"num_rounds\": 3}`.\n+ strategy: Optional[flwr.server.Strategy] (default: None). An\n+ implementation of the abstract base class `flwr.server.Strategy`.\n+ If no strategy is provided, then `start_server` will use\n+ `flwr.server.strategy.FedAvg`.\n+ grpc_max_message_length: int (default: 536_870_912, this equals 512MB).\n+ The maximum length of gRPC messages that can be exchanged with the\n+ Flower clients. The default should be sufficient for most models.\n+ Users who train very large models might need to increase this\n+ value. Note that the Flower clients needs to started with the same\n+ value (see `flwr.client.start_client`), otherwise clients will not\n+ know about the increased limit and block larger messages.\n+\n+ Returns:\n+ None.\n+ \"\"\"\n \n # Create server instance if none was given\n if server is None:\n", "issue": "Improve docstring for `start_server`\n\n", "before_files": [{"content": "# Copyright 2020 Adap GmbH. 
All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================\n\"\"\"Flower server app.\"\"\"\n\n\nfrom logging import INFO\nfrom typing import Dict, Optional\n\nfrom flwr.common import GRPC_MAX_MESSAGE_LENGTH\nfrom flwr.common.logger import log\nfrom flwr.server.client_manager import SimpleClientManager\nfrom flwr.server.grpc_server.grpc_server import start_insecure_grpc_server\nfrom flwr.server.server import Server\nfrom flwr.server.strategy import FedAvg, Strategy\n\nDEFAULT_SERVER_ADDRESS = \"[::]:8080\"\n\n\ndef start_server(\n server_address: str = DEFAULT_SERVER_ADDRESS,\n server: Optional[Server] = None,\n config: Optional[Dict[str, int]] = None,\n strategy: Optional[Strategy] = None,\n grpc_max_message_length: int = GRPC_MAX_MESSAGE_LENGTH,\n) -> None:\n \"\"\"Start a Flower server using the gRPC transport layer.\"\"\"\n\n # Create server instance if none was given\n if server is None:\n client_manager = SimpleClientManager()\n if strategy is None:\n strategy = FedAvg()\n server = Server(client_manager=client_manager, strategy=strategy)\n\n # Set default config values\n if config is None:\n config = {}\n if \"num_rounds\" not in config:\n config[\"num_rounds\"] = 1\n\n # Start gRPC server\n grpc_server = start_insecure_grpc_server(\n client_manager=server.client_manager(),\n server_address=server_address,\n max_message_length=grpc_max_message_length,\n )\n log(INFO, \"Flower server running (insecure, %s rounds)\", config[\"num_rounds\"])\n\n # Fit model\n hist = server.fit(num_rounds=config[\"num_rounds\"])\n log(INFO, \"app_fit: losses_distributed %s\", str(hist.losses_distributed))\n log(INFO, \"app_fit: accuracies_distributed %s\", str(hist.accuracies_distributed))\n log(INFO, \"app_fit: losses_centralized %s\", str(hist.losses_centralized))\n log(INFO, \"app_fit: accuracies_centralized %s\", str(hist.accuracies_centralized))\n\n # Temporary workaround to force distributed evaluation\n server.strategy.eval_fn = None # type: ignore\n\n # Evaluate the final trained model\n res = server.evaluate(rnd=-1)\n if res is not None:\n loss, (results, failures) = res\n log(INFO, \"app_evaluate: federated loss: %s\", str(loss))\n log(\n INFO,\n \"app_evaluate: results %s\",\n str([(res[0].cid, res[1]) for res in results]),\n )\n log(INFO, \"app_evaluate: failures %s\", str(failures))\n else:\n log(INFO, \"app_evaluate: no evaluation result\")\n\n # Stop the gRPC server\n grpc_server.stop(1)\n", "path": "src/py/flwr/server/app.py"}]} | 1,469 | 477 |
gh_patches_debug_2933 | rasdani/github-patches | git_diff | conda__conda-5009 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
When lacking permissions to write, clone message should quote prefix.
When trying to install a new package into a location where the user lacks write permissions (a read-only root), conda helpfully suggests cloning the environment into a new location:
```
CondaIOError: IO error: Missing write permissions in: C:\Program Files\Anaconda
#
# You don't appear to have the necessary permissions to install packages
# into the install area 'C:\Program Files\Anaconda'.
# However you can clone this environment into your home directory and
# then make changes to it.
# This may be done using the command:
#
# $ conda create -n my_deathstar --clone=C:\Program Files\Anaconda\envs\deathstar
```
As shown in the example above, this clone path may include spaces. This will be particularly common on Windows, where a global install will result in files written to Program Files, which a non-administrator user will not be able to write to, and contains spaces. Because the command presents a prefix, it should be quoted to guard against this case.
</issue>
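Editor's note — a quick illustration (not part of the original issue) of why the unquoted prefix breaks; the path is the one from the report:

```python
import shlex

prefix = r"C:\Program Files\Anaconda\envs\deathstar"
# Unquoted, the shell splits the prefix at the space into two arguments:
print(shlex.split("conda create -n my_deathstar --clone=" + prefix, posix=False))
# Quoted, the prefix survives as a single token:
print(shlex.split('conda create -n my_deathstar --clone="%s"' % prefix, posix=False))
```

Quoting the prefix in the suggested command therefore guards against any install location containing spaces.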
<code>
[start of conda/cli/help.py]
1 from __future__ import absolute_import, division, print_function, unicode_literals
2
3 from os.path import join
4
5 from .common import name_prefix
6 from ..base.context import context
7 from ..exceptions import CondaIOError
8
9
10 def read_message(fn):
11 res = []
12 for envs_dir in context.envs_dirs:
13 path = join(envs_dir, '.conda-help', fn)
14 try:
15 with open(path) as fi:
16 s = fi.read().decode('utf-8')
17 s = s.replace('${envs_dir}', envs_dir)
18 res.append(s)
19 except IOError:
20 pass
21 return ''.join(res)
22
23
24 def root_read_only(command, prefix, json=False):
25 assert command in {'install', 'update', 'remove'}
26
27 msg = read_message('ro.txt')
28 if not msg:
29 msg = """\
30 Missing write permissions in: ${root_dir}
31 #
32 # You don't appear to have the necessary permissions to ${command} packages
33 # into the install area '${root_dir}'.
34 # However you can clone this environment into your home directory and
35 # then make changes to it.
36 # This may be done using the command:
37 #
38 # $ conda create -n my_${name} --clone=${prefix}
39 """
40 msg = msg.replace('${root_dir}', context.root_prefix)
41 msg = msg.replace('${prefix}', prefix)
42 msg = msg.replace('${name}', name_prefix(prefix))
43 msg = msg.replace('${command}', command)
44 raise CondaIOError(msg)
45
[end of conda/cli/help.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/conda/cli/help.py b/conda/cli/help.py
--- a/conda/cli/help.py
+++ b/conda/cli/help.py
@@ -35,7 +35,7 @@
# then make changes to it.
# This may be done using the command:
#
-# $ conda create -n my_${name} --clone=${prefix}
+# $ conda create -n my_${name} --clone="${prefix}"
"""
msg = msg.replace('${root_dir}', context.root_prefix)
msg = msg.replace('${prefix}', prefix)
| {"golden_diff": "diff --git a/conda/cli/help.py b/conda/cli/help.py\n--- a/conda/cli/help.py\n+++ b/conda/cli/help.py\n@@ -35,7 +35,7 @@\n # then make changes to it.\n # This may be done using the command:\n #\n-# $ conda create -n my_${name} --clone=${prefix}\n+# $ conda create -n my_${name} --clone=\"${prefix}\"\n \"\"\"\n msg = msg.replace('${root_dir}', context.root_prefix)\n msg = msg.replace('${prefix}', prefix)\n", "issue": "When lacking permissions to write, clone message should quote prefix.\nWhen trying to install a new package into a location that the user lacks write permissions (read-only root), conda helpfully suggests cloning the environment into a new location:\r\n\r\n```\r\nCondaIOError: IO error: Missing write permissions in: C:\\Program Files\\Anaconda\r\n#\r\n# You don't appear to have the necessary permissions to install packages\r\n# into the install area 'C:\\Program Files\\Anaconda'.\r\n# However you can clone this environment into your home directory and\r\n# then make changes to it.\r\n# This may be done using the command:\r\n#\r\n# $ conda create -n my_deathstar --clone=C:\\Program Files\\Anaconda\\envs\\deathstar\r\n```\r\nAs shown in the example above, this clone path may include spaces. This will be particularly common on Windows, where a global install will result in files written to Program Files, which a non-administrator user will not be able to write to, and contains spaces. Because the command presents a prefix, it should be quoted to guard against this case.\r\n\r\n\n", "before_files": [{"content": "from __future__ import absolute_import, division, print_function, unicode_literals\n\nfrom os.path import join\n\nfrom .common import name_prefix\nfrom ..base.context import context\nfrom ..exceptions import CondaIOError\n\n\ndef read_message(fn):\n res = []\n for envs_dir in context.envs_dirs:\n path = join(envs_dir, '.conda-help', fn)\n try:\n with open(path) as fi:\n s = fi.read().decode('utf-8')\n s = s.replace('${envs_dir}', envs_dir)\n res.append(s)\n except IOError:\n pass\n return ''.join(res)\n\n\ndef root_read_only(command, prefix, json=False):\n assert command in {'install', 'update', 'remove'}\n\n msg = read_message('ro.txt')\n if not msg:\n msg = \"\"\"\\\nMissing write permissions in: ${root_dir}\n#\n# You don't appear to have the necessary permissions to ${command} packages\n# into the install area '${root_dir}'.\n# However you can clone this environment into your home directory and\n# then make changes to it.\n# This may be done using the command:\n#\n# $ conda create -n my_${name} --clone=${prefix}\n\"\"\"\n msg = msg.replace('${root_dir}', context.root_prefix)\n msg = msg.replace('${prefix}', prefix)\n msg = msg.replace('${name}', name_prefix(prefix))\n msg = msg.replace('${command}', command)\n raise CondaIOError(msg)\n", "path": "conda/cli/help.py"}]} | 1,166 | 119 |
gh_patches_debug_610 | rasdani/github-patches | git_diff | ivy-llc__ivy-23142 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
ifft
</issue>
<code>
[start of ivy/functional/frontends/jax/numpy/fft.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
4 from ivy.func_wrapper import with_unsupported_dtypes
5
6
7 @to_ivy_arrays_and_back
8 def fft(a, n=None, axis=-1, norm=None):
9 if norm is None:
10 norm = "backward"
11 return ivy.fft(a, axis, norm=norm, n=n)
12
13
14 @to_ivy_arrays_and_back
15 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
16 def fftshift(x, axes=None, name=None):
17 shape = x.shape
18
19 if axes is None:
20 axes = tuple(range(x.ndim))
21 shifts = [(dim // 2) for dim in shape]
22 elif isinstance(axes, int):
23 shifts = shape[axes] // 2
24 else:
25 shifts = [shape[ax] // 2 for ax in axes]
26
27 roll = ivy.roll(x, shifts, axis=axes)
28
29 return roll
30
[end of ivy/functional/frontends/jax/numpy/fft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py
--- a/ivy/functional/frontends/jax/numpy/fft.py
+++ b/ivy/functional/frontends/jax/numpy/fft.py
@@ -27,3 +27,10 @@
roll = ivy.roll(x, shifts, axis=axes)
return roll
+
+
+@to_ivy_arrays_and_back
+def ifft(a, n=None, axis=-1, norm=None):
+ if norm is None:
+ norm = "backward"
+ return ivy.ifft(a, axis, norm=norm, n=n)
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py\n--- a/ivy/functional/frontends/jax/numpy/fft.py\n+++ b/ivy/functional/frontends/jax/numpy/fft.py\n@@ -27,3 +27,10 @@\n roll = ivy.roll(x, shifts, axis=axes)\n \n return roll\n+\n+\n+@to_ivy_arrays_and_back\n+def ifft(a, n=None, axis=-1, norm=None):\n+ if norm is None:\n+ norm = \"backward\"\n+ return ivy.ifft(a, axis, norm=norm, n=n)\n", "issue": "ifft\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = [shape[ax] // 2 for ax in axes]\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n", "path": "ivy/functional/frontends/jax/numpy/fft.py"}]} | 840 | 156 |
gh_patches_debug_50232 | rasdani/github-patches | git_diff | pex-tool__pex-1720 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Release 2.1.79
On the docket:
+ [x] The --lock resolver only includes extras from the 1st encounter of a required project in its graph walk. #1717
+ [x] Support canonicalizing absolute paths in locks. (#1716)
</issue>
<code>
[start of pex/version.py]
1 # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
2 # Licensed under the Apache License, Version 2.0 (see LICENSE).
3
4 __version__ = "2.1.78"
5
[end of pex/version.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pex/version.py b/pex/version.py
--- a/pex/version.py
+++ b/pex/version.py
@@ -1,4 +1,4 @@
# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
-__version__ = "2.1.78"
+__version__ = "2.1.79"
| {"golden_diff": "diff --git a/pex/version.py b/pex/version.py\n--- a/pex/version.py\n+++ b/pex/version.py\n@@ -1,4 +1,4 @@\n # Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n # Licensed under the Apache License, Version 2.0 (see LICENSE).\n \n-__version__ = \"2.1.78\"\n+__version__ = \"2.1.79\"\n", "issue": "Release 2.1.79\nOn the docket:\r\n+ [x] The --lock resolver only includes extras from the 1st encounter of a required project in its graph walk. #1717 \r\n+ [x] Support canonicalizing absolute paths in locks. (#1716)\n", "before_files": [{"content": "# Copyright 2015 Pants project contributors (see CONTRIBUTORS.md).\n# Licensed under the Apache License, Version 2.0 (see LICENSE).\n\n__version__ = \"2.1.78\"\n", "path": "pex/version.py"}]} | 647 | 97 |
gh_patches_debug_16247 | rasdani/github-patches | git_diff | pyca__cryptography-1397 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dsa_private_key.pem vector has p and q whose lengths we don't normally allow
We currently enforce that `p` and `q` have lengths which are one of:
- `(1024, 160)`
- `(2048, 256)`
- `(3072, 256)`
However, this vector has `(p, q)` with lengths of `(2048, 160)`. Do we need to be less restrictive, or use a different vector?
This was discovered in the process of writing a pure python PEM loader.
</issue>
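Editor's note — `int.bit_length()` shows how the vector trips the pair check, and what a per-component check looks like (a sketch with placeholder values, not the actual key material):

```python
# Placeholders standing in for the vector's parameters:
p = 1 << 2047  # any 2048-bit modulus
q = 1 << 159   # any 160-bit subgroup order
print((p.bit_length(), q.bit_length()))  # (2048, 160): not one of the allowed pairs

# A relaxed check validates each length independently instead of as a pair:
assert p.bit_length() in (1024, 2048, 3072)
assert q.bit_length() in (160, 256)
```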
<code>
[start of cryptography/hazmat/primitives/asymmetric/dsa.py]
1 # Licensed under the Apache License, Version 2.0 (the "License");
2 # you may not use this file except in compliance with the License.
3 # You may obtain a copy of the License at
4 #
5 # http://www.apache.org/licenses/LICENSE-2.0
6 #
7 # Unless required by applicable law or agreed to in writing, software
8 # distributed under the License is distributed on an "AS IS" BASIS,
9 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
10 # implied.
11 # See the License for the specific language governing permissions and
12 # limitations under the License.
13
14 from __future__ import absolute_import, division, print_function
15
16 import six
17
18 from cryptography import utils
19
20
21 def generate_parameters(key_size, backend):
22 return backend.generate_dsa_parameters(key_size)
23
24
25 def generate_private_key(key_size, backend):
26 return backend.generate_dsa_private_key_and_parameters(key_size)
27
28
29 def _check_dsa_parameters(parameters):
30 if (utils.bit_length(parameters.p),
31 utils.bit_length(parameters.q)) not in (
32 (1024, 160),
33 (2048, 256),
34 (3072, 256)):
35 raise ValueError(
36 "p and q's bit-lengths must be one of these pairs (1024, 160), "
37 "(2048, 256), or (3072, 256). Not ({0:d}, {1:d})".format(
38 utils.bit_length(parameters.p), utils.bit_length(parameters.q)
39 )
40 )
41
42 if not (1 < parameters.g < parameters.p):
43 raise ValueError("g, p don't satisfy 1 < g < p.")
44
45
46 def _check_dsa_private_numbers(numbers):
47 parameters = numbers.public_numbers.parameter_numbers
48 _check_dsa_parameters(parameters)
49 if numbers.x <= 0 or numbers.x >= parameters.q:
50 raise ValueError("x must be > 0 and < q.")
51
52 if numbers.public_numbers.y != pow(parameters.g, numbers.x, parameters.p):
53 raise ValueError("y must be equal to (g ** x % p).")
54
55
56 class DSAParameterNumbers(object):
57 def __init__(self, p, q, g):
58 if (
59 not isinstance(p, six.integer_types) or
60 not isinstance(q, six.integer_types) or
61 not isinstance(g, six.integer_types)
62 ):
63 raise TypeError(
64 "DSAParameterNumbers p, q, and g arguments must be integers."
65 )
66
67 self._p = p
68 self._q = q
69 self._g = g
70
71 @property
72 def p(self):
73 return self._p
74
75 @property
76 def q(self):
77 return self._q
78
79 @property
80 def g(self):
81 return self._g
82
83 def parameters(self, backend):
84 return backend.load_dsa_parameter_numbers(self)
85
86
87 class DSAPublicNumbers(object):
88 def __init__(self, y, parameter_numbers):
89 if not isinstance(y, six.integer_types):
90 raise TypeError("DSAPublicNumbers y argument must be an integer.")
91
92 if not isinstance(parameter_numbers, DSAParameterNumbers):
93 raise TypeError(
94 "parameter_numbers must be a DSAParameterNumbers instance."
95 )
96
97 self._y = y
98 self._parameter_numbers = parameter_numbers
99
100 @property
101 def y(self):
102 return self._y
103
104 @property
105 def parameter_numbers(self):
106 return self._parameter_numbers
107
108 def public_key(self, backend):
109 return backend.load_dsa_public_numbers(self)
110
111
112 class DSAPrivateNumbers(object):
113 def __init__(self, x, public_numbers):
114 if not isinstance(x, six.integer_types):
115 raise TypeError("DSAPrivateNumbers x argument must be an integer.")
116
117 if not isinstance(public_numbers, DSAPublicNumbers):
118 raise TypeError(
119 "public_numbers must be a DSAPublicNumbers instance."
120 )
121 self._public_numbers = public_numbers
122 self._x = x
123
124 @property
125 def x(self):
126 return self._x
127
128 @property
129 def public_numbers(self):
130 return self._public_numbers
131
132 def private_key(self, backend):
133 return backend.load_dsa_private_numbers(self)
134
[end of cryptography/hazmat/primitives/asymmetric/dsa.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cryptography/hazmat/primitives/asymmetric/dsa.py b/cryptography/hazmat/primitives/asymmetric/dsa.py
--- a/cryptography/hazmat/primitives/asymmetric/dsa.py
+++ b/cryptography/hazmat/primitives/asymmetric/dsa.py
@@ -27,17 +27,10 @@
def _check_dsa_parameters(parameters):
- if (utils.bit_length(parameters.p),
- utils.bit_length(parameters.q)) not in (
- (1024, 160),
- (2048, 256),
- (3072, 256)):
- raise ValueError(
- "p and q's bit-lengths must be one of these pairs (1024, 160), "
- "(2048, 256), or (3072, 256). Not ({0:d}, {1:d})".format(
- utils.bit_length(parameters.p), utils.bit_length(parameters.q)
- )
- )
+ if utils.bit_length(parameters.p) not in [1024, 2048, 3072]:
+ raise ValueError("p must be exactly 1024, 2048, or 3072 bits long")
+ if utils.bit_length(parameters.q) not in [160, 256]:
+ raise ValueError("q must be exactly 160 or 256 bits long")
if not (1 < parameters.g < parameters.p):
raise ValueError("g, p don't satisfy 1 < g < p.")
| {"golden_diff": "diff --git a/cryptography/hazmat/primitives/asymmetric/dsa.py b/cryptography/hazmat/primitives/asymmetric/dsa.py\n--- a/cryptography/hazmat/primitives/asymmetric/dsa.py\n+++ b/cryptography/hazmat/primitives/asymmetric/dsa.py\n@@ -27,17 +27,10 @@\n \n \n def _check_dsa_parameters(parameters):\n- if (utils.bit_length(parameters.p),\n- utils.bit_length(parameters.q)) not in (\n- (1024, 160),\n- (2048, 256),\n- (3072, 256)):\n- raise ValueError(\n- \"p and q's bit-lengths must be one of these pairs (1024, 160), \"\n- \"(2048, 256), or (3072, 256). Not ({0:d}, {1:d})\".format(\n- utils.bit_length(parameters.p), utils.bit_length(parameters.q)\n- )\n- )\n+ if utils.bit_length(parameters.p) not in [1024, 2048, 3072]:\n+ raise ValueError(\"p must be exactly 1024, 2048, or 3072 bits long\")\n+ if utils.bit_length(parameters.q) not in [160, 256]:\n+ raise ValueError(\"q must be exactly 160 or 256 bits long\")\n \n if not (1 < parameters.g < parameters.p):\n raise ValueError(\"g, p don't satisfy 1 < g < p.\")\n", "issue": "dsa_private_key.pem vector has p and q whose lengths we don't normally allow\nWe currently enforce that `p` and `q` have lengths which are one of:\n- `(1024, 160)`\n- `(2048, 256)`\n- `(3072, 256)`\n\nHowever, this vector has `(p, q)` with lengths of `(2048, 160)`. Do we need to be less restrictive, use a different vector?\n\nThis was discovered in the process of writing a pure python PEM loader.\n\n", "before_files": [{"content": "# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n# implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nfrom __future__ import absolute_import, division, print_function\n\nimport six\n\nfrom cryptography import utils\n\n\ndef generate_parameters(key_size, backend):\n return backend.generate_dsa_parameters(key_size)\n\n\ndef generate_private_key(key_size, backend):\n return backend.generate_dsa_private_key_and_parameters(key_size)\n\n\ndef _check_dsa_parameters(parameters):\n if (utils.bit_length(parameters.p),\n utils.bit_length(parameters.q)) not in (\n (1024, 160),\n (2048, 256),\n (3072, 256)):\n raise ValueError(\n \"p and q's bit-lengths must be one of these pairs (1024, 160), \"\n \"(2048, 256), or (3072, 256). 
Not ({0:d}, {1:d})\".format(\n utils.bit_length(parameters.p), utils.bit_length(parameters.q)\n )\n )\n\n if not (1 < parameters.g < parameters.p):\n raise ValueError(\"g, p don't satisfy 1 < g < p.\")\n\n\ndef _check_dsa_private_numbers(numbers):\n parameters = numbers.public_numbers.parameter_numbers\n _check_dsa_parameters(parameters)\n if numbers.x <= 0 or numbers.x >= parameters.q:\n raise ValueError(\"x must be > 0 and < q.\")\n\n if numbers.public_numbers.y != pow(parameters.g, numbers.x, parameters.p):\n raise ValueError(\"y must be equal to (g ** x % p).\")\n\n\nclass DSAParameterNumbers(object):\n def __init__(self, p, q, g):\n if (\n not isinstance(p, six.integer_types) or\n not isinstance(q, six.integer_types) or\n not isinstance(g, six.integer_types)\n ):\n raise TypeError(\n \"DSAParameterNumbers p, q, and g arguments must be integers.\"\n )\n\n self._p = p\n self._q = q\n self._g = g\n\n @property\n def p(self):\n return self._p\n\n @property\n def q(self):\n return self._q\n\n @property\n def g(self):\n return self._g\n\n def parameters(self, backend):\n return backend.load_dsa_parameter_numbers(self)\n\n\nclass DSAPublicNumbers(object):\n def __init__(self, y, parameter_numbers):\n if not isinstance(y, six.integer_types):\n raise TypeError(\"DSAPublicNumbers y argument must be an integer.\")\n\n if not isinstance(parameter_numbers, DSAParameterNumbers):\n raise TypeError(\n \"parameter_numbers must be a DSAParameterNumbers instance.\"\n )\n\n self._y = y\n self._parameter_numbers = parameter_numbers\n\n @property\n def y(self):\n return self._y\n\n @property\n def parameter_numbers(self):\n return self._parameter_numbers\n\n def public_key(self, backend):\n return backend.load_dsa_public_numbers(self)\n\n\nclass DSAPrivateNumbers(object):\n def __init__(self, x, public_numbers):\n if not isinstance(x, six.integer_types):\n raise TypeError(\"DSAPrivateNumbers x argument must be an integer.\")\n\n if not isinstance(public_numbers, DSAPublicNumbers):\n raise TypeError(\n \"public_numbers must be a DSAPublicNumbers instance.\"\n )\n self._public_numbers = public_numbers\n self._x = x\n\n @property\n def x(self):\n return self._x\n\n @property\n def public_numbers(self):\n return self._public_numbers\n\n def private_key(self, backend):\n return backend.load_dsa_private_numbers(self)\n", "path": "cryptography/hazmat/primitives/asymmetric/dsa.py"}]} | 1,890 | 365 |
gh_patches_debug_16759 | rasdani/github-patches | git_diff | bridgecrewio__checkov-1479 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
CKV_AWS_78 misreported
**Describe the bug**
Checkov reports CKV_AWS_78 as a vulnerability, but its suggested solution breaks Terraform validation.
According to Checkov, if `encryption_disabled = false` is not set in the main block, it is considered a vulnerability:
```
resource "aws_codebuild_project" "project-with-cache" {
name = "test-project-cache"
description = "test_codebuild_project_cache"
build_timeout = "5"
queued_timeout = "5"
+ encryption_disabled = false
}
```
as described here: https://docs.bridgecrew.io/docs/bc_aws_general_30
Unfortunately, in Terraform v1.0.3 `encryption_disabled` is not available in that location, but only in the `artifacts`, `secondary_artifacts`, and `logs_config: s3_logs` blocks, as you can see here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codebuild_project
So if the attribute is not set, Checkov reports a vulnerability; if it is set, Terraform fails during validation.
**To Reproduce**
Steps to reproduce the behavior:
1. Set in **aws_codebuild_project** **encryption_disabled = false**
```
resource "aws_codebuild_project" "project-with-cache" {
name = "test-project-cache"
description = "test_codebuild_project_cache"
build_timeout = "5"
queued_timeout = "5"
+ encryption_disabled = false
}
```
2. Run `terraform validate`
3. See error
**Expected behavior**
Either no vulnerability is reported, or a vulnerability is reported only if the attribute is not set in all three blocks.
**Desktop (please complete the following information):**
- terraform --version: Terraform v1.0.3 on linux_amd64
- checkov --version: 2.0.326
</issue>
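Editor's note — context for the check logic (a sketch, not part of the original issue): Checkov's HCL parser wraps each attribute value in a single-element list, which is why the fix shown later in this record compares against `["NO_ARTIFACTS"]` and `[True]` rather than bare scalars:

```python
# Rough shape of the parsed resource as scan_resource_conf receives it;
# the values are illustrative.
conf = {
    "name": ["test-project-cache"],
    "artifacts": [{"type": ["NO_ARTIFACTS"], "encryption_disabled": [False]}],
}
artifact = conf["artifacts"][0]
print(artifact["type"] == ["NO_ARTIFACTS"])           # True: list-wrapped scalar
print(artifact.get("encryption_disabled") == [True])  # False: encryption stays on
```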
<code>
[start of checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py]
1 from checkov.common.models.enums import CheckResult, CheckCategories
2 from checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck
3
4
5 class CodeBuildProjectEncryption(BaseResourceCheck):
6
7 def __init__(self):
8 name = "Ensure that CodeBuild Project encryption is not disabled"
9 id = "CKV_AWS_78"
10 supported_resources = ['aws_codebuild_project']
11 categories = [CheckCategories.ENCRYPTION]
12 super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)
13
14 def scan_resource_conf(self, conf):
15 if 'artifacts' not in conf:
16 return CheckResult.UNKNOWN
17 artifact = conf['artifacts'][0]
18 if isinstance(artifact, dict):
19 if artifact['type'] == "NO_ARTIFACTS":
20 self.evaluated_keys = 'artifacts/[0]/type'
21 elif 'encryption_disabled' in artifact and artifact['encryption_disabled']:
22 self.evaluated_keys = 'artifacts/[0]/encryption_disabled'
23 return CheckResult.FAILED
24 return CheckResult.PASSED
25
26
27 check = CodeBuildProjectEncryption()
28
[end of checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py b/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py
--- a/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py
+++ b/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py
@@ -16,11 +16,13 @@
return CheckResult.UNKNOWN
artifact = conf['artifacts'][0]
if isinstance(artifact, dict):
- if artifact['type'] == "NO_ARTIFACTS":
+ if artifact['type'] == ["NO_ARTIFACTS"]:
self.evaluated_keys = 'artifacts/[0]/type'
- elif 'encryption_disabled' in artifact and artifact['encryption_disabled']:
- self.evaluated_keys = 'artifacts/[0]/encryption_disabled'
- return CheckResult.FAILED
+ return CheckResult.UNKNOWN
+ if 'encryption_disabled' in artifact:
+ if artifact['encryption_disabled'] == [True]:
+ self.evaluated_keys = 'artifacts/[0]/encryption_disabled'
+ return CheckResult.FAILED
return CheckResult.PASSED
| {"golden_diff": "diff --git a/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py b/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py\n--- a/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py\n+++ b/checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py\n@@ -16,11 +16,13 @@\n return CheckResult.UNKNOWN\n artifact = conf['artifacts'][0]\n if isinstance(artifact, dict):\n- if artifact['type'] == \"NO_ARTIFACTS\":\n+ if artifact['type'] == [\"NO_ARTIFACTS\"]:\n self.evaluated_keys = 'artifacts/[0]/type'\n- elif 'encryption_disabled' in artifact and artifact['encryption_disabled']:\n- self.evaluated_keys = 'artifacts/[0]/encryption_disabled'\n- return CheckResult.FAILED\n+ return CheckResult.UNKNOWN\n+ if 'encryption_disabled' in artifact: \n+ if artifact['encryption_disabled'] == [True]:\n+ self.evaluated_keys = 'artifacts/[0]/encryption_disabled'\n+ return CheckResult.FAILED\n return CheckResult.PASSED\n", "issue": "CKV_AWS_78 misreported\n**Describe the bug**\r\nCheckov is returning as vulnerability CKV_AWS_78, but the solution breaks Terraform validation.\r\n\r\nAccordigly to Checkov if `encryption_disabled = false` is not set in the main block it can be considered a vulnerability\r\n\r\n```\r\nresource \"aws_codebuild_project\" \"project-with-cache\" {\r\n name = \"test-project-cache\"\r\n description = \"test_codebuild_project_cache\"\r\n build_timeout = \"5\"\r\n queued_timeout = \"5\"\r\n+ encryption_disabled = false\r\n} \r\n```\r\nas described here: https://docs.bridgecrew.io/docs/bc_aws_general_30\r\n\r\nUnfortunately in Terraform v1.0.3 `encryption_disabled` is not available in that location but only in blocks `artifacts`, `secondary_artifacts` and `logs_config: s3_logs` as you can see here: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codebuild_project\r\n\r\nSo if not set it gives vulnerability, if set terraform fails during the validation.\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Set in **aws_codebuild_project** **encryption_disabled = false**\r\n```\r\nresource \"aws_codebuild_project\" \"project-with-cache\" {\r\n name = \"test-project-cache\"\r\n description = \"test_codebuild_project_cache\"\r\n build_timeout = \"5\"\r\n queued_timeout = \"5\"\r\n+ encryption_disabled = false\r\n} \r\n```\r\n2. Run `terraform validate`\r\n3. 
See error\r\n\r\n**Expected behavior**\r\nNo vulnerability or vulnerability if not set the attribute in all the 3 blocks\r\n\r\n**Desktop (please complete the following information):**\r\n - terraform --version: Terraform v1.0.3 on linux_amd64\r\n - checkov --version: 2.0.326\r\n\r\n\n", "before_files": [{"content": "from checkov.common.models.enums import CheckResult, CheckCategories\nfrom checkov.terraform.checks.resource.base_resource_check import BaseResourceCheck\n\n\nclass CodeBuildProjectEncryption(BaseResourceCheck):\n\n def __init__(self):\n name = \"Ensure that CodeBuild Project encryption is not disabled\"\n id = \"CKV_AWS_78\"\n supported_resources = ['aws_codebuild_project']\n categories = [CheckCategories.ENCRYPTION]\n super().__init__(name=name, id=id, categories=categories, supported_resources=supported_resources)\n\n def scan_resource_conf(self, conf):\n if 'artifacts' not in conf:\n return CheckResult.UNKNOWN\n artifact = conf['artifacts'][0]\n if isinstance(artifact, dict):\n if artifact['type'] == \"NO_ARTIFACTS\":\n self.evaluated_keys = 'artifacts/[0]/type'\n elif 'encryption_disabled' in artifact and artifact['encryption_disabled']:\n self.evaluated_keys = 'artifacts/[0]/encryption_disabled'\n return CheckResult.FAILED\n return CheckResult.PASSED\n\n\ncheck = CodeBuildProjectEncryption()\n", "path": "checkov/terraform/checks/resource/aws/CodeBuildProjectEncryption.py"}]} | 1,238 | 254 |
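For context on the fix: the patched comparisons (`== ["NO_ARTIFACTS"]`, `== [True]`) imply that checkov's HCL parser hands attribute values to `scan_resource_conf` wrapped in single-element lists, so the old string equality never matched. A minimal sketch of that shape; the `conf` dict below is an assumed example, not taken from the repo:
```python
# Assumed example of checkov's parsed representation: scalars arrive as lists.
conf = {"artifacts": [{"type": ["NO_ARTIFACTS"], "encryption_disabled": [False]}]}
artifact = conf["artifacts"][0]
print(artifact["type"] == "NO_ARTIFACTS")        # False, the old comparison
print(artifact["type"] == ["NO_ARTIFACTS"])      # True, the patched comparison
print(artifact["encryption_disabled"] == [True]) # False, so the check passes
```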
gh_patches_debug_2490 | rasdani/github-patches | git_diff | dotkom__onlineweb4-165 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Adding 'Offline Informasjonstekster' causes error
Not really sure what this does, but it raises an error saying:
Exception Type: IntegrityError
Exception Value: column key is not unique
</issue>
<code>
[start of apps/offline/admin.py]
1 from apps.offline.models import ProxyChunk, Issue
2 from chunks.models import Chunk
3 from django.contrib import admin
4 from django.db.models import Q
5
6
7 class ProxyChunkAdmin(admin.ModelAdmin):
8
9 readonly_fields = ['key']
10
11 def queryset(self, request):
12 offline = Chunk.objects.filter(Q(key='offline_ingress') | Q(key='offline_brodtekst'))
13 return offline
14
15 admin.site.register(ProxyChunk, ProxyChunkAdmin)
16 admin.site.register(Issue)
17
[end of apps/offline/admin.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/apps/offline/admin.py b/apps/offline/admin.py
--- a/apps/offline/admin.py
+++ b/apps/offline/admin.py
@@ -8,6 +8,9 @@
readonly_fields = ['key']
+ def has_add_permission(self, request):
+ return False
+
def queryset(self, request):
offline = Chunk.objects.filter(Q(key='offline_ingress') | Q(key='offline_brodtekst'))
return offline
| {"golden_diff": "diff --git a/apps/offline/admin.py b/apps/offline/admin.py\n--- a/apps/offline/admin.py\n+++ b/apps/offline/admin.py\n@@ -8,6 +8,9 @@\n \n readonly_fields = ['key']\n \n+ def has_add_permission(self, request):\n+ return False\n+\n def queryset(self, request):\n offline = Chunk.objects.filter(Q(key='offline_ingress') | Q(key='offline_brodtekst'))\n return offline\n", "issue": "Adding 'Offline Informasjonstekster' causes error\nNot really sure what this does but it casts an error saying:\n\nException Type: IntegrityError\nException Value: column key is not unique\n\n", "before_files": [{"content": "from apps.offline.models import ProxyChunk, Issue\nfrom chunks.models import Chunk\nfrom django.contrib import admin\nfrom django.db.models import Q\n\n\nclass ProxyChunkAdmin(admin.ModelAdmin):\n\n readonly_fields = ['key']\n\n def queryset(self, request):\n offline = Chunk.objects.filter(Q(key='offline_ingress') | Q(key='offline_brodtekst'))\n return offline\n\nadmin.site.register(ProxyChunk, ProxyChunkAdmin)\nadmin.site.register(Issue)\n", "path": "apps/offline/admin.py"}]} | 703 | 102 |
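The fix above uses a standard Django admin hook: returning `False` from `has_add_permission` hides the "Add" button, so no new row with a duplicate `key` can be created through this admin while existing rows stay editable. A minimal standalone sketch of the same pattern (the admin class name here is hypothetical):
```python
from django.contrib import admin

class NoAddAdmin(admin.ModelAdmin):  # hypothetical name, same pattern as the patch
    def has_add_permission(self, request):
        # Disallow object creation from this admin page; the unique-key
        # IntegrityError can then no longer be triggered from it.
        return False
```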
gh_patches_debug_34711 | rasdani/github-patches | git_diff | scikit-image__scikit-image-6035 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Segfault in peak_local_max with large number of segments
## Description
scikit-image crashes with an (absolutely uncatchable and untraceable) segfault in peak_local_max.
## Way to reproduce
```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.feature import peak_local_max
def segment(binary_image):
distance = distance_transform_edt(binary_image)
peak_local_max(
distance, min_distance=100, footprint=np.ones((3, 3)), labels=binary_image,
)
for p in [0.05, 0.95, 0.001, 0.999]:
print(p)
segment(np.random.random([2048, 2048]) < p)
```
## Version information
```python
# Paste the output of the following python commands
from __future__ import print_function
import sys; print(sys.version)
import platform; print(platform.platform())
import skimage; print(f'scikit-image version: {skimage.__version__}')
import numpy; print(f'numpy version: {numpy.__version__}')
```
```python
3.8.10 (default, Sep 28 2021, 16:10:42)
[GCC 9.3.0]
Linux-5.10.47-linuxkit-x86_64-with-glibc2.29
scikit-image version: 0.18.3
numpy version: 1.21.4
```
</issue>
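A back-of-envelope sketch of why this dies hard instead of raising (the sizes below are assumed, not measured): with `p` close to 1 almost every pixel is foreground, so nearly all of the ~4.2 million candidate coordinates reach the spacing filter at once, and the KD-tree neighbour query must materialise a neighbour list for every point.
```python
# Illustrative arithmetic only; real neighbour counts depend on the image.
n_points = 2048 * 2048        # ~4.2e6 candidate peaks when p is near 1
neighbours_each = 10_000      # plausible within r=100 on a dense grid (assumed)
bytes_per_entry = 8
print(n_points * neighbours_each * bytes_per_entry / 1e9, "GB")  # ~335 GB
```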
<code>
[start of skimage/_shared/coord.py]
1 import numpy as np
2 from scipy.spatial import cKDTree, distance
3
4
5 def _ensure_spacing(coord, spacing, p_norm, max_out):
6 """Returns a subset of coord where a minimum spacing is guaranteed.
7
8 Parameters
9 ----------
10 coord : ndarray
11 The coordinates of the considered points.
12 spacing : float
13 the maximum allowed spacing between the points.
14 p_norm : float
15 Which Minkowski p-norm to use. Should be in the range [1, inf].
16 A finite large p may cause a ValueError if overflow can occur.
17 ``inf`` corresponds to the Chebyshev distance and 2 to the
18 Euclidean distance.
19 max_out: int
20 If not None, at most the first ``max_out`` candidates are
21 returned.
22
23 Returns
24 -------
25 output : ndarray
26 A subset of coord where a minimum spacing is guaranteed.
27
28 """
29
30 # Use KDtree to find the peaks that are too close to each other
31 tree = cKDTree(coord)
32
33 indices = tree.query_ball_point(coord, r=spacing, p=p_norm)
34 rejected_peaks_indices = set()
35 naccepted = 0
36 for idx, candidates in enumerate(indices):
37 if idx not in rejected_peaks_indices:
38 # keep current point and the points at exactly spacing from it
39 candidates.remove(idx)
40 dist = distance.cdist([coord[idx]],
41 coord[candidates],
42 distance.minkowski,
43 p=p_norm).reshape(-1)
44 candidates = [c for c, d in zip(candidates, dist)
45 if d < spacing]
46
47 # candidates.remove(keep)
48 rejected_peaks_indices.update(candidates)
49 naccepted += 1
50 if max_out is not None and naccepted >= max_out:
51 break
52
53 # Remove the peaks that are too close to each other
54 output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)
55 if max_out is not None:
56 output = output[:max_out]
57
58 return output
59
60
61 def ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50,
62 max_out=None):
63 """Returns a subset of coord where a minimum spacing is guaranteed.
64
65 Parameters
66 ----------
67 coords : array_like
68 The coordinates of the considered points.
69 spacing : float
70 the maximum allowed spacing between the points.
71 p_norm : float
72 Which Minkowski p-norm to use. Should be in the range [1, inf].
73 A finite large p may cause a ValueError if overflow can occur.
74 ``inf`` corresponds to the Chebyshev distance and 2 to the
75 Euclidean distance.
76 min_split_size : int
77 Minimum split size used to process ``coord`` by batch to save
78 memory. If None, the memory saving strategy is not applied.
79 max_out : int
80 If not None, only the first ``max_out`` candidates are returned.
81
82 Returns
83 -------
84 output : array_like
85 A subset of coord where a minimum spacing is guaranteed.
86
87 """
88
89 output = coords
90 if len(coords):
91
92 coords = np.atleast_2d(coords)
93 if min_split_size is None:
94 batch_list = [coords]
95 else:
96 coord_count = len(coords)
97 split_count = int(np.log2(coord_count / min_split_size)) + 1
98 split_idx = np.cumsum(
99 [coord_count // (2 ** i) for i in range(1, split_count)])
100 batch_list = np.array_split(coords, split_idx)
101
102 output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)
103 for batch in batch_list:
104 output = _ensure_spacing(np.vstack([output, batch]),
105 spacing, p_norm, max_out)
106 if max_out is not None and len(output) >= max_out:
107 break
108
109 return output
110
[end of skimage/_shared/coord.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/skimage/_shared/coord.py b/skimage/_shared/coord.py
--- a/skimage/_shared/coord.py
+++ b/skimage/_shared/coord.py
@@ -59,7 +59,7 @@
def ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50,
- max_out=None):
+ max_out=None, *, max_split_size=2000):
"""Returns a subset of coord where a minimum spacing is guaranteed.
Parameters
@@ -74,10 +74,19 @@
``inf`` corresponds to the Chebyshev distance and 2 to the
Euclidean distance.
min_split_size : int
- Minimum split size used to process ``coord`` by batch to save
+ Minimum split size used to process ``coords`` by batch to save
memory. If None, the memory saving strategy is not applied.
max_out : int
If not None, only the first ``max_out`` candidates are returned.
+ max_split_size : int
+ Maximum split size used to process ``coords`` by batch to save
+ memory. This number was decided by profiling with a large number
+ of points. Too small a number results in too much looping in
+ Python instead of C, slowing down the process, while too large
+ a number results in large memory allocations, slowdowns, and,
+ potentially, in the process being killed -- see gh-6010. See
+ benchmark results `here
+ <https://github.com/scikit-image/scikit-image/pull/6035#discussion_r751518691>`_.
Returns
-------
@@ -94,9 +103,12 @@
batch_list = [coords]
else:
coord_count = len(coords)
- split_count = int(np.log2(coord_count / min_split_size)) + 1
- split_idx = np.cumsum(
- [coord_count // (2 ** i) for i in range(1, split_count)])
+ split_idx = [min_split_size]
+ split_size = min_split_size
+ while coord_count - split_idx[-1] > max_split_size:
+ split_size *= 2
+ split_idx.append(split_idx[-1] + min(split_size,
+ max_split_size))
batch_list = np.array_split(coords, split_idx)
output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)
| {"golden_diff": "diff --git a/skimage/_shared/coord.py b/skimage/_shared/coord.py\n--- a/skimage/_shared/coord.py\n+++ b/skimage/_shared/coord.py\n@@ -59,7 +59,7 @@\n \n \n def ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50,\n- max_out=None):\n+ max_out=None, *, max_split_size=2000):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n \n Parameters\n@@ -74,10 +74,19 @@\n ``inf`` corresponds to the Chebyshev distance and 2 to the\n Euclidean distance.\n min_split_size : int\n- Minimum split size used to process ``coord`` by batch to save\n+ Minimum split size used to process ``coords`` by batch to save\n memory. If None, the memory saving strategy is not applied.\n max_out : int\n If not None, only the first ``max_out`` candidates are returned.\n+ max_split_size : int\n+ Maximum split size used to process ``coords`` by batch to save\n+ memory. This number was decided by profiling with a large number\n+ of points. Too small a number results in too much looping in\n+ Python instead of C, slowing down the process, while too large\n+ a number results in large memory allocations, slowdowns, and,\n+ potentially, in the process being killed -- see gh-6010. See\n+ benchmark results `here\n+ <https://github.com/scikit-image/scikit-image/pull/6035#discussion_r751518691>`_.\n \n Returns\n -------\n@@ -94,9 +103,12 @@\n batch_list = [coords]\n else:\n coord_count = len(coords)\n- split_count = int(np.log2(coord_count / min_split_size)) + 1\n- split_idx = np.cumsum(\n- [coord_count // (2 ** i) for i in range(1, split_count)])\n+ split_idx = [min_split_size]\n+ split_size = min_split_size\n+ while coord_count - split_idx[-1] > max_split_size:\n+ split_size *= 2\n+ split_idx.append(split_idx[-1] + min(split_size,\n+ max_split_size))\n batch_list = np.array_split(coords, split_idx)\n \n output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)\n", "issue": "Segfault in peak_local_max with large numbed of segments\n## Description\r\n\r\nscikit-image dives to (absolutely uncatchable and untrackable) segfault in peak_local_max.\r\n\r\n## Way to reproduce\r\n```python\r\nimport numpy as np\r\nfrom scipy.ndimage import distance_transform_edt\r\nfrom skimage.feature import peak_local_max\r\n\r\n\r\ndef segment(binary_image):\r\n distance = distance_transform_edt(binary_image)\r\n peak_local_max(\r\n distance, min_distance=100, footprint=np.ones((3, 3)), labels=binary_image,\r\n )\r\n\r\nfor p in [0.05, 0.95, 0.001, 0.999]:\r\n print(p)\r\n segment(np.random.random([2048, 2048]) < p)\r\n\r\n\r\n```\r\n\r\n\r\n## Version information\r\n```python\r\n# Paste the output of the following python commands\r\nfrom __future__ import print_function\r\nimport sys; print(sys.version)\r\nimport platform; print(platform.platform())\r\nimport skimage; print(f'scikit-image version: {skimage.__version__}')\r\nimport numpy; print(f'numpy version: {numpy.__version__}')\r\n```\r\n\r\n```python\r\n3.8.10 (default, Sep 28 2021, 16:10:42) \r\n[GCC 9.3.0]\r\nLinux-5.10.47-linuxkit-x86_64-with-glibc2.29\r\nscikit-image version: 0.18.3\r\nnumpy version: 1.21.4\r\n```\r\n\n", "before_files": [{"content": "import numpy as np\nfrom scipy.spatial import cKDTree, distance\n\n\ndef _ensure_spacing(coord, spacing, p_norm, max_out):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n\n Parameters\n ----------\n coord : ndarray\n The coordinates of the considered points.\n spacing : float\n the maximum allowed spacing between the points.\n p_norm : float\n Which Minkowski p-norm 
to use. Should be in the range [1, inf].\n A finite large p may cause a ValueError if overflow can occur.\n ``inf`` corresponds to the Chebyshev distance and 2 to the\n Euclidean distance.\n max_out: int\n If not None, at most the first ``max_out`` candidates are\n returned.\n\n Returns\n -------\n output : ndarray\n A subset of coord where a minimum spacing is guaranteed.\n\n \"\"\"\n\n # Use KDtree to find the peaks that are too close to each other\n tree = cKDTree(coord)\n\n indices = tree.query_ball_point(coord, r=spacing, p=p_norm)\n rejected_peaks_indices = set()\n naccepted = 0\n for idx, candidates in enumerate(indices):\n if idx not in rejected_peaks_indices:\n # keep current point and the points at exactly spacing from it\n candidates.remove(idx)\n dist = distance.cdist([coord[idx]],\n coord[candidates],\n distance.minkowski,\n p=p_norm).reshape(-1)\n candidates = [c for c, d in zip(candidates, dist)\n if d < spacing]\n\n # candidates.remove(keep)\n rejected_peaks_indices.update(candidates)\n naccepted += 1\n if max_out is not None and naccepted >= max_out:\n break\n\n # Remove the peaks that are too close to each other\n output = np.delete(coord, tuple(rejected_peaks_indices), axis=0)\n if max_out is not None:\n output = output[:max_out]\n\n return output\n\n\ndef ensure_spacing(coords, spacing=1, p_norm=np.inf, min_split_size=50,\n max_out=None):\n \"\"\"Returns a subset of coord where a minimum spacing is guaranteed.\n\n Parameters\n ----------\n coords : array_like\n The coordinates of the considered points.\n spacing : float\n the maximum allowed spacing between the points.\n p_norm : float\n Which Minkowski p-norm to use. Should be in the range [1, inf].\n A finite large p may cause a ValueError if overflow can occur.\n ``inf`` corresponds to the Chebyshev distance and 2 to the\n Euclidean distance.\n min_split_size : int\n Minimum split size used to process ``coord`` by batch to save\n memory. If None, the memory saving strategy is not applied.\n max_out : int\n If not None, only the first ``max_out`` candidates are returned.\n\n Returns\n -------\n output : array_like\n A subset of coord where a minimum spacing is guaranteed.\n\n \"\"\"\n\n output = coords\n if len(coords):\n\n coords = np.atleast_2d(coords)\n if min_split_size is None:\n batch_list = [coords]\n else:\n coord_count = len(coords)\n split_count = int(np.log2(coord_count / min_split_size)) + 1\n split_idx = np.cumsum(\n [coord_count // (2 ** i) for i in range(1, split_count)])\n batch_list = np.array_split(coords, split_idx)\n\n output = np.zeros((0, coords.shape[1]), dtype=coords.dtype)\n for batch in batch_list:\n output = _ensure_spacing(np.vstack([output, batch]),\n spacing, p_norm, max_out)\n if max_out is not None and len(output) >= max_out:\n break\n\n return output\n", "path": "skimage/_shared/coord.py"}]} | 1,940 | 561 |
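As a standalone illustration of the bounded batching in the patch (the constants 50 and 2000 are the new defaults from the diff; `coord_count` is an assumed example), batch sizes still double but each increment is capped, so no single KD-tree query sees an unbounded point set:
```python
min_split_size, max_split_size = 50, 2000  # defaults from the patch above
coord_count = 10_000                       # assumed example
split_idx = [min_split_size]
split_size = min_split_size
while coord_count - split_idx[-1] > max_split_size:
    split_size *= 2
    split_idx.append(split_idx[-1] + min(split_size, max_split_size))
print(split_idx)  # [50, 150, 350, 750, 1550, 3150, 5150, 7150, 9150]
```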
gh_patches_debug_38340 | rasdani/github-patches | git_diff | python-poetry__poetry-2602 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Poetry 1.1.0a2 includes python code in the version string
To reproduce:
1. `poetry init` with default choices; no dependencies.
1. `poetry add pytest-cov`
1. `poetry install`
The last command prints:
```
Installing dependencies from lock file
Package operations: 0 installs, 1 update, 0 removals
- Updating pytest-cov (2.10.0 import os, sys;exec('if \'COV_CORE_SOURCE\' in os.environ:\n try:\n from pytest_cov.embed import init\n init()\n except Exception as exc:\n sys.stderr.write(\n "pytest-cov: Failed to setup subprocess coverage. "\n "Environ: {0!r} "\n "Exception: {1!r}\\n".format(\n dict((k, v) for k, v in os.environ.items() if k.startswith(\'COV_CORE\')),\n exc\n )\n )\n') -> 2.10.0)
```
</issue>
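The garbage in the version string is the body of pytest-cov's executable `.pth` hook, which the resolver read back as if it were a directory path. A minimal sketch of the filtering rule the eventual fix applies (the file contents below are illustrative, not the exact pytest-cov file): lines that are empty, comments, or `import ...` statements are site hooks, not paths, and must be skipped.
```python
pth_lines = [                        # illustrative .pth contents
    "import os, sys;exec('...')",    # executable site hook -> skip
    "# some comment",                # comment -> skip
    "/home/user/src/mypkg",          # a real source directory -> keep
]
paths = [
    line.strip()
    for line in pth_lines
    if line.strip() and not line.strip().startswith(("#", "import ", "import\t"))
]
print(paths)  # ['/home/user/src/mypkg']
```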
<code>
[start of poetry/repositories/installed_repository.py]
1 from poetry.core.packages import Package
2 from poetry.utils._compat import Path
3 from poetry.utils._compat import metadata
4 from poetry.utils.env import Env
5
6 from .repository import Repository
7
8
9 _VENDORS = Path(__file__).parent.parent.joinpath("_vendor")
10
11
12 class InstalledRepository(Repository):
13 @classmethod
14 def load(cls, env): # type: (Env) -> InstalledRepository
15 """
16 Load installed packages.
17 """
18 repo = cls()
19 seen = set()
20
21 for entry in reversed(env.sys_path):
22 for distribution in sorted(
23 metadata.distributions(path=[entry]), key=lambda d: str(d._path),
24 ):
25 name = distribution.metadata["name"]
26 path = Path(str(distribution._path))
27 version = distribution.metadata["version"]
28 package = Package(name, version, version)
29 package.description = distribution.metadata.get("summary", "")
30
31 if package.name in seen:
32 continue
33
34 try:
35 path.relative_to(_VENDORS)
36 except ValueError:
37 pass
38 else:
39 continue
40
41 seen.add(package.name)
42
43 repo.add_package(package)
44
45 is_standard_package = True
46 try:
47 path.relative_to(env.site_packages)
48 except ValueError:
49 is_standard_package = False
50
51 if is_standard_package:
52 if (
53 path.name.endswith(".dist-info")
54 and env.site_packages.joinpath(
55 "{}.pth".format(package.pretty_name)
56 ).exists()
57 ):
58 with env.site_packages.joinpath(
59 "{}.pth".format(package.pretty_name)
60 ).open() as f:
61 directory = Path(f.readline().strip())
62 package.source_type = "directory"
63 package.source_url = directory.as_posix()
64
65 continue
66
67 src_path = env.path / "src"
68
69 # A VCS dependency should have been installed
70 # in the src directory. If not, it's a path dependency
71 try:
72 path.relative_to(src_path)
73
74 from poetry.core.vcs.git import Git
75
76 git = Git()
77 revision = git.rev_parse("HEAD", src_path / package.name).strip()
78 url = git.remote_url(src_path / package.name)
79
80 package.source_type = "git"
81 package.source_url = url
82 package.source_reference = revision
83 except ValueError:
84 package.source_type = "directory"
85 package.source_url = str(path.parent)
86
87 return repo
88
[end of poetry/repositories/installed_repository.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/poetry/repositories/installed_repository.py b/poetry/repositories/installed_repository.py
--- a/poetry/repositories/installed_repository.py
+++ b/poetry/repositories/installed_repository.py
@@ -1,3 +1,5 @@
+from typing import Set
+
from poetry.core.packages import Package
from poetry.utils._compat import Path
from poetry.utils._compat import metadata
@@ -10,6 +12,37 @@
class InstalledRepository(Repository):
+ @classmethod
+ def get_package_paths(cls, sitedir, name): # type: (Path, str) -> Set[Path]
+ """
+ Process a .pth file within the site-packages directory, and return any valid
+ paths. We skip executable .pth files as there is no reliable means to do this
+ without side-effects to current run-time. Mo check is made that the item refers
+ to a directory rather than a file, however, in order to maintain backwards
+ compatibility, we allow non-existing paths to be discovered. The latter
+ behaviour is different to how Python's site-specific hook configuration works.
+
+ Reference: https://docs.python.org/3.8/library/site.html
+
+ :param sitedir: The site-packages directory to search for .pth file.
+ :param name: The name of the package to search .pth file for.
+ :return: A `Set` of valid `Path` objects.
+ """
+ paths = set()
+
+ pth_file = sitedir.joinpath("{}.pth".format(name))
+ if pth_file.exists():
+ with pth_file.open() as f:
+ for line in f:
+ line = line.strip()
+ if line and not line.startswith(("#", "import ", "import\t")):
+ path = Path(line)
+ if not path.is_absolute():
+ path = sitedir.joinpath(path)
+ paths.add(path)
+
+ return paths
+
@classmethod
def load(cls, env): # type: (Env) -> InstalledRepository
"""
@@ -49,19 +82,14 @@
is_standard_package = False
if is_standard_package:
- if (
- path.name.endswith(".dist-info")
- and env.site_packages.joinpath(
- "{}.pth".format(package.pretty_name)
- ).exists()
- ):
- with env.site_packages.joinpath(
- "{}.pth".format(package.pretty_name)
- ).open() as f:
- directory = Path(f.readline().strip())
+ if path.name.endswith(".dist-info"):
+ paths = cls.get_package_paths(
+ sitedir=env.site_packages, name=package.pretty_name
+ )
+ if paths:
+ # TODO: handle multiple source directories?
package.source_type = "directory"
- package.source_url = directory.as_posix()
-
+ package.source_url = paths.pop().as_posix()
continue
src_path = env.path / "src"
| {"golden_diff": "diff --git a/poetry/repositories/installed_repository.py b/poetry/repositories/installed_repository.py\n--- a/poetry/repositories/installed_repository.py\n+++ b/poetry/repositories/installed_repository.py\n@@ -1,3 +1,5 @@\n+from typing import Set\n+\n from poetry.core.packages import Package\n from poetry.utils._compat import Path\n from poetry.utils._compat import metadata\n@@ -10,6 +12,37 @@\n \n \n class InstalledRepository(Repository):\n+ @classmethod\n+ def get_package_paths(cls, sitedir, name): # type: (Path, str) -> Set[Path]\n+ \"\"\"\n+ Process a .pth file within the site-packages directory, and return any valid\n+ paths. We skip executable .pth files as there is no reliable means to do this\n+ without side-effects to current run-time. Mo check is made that the item refers\n+ to a directory rather than a file, however, in order to maintain backwards\n+ compatibility, we allow non-existing paths to be discovered. The latter\n+ behaviour is different to how Python's site-specific hook configuration works.\n+\n+ Reference: https://docs.python.org/3.8/library/site.html\n+\n+ :param sitedir: The site-packages directory to search for .pth file.\n+ :param name: The name of the package to search .pth file for.\n+ :return: A `Set` of valid `Path` objects.\n+ \"\"\"\n+ paths = set()\n+\n+ pth_file = sitedir.joinpath(\"{}.pth\".format(name))\n+ if pth_file.exists():\n+ with pth_file.open() as f:\n+ for line in f:\n+ line = line.strip()\n+ if line and not line.startswith((\"#\", \"import \", \"import\\t\")):\n+ path = Path(line)\n+ if not path.is_absolute():\n+ path = sitedir.joinpath(path)\n+ paths.add(path)\n+\n+ return paths\n+\n @classmethod\n def load(cls, env): # type: (Env) -> InstalledRepository\n \"\"\"\n@@ -49,19 +82,14 @@\n is_standard_package = False\n \n if is_standard_package:\n- if (\n- path.name.endswith(\".dist-info\")\n- and env.site_packages.joinpath(\n- \"{}.pth\".format(package.pretty_name)\n- ).exists()\n- ):\n- with env.site_packages.joinpath(\n- \"{}.pth\".format(package.pretty_name)\n- ).open() as f:\n- directory = Path(f.readline().strip())\n+ if path.name.endswith(\".dist-info\"):\n+ paths = cls.get_package_paths(\n+ sitedir=env.site_packages, name=package.pretty_name\n+ )\n+ if paths:\n+ # TODO: handle multiple source directories?\n package.source_type = \"directory\"\n- package.source_url = directory.as_posix()\n-\n+ package.source_url = paths.pop().as_posix()\n continue\n \n src_path = env.path / \"src\"\n", "issue": "Poetry 1.1.0a2 includes python code in the version string\nTo reproduce:\r\n\r\n1. `poetry init` with default choices; no dependencies.\r\n1. `poetry add pytest-cov`\r\n1. `poetry install`\r\n\r\nThe last command prints:\r\n```\r\nInstalling dependencies from lock file\r\n\r\nPackage operations: 0 installs, 1 update, 0 removals\r\n\r\n- Updating pytest-cov (2.10.0 import os, sys;exec('if \\'COV_CORE_SOURCE\\' in os.environ:\\n try:\\n from pytest_cov.embed import init\\n init()\\n except Exception as exc:\\n sys.stderr.write(\\n \"pytest-cov: Failed to setup subprocess coverage. 
\"\\n \"Environ: {0!r} \"\\n \"Exception: {1!r}\\\\n\".format(\\n dict((k, v) for k, v in os.environ.items() if k.startswith(\\'COV_CORE\\')),\\n exc\\n )\\n )\\n') -> 2.10.0) \r\n```\n", "before_files": [{"content": "from poetry.core.packages import Package\nfrom poetry.utils._compat import Path\nfrom poetry.utils._compat import metadata\nfrom poetry.utils.env import Env\n\nfrom .repository import Repository\n\n\n_VENDORS = Path(__file__).parent.parent.joinpath(\"_vendor\")\n\n\nclass InstalledRepository(Repository):\n @classmethod\n def load(cls, env): # type: (Env) -> InstalledRepository\n \"\"\"\n Load installed packages.\n \"\"\"\n repo = cls()\n seen = set()\n\n for entry in reversed(env.sys_path):\n for distribution in sorted(\n metadata.distributions(path=[entry]), key=lambda d: str(d._path),\n ):\n name = distribution.metadata[\"name\"]\n path = Path(str(distribution._path))\n version = distribution.metadata[\"version\"]\n package = Package(name, version, version)\n package.description = distribution.metadata.get(\"summary\", \"\")\n\n if package.name in seen:\n continue\n\n try:\n path.relative_to(_VENDORS)\n except ValueError:\n pass\n else:\n continue\n\n seen.add(package.name)\n\n repo.add_package(package)\n\n is_standard_package = True\n try:\n path.relative_to(env.site_packages)\n except ValueError:\n is_standard_package = False\n\n if is_standard_package:\n if (\n path.name.endswith(\".dist-info\")\n and env.site_packages.joinpath(\n \"{}.pth\".format(package.pretty_name)\n ).exists()\n ):\n with env.site_packages.joinpath(\n \"{}.pth\".format(package.pretty_name)\n ).open() as f:\n directory = Path(f.readline().strip())\n package.source_type = \"directory\"\n package.source_url = directory.as_posix()\n\n continue\n\n src_path = env.path / \"src\"\n\n # A VCS dependency should have been installed\n # in the src directory. If not, it's a path dependency\n try:\n path.relative_to(src_path)\n\n from poetry.core.vcs.git import Git\n\n git = Git()\n revision = git.rev_parse(\"HEAD\", src_path / package.name).strip()\n url = git.remote_url(src_path / package.name)\n\n package.source_type = \"git\"\n package.source_url = url\n package.source_reference = revision\n except ValueError:\n package.source_type = \"directory\"\n package.source_url = str(path.parent)\n\n return repo\n", "path": "poetry/repositories/installed_repository.py"}]} | 1,448 | 672 |
gh_patches_debug_4899 | rasdani/github-patches | git_diff | ivy-llc__ivy-18924 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
dropout3d
</issue>
<code>
[start of ivy/functional/frontends/paddle/nn/functional/common.py]
1 # local
2 import ivy
3 from ivy.func_wrapper import with_supported_dtypes
4 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
5
6
7 @to_ivy_arrays_and_back
8 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
9 def cosine_similarity(x1, x2, *, axis=1, eps=1e-08):
10 if len(x1.shape) == len(x2.shape) and len(x2.shape) >= 2:
11 numerator = ivy.sum(x1 * x2, axis=axis)
12 x1_squared_norm = ivy.sum(ivy.square(x1), axis=axis)
13 x2_squared_norm = ivy.sum(ivy.square(x2), axis=axis)
14 else:
15 numerator = ivy.sum(x1 * x2)
16 x1_squared_norm = ivy.sum(ivy.square(x1))
17 x2_squared_norm = ivy.sum(ivy.square(x2))
18
19 x1_norm = ivy.sqrt(x1_squared_norm)
20 x2_norm = ivy.sqrt(x2_squared_norm)
21 norm_mm = x1_norm * x2_norm
22 denominator = ivy.maximum(norm_mm, eps)
23
24 cosine = numerator / denominator
25 return cosine
26
27
28 @to_ivy_arrays_and_back
29 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
30 def dropout2d(x, *, p=0.5, training=True, data_format="NCHW", name=None):
31 return ivy.dropout2d(x, p=p, training=training, data_format=data_format)
32
33
34 def get_mask(shape, device, prob, seed=None):
35 mask = ivy.where(
36 ivy.random_uniform(shape=shape, device=device, seed=seed) < prob,
37 0.0,
38 1.0,
39 )
40 return mask
41
42
43 @to_ivy_arrays_and_back
44 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
45 def dropout(x, p=0.5, axis=None, training=True, mode="upscale_in_train", name=None):
46 if axis > 1:
47 raise ValueError("Axis value can only be 0 or 1 or None.")
48 elif axis is None or (isinstance(axis, list) and len(axis) == 2):
49 mask = get_mask(shape=x.shape, device=ivy.dev(x), prob=p, seed=None)
50 elif axis == 0:
51 mask = get_mask(shape=(x.shape[0], 1), device=ivy.dev(x), prob=p)
52 mask = ivy.broadcast_to(mask, x.shape)
53 elif axis == 1:
54 mask = get_mask(shape=(1, x.shape[1]), device=ivy.dev(x), prob=p)
55 mask = ivy.broadcast_to(mask, x.shape)
56 if mode == "upscale_in_train":
57 if training:
58 out = ivy.multiply(x, mask)
59 ret = ivy.multiply(out, 1.0 / (1.0 - p))
60 else:
61 ret = x
62 else:
63 if training:
64 ret = ivy.multiply(x, mask)
65 else:
66 ret = ivy.multiply(x, (1.0 - p))
67 return ret
68
69
70 @to_ivy_arrays_and_back
71 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
72 def zeropad2d(x, padding, data_format="NCHW", name=None):
73 if ivy.is_array(padding):
74 padding = padding.to_list()
75 if isinstance(padding, int):
76 padding = [padding, padding, padding, padding]
77 if len(padding) != 4:
78 raise ValueError("Padding length should be 4.")
79 if x.ndim != 4:
80 raise ValueError("Input x must be 4-dimensional.")
81 if data_format == "NCHW":
82 padding = ((0, 0), (0, 0), (padding[2], padding[3]), (padding[0], padding[1]))
83 elif data_format == "NHWC":
84 padding = ((0, 0), (padding[2], padding[3]), (padding[0], padding[1]), (0, 0))
85 else:
86 raise ValueError("Unknown data_format: {}".format(data_format))
87 return ivy.pad(x, padding, mode="constant", constant_values=0.0)
88
89
90 @to_ivy_arrays_and_back
91 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
92 def interpolate(
93 x,
94 size=None,
95 scale_factor=None,
96 mode="nearest",
97 align_corners=False,
98 align_mode=0,
99 data_format="NCHW",
100 name=None,
101 ):
102 return ivy.interpolate(
103 x, size, mode=mode, scale_factor=scale_factor, align_corners=align_corners
104 )
105
106
107 @to_ivy_arrays_and_back
108 @with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
109 def linear(x, weight, bias=None, name=None):
110 weight = ivy.swapaxes(weight, -1, -2)
111 return ivy.linear(x, weight, bias=bias)
112
[end of ivy/functional/frontends/paddle/nn/functional/common.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/paddle/nn/functional/common.py b/ivy/functional/frontends/paddle/nn/functional/common.py
--- a/ivy/functional/frontends/paddle/nn/functional/common.py
+++ b/ivy/functional/frontends/paddle/nn/functional/common.py
@@ -109,3 +109,9 @@
def linear(x, weight, bias=None, name=None):
weight = ivy.swapaxes(weight, -1, -2)
return ivy.linear(x, weight, bias=bias)
+
+
+@to_ivy_arrays_and_back
+@with_supported_dtypes({"2.5.1 and below": ("float32", "float64")}, "paddle")
+def dropout3d(x, p=0.5, training=True, data_format="NCDHW", name=None):
+ return ivy.dropout3d(x, p, training=training, data_format=data_format)
| {"golden_diff": "diff --git a/ivy/functional/frontends/paddle/nn/functional/common.py b/ivy/functional/frontends/paddle/nn/functional/common.py\n--- a/ivy/functional/frontends/paddle/nn/functional/common.py\n+++ b/ivy/functional/frontends/paddle/nn/functional/common.py\n@@ -109,3 +109,9 @@\n def linear(x, weight, bias=None, name=None):\n weight = ivy.swapaxes(weight, -1, -2)\n return ivy.linear(x, weight, bias=bias)\n+\n+\n+@to_ivy_arrays_and_back\n+@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\n+def dropout3d(x, p=0.5, training=True, data_format=\"NCDHW\", name=None):\n+ return ivy.dropout3d(x, p, training=training, data_format=data_format)\n", "issue": "dropout3d\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef cosine_similarity(x1, x2, *, axis=1, eps=1e-08):\n if len(x1.shape) == len(x2.shape) and len(x2.shape) >= 2:\n numerator = ivy.sum(x1 * x2, axis=axis)\n x1_squared_norm = ivy.sum(ivy.square(x1), axis=axis)\n x2_squared_norm = ivy.sum(ivy.square(x2), axis=axis)\n else:\n numerator = ivy.sum(x1 * x2)\n x1_squared_norm = ivy.sum(ivy.square(x1))\n x2_squared_norm = ivy.sum(ivy.square(x2))\n\n x1_norm = ivy.sqrt(x1_squared_norm)\n x2_norm = ivy.sqrt(x2_squared_norm)\n norm_mm = x1_norm * x2_norm\n denominator = ivy.maximum(norm_mm, eps)\n\n cosine = numerator / denominator\n return cosine\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef dropout2d(x, *, p=0.5, training=True, data_format=\"NCHW\", name=None):\n return ivy.dropout2d(x, p=p, training=training, data_format=data_format)\n\n\ndef get_mask(shape, device, prob, seed=None):\n mask = ivy.where(\n ivy.random_uniform(shape=shape, device=device, seed=seed) < prob,\n 0.0,\n 1.0,\n )\n return mask\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef dropout(x, p=0.5, axis=None, training=True, mode=\"upscale_in_train\", name=None):\n if axis > 1:\n raise ValueError(\"Axis value can only be 0 or 1 or None.\")\n elif axis is None or (isinstance(axis, list) and len(axis) == 2):\n mask = get_mask(shape=x.shape, device=ivy.dev(x), prob=p, seed=None)\n elif axis == 0:\n mask = get_mask(shape=(x.shape[0], 1), device=ivy.dev(x), prob=p)\n mask = ivy.broadcast_to(mask, x.shape)\n elif axis == 1:\n mask = get_mask(shape=(1, x.shape[1]), device=ivy.dev(x), prob=p)\n mask = ivy.broadcast_to(mask, x.shape)\n if mode == \"upscale_in_train\":\n if training:\n out = ivy.multiply(x, mask)\n ret = ivy.multiply(out, 1.0 / (1.0 - p))\n else:\n ret = x\n else:\n if training:\n ret = ivy.multiply(x, mask)\n else:\n ret = ivy.multiply(x, (1.0 - p))\n return ret\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef zeropad2d(x, padding, data_format=\"NCHW\", name=None):\n if ivy.is_array(padding):\n padding = padding.to_list()\n if isinstance(padding, int):\n padding = [padding, padding, padding, padding]\n if len(padding) != 4:\n raise ValueError(\"Padding length should be 4.\")\n if x.ndim != 4:\n raise ValueError(\"Input x must be 4-dimensional.\")\n if data_format == \"NCHW\":\n padding = ((0, 0), (0, 0), (padding[2], padding[3]), (padding[0], 
padding[1]))\n elif data_format == \"NHWC\":\n padding = ((0, 0), (padding[2], padding[3]), (padding[0], padding[1]), (0, 0))\n else:\n raise ValueError(\"Unknown data_format: {}\".format(data_format))\n return ivy.pad(x, padding, mode=\"constant\", constant_values=0.0)\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef interpolate(\n x,\n size=None,\n scale_factor=None,\n mode=\"nearest\",\n align_corners=False,\n align_mode=0,\n data_format=\"NCHW\",\n name=None,\n):\n return ivy.interpolate(\n x, size, mode=mode, scale_factor=scale_factor, align_corners=align_corners\n )\n\n\n@to_ivy_arrays_and_back\n@with_supported_dtypes({\"2.5.1 and below\": (\"float32\", \"float64\")}, \"paddle\")\ndef linear(x, weight, bias=None, name=None):\n weight = ivy.swapaxes(weight, -1, -2)\n return ivy.linear(x, weight, bias=bias)\n", "path": "ivy/functional/frontends/paddle/nn/functional/common.py"}]} | 1,963 | 211 |
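A usage sketch for the new frontend function: the import path assumes `dropout3d` is re-exported from the `functional` package like its siblings, and the input shape is an arbitrary example. The signature mirrors `paddle.nn.functional.dropout3d` and expects a 5-D NCDHW tensor by default.
```python
import ivy
from ivy.functional.frontends.paddle.nn.functional import dropout3d  # assumed re-export

x = ivy.random_uniform(shape=(2, 3, 4, 8, 8))  # N, C, D, H, W; example shape
y = dropout3d(x, p=0.5, training=True, data_format="NCDHW")
```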
gh_patches_debug_18102 | rasdani/github-patches | git_diff | iterative__dvc-417 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Running DVC outside of Git dir
We should handle all the cases like this.
```
cd /
$ dvc repro
No handlers could be found for logger "dvc"
```
</issue>
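The message itself is Python 2's logging module complaining that a record was emitted on a logger with no handlers attached, which is exactly what happens when the error path runs before a `Logger()` instance is ever constructed. A minimal reproduction of the symptom (Python 2 behaviour assumed):
```python
import logging

# No handler has been attached to "dvc" yet, so on Python 2 this prints:
#   No handlers could be found for logger "dvc"
logging.getLogger("dvc").error("Initialization error: ...")
```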
<code>
[start of dvc/logger.py]
1 import sys
2 import logging
3
4 import colorama
5
6
7 colorama.init()
8
9
10 class Logger(object):
11 FMT = '%(message)s'
12 DEFAULT_LEVEL = logging.INFO
13
14 LEVEL_MAP = {
15 'debug': logging.DEBUG,
16 'info': logging.INFO,
17 'warn': logging.WARNING,
18 'error': logging.ERROR
19 }
20
21 COLOR_MAP = {
22 'debug': colorama.Fore.BLUE,
23 'warn': colorama.Fore.YELLOW,
24 'error': colorama.Fore.RED
25 }
26
27 def __init__(self, config=None):
28 sh = logging.StreamHandler(sys.stdout)
29 sh.setFormatter(logging.Formatter(self.FMT))
30 sh.setLevel(logging.DEBUG)
31
32 self.logger().addHandler(sh)
33 level = None
34 if config:
35 level = config['Global'].get('LogLevel', None)
36 self.set_level(level)
37
38 @staticmethod
39 def logger():
40 return logging.getLogger('dvc')
41
42 @staticmethod
43 def set_level(level=None):
44 if not level:
45 lvl = Logger.DEFAULT_LEVEL
46 else:
47 lvl = Logger.LEVEL_MAP.get(level.lower(), Logger.DEFAULT_LEVEL)
48 Logger.logger().setLevel(lvl)
49
50 @staticmethod
51 def be_quiet():
52 Logger.logger().setLevel(logging.CRITICAL)
53
54 @staticmethod
55 def be_verbose():
56 Logger.logger().setLevel(logging.DEBUG)
57
58 @staticmethod
59 def colorize(msg, typ):
60 header = ''
61 footer = ''
62
63 if sys.stdout.isatty():
64 header = Logger.COLOR_MAP.get(typ.lower(), '')
65 footer = colorama.Style.RESET_ALL
66
67 return u'{}{}{}'.format(header, msg, footer)
68
69 @staticmethod
70 def error(msg, **kwargs):
71 exc_info = Logger.logger().getEffectiveLevel() == logging.DEBUG
72 return Logger.logger().error(Logger.colorize(msg, 'error'), exc_info=exc_info, **kwargs)
73
74 @staticmethod
75 def warn(msg, **kwargs):
76 return Logger.logger().warn(Logger.colorize(msg, 'warn'), **kwargs)
77
78 @staticmethod
79 def debug(msg, **kwargs):
80 return Logger.logger().debug(Logger.colorize(msg, 'debug'), **kwargs)
81
82 @staticmethod
83 def info(msg, **kwargs):
84 return Logger.logger().info(Logger.colorize(msg, 'info'), **kwargs)
85
[end of dvc/logger.py]
[start of dvc/main.py]
1 from dvc.logger import Logger
2 from dvc.cli import parse_args
3
4 def main(argv=None):
5 args = parse_args(argv)
6
7 try:
8 cmd = args.func(args)
9 except Exception as ex:
10 Logger.error('Initialization error: {}'.format(str(ex)))
11 return 255
12
13 return cmd.run_cmd()
14
[end of dvc/main.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dvc/logger.py b/dvc/logger.py
--- a/dvc/logger.py
+++ b/dvc/logger.py
@@ -25,15 +25,19 @@
}
def __init__(self, config=None):
+ self.init(config)
+
+ @staticmethod
+ def init(config=None):
sh = logging.StreamHandler(sys.stdout)
- sh.setFormatter(logging.Formatter(self.FMT))
+ sh.setFormatter(logging.Formatter(Logger.FMT))
sh.setLevel(logging.DEBUG)
- self.logger().addHandler(sh)
+ Logger.logger().addHandler(sh)
level = None
if config:
level = config['Global'].get('LogLevel', None)
- self.set_level(level)
+ Logger.set_level(level)
@staticmethod
def logger():
diff --git a/dvc/main.py b/dvc/main.py
--- a/dvc/main.py
+++ b/dvc/main.py
@@ -2,6 +2,8 @@
from dvc.cli import parse_args
def main(argv=None):
+ Logger.init()
+
args = parse_args(argv)
try:
| {"golden_diff": "diff --git a/dvc/logger.py b/dvc/logger.py\n--- a/dvc/logger.py\n+++ b/dvc/logger.py\n@@ -25,15 +25,19 @@\n }\n \n def __init__(self, config=None):\n+ self.init(config)\n+\n+ @staticmethod\n+ def init(config=None):\n sh = logging.StreamHandler(sys.stdout)\n- sh.setFormatter(logging.Formatter(self.FMT))\n+ sh.setFormatter(logging.Formatter(Logger.FMT))\n sh.setLevel(logging.DEBUG)\n \n- self.logger().addHandler(sh)\n+ Logger.logger().addHandler(sh)\n level = None\n if config:\n level = config['Global'].get('LogLevel', None)\n- self.set_level(level)\n+ Logger.set_level(level)\n \n @staticmethod\n def logger():\ndiff --git a/dvc/main.py b/dvc/main.py\n--- a/dvc/main.py\n+++ b/dvc/main.py\n@@ -2,6 +2,8 @@\n from dvc.cli import parse_args\n \n def main(argv=None):\n+ Logger.init()\n+\n args = parse_args(argv)\n \n try:\n", "issue": "Running DVC outside of Git dir\nWe should handle all the cases like this.\r\n\r\n```\r\ncd /\r\n$ dvc repro\r\nNo handlers could be found for logger \"dvc\"\r\n```\n", "before_files": [{"content": "import sys\nimport logging\n\nimport colorama\n\n\ncolorama.init()\n\n\nclass Logger(object):\n FMT = '%(message)s'\n DEFAULT_LEVEL = logging.INFO\n\n LEVEL_MAP = {\n 'debug': logging.DEBUG,\n 'info': logging.INFO,\n 'warn': logging.WARNING,\n 'error': logging.ERROR\n }\n\n COLOR_MAP = {\n 'debug': colorama.Fore.BLUE,\n 'warn': colorama.Fore.YELLOW,\n 'error': colorama.Fore.RED\n }\n\n def __init__(self, config=None):\n sh = logging.StreamHandler(sys.stdout)\n sh.setFormatter(logging.Formatter(self.FMT))\n sh.setLevel(logging.DEBUG)\n\n self.logger().addHandler(sh)\n level = None\n if config:\n level = config['Global'].get('LogLevel', None)\n self.set_level(level)\n\n @staticmethod\n def logger():\n return logging.getLogger('dvc')\n\n @staticmethod\n def set_level(level=None):\n if not level:\n lvl = Logger.DEFAULT_LEVEL\n else:\n lvl = Logger.LEVEL_MAP.get(level.lower(), Logger.DEFAULT_LEVEL)\n Logger.logger().setLevel(lvl)\n\n @staticmethod\n def be_quiet():\n Logger.logger().setLevel(logging.CRITICAL)\n\n @staticmethod\n def be_verbose():\n Logger.logger().setLevel(logging.DEBUG)\n\n @staticmethod\n def colorize(msg, typ):\n header = ''\n footer = ''\n\n if sys.stdout.isatty():\n header = Logger.COLOR_MAP.get(typ.lower(), '')\n footer = colorama.Style.RESET_ALL\n\n return u'{}{}{}'.format(header, msg, footer)\n\n @staticmethod\n def error(msg, **kwargs):\n exc_info = Logger.logger().getEffectiveLevel() == logging.DEBUG\n return Logger.logger().error(Logger.colorize(msg, 'error'), exc_info=exc_info, **kwargs)\n\n @staticmethod\n def warn(msg, **kwargs):\n return Logger.logger().warn(Logger.colorize(msg, 'warn'), **kwargs)\n\n @staticmethod\n def debug(msg, **kwargs):\n return Logger.logger().debug(Logger.colorize(msg, 'debug'), **kwargs)\n\n @staticmethod\n def info(msg, **kwargs):\n return Logger.logger().info(Logger.colorize(msg, 'info'), **kwargs)\n", "path": "dvc/logger.py"}, {"content": "from dvc.logger import Logger\nfrom dvc.cli import parse_args\n\ndef main(argv=None):\n args = parse_args(argv)\n\n try:\n cmd = args.func(args)\n except Exception as ex:\n Logger.error('Initialization error: {}'.format(str(ex)))\n return 255\n\n return cmd.run_cmd()\n", "path": "dvc/main.py"}]} | 1,336 | 244 |
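The resulting call order is worth spelling out (the error message below is an assumed example): `main()` now attaches the stdout handler before anything can fail, so errors raised outside a Git/DVC directory are rendered instead of producing the missing-handler warning.
```python
from dvc.logger import Logger

Logger.init()  # attach the stdout handler first, per the patch above
Logger.error("Initialization error: ...")  # now printed; message assumed
```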
gh_patches_debug_22662 | rasdani/github-patches | git_diff | psf__black-2839 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Improve documentation for configuration options
Currently, our config options are documented only in a collapsed-by-default text block in https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html#command-line-options. This is not very discoverable and makes it hard to give more detailed documentation, such as examples.
Instead, we should have a docs page with a separate section for each option. We can start with the existing descriptions, and extend them as needed for options with more complicated behavior.
</issue>
<code>
[start of scripts/check_version_in_basics_example.py]
1 """
2 Check that the rev value in the example from ``the_basics.md`` matches
3 the latest version of Black. This saves us from forgetting to update that
4 during the release process.
5 """
6
7 import os
8 import sys
9
10 import commonmark
11 from bs4 import BeautifulSoup
12
13
14 def main(changes: str, the_basics: str) -> None:
15 changes_html = commonmark.commonmark(changes)
16 changes_soup = BeautifulSoup(changes_html, "html.parser")
17 headers = changes_soup.find_all("h2")
18 tags = [header.string for header in headers if header.string != "Unreleased"]
19 latest_tag = tags[0]
20
21 the_basics_html = commonmark.commonmark(the_basics)
22 the_basics_soup = BeautifulSoup(the_basics_html, "html.parser")
23 (version_example,) = [
24 code_block.string
25 for code_block in the_basics_soup.find_all(class_="language-console")
26 if "$ black --version" in code_block.string
27 ]
28
29 for tag in tags:
30 if tag in version_example and tag != latest_tag:
31 print(
32 "Please set the version in the ``black --version`` "
33 "example from ``the_basics.md`` to be the latest one.\n"
34 f"Expected {latest_tag}, got {tag}.\n"
35 )
36 sys.exit(1)
37
38
39 if __name__ == "__main__":
40 with open("CHANGES.md", encoding="utf-8") as fd:
41 changes = fd.read()
42 with open(
43 os.path.join("docs", "usage_and_configuration", "the_basics.md"),
44 encoding="utf-8",
45 ) as fd:
46 the_basics = fd.read()
47 main(changes, the_basics)
48
[end of scripts/check_version_in_basics_example.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scripts/check_version_in_basics_example.py b/scripts/check_version_in_basics_example.py
--- a/scripts/check_version_in_basics_example.py
+++ b/scripts/check_version_in_basics_example.py
@@ -20,20 +20,21 @@
the_basics_html = commonmark.commonmark(the_basics)
the_basics_soup = BeautifulSoup(the_basics_html, "html.parser")
- (version_example,) = [
+ version_examples = [
code_block.string
for code_block in the_basics_soup.find_all(class_="language-console")
if "$ black --version" in code_block.string
]
for tag in tags:
- if tag in version_example and tag != latest_tag:
- print(
- "Please set the version in the ``black --version`` "
- "example from ``the_basics.md`` to be the latest one.\n"
- f"Expected {latest_tag}, got {tag}.\n"
- )
- sys.exit(1)
+ for version_example in version_examples:
+ if tag in version_example and tag != latest_tag:
+ print(
+ "Please set the version in the ``black --version`` "
+ "examples from ``the_basics.md`` to be the latest one.\n"
+ f"Expected {latest_tag}, got {tag}.\n"
+ )
+ sys.exit(1)
if __name__ == "__main__":
| {"golden_diff": "diff --git a/scripts/check_version_in_basics_example.py b/scripts/check_version_in_basics_example.py\n--- a/scripts/check_version_in_basics_example.py\n+++ b/scripts/check_version_in_basics_example.py\n@@ -20,20 +20,21 @@\n \n the_basics_html = commonmark.commonmark(the_basics)\n the_basics_soup = BeautifulSoup(the_basics_html, \"html.parser\")\n- (version_example,) = [\n+ version_examples = [\n code_block.string\n for code_block in the_basics_soup.find_all(class_=\"language-console\")\n if \"$ black --version\" in code_block.string\n ]\n \n for tag in tags:\n- if tag in version_example and tag != latest_tag:\n- print(\n- \"Please set the version in the ``black --version`` \"\n- \"example from ``the_basics.md`` to be the latest one.\\n\"\n- f\"Expected {latest_tag}, got {tag}.\\n\"\n- )\n- sys.exit(1)\n+ for version_example in version_examples:\n+ if tag in version_example and tag != latest_tag:\n+ print(\n+ \"Please set the version in the ``black --version`` \"\n+ \"examples from ``the_basics.md`` to be the latest one.\\n\"\n+ f\"Expected {latest_tag}, got {tag}.\\n\"\n+ )\n+ sys.exit(1)\n \n \n if __name__ == \"__main__\":\n", "issue": "Improve documentation for configuration options\nCurrently, our config options are documented only in a collapsed-by-default text block in https://black.readthedocs.io/en/stable/usage_and_configuration/the_basics.html#command-line-options. This is not very discoverable and makes it hard to give more detailed documentation, such as examples.\r\n\r\nInstead, we should have a docs page with a separate section for each option. We can start with the existing descriptions, and extend them as needed for options with more complicated behavior.\n", "before_files": [{"content": "\"\"\"\nCheck that the rev value in the example from ``the_basics.md`` matches\nthe latest version of Black. This saves us from forgetting to update that\nduring the release process.\n\"\"\"\n\nimport os\nimport sys\n\nimport commonmark\nfrom bs4 import BeautifulSoup\n\n\ndef main(changes: str, the_basics: str) -> None:\n changes_html = commonmark.commonmark(changes)\n changes_soup = BeautifulSoup(changes_html, \"html.parser\")\n headers = changes_soup.find_all(\"h2\")\n tags = [header.string for header in headers if header.string != \"Unreleased\"]\n latest_tag = tags[0]\n\n the_basics_html = commonmark.commonmark(the_basics)\n the_basics_soup = BeautifulSoup(the_basics_html, \"html.parser\")\n (version_example,) = [\n code_block.string\n for code_block in the_basics_soup.find_all(class_=\"language-console\")\n if \"$ black --version\" in code_block.string\n ]\n\n for tag in tags:\n if tag in version_example and tag != latest_tag:\n print(\n \"Please set the version in the ``black --version`` \"\n \"example from ``the_basics.md`` to be the latest one.\\n\"\n f\"Expected {latest_tag}, got {tag}.\\n\"\n )\n sys.exit(1)\n\n\nif __name__ == \"__main__\":\n with open(\"CHANGES.md\", encoding=\"utf-8\") as fd:\n changes = fd.read()\n with open(\n os.path.join(\"docs\", \"usage_and_configuration\", \"the_basics.md\"),\n encoding=\"utf-8\",\n ) as fd:\n the_basics = fd.read()\n main(changes, the_basics)\n", "path": "scripts/check_version_in_basics_example.py"}]} | 1,098 | 319 |
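The motivation for replacing the one-element unpack: the docs page proposed in this issue will contain more than one `$ black --version` console block, and the old line then fails before any version comparison happens. A sketch of that failure mode (the example strings are assumed):
```python
version_examples = [
    "$ black --version\nblack, 22.1.0",
    "$ black --version\nblack, 22.1.0 (compiled: yes)",
]
(version_example,) = version_examples  # ValueError: too many values to unpack
```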
gh_patches_debug_738 | rasdani/github-patches | git_diff | certbot__certbot-7766 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Required pyparsing version
I've been experimenting with writing tests using the oldest allowed versions of our Python dependencies. `setup.py` for `letsencrypt-nginx` says it requires `pyparsing>=1.5.5` but when I pin version 1.5.5, I encounter problems. You can see Travis logs of the issue [here](https://travis-ci.org/letsencrypt/letsencrypt/jobs/100739657) and [here](https://travis-ci.org/letsencrypt/letsencrypt/jobs/100739658).
We should determine what version we require and update `setup.py` accordingly.
</issue>
<code>
[start of certbot-nginx/setup.py]
1 import sys
2
3 from setuptools import find_packages
4 from setuptools import setup
5 from setuptools.command.test import test as TestCommand
6
7 version = '1.3.0.dev0'
8
9 # Remember to update local-oldest-requirements.txt when changing the minimum
10 # acme/certbot version.
11 install_requires = [
12 'acme>=1.0.0',
13 'certbot>=1.1.0',
14 'mock',
15 'PyOpenSSL',
16 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?
17 'setuptools',
18 'zope.interface',
19 ]
20
21
22 class PyTest(TestCommand):
23 user_options = []
24
25 def initialize_options(self):
26 TestCommand.initialize_options(self)
27 self.pytest_args = ''
28
29 def run_tests(self):
30 import shlex
31 # import here, cause outside the eggs aren't loaded
32 import pytest
33 errno = pytest.main(shlex.split(self.pytest_args))
34 sys.exit(errno)
35
36
37 setup(
38 name='certbot-nginx',
39 version=version,
40 description="Nginx plugin for Certbot",
41 url='https://github.com/letsencrypt/letsencrypt',
42 author="Certbot Project",
43 author_email='[email protected]',
44 license='Apache License 2.0',
45 python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',
46 classifiers=[
47 'Development Status :: 5 - Production/Stable',
48 'Environment :: Plugins',
49 'Intended Audience :: System Administrators',
50 'License :: OSI Approved :: Apache Software License',
51 'Operating System :: POSIX :: Linux',
52 'Programming Language :: Python',
53 'Programming Language :: Python :: 2',
54 'Programming Language :: Python :: 2.7',
55 'Programming Language :: Python :: 3',
56 'Programming Language :: Python :: 3.5',
57 'Programming Language :: Python :: 3.6',
58 'Programming Language :: Python :: 3.7',
59 'Programming Language :: Python :: 3.8',
60 'Topic :: Internet :: WWW/HTTP',
61 'Topic :: Security',
62 'Topic :: System :: Installation/Setup',
63 'Topic :: System :: Networking',
64 'Topic :: System :: Systems Administration',
65 'Topic :: Utilities',
66 ],
67
68 packages=find_packages(),
69 include_package_data=True,
70 install_requires=install_requires,
71 entry_points={
72 'certbot.plugins': [
73 'nginx = certbot_nginx._internal.configurator:NginxConfigurator',
74 ],
75 },
76 test_suite='certbot_nginx',
77 tests_require=["pytest"],
78 cmdclass={"test": PyTest},
79 )
80
[end of certbot-nginx/setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/certbot-nginx/setup.py b/certbot-nginx/setup.py
--- a/certbot-nginx/setup.py
+++ b/certbot-nginx/setup.py
@@ -13,7 +13,7 @@
'certbot>=1.1.0',
'mock',
'PyOpenSSL',
- 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?
+ 'pyparsing>=1.5.5', # Python3 support
'setuptools',
'zope.interface',
]
| {"golden_diff": "diff --git a/certbot-nginx/setup.py b/certbot-nginx/setup.py\n--- a/certbot-nginx/setup.py\n+++ b/certbot-nginx/setup.py\n@@ -13,7 +13,7 @@\n 'certbot>=1.1.0',\n 'mock',\n 'PyOpenSSL',\n- 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?\n+ 'pyparsing>=1.5.5', # Python3 support\n 'setuptools',\n 'zope.interface',\n ]\n", "issue": "Required pyparsing version\nI've been experimenting with writing tests using the oldest allowed versions of our Python dependencies. `setup.py` for `letsencrypt-nginx` says it requires `pyparsing>=1.5.5` but when I pin version 1.5.5, I encounter problems. You can see Travis logs of the issue [here](https://travis-ci.org/letsencrypt/letsencrypt/jobs/100739657) and [here](https://travis-ci.org/letsencrypt/letsencrypt/jobs/100739658).\n\nWe should determine what version we require and update `setup.py` accordingly.\n\n", "before_files": [{"content": "import sys\n\nfrom setuptools import find_packages\nfrom setuptools import setup\nfrom setuptools.command.test import test as TestCommand\n\nversion = '1.3.0.dev0'\n\n# Remember to update local-oldest-requirements.txt when changing the minimum\n# acme/certbot version.\ninstall_requires = [\n 'acme>=1.0.0',\n 'certbot>=1.1.0',\n 'mock',\n 'PyOpenSSL',\n 'pyparsing>=1.5.5', # Python3 support; perhaps unnecessary?\n 'setuptools',\n 'zope.interface',\n]\n\n\nclass PyTest(TestCommand):\n user_options = []\n\n def initialize_options(self):\n TestCommand.initialize_options(self)\n self.pytest_args = ''\n\n def run_tests(self):\n import shlex\n # import here, cause outside the eggs aren't loaded\n import pytest\n errno = pytest.main(shlex.split(self.pytest_args))\n sys.exit(errno)\n\n\nsetup(\n name='certbot-nginx',\n version=version,\n description=\"Nginx plugin for Certbot\",\n url='https://github.com/letsencrypt/letsencrypt',\n author=\"Certbot Project\",\n author_email='[email protected]',\n license='Apache License 2.0',\n python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, !=3.4.*',\n classifiers=[\n 'Development Status :: 5 - Production/Stable',\n 'Environment :: Plugins',\n 'Intended Audience :: System Administrators',\n 'License :: OSI Approved :: Apache Software License',\n 'Operating System :: POSIX :: Linux',\n 'Programming Language :: Python',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.5',\n 'Programming Language :: Python :: 3.6',\n 'Programming Language :: Python :: 3.7',\n 'Programming Language :: Python :: 3.8',\n 'Topic :: Internet :: WWW/HTTP',\n 'Topic :: Security',\n 'Topic :: System :: Installation/Setup',\n 'Topic :: System :: Networking',\n 'Topic :: System :: Systems Administration',\n 'Topic :: Utilities',\n ],\n\n packages=find_packages(),\n include_package_data=True,\n install_requires=install_requires,\n entry_points={\n 'certbot.plugins': [\n 'nginx = certbot_nginx._internal.configurator:NginxConfigurator',\n ],\n },\n test_suite='certbot_nginx',\n tests_require=[\"pytest\"],\n cmdclass={\"test\": PyTest},\n)\n", "path": "certbot-nginx/setup.py"}]} | 1,416 | 127 |
gh_patches_debug_4844 | rasdani/github-patches | git_diff | twisted__twisted-11722 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
.hypothesis should be in .gitignore
**Describe the incorrect behavior you saw**
`git diff` shows me an untracked `.hypothesis` directory.
**Describe how to cause this behavior**
I ran the tests.
**Describe the correct behavior you'd like to see**
`.hypothesis` [shouldn't be checked in](https://hypothesis.readthedocs.io/en/latest/database.html#the-hypothesis-example-database), so it should be ignored by `git`.
</issue>
<code>
[start of .github/scripts/check-pr-text.py]
1 #
2 # This script is designed to be called by the GHA workflow.
3 #
4 # It is designed to check that the PR text complies to our dev standards.
5 #
6 # The input is received via the environmet variables:
7 # * PR_TITLE - title of the PR
8 # * PR_BODY - the description of the PR
9 #
10 # To test it run
11 #
12 # $ export PR_TITLE='#1234 Test Title'
13 # $ export PR_BODY='some lines
14 # > Fixes #12345
15 # > more lines'
16 # $ python3 .github/scripts/check-pr-text.py
17 #
18 import os
19 import re
20 import sys
21
22 pr_title = os.environ.get("PR_TITLE", "")
23 pr_body = os.environ.get("PR_BODY", "")
24
25 print("--- DEBUG ---")
26 print(f"Title: {pr_title}")
27 print(f"Body:\n {pr_body}")
28 print("-------------")
29
30
31 def fail(message):
32 print(message)
33 print("Fix the title and then trigger a new push.")
34 print("A re-run for this job will not work.")
35 sys.exit(1)
36
37
38 if not pr_title:
39 fail("Title for the PR not found. " "Maybe missing PR_TITLE env var.")
40
41 if not pr_body:
42 fail("Body for the PR not found. " "Maybe missing PR_BODY env var.")
43
44 title_search = re.search(r"^(#\d+) .+", pr_title)
45 if not title_search:
46 fail(
47 "Title of PR has no issue ID reference. It must look like “#1234 Foo bar baz”."
48 )
49 else:
50 print(f"PR title is complaint for {title_search[1]}. Good job.")
51
52
53 body_search = re.search(r".*Fixes (#\d+).+", pr_body)
54 if not body_search:
55 fail('Body of PR has no "Fixes #12345" issue ID reference.')
56 else:
57 print(f"PR description is complaint for {body_search[1]}. Good job.")
58
59
60 if title_search[1] != body_search[1]:
61 fail("PR title and description have different IDs.")
62
63 # All good.
64 sys.exit(0)
65
[end of .github/scripts/check-pr-text.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/.github/scripts/check-pr-text.py b/.github/scripts/check-pr-text.py
--- a/.github/scripts/check-pr-text.py
+++ b/.github/scripts/check-pr-text.py
@@ -41,7 +41,7 @@
if not pr_body:
fail("Body for the PR not found. " "Maybe missing PR_BODY env var.")
-title_search = re.search(r"^(#\d+) .+", pr_title)
+title_search = re.search(r"^(#\d+):? .+", pr_title)
if not title_search:
fail(
"Title of PR has no issue ID reference. It must look like “#1234 Foo bar baz”."
| {"golden_diff": "diff --git a/.github/scripts/check-pr-text.py b/.github/scripts/check-pr-text.py\n--- a/.github/scripts/check-pr-text.py\n+++ b/.github/scripts/check-pr-text.py\n@@ -41,7 +41,7 @@\n if not pr_body:\n fail(\"Body for the PR not found. \" \"Maybe missing PR_BODY env var.\")\n \n-title_search = re.search(r\"^(#\\d+) .+\", pr_title)\n+title_search = re.search(r\"^(#\\d+):? .+\", pr_title)\n if not title_search:\n fail(\n \"Title of PR has no issue ID reference. It must look like \u201c#1234 Foo bar baz\u201d.\"\n", "issue": ".hypothesis should be in .gitignore\n**Describe the incorrect behavior you saw**\r\n\r\n`git diff` shows me an untracked `.hypothesis` directory.\r\n\r\n**Describe how to cause this behavior**\r\n\r\nI ran the tests.\r\n\r\n**Describe the correct behavior you'd like to see**\r\n\r\n`.hypothesis` [shouldn't be checked in](https://hypothesis.readthedocs.io/en/latest/database.html#the-hypothesis-example-database), so it should be ignored by `git`.\n", "before_files": [{"content": "#\n# This script is designed to be called by the GHA workflow.\n#\n# It is designed to check that the PR text complies to our dev standards.\n#\n# The input is received via the environmet variables:\n# * PR_TITLE - title of the PR\n# * PR_BODY - the description of the PR\n#\n# To test it run\n#\n# $ export PR_TITLE='#1234 Test Title'\n# $ export PR_BODY='some lines\n# > Fixes #12345\n# > more lines'\n# $ python3 .github/scripts/check-pr-text.py\n#\nimport os\nimport re\nimport sys\n\npr_title = os.environ.get(\"PR_TITLE\", \"\")\npr_body = os.environ.get(\"PR_BODY\", \"\")\n\nprint(\"--- DEBUG ---\")\nprint(f\"Title: {pr_title}\")\nprint(f\"Body:\\n {pr_body}\")\nprint(\"-------------\")\n\n\ndef fail(message):\n print(message)\n print(\"Fix the title and then trigger a new push.\")\n print(\"A re-run for this job will not work.\")\n sys.exit(1)\n\n\nif not pr_title:\n fail(\"Title for the PR not found. \" \"Maybe missing PR_TITLE env var.\")\n\nif not pr_body:\n fail(\"Body for the PR not found. \" \"Maybe missing PR_BODY env var.\")\n\ntitle_search = re.search(r\"^(#\\d+) .+\", pr_title)\nif not title_search:\n fail(\n \"Title of PR has no issue ID reference. It must look like \u201c#1234 Foo bar baz\u201d.\"\n )\nelse:\n print(f\"PR title is complaint for {title_search[1]}. Good job.\")\n\n\nbody_search = re.search(r\".*Fixes (#\\d+).+\", pr_body)\nif not body_search:\n fail('Body of PR has no \"Fixes #12345\" issue ID reference.')\nelse:\n print(f\"PR description is complaint for {body_search[1]}. Good job.\")\n\n\nif title_search[1] != body_search[1]:\n fail(\"PR title and description have different IDs.\")\n\n# All good.\nsys.exit(0)\n", "path": ".github/scripts/check-pr-text.py"}]} | 1,222 | 147 |
gh_patches_debug_391 | rasdani/github-patches | git_diff | getmoto__moto-1992 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Replace pyaml dependency with PyYAML
There is a dependency on pyaml in setup.py:
https://github.com/spulec/moto/blob/master/setup.py#L18
I think that this is intended to be PyYAML (which pyaml depends on), and I do not see any usages of pyaml itself in this codebase.
pyaml uses WTFPL (https://github.com/mk-fg/pretty-yaml/blob/master/COPYING) which is not approved by the OSI (https://opensource.org/minutes20090304)
</issue>
<code>
[start of setup.py]
1 #!/usr/bin/env python
2 from __future__ import unicode_literals
3 import setuptools
4 from setuptools import setup, find_packages
5 import sys
6
7
8 install_requires = [
9 "Jinja2>=2.7.3",
10 "boto>=2.36.0",
11 "boto3>=1.6.16",
12 "botocore>=1.12.13",
13 "cryptography>=2.3.0",
14 "requests>=2.5",
15 "xmltodict",
16 "six>1.9",
17 "werkzeug",
18 "pyaml",
19 "pytz",
20 "python-dateutil<3.0.0,>=2.1",
21 "python-jose<3.0.0",
22 "mock",
23 "docker>=2.5.1",
24 "jsondiff==1.1.1",
25 "aws-xray-sdk!=0.96,>=0.93",
26 "responses>=0.9.0",
27 ]
28
29 extras_require = {
30 'server': ['flask'],
31 }
32
33 # https://hynek.me/articles/conditional-python-dependencies/
34 if int(setuptools.__version__.split(".", 1)[0]) < 18:
35 if sys.version_info[0:2] < (3, 3):
36 install_requires.append("backports.tempfile")
37 else:
38 extras_require[":python_version<'3.3'"] = ["backports.tempfile"]
39
40
41 setup(
42 name='moto',
43 version='1.3.7',
44 description='A library that allows your python tests to easily'
45 ' mock out the boto library',
46 author='Steve Pulec',
47 author_email='[email protected]',
48 url='https://github.com/spulec/moto',
49 entry_points={
50 'console_scripts': [
51 'moto_server = moto.server:main',
52 ],
53 },
54 packages=find_packages(exclude=("tests", "tests.*")),
55 install_requires=install_requires,
56 extras_require=extras_require,
57 include_package_data=True,
58 license="Apache",
59 test_suite="tests",
60 classifiers=[
61 "Programming Language :: Python :: 2",
62 "Programming Language :: Python :: 2.7",
63 "Programming Language :: Python :: 3",
64 "Programming Language :: Python :: 3.3",
65 "Programming Language :: Python :: 3.4",
66 "Programming Language :: Python :: 3.5",
67 "Programming Language :: Python :: 3.6",
68 "License :: OSI Approved :: Apache Software License",
69 "Topic :: Software Development :: Testing",
70 ],
71 )
72
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -15,7 +15,7 @@
"xmltodict",
"six>1.9",
"werkzeug",
- "pyaml",
+ "PyYAML",
"pytz",
"python-dateutil<3.0.0,>=2.1",
"python-jose<3.0.0",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -15,7 +15,7 @@\n \"xmltodict\",\n \"six>1.9\",\n \"werkzeug\",\n- \"pyaml\",\n+ \"PyYAML\",\n \"pytz\",\n \"python-dateutil<3.0.0,>=2.1\",\n \"python-jose<3.0.0\",\n", "issue": "Replace pyaml dependency with PyYAML\nThere is a dependency on pyaml in setup.py:\r\n\r\nhttps://github.com/spulec/moto/blob/master/setup.py#L18\r\n\r\nI think that this is intended to be PyYAML (which pyaml depends on), and I do not see any usages of pyaml itself in this codebase.\r\n\r\npyaml uses WTFPL (https://github.com/mk-fg/pretty-yaml/blob/master/COPYING) which is not approved by the OSI (https://opensource.org/minutes20090304)\n", "before_files": [{"content": "#!/usr/bin/env python\nfrom __future__ import unicode_literals\nimport setuptools\nfrom setuptools import setup, find_packages\nimport sys\n\n\ninstall_requires = [\n \"Jinja2>=2.7.3\",\n \"boto>=2.36.0\",\n \"boto3>=1.6.16\",\n \"botocore>=1.12.13\",\n \"cryptography>=2.3.0\",\n \"requests>=2.5\",\n \"xmltodict\",\n \"six>1.9\",\n \"werkzeug\",\n \"pyaml\",\n \"pytz\",\n \"python-dateutil<3.0.0,>=2.1\",\n \"python-jose<3.0.0\",\n \"mock\",\n \"docker>=2.5.1\",\n \"jsondiff==1.1.1\",\n \"aws-xray-sdk!=0.96,>=0.93\",\n \"responses>=0.9.0\",\n]\n\nextras_require = {\n 'server': ['flask'],\n}\n\n# https://hynek.me/articles/conditional-python-dependencies/\nif int(setuptools.__version__.split(\".\", 1)[0]) < 18:\n if sys.version_info[0:2] < (3, 3):\n install_requires.append(\"backports.tempfile\")\nelse:\n extras_require[\":python_version<'3.3'\"] = [\"backports.tempfile\"]\n\n\nsetup(\n name='moto',\n version='1.3.7',\n description='A library that allows your python tests to easily'\n ' mock out the boto library',\n author='Steve Pulec',\n author_email='[email protected]',\n url='https://github.com/spulec/moto',\n entry_points={\n 'console_scripts': [\n 'moto_server = moto.server:main',\n ],\n },\n packages=find_packages(exclude=(\"tests\", \"tests.*\")),\n install_requires=install_requires,\n extras_require=extras_require,\n include_package_data=True,\n license=\"Apache\",\n test_suite=\"tests\",\n classifiers=[\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Topic :: Software Development :: Testing\",\n ],\n)\n", "path": "setup.py"}]} | 1,344 | 99 |
gh_patches_debug_24229 | rasdani/github-patches | git_diff | streamlink__streamlink-2160 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Skai plugin broken
<!--
Thanks for reporting a plugin issue!
USE THE TEMPLATE. Otherwise your plugin issue may be rejected.
First, see the contribution guidelines:
https://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink
Also check the list of open and closed plugin issues:
https://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22
Please see the text preview to avoid unnecessary formatting errors.
-->
## Plugin Issue
- [x] This is a plugin issue and I have read the contribution guidelines.
### Description
Skai plugin is broken since yesterday, but actually it is no longer needed because they provide a lot more stable stream (they don't change stream three or so times a day). **Imho it can be removed.**
New live url as follows:
http://www.skaitv.gr/live
</issue>
<code>
[start of src/streamlink/plugins/skai.py]
1 import re
2
3 from streamlink.plugin import Plugin
4 from streamlink.plugin.api import validate
5
6 YOUTUBE_URL = "https://www.youtube.com/watch?v={0}"
7 _url_re = re.compile(r'http(s)?://www\.skai.gr/.*')
8 _youtube_id = re.compile(r'<span\s+itemprop="contentUrl"\s+href="(.*)"></span>', re.MULTILINE)
9 _youtube_url_schema = validate.Schema(
10 validate.all(
11 validate.transform(_youtube_id.search),
12 validate.any(
13 None,
14 validate.all(
15 validate.get(1),
16 validate.text
17 )
18 )
19 )
20 )
21
22
23 class Skai(Plugin):
24 @classmethod
25 def can_handle_url(cls, url):
26 return _url_re.match(url)
27
28 def _get_streams(self):
29 channel_id = self.session.http.get(self.url, schema=_youtube_url_schema)
30 if channel_id:
31 return self.session.streams(YOUTUBE_URL.format(channel_id))
32
33
34 __plugin__ = Skai
35
[end of src/streamlink/plugins/skai.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/skai.py b/src/streamlink/plugins/skai.py
--- a/src/streamlink/plugins/skai.py
+++ b/src/streamlink/plugins/skai.py
@@ -3,20 +3,15 @@
from streamlink.plugin import Plugin
from streamlink.plugin.api import validate
-YOUTUBE_URL = "https://www.youtube.com/watch?v={0}"
-_url_re = re.compile(r'http(s)?://www\.skai.gr/.*')
-_youtube_id = re.compile(r'<span\s+itemprop="contentUrl"\s+href="(.*)"></span>', re.MULTILINE)
-_youtube_url_schema = validate.Schema(
- validate.all(
- validate.transform(_youtube_id.search),
- validate.any(
- None,
- validate.all(
- validate.get(1),
- validate.text
- )
- )
- )
+
+_url_re = re.compile(r'http(s)?://www\.skai(?:tv)?.gr/.*')
+_api_url = "http://www.skaitv.gr/json/live.php"
+_api_res_schema = validate.Schema(validate.all(
+ validate.get("now"),
+ {
+ "livestream": validate.url()
+ },
+ validate.get("livestream"))
)
@@ -26,9 +21,10 @@
return _url_re.match(url)
def _get_streams(self):
- channel_id = self.session.http.get(self.url, schema=_youtube_url_schema)
- if channel_id:
- return self.session.streams(YOUTUBE_URL.format(channel_id))
+ api_res = self.session.http.get(_api_url)
+ yt_url = self.session.http.json(api_res, schema=_api_res_schema)
+ if yt_url:
+ return self.session.streams(yt_url)
__plugin__ = Skai
| {"golden_diff": "diff --git a/src/streamlink/plugins/skai.py b/src/streamlink/plugins/skai.py\n--- a/src/streamlink/plugins/skai.py\n+++ b/src/streamlink/plugins/skai.py\n@@ -3,20 +3,15 @@\n from streamlink.plugin import Plugin\n from streamlink.plugin.api import validate\n \n-YOUTUBE_URL = \"https://www.youtube.com/watch?v={0}\"\n-_url_re = re.compile(r'http(s)?://www\\.skai.gr/.*')\n-_youtube_id = re.compile(r'<span\\s+itemprop=\"contentUrl\"\\s+href=\"(.*)\"></span>', re.MULTILINE)\n-_youtube_url_schema = validate.Schema(\n- validate.all(\n- validate.transform(_youtube_id.search),\n- validate.any(\n- None,\n- validate.all(\n- validate.get(1),\n- validate.text\n- )\n- )\n- )\n+\n+_url_re = re.compile(r'http(s)?://www\\.skai(?:tv)?.gr/.*')\n+_api_url = \"http://www.skaitv.gr/json/live.php\"\n+_api_res_schema = validate.Schema(validate.all(\n+ validate.get(\"now\"),\n+ {\n+ \"livestream\": validate.url()\n+ },\n+ validate.get(\"livestream\"))\n )\n \n \n@@ -26,9 +21,10 @@\n return _url_re.match(url)\n \n def _get_streams(self):\n- channel_id = self.session.http.get(self.url, schema=_youtube_url_schema)\n- if channel_id:\n- return self.session.streams(YOUTUBE_URL.format(channel_id))\n+ api_res = self.session.http.get(_api_url)\n+ yt_url = self.session.http.json(api_res, schema=_api_res_schema)\n+ if yt_url:\n+ return self.session.streams(yt_url)\n \n \n __plugin__ = Skai\n", "issue": "Skai plugin broken\n<!--\r\nThanks for reporting a plugin issue!\r\nUSE THE TEMPLATE. Otherwise your plugin issue may be rejected.\r\n\r\nFirst, see the contribution guidelines:\r\nhttps://github.com/streamlink/streamlink/blob/master/CONTRIBUTING.md#contributing-to-streamlink\r\n\r\nAlso check the list of open and closed plugin issues:\r\nhttps://github.com/streamlink/streamlink/issues?q=is%3Aissue+label%3A%22plugin+issue%22\r\n\r\nPlease see the text preview to avoid unnecessary formatting errors.\r\n-->\r\n\r\n\r\n## Plugin Issue\r\n\r\n- [x] This is a plugin issue and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\n\r\nSkai plugin is broken since yesterday, but actually it is no longer needed because they provide a lot more stable stream (they don't change stream three or so times a day). **Imho it can be removed.**\r\n\r\nNew live url as follows:\r\n\r\nhttp://www.skaitv.gr/live\r\n\n", "before_files": [{"content": "import re\n\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import validate\n\nYOUTUBE_URL = \"https://www.youtube.com/watch?v={0}\"\n_url_re = re.compile(r'http(s)?://www\\.skai.gr/.*')\n_youtube_id = re.compile(r'<span\\s+itemprop=\"contentUrl\"\\s+href=\"(.*)\"></span>', re.MULTILINE)\n_youtube_url_schema = validate.Schema(\n validate.all(\n validate.transform(_youtube_id.search),\n validate.any(\n None,\n validate.all(\n validate.get(1),\n validate.text\n )\n )\n )\n)\n\n\nclass Skai(Plugin):\n @classmethod\n def can_handle_url(cls, url):\n return _url_re.match(url)\n\n def _get_streams(self):\n channel_id = self.session.http.get(self.url, schema=_youtube_url_schema)\n if channel_id:\n return self.session.streams(YOUTUBE_URL.format(channel_id))\n\n\n__plugin__ = Skai\n", "path": "src/streamlink/plugins/skai.py"}]} | 1,015 | 401 |
gh_patches_debug_6369 | rasdani/github-patches | git_diff | ivy-llc__ivy-18211 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
selu
#14951
</issue>
<code>
[start of ivy/functional/frontends/mindspore/ops/function/nn_func.py]
1 """Includes Mindspore Frontend functions listed in the TODO list
2 https://github.com/unifyai/ivy/issues/14951."""
3
4 # local
5 import ivy
6 from ivy.func_wrapper import with_supported_dtypes
7 from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
8
9
10 @with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
11 @to_ivy_arrays_and_back
12 def softsign(x):
13 return ivy.divide(x, ivy.add(1, ivy.abs(x)))
14
[end of ivy/functional/frontends/mindspore/ops/function/nn_func.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py
+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py
@@ -7,6 +7,12 @@
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back
+@with_supported_dtypes({"2.0.0 and below": ("float16", "float32")}, "mindspore")
+@to_ivy_arrays_and_back
+def selu(input_x):
+ return ivy.selu(input_x)
+
+
@with_supported_dtypes({"2.0 and below": ("float16", "float32")}, "mindspore")
@to_ivy_arrays_and_back
def softsign(x):
| {"golden_diff": "diff --git a/ivy/functional/frontends/mindspore/ops/function/nn_func.py b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n--- a/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n+++ b/ivy/functional/frontends/mindspore/ops/function/nn_func.py\n@@ -7,6 +7,12 @@\n from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n \n \n+@with_supported_dtypes({\"2.0.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n+@to_ivy_arrays_and_back\n+def selu(input_x):\n+ return ivy.selu(input_x)\n+\n+ \n @with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n @to_ivy_arrays_and_back\n def softsign(x):\n", "issue": "selu\n#14951 \n", "before_files": [{"content": "\"\"\"Includes Mindspore Frontend functions listed in the TODO list\nhttps://github.com/unifyai/ivy/issues/14951.\"\"\"\n\n# local\nimport ivy\nfrom ivy.func_wrapper import with_supported_dtypes\nfrom ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back\n\n\n@with_supported_dtypes({\"2.0 and below\": (\"float16\", \"float32\")}, \"mindspore\")\n@to_ivy_arrays_and_back\ndef softsign(x):\n return ivy.divide(x, ivy.add(1, ivy.abs(x)))\n", "path": "ivy/functional/frontends/mindspore/ops/function/nn_func.py"}]} | 718 | 212 |
gh_patches_debug_5837 | rasdani/github-patches | git_diff | googleapis__google-api-python-client-1639 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
docs: 404 error while accessing contribution guide
When I was trying to access the contribution guide mentioned in `CONTRIBUTING.rst`, I am getting 404 error - https://googleapis.github.io/google-api-python-client/contributing.html


</issue>
<code>
[start of owlbot.py]
1 # Copyright 2020 Google LLC
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import synthtool as s
16 from synthtool import gcp
17
18 from synthtool.languages import python
19
20 common = gcp.CommonTemplates()
21
22 # ----------------------------------------------------------------------------
23 # Add templated files
24 # ----------------------------------------------------------------------------
25 templated_files = common.py_library()
26
27 # Copy kokoro configs.
28 # Docs are excluded as repo docs cannot currently be generated using sphinx.
29 s.move(templated_files / '.kokoro', excludes=['**/docs/*', 'publish-docs.sh'])
30 s.move(templated_files / '.trampolinerc') # config file for trampoline_v2
31
32 # Also move issue templates
33 s.move(templated_files / '.github', excludes=['CODEOWNERS'])
34
35 # Move scripts folder needed for samples CI
36 s.move(templated_files / 'scripts')
37
38 # ----------------------------------------------------------------------------
39 # Samples templates
40 # ----------------------------------------------------------------------------
41
42 python.py_samples(skip_readmes=True)
43
[end of owlbot.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/owlbot.py b/owlbot.py
--- a/owlbot.py
+++ b/owlbot.py
@@ -35,6 +35,9 @@
# Move scripts folder needed for samples CI
s.move(templated_files / 'scripts')
+# Copy CONTRIBUTING.rst
+s.move(templated_files / 'CONTRIBUTING.rst')
+
# ----------------------------------------------------------------------------
# Samples templates
# ----------------------------------------------------------------------------
| {"golden_diff": "diff --git a/owlbot.py b/owlbot.py\n--- a/owlbot.py\n+++ b/owlbot.py\n@@ -35,6 +35,9 @@\n # Move scripts folder needed for samples CI\n s.move(templated_files / 'scripts')\n \n+# Copy CONTRIBUTING.rst\n+s.move(templated_files / 'CONTRIBUTING.rst')\n+\n # ----------------------------------------------------------------------------\n # Samples templates\n # ----------------------------------------------------------------------------\n", "issue": "docs: 404 error while accessing contribution guide\nWhen I was trying to access the contribution guide mentioned in `CONTRIBUTING.rst`, I am getting 404 error - https://googleapis.github.io/google-api-python-client/contributing.html\r\n\r\n\r\n\r\n\r\n\r\n\n", "before_files": [{"content": "# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport synthtool as s\nfrom synthtool import gcp\n\nfrom synthtool.languages import python\n\ncommon = gcp.CommonTemplates()\n\n# ----------------------------------------------------------------------------\n# Add templated files\n# ----------------------------------------------------------------------------\ntemplated_files = common.py_library()\n\n# Copy kokoro configs.\n# Docs are excluded as repo docs cannot currently be generated using sphinx.\ns.move(templated_files / '.kokoro', excludes=['**/docs/*', 'publish-docs.sh'])\ns.move(templated_files / '.trampolinerc') # config file for trampoline_v2\n\n# Also move issue templates\ns.move(templated_files / '.github', excludes=['CODEOWNERS'])\n\n# Move scripts folder needed for samples CI\ns.move(templated_files / 'scripts')\n\n# ----------------------------------------------------------------------------\n# Samples templates\n# ----------------------------------------------------------------------------\n\npython.py_samples(skip_readmes=True)\n", "path": "owlbot.py"}]} | 1,105 | 88 |
gh_patches_debug_12483 | rasdani/github-patches | git_diff | freedomofpress__securedrop-2929 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Bump Ansible to 2.4
## Description
The current version of Ansible in the admin workstation uses PyCrypto==2.6.1 as a dependency, which is causing CI safety failures when checking the admin pip requirements due to the fact that there is a CVE in PyCrypto 2.6.1. See upstream discussion in https://github.com/ansible/ansible/issues/23179.
We should bump to a more recent version of Ansible in the admin workstations that does not have PyCrypto as a dependency
## User Stories
As a SecureDrop administrator, I don't want to run software relying on unmaintained dependencies.
Temporarily disable safety check
## Description
We'll need to temporarily disable safety in order to merge until #2926 is resolved (and we'll need to cherry pick the disabling of safety into the 0.5.2 release branch).
## User Stories
As a SecureDrop maintainer, I don't want to merge with failing CI.
</issue>
<code>
[start of install_files/ansible-base/callback_plugins/ansible_version_check.py]
1 # -*- encoding:utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import sys
5
6 import ansible
7
8 try:
9 # Version 2.0+
10 from ansible.plugins.callback import CallbackBase
11 except ImportError:
12 CallbackBase = object
13
14
15 def print_red_bold(text):
16 print('\x1b[31;1m' + text + '\x1b[0m')
17
18
19 class CallbackModule(CallbackBase):
20 def __init__(self):
21 # Can't use `on_X` because this isn't forwards compatible with Ansible 2.0+
22 required_version = '2.3.2' # Keep synchronized with group_vars/all/main.yml
23 if not ansible.__version__.startswith(required_version):
24 print_red_bold(
25 "SecureDrop restriction: only Ansible {version}.* is supported. "
26 .format(version=required_version)
27 )
28 sys.exit(1)
29
[end of install_files/ansible-base/callback_plugins/ansible_version_check.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/install_files/ansible-base/callback_plugins/ansible_version_check.py b/install_files/ansible-base/callback_plugins/ansible_version_check.py
--- a/install_files/ansible-base/callback_plugins/ansible_version_check.py
+++ b/install_files/ansible-base/callback_plugins/ansible_version_check.py
@@ -19,7 +19,7 @@
class CallbackModule(CallbackBase):
def __init__(self):
# Can't use `on_X` because this isn't forwards compatible with Ansible 2.0+
- required_version = '2.3.2' # Keep synchronized with group_vars/all/main.yml
+ required_version = '2.4.2' # Keep synchronized with requirements files
if not ansible.__version__.startswith(required_version):
print_red_bold(
"SecureDrop restriction: only Ansible {version}.* is supported. "
| {"golden_diff": "diff --git a/install_files/ansible-base/callback_plugins/ansible_version_check.py b/install_files/ansible-base/callback_plugins/ansible_version_check.py\n--- a/install_files/ansible-base/callback_plugins/ansible_version_check.py\n+++ b/install_files/ansible-base/callback_plugins/ansible_version_check.py\n@@ -19,7 +19,7 @@\n class CallbackModule(CallbackBase):\n def __init__(self):\n # Can't use `on_X` because this isn't forwards compatible with Ansible 2.0+\n- required_version = '2.3.2' # Keep synchronized with group_vars/all/main.yml\n+ required_version = '2.4.2' # Keep synchronized with requirements files\n if not ansible.__version__.startswith(required_version):\n print_red_bold(\n \"SecureDrop restriction: only Ansible {version}.* is supported. \"\n", "issue": "Bump Ansible to 2.4\n## Description\r\n\r\nThe current version of Ansible in the admin workstation uses PyCrypto==2.6.1 as a dependency, which is causing CI safety failures when checking the admin pip requirements due to the fact that there is a CVE in PyCrypto 2.6.1. See upstream discussion in https://github.com/ansible/ansible/issues/23179. \r\n\r\nWe should bump to a more recent version of Ansible in the admin workstations that does not have PyCrypto as a dependency\r\n\r\n## User Stories\r\n\r\nAs a SecureDrop administrator, I don't want to run software relying on unmaintained dependencies.\nTemporarily disable safety check\n## Description\r\n\r\nWe'll need to temporarily disable safety in order to merge until #2926 is resolved (and we'll need to cherry pick the disabling of safety into the 0.5.2 release branch). \r\n\r\n## User Stories\r\n\r\nAs a SecureDrop maintainer, I don't want to merge with failing CI. \n", "before_files": [{"content": "# -*- encoding:utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport sys\n\nimport ansible\n\ntry:\n # Version 2.0+\n from ansible.plugins.callback import CallbackBase\nexcept ImportError:\n CallbackBase = object\n\n\ndef print_red_bold(text):\n print('\\x1b[31;1m' + text + '\\x1b[0m')\n\n\nclass CallbackModule(CallbackBase):\n def __init__(self):\n # Can't use `on_X` because this isn't forwards compatible with Ansible 2.0+\n required_version = '2.3.2' # Keep synchronized with group_vars/all/main.yml\n if not ansible.__version__.startswith(required_version):\n print_red_bold(\n \"SecureDrop restriction: only Ansible {version}.* is supported. \"\n .format(version=required_version)\n )\n sys.exit(1)\n", "path": "install_files/ansible-base/callback_plugins/ansible_version_check.py"}]} | 1,013 | 186 |
gh_patches_debug_6736 | rasdani/github-patches | git_diff | freqtrade__freqtrade-3490 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
I get the same profit / loss report in all time frames
<!--
Have you searched for similar issues before posting it? Yes
If you have discovered a bug in the bot, please [search our issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: ____Ubuntu 18.04.4 LTS
* Python Version: _____Python 3.6.9
* CCXT version: _____ ccxt==1.29.5
* Freqtrade Version: ____ freqtrade develop-761407f7
Today, I updated with the method below.
cd freqtrade
git pull
python3 -m pip install -r requirements.txt --user
python3 -m pip install -r requirements-hyperopt.txt --user
python3 -m pip install -r requirements-plot.txt --user
and
freqtrade download-data --days 365 --timeframes 5m 15m 30m 1h 4h 1d
https://github.com/freqtrade/freqtrade/issues/3104 (I keep getting this error on 1 and 5 minute candles)
I use StaticPairList
I did backtest as below
freqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 1m
freqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 5m
freqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 15m
freqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 30m
freqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 4h
freqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 1d
The problem I encountered:
I get the same profit / loss report in all time frames
</issue>
<code>
[start of freqtrade/configuration/deprecated_settings.py]
1 """
2 Functions to handle deprecated settings
3 """
4
5 import logging
6 from typing import Any, Dict
7
8 from freqtrade.exceptions import OperationalException
9
10
11 logger = logging.getLogger(__name__)
12
13
14 def check_conflicting_settings(config: Dict[str, Any],
15 section1: str, name1: str,
16 section2: str, name2: str) -> None:
17 section1_config = config.get(section1, {})
18 section2_config = config.get(section2, {})
19 if name1 in section1_config and name2 in section2_config:
20 raise OperationalException(
21 f"Conflicting settings `{section1}.{name1}` and `{section2}.{name2}` "
22 "(DEPRECATED) detected in the configuration file. "
23 "This deprecated setting will be removed in the next versions of Freqtrade. "
24 f"Please delete it from your configuration and use the `{section1}.{name1}` "
25 "setting instead."
26 )
27
28
29 def process_deprecated_setting(config: Dict[str, Any],
30 section1: str, name1: str,
31 section2: str, name2: str) -> None:
32 section2_config = config.get(section2, {})
33
34 if name2 in section2_config:
35 logger.warning(
36 "DEPRECATED: "
37 f"The `{section2}.{name2}` setting is deprecated and "
38 "will be removed in the next versions of Freqtrade. "
39 f"Please use the `{section1}.{name1}` setting in your configuration instead."
40 )
41 section1_config = config.get(section1, {})
42 section1_config[name1] = section2_config[name2]
43
44
45 def process_temporary_deprecated_settings(config: Dict[str, Any]) -> None:
46
47 check_conflicting_settings(config, 'ask_strategy', 'use_sell_signal',
48 'experimental', 'use_sell_signal')
49 check_conflicting_settings(config, 'ask_strategy', 'sell_profit_only',
50 'experimental', 'sell_profit_only')
51 check_conflicting_settings(config, 'ask_strategy', 'ignore_roi_if_buy_signal',
52 'experimental', 'ignore_roi_if_buy_signal')
53
54 process_deprecated_setting(config, 'ask_strategy', 'use_sell_signal',
55 'experimental', 'use_sell_signal')
56 process_deprecated_setting(config, 'ask_strategy', 'sell_profit_only',
57 'experimental', 'sell_profit_only')
58 process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',
59 'experimental', 'ignore_roi_if_buy_signal')
60
61 if (config.get('edge', {}).get('enabled', False)
62 and 'capital_available_percentage' in config.get('edge', {})):
63 raise OperationalException(
64 "DEPRECATED: "
65 "Using 'edge.capital_available_percentage' has been deprecated in favor of "
66 "'tradable_balance_ratio'. Please migrate your configuration to "
67 "'tradable_balance_ratio' and remove 'capital_available_percentage' "
68 "from the edge configuration."
69 )
70 if 'ticker_interval' in config:
71 logger.warning(
72 "DEPRECATED: "
73 "Please use 'timeframe' instead of 'ticker_interval."
74 )
75 config['timeframe'] = config['ticker_interval']
76
[end of freqtrade/configuration/deprecated_settings.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/freqtrade/configuration/deprecated_settings.py b/freqtrade/configuration/deprecated_settings.py
--- a/freqtrade/configuration/deprecated_settings.py
+++ b/freqtrade/configuration/deprecated_settings.py
@@ -72,4 +72,9 @@
"DEPRECATED: "
"Please use 'timeframe' instead of 'ticker_interval."
)
+ if 'timeframe' in config:
+ raise OperationalException(
+ "Both 'timeframe' and 'ticker_interval' detected."
+ "Please remove 'ticker_interval' from your configuration to continue operating."
+ )
config['timeframe'] = config['ticker_interval']
| {"golden_diff": "diff --git a/freqtrade/configuration/deprecated_settings.py b/freqtrade/configuration/deprecated_settings.py\n--- a/freqtrade/configuration/deprecated_settings.py\n+++ b/freqtrade/configuration/deprecated_settings.py\n@@ -72,4 +72,9 @@\n \"DEPRECATED: \"\n \"Please use 'timeframe' instead of 'ticker_interval.\"\n )\n+ if 'timeframe' in config:\n+ raise OperationalException(\n+ \"Both 'timeframe' and 'ticker_interval' detected.\"\n+ \"Please remove 'ticker_interval' from your configuration to continue operating.\"\n+ )\n config['timeframe'] = config['ticker_interval']\n", "issue": "I get the same profit / loss report in all time frames\n<!-- \r\nHave you searched for similar issues before posting it? Yes\r\n\r\nIf you have discovered a bug in the bot, please [search our issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue). \r\nIf it hasn't been reported, please create a new issue.\r\n\r\nPlease do not use bug reports to request new features.\r\n-->\r\n\r\n## Describe your environment\r\n\r\n * Operating system: ____Ubuntu 18.04.4 LTS\r\n * Python Version: _____Python 3.6.9\r\n * CCXT version: _____ ccxt==1.29.5\r\n * Freqtrade Version: ____ freqtrade develop-761407f7\r\n \r\nToday, I updated with the method below.\r\ncd freqtrade\r\ngit pull\r\npython3 -m pip install -r requirements.txt --user\r\npython3 -m pip install -r requirements-hyperopt.txt --user\r\npython3 -m pip install -r requirements-plot.txt --user\r\n\r\nand\r\nfreqtrade download-data --days 365 --timeframes 5m 15m 30m 1h 4h 1d\r\nhttps://github.com/freqtrade/freqtrade/issues/3104 (I keep getting this error on 1 and 5 minute candles)\r\n\r\nI use StaticPairList\r\n\r\nI did backtest as below\r\nfreqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 1m\r\nfreqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 5m\r\nfreqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 15m\r\nfreqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 30m\r\nfreqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 4h\r\nfreqtrade backtesting --strategy strateji --timerange=20200101- --ticker-interval 1d\r\n\r\nThe problem I encountered:\r\nI get the same profit / loss report in all time frames\n", "before_files": [{"content": "\"\"\"\nFunctions to handle deprecated settings\n\"\"\"\n\nimport logging\nfrom typing import Any, Dict\n\nfrom freqtrade.exceptions import OperationalException\n\n\nlogger = logging.getLogger(__name__)\n\n\ndef check_conflicting_settings(config: Dict[str, Any],\n section1: str, name1: str,\n section2: str, name2: str) -> None:\n section1_config = config.get(section1, {})\n section2_config = config.get(section2, {})\n if name1 in section1_config and name2 in section2_config:\n raise OperationalException(\n f\"Conflicting settings `{section1}.{name1}` and `{section2}.{name2}` \"\n \"(DEPRECATED) detected in the configuration file. \"\n \"This deprecated setting will be removed in the next versions of Freqtrade. 
\"\n f\"Please delete it from your configuration and use the `{section1}.{name1}` \"\n \"setting instead.\"\n )\n\n\ndef process_deprecated_setting(config: Dict[str, Any],\n section1: str, name1: str,\n section2: str, name2: str) -> None:\n section2_config = config.get(section2, {})\n\n if name2 in section2_config:\n logger.warning(\n \"DEPRECATED: \"\n f\"The `{section2}.{name2}` setting is deprecated and \"\n \"will be removed in the next versions of Freqtrade. \"\n f\"Please use the `{section1}.{name1}` setting in your configuration instead.\"\n )\n section1_config = config.get(section1, {})\n section1_config[name1] = section2_config[name2]\n\n\ndef process_temporary_deprecated_settings(config: Dict[str, Any]) -> None:\n\n check_conflicting_settings(config, 'ask_strategy', 'use_sell_signal',\n 'experimental', 'use_sell_signal')\n check_conflicting_settings(config, 'ask_strategy', 'sell_profit_only',\n 'experimental', 'sell_profit_only')\n check_conflicting_settings(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n\n process_deprecated_setting(config, 'ask_strategy', 'use_sell_signal',\n 'experimental', 'use_sell_signal')\n process_deprecated_setting(config, 'ask_strategy', 'sell_profit_only',\n 'experimental', 'sell_profit_only')\n process_deprecated_setting(config, 'ask_strategy', 'ignore_roi_if_buy_signal',\n 'experimental', 'ignore_roi_if_buy_signal')\n\n if (config.get('edge', {}).get('enabled', False)\n and 'capital_available_percentage' in config.get('edge', {})):\n raise OperationalException(\n \"DEPRECATED: \"\n \"Using 'edge.capital_available_percentage' has been deprecated in favor of \"\n \"'tradable_balance_ratio'. Please migrate your configuration to \"\n \"'tradable_balance_ratio' and remove 'capital_available_percentage' \"\n \"from the edge configuration.\"\n )\n if 'ticker_interval' in config:\n logger.warning(\n \"DEPRECATED: \"\n \"Please use 'timeframe' instead of 'ticker_interval.\"\n )\n config['timeframe'] = config['ticker_interval']\n", "path": "freqtrade/configuration/deprecated_settings.py"}]} | 1,854 | 141 |
gh_patches_debug_14756 | rasdani/github-patches | git_diff | translate__pootle-4277 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
`sync_stores` doesn't handle disabled projects
We addressed the similar issue for `update_stores` #4198.
`sync_stores` should work for disabled projects as well https://github.com/translate/pootle/issues/4198#issuecomment-161717337.
</issue>
<code>
[start of pootle/apps/pootle_app/management/commands/sync_stores.py]
1 #!/usr/bin/env python
2 # -*- coding: utf-8 -*-
3 #
4 # Copyright (C) Pootle contributors.
5 #
6 # This file is a part of the Pootle project. It is distributed under the GPL3
7 # or later license. See the LICENSE file for a copy of the license and the
8 # AUTHORS file for copyright and authorship information.
9
10 import os
11 os.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'
12 from optparse import make_option
13
14 from pootle_app.management.commands import PootleCommand
15
16
17 class Command(PootleCommand):
18 option_list = PootleCommand.option_list + (
19 make_option(
20 '--overwrite',
21 action='store_true',
22 dest='overwrite',
23 default=False,
24 help="Don't just save translations, but "
25 "overwrite files to reflect state in database",
26 ),
27 make_option(
28 '--skip-missing',
29 action='store_true',
30 dest='skip_missing',
31 default=False,
32 help="Ignore missing files on disk",
33 ),
34 make_option(
35 '--force',
36 action='store_true',
37 dest='force',
38 default=False,
39 help="Don't ignore stores synced after last change",
40 ),
41 )
42 help = "Save new translations to disk manually."
43
44 def handle_all_stores(self, translation_project, **options):
45 translation_project.sync(
46 conservative=not options['overwrite'],
47 skip_missing=options['skip_missing'],
48 only_newer=not options['force']
49 )
50
51 def handle_store(self, store, **options):
52 store.sync(
53 conservative=not options['overwrite'],
54 update_structure=options['overwrite'],
55 skip_missing=options['skip_missing'],
56 only_newer=not options['force']
57 )
58
[end of pootle/apps/pootle_app/management/commands/sync_stores.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pootle/apps/pootle_app/management/commands/sync_stores.py b/pootle/apps/pootle_app/management/commands/sync_stores.py
--- a/pootle/apps/pootle_app/management/commands/sync_stores.py
+++ b/pootle/apps/pootle_app/management/commands/sync_stores.py
@@ -40,13 +40,15 @@
),
)
help = "Save new translations to disk manually."
+ process_disabled_projects = True
def handle_all_stores(self, translation_project, **options):
- translation_project.sync(
- conservative=not options['overwrite'],
- skip_missing=options['skip_missing'],
- only_newer=not options['force']
- )
+ if translation_project.directory_exists_on_disk():
+ translation_project.sync(
+ conservative=not options['overwrite'],
+ skip_missing=options['skip_missing'],
+ only_newer=not options['force']
+ )
def handle_store(self, store, **options):
store.sync(
| {"golden_diff": "diff --git a/pootle/apps/pootle_app/management/commands/sync_stores.py b/pootle/apps/pootle_app/management/commands/sync_stores.py\n--- a/pootle/apps/pootle_app/management/commands/sync_stores.py\n+++ b/pootle/apps/pootle_app/management/commands/sync_stores.py\n@@ -40,13 +40,15 @@\n ),\n )\n help = \"Save new translations to disk manually.\"\n+ process_disabled_projects = True\n \n def handle_all_stores(self, translation_project, **options):\n- translation_project.sync(\n- conservative=not options['overwrite'],\n- skip_missing=options['skip_missing'],\n- only_newer=not options['force']\n- )\n+ if translation_project.directory_exists_on_disk():\n+ translation_project.sync(\n+ conservative=not options['overwrite'],\n+ skip_missing=options['skip_missing'],\n+ only_newer=not options['force']\n+ )\n \n def handle_store(self, store, **options):\n store.sync(\n", "issue": "`sync_stores` doesn't handle disabled projects\nWe addressed the similar issue for `update_stores` #4198.\n`sync_stores` should work for disabled projects as well https://github.com/translate/pootle/issues/4198#issuecomment-161717337.\n\n", "before_files": [{"content": "#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#\n# Copyright (C) Pootle contributors.\n#\n# This file is a part of the Pootle project. It is distributed under the GPL3\n# or later license. See the LICENSE file for a copy of the license and the\n# AUTHORS file for copyright and authorship information.\n\nimport os\nos.environ['DJANGO_SETTINGS_MODULE'] = 'pootle.settings'\nfrom optparse import make_option\n\nfrom pootle_app.management.commands import PootleCommand\n\n\nclass Command(PootleCommand):\n option_list = PootleCommand.option_list + (\n make_option(\n '--overwrite',\n action='store_true',\n dest='overwrite',\n default=False,\n help=\"Don't just save translations, but \"\n \"overwrite files to reflect state in database\",\n ),\n make_option(\n '--skip-missing',\n action='store_true',\n dest='skip_missing',\n default=False,\n help=\"Ignore missing files on disk\",\n ),\n make_option(\n '--force',\n action='store_true',\n dest='force',\n default=False,\n help=\"Don't ignore stores synced after last change\",\n ),\n )\n help = \"Save new translations to disk manually.\"\n\n def handle_all_stores(self, translation_project, **options):\n translation_project.sync(\n conservative=not options['overwrite'],\n skip_missing=options['skip_missing'],\n only_newer=not options['force']\n )\n\n def handle_store(self, store, **options):\n store.sync(\n conservative=not options['overwrite'],\n update_structure=options['overwrite'],\n skip_missing=options['skip_missing'],\n only_newer=not options['force']\n )\n", "path": "pootle/apps/pootle_app/management/commands/sync_stores.py"}]} | 1,105 | 234 |
gh_patches_debug_40664 | rasdani/github-patches | git_diff | medtagger__MedTagger-40 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add "radon" tool to the Backend and enable it in CI
## Expected Behavior
Python code in backend should be validated by "radon" tool in CI.
## Actual Behavior
MedTagger backend uses a few linters already but we should add more validators to increase automation and code quality.
</issue>
<code>
[start of backend/scripts/migrate_hbase.py]
1 """Script that can migrate existing HBase schema or prepare empty database with given schema.
2
3 How to use it?
4 --------------
5 Run this script just by executing following line in the root directory of this project:
6
7 (venv) $ python3.6 scripts/migrate_hbase.py
8
9 """
10 import argparse
11 import logging
12 import logging.config
13
14 from medtagger.clients.hbase_client import HBaseClient
15 from utils import get_connection_to_hbase, user_agrees
16
17 logging.config.fileConfig('logging.conf')
18 logger = logging.getLogger(__name__)
19
20 parser = argparse.ArgumentParser(description='HBase migration.')
21 parser.add_argument('-y', '--yes', dest='yes', action='store_const', const=True)
22 args = parser.parse_args()
23
24
25 HBASE_SCHEMA = HBaseClient.HBASE_SCHEMA
26 connection = get_connection_to_hbase()
27 existing_tables = set(connection.tables())
28 schema_tables = set(HBASE_SCHEMA)
29 tables_to_drop = list(existing_tables - schema_tables)
30 for table_name in tables_to_drop:
31 if args.yes or user_agrees('Do you want to drop table "{}"?'.format(table_name)):
32 logger.info('Dropping table "%s".', table_name)
33 table = connection.table(table_name)
34 table.drop()
35
36 for table_name in HBASE_SCHEMA:
37 table = connection.table(table_name)
38 if not table.exists():
39 if args.yes or user_agrees('Do you want to create table "{}"?'.format(table_name)):
40 list_of_columns = HBASE_SCHEMA[table_name]
41 logger.info('Creating table "%s" with columns %s.', table_name, list_of_columns)
42 table.create(*list_of_columns)
43 table.enable_if_exists_checks()
44 else:
45 existing_column_families = set(table.columns())
46 schema_column_families = set(HBASE_SCHEMA[table_name])
47 columns_to_add = list(schema_column_families - existing_column_families)
48 columns_to_drop = list(existing_column_families - schema_column_families)
49
50 if columns_to_add:
51 if args.yes or user_agrees('Do you want to add columns {} to "{}"?'.format(columns_to_add, table_name)):
52 table.add_columns(*columns_to_add)
53
54 if columns_to_drop:
55 if args.yes or user_agrees('Do you want to drop columns {} from "{}"?'.format(columns_to_drop, table_name)):
56 table.drop_columns(*columns_to_drop)
57
[end of backend/scripts/migrate_hbase.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/backend/scripts/migrate_hbase.py b/backend/scripts/migrate_hbase.py
--- a/backend/scripts/migrate_hbase.py
+++ b/backend/scripts/migrate_hbase.py
@@ -11,6 +11,8 @@
import logging
import logging.config
+from starbase import Table
+
from medtagger.clients.hbase_client import HBaseClient
from utils import get_connection_to_hbase, user_agrees
@@ -22,35 +24,59 @@
args = parser.parse_args()
-HBASE_SCHEMA = HBaseClient.HBASE_SCHEMA
-connection = get_connection_to_hbase()
-existing_tables = set(connection.tables())
-schema_tables = set(HBASE_SCHEMA)
-tables_to_drop = list(existing_tables - schema_tables)
-for table_name in tables_to_drop:
+def create_new_table(table: Table) -> None:
+ """Create new table once user agrees on that."""
+ table_name = table.name
+ if args.yes or user_agrees('Do you want to create table "{}"?'.format(table_name)):
+ list_of_columns = HBaseClient.HBASE_SCHEMA[table_name]
+ logger.info('Creating table "%s" with columns %s.', table_name, list_of_columns)
+ table.create(*list_of_columns)
+ table.enable_if_exists_checks()
+
+
+def update_table_schema(table: Table) -> None:
+ """Update table schema once user agrees on that."""
+ table_name = table.name
+ existing_column_families = set(table.columns())
+ schema_column_families = set(HBaseClient.HBASE_SCHEMA[table_name])
+ columns_to_add = list(schema_column_families - existing_column_families)
+ columns_to_drop = list(existing_column_families - schema_column_families)
+
+ if columns_to_add:
+ if args.yes or user_agrees('Do you want to add columns {} to "{}"?'.format(columns_to_add, table_name)):
+ table.add_columns(*columns_to_add)
+
+ if columns_to_drop:
+ if args.yes or user_agrees('Do you want to drop columns {} from "{}"?'.format(columns_to_drop, table_name)):
+ table.drop_columns(*columns_to_drop)
+
+
+def drop_table(table: Table) -> None:
+ """Drop table once user agrees on that."""
+ table_name = table.name
if args.yes or user_agrees('Do you want to drop table "{}"?'.format(table_name)):
logger.info('Dropping table "%s".', table_name)
- table = connection.table(table_name)
table.drop()
-for table_name in HBASE_SCHEMA:
- table = connection.table(table_name)
- if not table.exists():
- if args.yes or user_agrees('Do you want to create table "{}"?'.format(table_name)):
- list_of_columns = HBASE_SCHEMA[table_name]
- logger.info('Creating table "%s" with columns %s.', table_name, list_of_columns)
- table.create(*list_of_columns)
- table.enable_if_exists_checks()
- else:
- existing_column_families = set(table.columns())
- schema_column_families = set(HBASE_SCHEMA[table_name])
- columns_to_add = list(schema_column_families - existing_column_families)
- columns_to_drop = list(existing_column_families - schema_column_families)
-
- if columns_to_add:
- if args.yes or user_agrees('Do you want to add columns {} to "{}"?'.format(columns_to_add, table_name)):
- table.add_columns(*columns_to_add)
-
- if columns_to_drop:
- if args.yes or user_agrees('Do you want to drop columns {} from "{}"?'.format(columns_to_drop, table_name)):
- table.drop_columns(*columns_to_drop)
+
+def main() -> None:
+ """Run main functionality of this script."""
+ connection = get_connection_to_hbase()
+ existing_tables = set(connection.tables())
+ schema_tables = set(HBaseClient.HBASE_SCHEMA)
+ tables_to_drop = list(existing_tables - schema_tables)
+
+ for table_name in tables_to_drop:
+ table = connection.table(table_name)
+ drop_table(table)
+
+ for table_name in HBaseClient.HBASE_SCHEMA:
+ table = connection.table(table_name)
+ if not table.exists():
+ create_new_table(table)
+ else:
+ update_table_schema(table)
+
+
+if __name__ == '__main__':
+ main()
| {"golden_diff": "diff --git a/backend/scripts/migrate_hbase.py b/backend/scripts/migrate_hbase.py\n--- a/backend/scripts/migrate_hbase.py\n+++ b/backend/scripts/migrate_hbase.py\n@@ -11,6 +11,8 @@\n import logging\n import logging.config\n \n+from starbase import Table\n+\n from medtagger.clients.hbase_client import HBaseClient\n from utils import get_connection_to_hbase, user_agrees\n \n@@ -22,35 +24,59 @@\n args = parser.parse_args()\n \n \n-HBASE_SCHEMA = HBaseClient.HBASE_SCHEMA\n-connection = get_connection_to_hbase()\n-existing_tables = set(connection.tables())\n-schema_tables = set(HBASE_SCHEMA)\n-tables_to_drop = list(existing_tables - schema_tables)\n-for table_name in tables_to_drop:\n+def create_new_table(table: Table) -> None:\n+ \"\"\"Create new table once user agrees on that.\"\"\"\n+ table_name = table.name\n+ if args.yes or user_agrees('Do you want to create table \"{}\"?'.format(table_name)):\n+ list_of_columns = HBaseClient.HBASE_SCHEMA[table_name]\n+ logger.info('Creating table \"%s\" with columns %s.', table_name, list_of_columns)\n+ table.create(*list_of_columns)\n+ table.enable_if_exists_checks()\n+\n+\n+def update_table_schema(table: Table) -> None:\n+ \"\"\"Update table schema once user agrees on that.\"\"\"\n+ table_name = table.name\n+ existing_column_families = set(table.columns())\n+ schema_column_families = set(HBaseClient.HBASE_SCHEMA[table_name])\n+ columns_to_add = list(schema_column_families - existing_column_families)\n+ columns_to_drop = list(existing_column_families - schema_column_families)\n+\n+ if columns_to_add:\n+ if args.yes or user_agrees('Do you want to add columns {} to \"{}\"?'.format(columns_to_add, table_name)):\n+ table.add_columns(*columns_to_add)\n+\n+ if columns_to_drop:\n+ if args.yes or user_agrees('Do you want to drop columns {} from \"{}\"?'.format(columns_to_drop, table_name)):\n+ table.drop_columns(*columns_to_drop)\n+\n+\n+def drop_table(table: Table) -> None:\n+ \"\"\"Drop table once user agrees on that.\"\"\"\n+ table_name = table.name\n if args.yes or user_agrees('Do you want to drop table \"{}\"?'.format(table_name)):\n logger.info('Dropping table \"%s\".', table_name)\n- table = connection.table(table_name)\n table.drop()\n \n-for table_name in HBASE_SCHEMA:\n- table = connection.table(table_name)\n- if not table.exists():\n- if args.yes or user_agrees('Do you want to create table \"{}\"?'.format(table_name)):\n- list_of_columns = HBASE_SCHEMA[table_name]\n- logger.info('Creating table \"%s\" with columns %s.', table_name, list_of_columns)\n- table.create(*list_of_columns)\n- table.enable_if_exists_checks()\n- else:\n- existing_column_families = set(table.columns())\n- schema_column_families = set(HBASE_SCHEMA[table_name])\n- columns_to_add = list(schema_column_families - existing_column_families)\n- columns_to_drop = list(existing_column_families - schema_column_families)\n-\n- if columns_to_add:\n- if args.yes or user_agrees('Do you want to add columns {} to \"{}\"?'.format(columns_to_add, table_name)):\n- table.add_columns(*columns_to_add)\n-\n- if columns_to_drop:\n- if args.yes or user_agrees('Do you want to drop columns {} from \"{}\"?'.format(columns_to_drop, table_name)):\n- table.drop_columns(*columns_to_drop)\n+\n+def main() -> None:\n+ \"\"\"Run main functionality of this script.\"\"\"\n+ connection = get_connection_to_hbase()\n+ existing_tables = set(connection.tables())\n+ schema_tables = set(HBaseClient.HBASE_SCHEMA)\n+ tables_to_drop = list(existing_tables - schema_tables)\n+\n+ for table_name in 
tables_to_drop:\n+ table = connection.table(table_name)\n+ drop_table(table)\n+\n+ for table_name in HBaseClient.HBASE_SCHEMA:\n+ table = connection.table(table_name)\n+ if not table.exists():\n+ create_new_table(table)\n+ else:\n+ update_table_schema(table)\n+\n+\n+if __name__ == '__main__':\n+ main()\n", "issue": "Add \"radon\" tool to the Backend and enable it in CI\n## Expected Behavior\r\n\r\nPython code in backend should be validated by \"radon\" tool in CI.\r\n\r\n## Actual Behavior\r\n\r\nMedTagger backend uses a few linters already but we should add more validators to increate automation and code quality.\n", "before_files": [{"content": "\"\"\"Script that can migrate existing HBase schema or prepare empty database with given schema.\n\nHow to use it?\n--------------\nRun this script just by executing following line in the root directory of this project:\n\n (venv) $ python3.6 scripts/migrate_hbase.py\n\n\"\"\"\nimport argparse\nimport logging\nimport logging.config\n\nfrom medtagger.clients.hbase_client import HBaseClient\nfrom utils import get_connection_to_hbase, user_agrees\n\nlogging.config.fileConfig('logging.conf')\nlogger = logging.getLogger(__name__)\n\nparser = argparse.ArgumentParser(description='HBase migration.')\nparser.add_argument('-y', '--yes', dest='yes', action='store_const', const=True)\nargs = parser.parse_args()\n\n\nHBASE_SCHEMA = HBaseClient.HBASE_SCHEMA\nconnection = get_connection_to_hbase()\nexisting_tables = set(connection.tables())\nschema_tables = set(HBASE_SCHEMA)\ntables_to_drop = list(existing_tables - schema_tables)\nfor table_name in tables_to_drop:\n if args.yes or user_agrees('Do you want to drop table \"{}\"?'.format(table_name)):\n logger.info('Dropping table \"%s\".', table_name)\n table = connection.table(table_name)\n table.drop()\n\nfor table_name in HBASE_SCHEMA:\n table = connection.table(table_name)\n if not table.exists():\n if args.yes or user_agrees('Do you want to create table \"{}\"?'.format(table_name)):\n list_of_columns = HBASE_SCHEMA[table_name]\n logger.info('Creating table \"%s\" with columns %s.', table_name, list_of_columns)\n table.create(*list_of_columns)\n table.enable_if_exists_checks()\n else:\n existing_column_families = set(table.columns())\n schema_column_families = set(HBASE_SCHEMA[table_name])\n columns_to_add = list(schema_column_families - existing_column_families)\n columns_to_drop = list(existing_column_families - schema_column_families)\n\n if columns_to_add:\n if args.yes or user_agrees('Do you want to add columns {} to \"{}\"?'.format(columns_to_add, table_name)):\n table.add_columns(*columns_to_add)\n\n if columns_to_drop:\n if args.yes or user_agrees('Do you want to drop columns {} from \"{}\"?'.format(columns_to_drop, table_name)):\n table.drop_columns(*columns_to_drop)\n", "path": "backend/scripts/migrate_hbase.py"}]} | 1,205 | 965 |
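The refactor above is exactly the shape a complexity checker such as radon rewards: module-level statements become small single-purpose helpers behind a `main()` entry point, so each function can be scored in isolation (e.g. with radon's documented `cc` and `mi` commands). A schematic of that shape, with the HBase pieces replaced by toy stubs:

```python
def user_agrees(question):  # stub for the script's interactive prompt
    print(question)
    return True


class Table:  # minimal stand-in for a starbase table
    def __init__(self, name):
        self.name = name

    def drop(self):
        print(f"dropped {self.name}")


def drop_table(table):
    """One decision per helper keeps cyclomatic complexity at rank A."""
    if user_agrees(f'Do you want to drop table "{table.name}"?'):
        table.drop()


def main():
    for table in [Table("old_scans")]:  # placeholder table list
        drop_table(table)


if __name__ == "__main__":
    main()
```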
gh_patches_debug_1836 | rasdani/github-patches | git_diff | Nitrate__Nitrate-337 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Upgrade django-tinymce to 2.7.0
As per subject.
</issue>
<code>
[start of setup.py]
1 # -*- coding: utf-8 -*-
2
3 from setuptools import setup, find_packages
4
5
6 with open('VERSION.txt', 'r') as f:
7 pkg_version = f.read().strip()
8
9
10 def get_long_description():
11 with open('README.rst', 'r') as f:
12 return f.read()
13
14
15 install_requires = [
16 'PyMySQL == 0.7.11',
17 'beautifulsoup4 >= 4.1.1',
18 'celery == 4.1.0',
19 'django-contrib-comments == 1.8.0',
20 'django-tinymce == 2.6.0',
21 'django-uuslug == 1.1.8',
22 'django >= 1.10,<2.0',
23 'html2text',
24 'kobo == 0.7.0',
25 'odfpy >= 0.9.6',
26 'six',
27 'xmltodict',
28 ]
29
30 extras_require = {
31 # Required for tcms.core.contrib.auth.backends.KerberosBackend
32 'krbauth': [
33 'kerberos == 1.2.5'
34 ],
35
36 # Packages for building documentation
37 'docs': [
38 'Sphinx >= 1.1.2',
39 'sphinx_rtd_theme',
40 ],
41
42 # Necessary packages for running tests
43 'tests': [
44 'coverage',
45 'factory_boy',
46 'flake8',
47 'mock',
48 'pytest',
49 'pytest-cov',
50 'pytest-django',
51 ],
52
53 # Contain tools that assists the development
54 'devtools': [
55 'django-debug-toolbar == 1.7',
56 'tox',
57 'django-extensions',
58 'pygraphviz',
59 ]
60 }
61
62
63 setup(
64 name='Nitrate',
65 version=pkg_version,
66 description='Test Case Management System',
67 long_description=get_long_description(),
68 author='Nitrate Team',
69 maintainer='Chenxiong Qi',
70 maintainer_email='[email protected]',
71 url='https://github.com/Nitrate/Nitrate/',
72 license='GPLv2+',
73 keywords='test case',
74 install_requires=install_requires,
75 extras_require=extras_require,
76 packages=find_packages(),
77 include_package_data=True,
78 classifiers=[
79 'Framework :: Django',
80 'Framework :: Django :: 1.10',
81 'Framework :: Django :: 1.11',
82 'Intended Audience :: Developers',
83 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',
84 'Programming Language :: Python :: 2',
85 'Programming Language :: Python :: 2.7',
86 'Programming Language :: Python :: 3',
87 'Programming Language :: Python :: 3.6',
88 'Topic :: Software Development :: Quality Assurance',
89 'Topic :: Software Development :: Testing',
90 ],
91 )
92
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -17,7 +17,7 @@
'beautifulsoup4 >= 4.1.1',
'celery == 4.1.0',
'django-contrib-comments == 1.8.0',
- 'django-tinymce == 2.6.0',
+ 'django-tinymce == 2.7.0',
'django-uuslug == 1.1.8',
'django >= 1.10,<2.0',
'html2text',
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -17,7 +17,7 @@\n 'beautifulsoup4 >= 4.1.1',\n 'celery == 4.1.0',\n 'django-contrib-comments == 1.8.0',\n- 'django-tinymce == 2.6.0',\n+ 'django-tinymce == 2.7.0',\n 'django-uuslug == 1.1.8',\n 'django >= 1.10,<2.0',\n 'html2text',\n", "issue": "Upgrade django-tinymce to 2.7.0\nAs per subject.\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nfrom setuptools import setup, find_packages\n\n\nwith open('VERSION.txt', 'r') as f:\n pkg_version = f.read().strip()\n\n\ndef get_long_description():\n with open('README.rst', 'r') as f:\n return f.read()\n\n\ninstall_requires = [\n 'PyMySQL == 0.7.11',\n 'beautifulsoup4 >= 4.1.1',\n 'celery == 4.1.0',\n 'django-contrib-comments == 1.8.0',\n 'django-tinymce == 2.6.0',\n 'django-uuslug == 1.1.8',\n 'django >= 1.10,<2.0',\n 'html2text',\n 'kobo == 0.7.0',\n 'odfpy >= 0.9.6',\n 'six',\n 'xmltodict',\n]\n\nextras_require = {\n # Required for tcms.core.contrib.auth.backends.KerberosBackend\n 'krbauth': [\n 'kerberos == 1.2.5'\n ],\n\n # Packages for building documentation\n 'docs': [\n 'Sphinx >= 1.1.2',\n 'sphinx_rtd_theme',\n ],\n\n # Necessary packages for running tests\n 'tests': [\n 'coverage',\n 'factory_boy',\n 'flake8',\n 'mock',\n 'pytest',\n 'pytest-cov',\n 'pytest-django',\n ],\n\n # Contain tools that assists the development\n 'devtools': [\n 'django-debug-toolbar == 1.7',\n 'tox',\n 'django-extensions',\n 'pygraphviz',\n ]\n}\n\n\nsetup(\n name='Nitrate',\n version=pkg_version,\n description='Test Case Management System',\n long_description=get_long_description(),\n author='Nitrate Team',\n maintainer='Chenxiong Qi',\n maintainer_email='[email protected]',\n url='https://github.com/Nitrate/Nitrate/',\n license='GPLv2+',\n keywords='test case',\n install_requires=install_requires,\n extras_require=extras_require,\n packages=find_packages(),\n include_package_data=True,\n classifiers=[\n 'Framework :: Django',\n 'Framework :: Django :: 1.10',\n 'Framework :: Django :: 1.11',\n 'Intended Audience :: Developers',\n 'License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+)',\n 'Programming Language :: Python :: 2',\n 'Programming Language :: Python :: 2.7',\n 'Programming Language :: Python :: 3',\n 'Programming Language :: Python :: 3.6',\n 'Topic :: Software Development :: Quality Assurance',\n 'Topic :: Software Development :: Testing',\n ],\n)\n", "path": "setup.py"}]} | 1,345 | 135 |
gh_patches_debug_58946 | rasdani/github-patches | git_diff | ivy-llc__ivy-13797 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
diagflat: implement the `diagflat` creation routine in Ivy's NumPy frontend.
</issue>
<code>
[start of ivy/functional/frontends/numpy/creation_routines/building_matrices.py]
1 import ivy
2 from ivy.functional.frontends.numpy.func_wrapper import (
3 to_ivy_arrays_and_back,
4 handle_numpy_dtype,
5 )
6
7
8 @to_ivy_arrays_and_back
9 def tril(m, k=0):
10 return ivy.tril(m, k=k)
11
12
13 @to_ivy_arrays_and_back
14 def triu(m, k=0):
15 return ivy.triu(m, k=k)
16
17
18 @handle_numpy_dtype
19 @to_ivy_arrays_and_back
20 def tri(N, M=None, k=0, dtype="float64", *, like=None):
21 if M is None:
22 M = N
23 ones = ivy.ones((N, M), dtype=dtype)
24 return ivy.tril(ones, k=k)
25
26
27 @to_ivy_arrays_and_back
28 def diag(v, k=0):
29 return ivy.diag(v, k=k)
30
31
32 @to_ivy_arrays_and_back
33 def vander(x, N=None, increasing=False):
34 if ivy.is_float_dtype(x):
35 x = x.astype(ivy.float64)
36 elif ivy.is_bool_dtype or ivy.is_int_dtype(x):
37 x = x.astype(ivy.int64)
38 return ivy.vander(x, N=N, increasing=increasing)
39
[end of ivy/functional/frontends/numpy/creation_routines/building_matrices.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py
--- a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py
+++ b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py
@@ -36,3 +36,12 @@
elif ivy.is_bool_dtype or ivy.is_int_dtype(x):
x = x.astype(ivy.int64)
return ivy.vander(x, N=N, increasing=increasing)
+
+
+# diagflat
+@to_ivy_arrays_and_back
+def diagflat(v, k=0):
+ ret = ivy.diagflat(v, offset=k)
+ while len(ivy.shape(ret)) < 2:
+ ret = ret.expand_dims(axis=0)
+ return ret
| {"golden_diff": "diff --git a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py\n--- a/ivy/functional/frontends/numpy/creation_routines/building_matrices.py\n+++ b/ivy/functional/frontends/numpy/creation_routines/building_matrices.py\n@@ -36,3 +36,12 @@\n elif ivy.is_bool_dtype or ivy.is_int_dtype(x):\n x = x.astype(ivy.int64)\n return ivy.vander(x, N=N, increasing=increasing)\n+\n+\n+# diagflat\n+@to_ivy_arrays_and_back\n+def diagflat(v, k=0):\n+ ret = ivy.diagflat(v, offset=k)\n+ while len(ivy.shape(ret)) < 2:\n+ ret = ret.expand_dims(axis=0)\n+ return ret\n", "issue": "diagflat\n\n", "before_files": [{"content": "import ivy\nfrom ivy.functional.frontends.numpy.func_wrapper import (\n to_ivy_arrays_and_back,\n handle_numpy_dtype,\n)\n\n\n@to_ivy_arrays_and_back\ndef tril(m, k=0):\n return ivy.tril(m, k=k)\n\n\n@to_ivy_arrays_and_back\ndef triu(m, k=0):\n return ivy.triu(m, k=k)\n\n\n@handle_numpy_dtype\n@to_ivy_arrays_and_back\ndef tri(N, M=None, k=0, dtype=\"float64\", *, like=None):\n if M is None:\n M = N\n ones = ivy.ones((N, M), dtype=dtype)\n return ivy.tril(ones, k=k)\n\n\n@to_ivy_arrays_and_back\ndef diag(v, k=0):\n return ivy.diag(v, k=k)\n\n\n@to_ivy_arrays_and_back\ndef vander(x, N=None, increasing=False):\n if ivy.is_float_dtype(x):\n x = x.astype(ivy.float64)\n elif ivy.is_bool_dtype or ivy.is_int_dtype(x):\n x = x.astype(ivy.int64)\n return ivy.vander(x, N=N, increasing=increasing)\n", "path": "ivy/functional/frontends/numpy/creation_routines/building_matrices.py"}]} | 904 | 199 |
gh_patches_debug_23782 | rasdani/github-patches | git_diff | Textualize__rich-273 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[BUG] '#' sign is treated as the end of a URL
**Describe the bug**
The `#` is a valid element of a URL, but Rich seems to ignore it and treats it as the end of the URL.
Consider this URL: https://github.com/willmcgugan/rich#rich-print-function
**To Reproduce**
```python
from rich.console import Console
console = Console()
console.log("https://github.com/willmcgugan/rich#rich-print-function")
```
Output:

**Platform**
I'm using Rich on Windows and Linux, currently on the newest version, `6.1.1`.
</issue>
<code>
[start of rich/highlighter.py]
1 from abc import ABC, abstractmethod
2 from typing import List, Union
3
4 from .text import Text
5
6
7 class Highlighter(ABC):
8 """Abstract base class for highlighters."""
9
10 def __call__(self, text: Union[str, Text]) -> Text:
11 """Highlight a str or Text instance.
12
13 Args:
14 text (Union[str, ~Text]): Text to highlight.
15
16 Raises:
17 TypeError: If not called with text or str.
18
19 Returns:
20 Text: A test instance with highlighting applied.
21 """
22 if isinstance(text, str):
23 highlight_text = Text(text)
24 elif isinstance(text, Text):
25 highlight_text = text.copy()
26 else:
27 raise TypeError(f"str or Text instance required, not {text!r}")
28 self.highlight(highlight_text)
29 return highlight_text
30
31 @abstractmethod
32 def highlight(self, text: Text) -> None:
33 """Apply highlighting in place to text.
34
35 Args:
36 text (~Text): A text object highlight.
37 """
38
39
40 class NullHighlighter(Highlighter):
41 """A highlighter object that doesn't highlight.
42
43 May be used to disable highlighting entirely.
44
45 """
46
47 def highlight(self, text: Text) -> None:
48 """Nothing to do"""
49
50
51 class RegexHighlighter(Highlighter):
52 """Applies highlighting from a list of regular expressions."""
53
54 highlights: List[str] = []
55 base_style: str = ""
56
57 def highlight(self, text: Text) -> None:
58 """Highlight :class:`rich.text.Text` using regular expressions.
59
60 Args:
61 text (~Text): Text to highlighted.
62
63 """
64 highlight_regex = text.highlight_regex
65 for re_highlight in self.highlights:
66 highlight_regex(re_highlight, style_prefix=self.base_style)
67
68
69 class ReprHighlighter(RegexHighlighter):
70 """Highlights the text typically produced from ``__repr__`` methods."""
71
72 base_style = "repr."
73 highlights = [
74 r"(?P<brace>[\{\[\(\)\]\}])",
75 r"(?P<tag_start>\<)(?P<tag_name>[\w\-\.\:]*)(?P<tag_contents>.*?)(?P<tag_end>\>)",
76 r"(?P<attrib_name>\w+?)=(?P<attrib_value>\"?[\w_]+\"?)",
77 r"(?P<bool_true>True)|(?P<bool_false>False)|(?P<none>None)",
78 r"(?P<number>(?<!\w)\-?[0-9]+\.?[0-9]*(e[\-\+]?\d+?)?\b)",
79 r"(?P<number>0x[0-9a-f]*)",
80 r"(?P<path>\B(\/[\w\.\-\_\+]+)*\/)(?P<filename>[\w\.\-\_\+]*)?",
81 r"(?<!\\)(?P<str>b?\'\'\'.*?(?<!\\)\'\'\'|b?\'.*?(?<!\\)\'|b?\"\"\".*?(?<!\\)\"\"\"|b?\".*?(?<!\\)\")",
82 r"(?P<url>https?:\/\/[0-9a-zA-Z\$\-\_\+\!`\(\)\,\.\?\/\;\:\&\=\%]*)",
83 r"(?P<uuid>[a-fA-F0-9]{8}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{12})",
84 ]
85
86
87 if __name__ == "__main__": # pragma: no cover
88 from .console import Console
89
90 console = Console()
91 console.print("[bold green]hello world![/bold green]")
92 console.print("'[bold green]hello world![/bold green]'")
93
94 console.print(" /foo")
95 console.print("/foo/")
96 console.print("/foo/bar")
97 console.print("foo/bar/baz")
98
99 console.print("/foo/bar/baz?foo=bar+egg&egg=baz")
100 console.print("/foo/bar/baz/")
101 console.print("/foo/bar/baz/egg")
102 console.print("/foo/bar/baz/egg.py")
103 console.print("/foo/bar/baz/egg.py word")
104 console.print(" /foo/bar/baz/egg.py word")
105 console.print("foo /foo/bar/baz/egg.py word")
106 console.print("foo /foo/bar/ba._++z/egg+.py word")
107 console.print("https://example.org?foo=bar")
108
109 console.print(1234567.34)
110 console.print(1 / 2)
111 console.print(-1 / 123123123123)
112
[end of rich/highlighter.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/rich/highlighter.py b/rich/highlighter.py
--- a/rich/highlighter.py
+++ b/rich/highlighter.py
@@ -79,7 +79,7 @@
r"(?P<number>0x[0-9a-f]*)",
r"(?P<path>\B(\/[\w\.\-\_\+]+)*\/)(?P<filename>[\w\.\-\_\+]*)?",
r"(?<!\\)(?P<str>b?\'\'\'.*?(?<!\\)\'\'\'|b?\'.*?(?<!\\)\'|b?\"\"\".*?(?<!\\)\"\"\"|b?\".*?(?<!\\)\")",
- r"(?P<url>https?:\/\/[0-9a-zA-Z\$\-\_\+\!`\(\)\,\.\?\/\;\:\&\=\%]*)",
+ r"(?P<url>https?:\/\/[0-9a-zA-Z\$\-\_\+\!`\(\)\,\.\?\/\;\:\&\=\%\#]*)",
r"(?P<uuid>[a-fA-F0-9]{8}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{4}\-[a-fA-F0-9]{12})",
]
@@ -104,7 +104,7 @@
console.print(" /foo/bar/baz/egg.py word")
console.print("foo /foo/bar/baz/egg.py word")
console.print("foo /foo/bar/ba._++z/egg+.py word")
- console.print("https://example.org?foo=bar")
+ console.print("https://example.org?foo=bar#header")
console.print(1234567.34)
console.print(1 / 2)
| {"golden_diff": "diff --git a/rich/highlighter.py b/rich/highlighter.py\n--- a/rich/highlighter.py\n+++ b/rich/highlighter.py\n@@ -79,7 +79,7 @@\n r\"(?P<number>0x[0-9a-f]*)\",\n r\"(?P<path>\\B(\\/[\\w\\.\\-\\_\\+]+)*\\/)(?P<filename>[\\w\\.\\-\\_\\+]*)?\",\n r\"(?<!\\\\)(?P<str>b?\\'\\'\\'.*?(?<!\\\\)\\'\\'\\'|b?\\'.*?(?<!\\\\)\\'|b?\\\"\\\"\\\".*?(?<!\\\\)\\\"\\\"\\\"|b?\\\".*?(?<!\\\\)\\\")\",\n- r\"(?P<url>https?:\\/\\/[0-9a-zA-Z\\$\\-\\_\\+\\!`\\(\\)\\,\\.\\?\\/\\;\\:\\&\\=\\%]*)\",\n+ r\"(?P<url>https?:\\/\\/[0-9a-zA-Z\\$\\-\\_\\+\\!`\\(\\)\\,\\.\\?\\/\\;\\:\\&\\=\\%\\#]*)\",\n r\"(?P<uuid>[a-fA-F0-9]{8}\\-[a-fA-F0-9]{4}\\-[a-fA-F0-9]{4}\\-[a-fA-F0-9]{4}\\-[a-fA-F0-9]{12})\",\n ]\n \n@@ -104,7 +104,7 @@\n console.print(\" /foo/bar/baz/egg.py word\")\n console.print(\"foo /foo/bar/baz/egg.py word\")\n console.print(\"foo /foo/bar/ba._++z/egg+.py word\")\n- console.print(\"https://example.org?foo=bar\")\n+ console.print(\"https://example.org?foo=bar#header\")\n \n console.print(1234567.34)\n console.print(1 / 2)\n", "issue": "[BUG] '#' sign is treated as the end of a URL\n**Describe the bug**\r\nThe `#` a valid element of the URL, but Rich seems to ignore it and treats it as the end of it. \r\nConsider this URL: https://github.com/willmcgugan/rich#rich-print-function\r\n\r\n**To Reproduce**\r\n```python\r\nfrom rich.console import Console\r\n\r\nconsole = Console()\r\n\r\nconsole.log(\"https://github.com/willmcgugan/rich#rich-print-function\")\r\n```\r\n\r\nOutput: \r\n\r\n\r\n\r\n**Platform**\r\nI'm using Rich on Windows and Linux, with the currently newest version `6.1.1`.\r\n\n", "before_files": [{"content": "from abc import ABC, abstractmethod\nfrom typing import List, Union\n\nfrom .text import Text\n\n\nclass Highlighter(ABC):\n \"\"\"Abstract base class for highlighters.\"\"\"\n\n def __call__(self, text: Union[str, Text]) -> Text:\n \"\"\"Highlight a str or Text instance.\n\n Args:\n text (Union[str, ~Text]): Text to highlight.\n\n Raises:\n TypeError: If not called with text or str.\n\n Returns:\n Text: A test instance with highlighting applied.\n \"\"\"\n if isinstance(text, str):\n highlight_text = Text(text)\n elif isinstance(text, Text):\n highlight_text = text.copy()\n else:\n raise TypeError(f\"str or Text instance required, not {text!r}\")\n self.highlight(highlight_text)\n return highlight_text\n\n @abstractmethod\n def highlight(self, text: Text) -> None:\n \"\"\"Apply highlighting in place to text.\n\n Args:\n text (~Text): A text object highlight.\n \"\"\"\n\n\nclass NullHighlighter(Highlighter):\n \"\"\"A highlighter object that doesn't highlight.\n\n May be used to disable highlighting entirely.\n\n \"\"\"\n\n def highlight(self, text: Text) -> None:\n \"\"\"Nothing to do\"\"\"\n\n\nclass RegexHighlighter(Highlighter):\n \"\"\"Applies highlighting from a list of regular expressions.\"\"\"\n\n highlights: List[str] = []\n base_style: str = \"\"\n\n def highlight(self, text: Text) -> None:\n \"\"\"Highlight :class:`rich.text.Text` using regular expressions.\n\n Args:\n text (~Text): Text to highlighted.\n\n \"\"\"\n highlight_regex = text.highlight_regex\n for re_highlight in self.highlights:\n highlight_regex(re_highlight, style_prefix=self.base_style)\n\n\nclass ReprHighlighter(RegexHighlighter):\n \"\"\"Highlights the text typically produced from ``__repr__`` methods.\"\"\"\n\n base_style = \"repr.\"\n highlights = [\n r\"(?P<brace>[\\{\\[\\(\\)\\]\\}])\",\n r\"(?P<tag_start>\\<)(?P<tag_name>[\\w\\-\\.\\:]*)(?P<tag_contents>.*?)(?P<tag_end>\\>)\",\n 
r\"(?P<attrib_name>\\w+?)=(?P<attrib_value>\\\"?[\\w_]+\\\"?)\",\n r\"(?P<bool_true>True)|(?P<bool_false>False)|(?P<none>None)\",\n r\"(?P<number>(?<!\\w)\\-?[0-9]+\\.?[0-9]*(e[\\-\\+]?\\d+?)?\\b)\",\n r\"(?P<number>0x[0-9a-f]*)\",\n r\"(?P<path>\\B(\\/[\\w\\.\\-\\_\\+]+)*\\/)(?P<filename>[\\w\\.\\-\\_\\+]*)?\",\n r\"(?<!\\\\)(?P<str>b?\\'\\'\\'.*?(?<!\\\\)\\'\\'\\'|b?\\'.*?(?<!\\\\)\\'|b?\\\"\\\"\\\".*?(?<!\\\\)\\\"\\\"\\\"|b?\\\".*?(?<!\\\\)\\\")\",\n r\"(?P<url>https?:\\/\\/[0-9a-zA-Z\\$\\-\\_\\+\\!`\\(\\)\\,\\.\\?\\/\\;\\:\\&\\=\\%]*)\",\n r\"(?P<uuid>[a-fA-F0-9]{8}\\-[a-fA-F0-9]{4}\\-[a-fA-F0-9]{4}\\-[a-fA-F0-9]{4}\\-[a-fA-F0-9]{12})\",\n ]\n\n\nif __name__ == \"__main__\": # pragma: no cover\n from .console import Console\n\n console = Console()\n console.print(\"[bold green]hello world![/bold green]\")\n console.print(\"'[bold green]hello world![/bold green]'\")\n\n console.print(\" /foo\")\n console.print(\"/foo/\")\n console.print(\"/foo/bar\")\n console.print(\"foo/bar/baz\")\n\n console.print(\"/foo/bar/baz?foo=bar+egg&egg=baz\")\n console.print(\"/foo/bar/baz/\")\n console.print(\"/foo/bar/baz/egg\")\n console.print(\"/foo/bar/baz/egg.py\")\n console.print(\"/foo/bar/baz/egg.py word\")\n console.print(\" /foo/bar/baz/egg.py word\")\n console.print(\"foo /foo/bar/baz/egg.py word\")\n console.print(\"foo /foo/bar/ba._++z/egg+.py word\")\n console.print(\"https://example.org?foo=bar\")\n\n console.print(1234567.34)\n console.print(1 / 2)\n console.print(-1 / 123123123123)\n", "path": "rich/highlighter.py"}]} | 2,009 | 432 |
gh_patches_debug_480 | rasdani/github-patches | git_diff | google__flax-2136 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Flax actually requires jax 0.3.2
https://github.com/google/flax/blob/ef6bf4054c30271a58bfabb58f3d0049ef5d851a/flax/linen/initializers.py#L19
The `constant` initialiser was added in commit https://github.com/google/jax/commit/86e8928e709ac07cc51c10e815db6284507c320e, which was first included in jax 0.3.2.
This came up in NetKet's automated oldest-version-dependencies testing.
</issue>
<code>
[start of setup.py]
1 # Copyright 2022 The Flax Authors.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 """setup.py for Flax."""
16
17 import os
18 from setuptools import find_packages
19 from setuptools import setup
20
21 here = os.path.abspath(os.path.dirname(__file__))
22 try:
23 README = open(os.path.join(here, "README.md"), encoding="utf-8").read()
24 except IOError:
25 README = ""
26
27 install_requires = [
28 "numpy>=1.12",
29 "jax>=0.3",
30 "matplotlib", # only needed for tensorboard export
31 "msgpack",
32 "optax",
33 "rich~=11.1.0",
34 "typing_extensions>=4.1.1",
35 ]
36
37 tests_require = [
38 "atari-py==0.2.5", # Last version does not have the ROMs we test on pre-packaged
39 "clu", # All examples.
40 "gym==0.18.3",
41 "jaxlib",
42 "jraph",
43 "ml-collections",
44 "opencv-python",
45 "pytest",
46 "pytest-cov",
47 "pytest-xdist==1.34.0", # upgrading to 2.0 broke tests, need to investigate
48 "pytype",
49 "sentencepiece", # WMT example.
50 "svn",
51 "tensorflow_text>=2.4.0", # WMT example.
52 "tensorflow_datasets",
53 "tensorflow",
54 "torch",
55 "pandas", # get_repo_metrics script
56 ]
57
58 __version__ = None
59
60 with open("flax/version.py") as f:
61 exec(f.read(), globals())
62
63 setup(
64 name="flax",
65 version=__version__,
66 description="Flax: A neural network library for JAX designed for flexibility",
67 long_description="\n\n".join([README]),
68 long_description_content_type="text/markdown",
69 classifiers=[
70 "Development Status :: 3 - Alpha",
71 "Intended Audience :: Developers",
72 "Intended Audience :: Science/Research",
73 "License :: OSI Approved :: Apache Software License",
74 "Programming Language :: Python :: 3.7",
75 "Topic :: Scientific/Engineering :: Artificial Intelligence",
76 ],
77 keywords="",
78 author="Flax team",
79 author_email="[email protected]",
80 url="https://github.com/google/flax",
81 packages=find_packages(),
82 package_data={"flax": ["py.typed"]},
83 zip_safe=False,
84 install_requires=install_requires,
85 extras_require={
86 "testing": tests_require,
87 },
88 )
89
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -26,7 +26,7 @@
install_requires = [
"numpy>=1.12",
- "jax>=0.3",
+ "jax>=0.3.2",
"matplotlib", # only needed for tensorboard export
"msgpack",
"optax",
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -26,7 +26,7 @@\n \n install_requires = [\n \"numpy>=1.12\",\n- \"jax>=0.3\",\n+ \"jax>=0.3.2\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n", "issue": "Flax actually requires jax 0.3.2\nhttps://github.com/google/flax/blob/ef6bf4054c30271a58bfabb58f3d0049ef5d851a/flax/linen/initializers.py#L19\r\n\r\nthe constant initialiser was added in this commit https://github.com/google/jax/commit/86e8928e709ac07cc51c10e815db6284507c320e that was first included in jax 0.3.2\r\n\r\nThis came up in NetKet's automated oldest-version-dependencies testing.\n", "before_files": [{"content": "# Copyright 2022 The Flax Authors.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n\"\"\"setup.py for Flax.\"\"\"\n\nimport os\nfrom setuptools import find_packages\nfrom setuptools import setup\n\nhere = os.path.abspath(os.path.dirname(__file__))\ntry:\n README = open(os.path.join(here, \"README.md\"), encoding=\"utf-8\").read()\nexcept IOError:\n README = \"\"\n\ninstall_requires = [\n \"numpy>=1.12\",\n \"jax>=0.3\",\n \"matplotlib\", # only needed for tensorboard export\n \"msgpack\",\n \"optax\",\n \"rich~=11.1.0\", \n \"typing_extensions>=4.1.1\",\n]\n\ntests_require = [\n \"atari-py==0.2.5\", # Last version does not have the ROMs we test on pre-packaged\n \"clu\", # All examples.\n \"gym==0.18.3\",\n \"jaxlib\",\n \"jraph\",\n \"ml-collections\",\n \"opencv-python\",\n \"pytest\",\n \"pytest-cov\",\n \"pytest-xdist==1.34.0\", # upgrading to 2.0 broke tests, need to investigate\n \"pytype\",\n \"sentencepiece\", # WMT example.\n \"svn\",\n \"tensorflow_text>=2.4.0\", # WMT example.\n \"tensorflow_datasets\",\n \"tensorflow\",\n \"torch\",\n \"pandas\", # get_repo_metrics script\n]\n\n__version__ = None\n\nwith open(\"flax/version.py\") as f:\n exec(f.read(), globals())\n\nsetup(\n name=\"flax\",\n version=__version__,\n description=\"Flax: A neural network library for JAX designed for flexibility\",\n long_description=\"\\n\\n\".join([README]),\n long_description_content_type=\"text/markdown\",\n classifiers=[\n \"Development Status :: 3 - Alpha\",\n \"Intended Audience :: Developers\",\n \"Intended Audience :: Science/Research\",\n \"License :: OSI Approved :: Apache Software License\",\n \"Programming Language :: Python :: 3.7\",\n \"Topic :: Scientific/Engineering :: Artificial Intelligence\",\n ],\n keywords=\"\",\n author=\"Flax team\",\n author_email=\"[email protected]\",\n url=\"https://github.com/google/flax\",\n packages=find_packages(),\n package_data={\"flax\": [\"py.typed\"]},\n zip_safe=False,\n install_requires=install_requires,\n extras_require={\n \"testing\": tests_require,\n },\n )\n", "path": "setup.py"}]} | 1,522 | 90 |
gh_patches_debug_1736 | rasdani/github-patches | git_diff | pyqtgraph__pyqtgraph-1045 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
PlotWidget.__getattr__ raises the wrong exception type - but this has a simple fix
`hasattr(widget, "some_non_existing_attribute")` raises `NameError` instead of returning `False` for instances of `PlotWidget`. I think that `PlotWidget.__getattr__` (in PlotWidget.py) should raise `AttributeError` instead of `NameError`, which would be converted correctly to `False` by `hasattr`. I believe the same holds for `TabWindow.__getattr__` (in graphicsWindows.py).
</issue>
<code>
[start of pyqtgraph/graphicsWindows.py]
1 # -*- coding: utf-8 -*-
2 """
3 DEPRECATED: The classes below are convenience classes that create a new window
4 containting a single, specific widget. These classes are now unnecessary because
5 it is possible to place any widget into its own window by simply calling its
6 show() method.
7 """
8
9 from .Qt import QtCore, QtGui, mkQApp
10 from .widgets.PlotWidget import *
11 from .imageview import *
12 from .widgets.GraphicsLayoutWidget import GraphicsLayoutWidget
13 from .widgets.GraphicsView import GraphicsView
14
15
16 class GraphicsWindow(GraphicsLayoutWidget):
17 """
18 (deprecated; use GraphicsLayoutWidget instead)
19
20 Convenience subclass of :class:`GraphicsLayoutWidget
21 <pyqtgraph.GraphicsLayoutWidget>`. This class is intended for use from
22 the interactive python prompt.
23 """
24 def __init__(self, title=None, size=(800,600), **kargs):
25 mkQApp()
26 GraphicsLayoutWidget.__init__(self, **kargs)
27 self.resize(*size)
28 if title is not None:
29 self.setWindowTitle(title)
30 self.show()
31
32
33 class TabWindow(QtGui.QMainWindow):
34 """
35 (deprecated)
36 """
37 def __init__(self, title=None, size=(800,600)):
38 mkQApp()
39 QtGui.QMainWindow.__init__(self)
40 self.resize(*size)
41 self.cw = QtGui.QTabWidget()
42 self.setCentralWidget(self.cw)
43 if title is not None:
44 self.setWindowTitle(title)
45 self.show()
46
47 def __getattr__(self, attr):
48 if hasattr(self.cw, attr):
49 return getattr(self.cw, attr)
50 else:
51 raise NameError(attr)
52
53
54 class PlotWindow(PlotWidget):
55 """
56 (deprecated; use PlotWidget instead)
57 """
58 def __init__(self, title=None, **kargs):
59 mkQApp()
60 self.win = QtGui.QMainWindow()
61 PlotWidget.__init__(self, **kargs)
62 self.win.setCentralWidget(self)
63 for m in ['resize']:
64 setattr(self, m, getattr(self.win, m))
65 if title is not None:
66 self.win.setWindowTitle(title)
67 self.win.show()
68
69
70 class ImageWindow(ImageView):
71 """
72 (deprecated; use ImageView instead)
73 """
74 def __init__(self, *args, **kargs):
75 mkQApp()
76 self.win = QtGui.QMainWindow()
77 self.win.resize(800,600)
78 if 'title' in kargs:
79 self.win.setWindowTitle(kargs['title'])
80 del kargs['title']
81 ImageView.__init__(self, self.win)
82 if len(args) > 0 or len(kargs) > 0:
83 self.setImage(*args, **kargs)
84 self.win.setCentralWidget(self)
85 for m in ['resize']:
86 setattr(self, m, getattr(self.win, m))
87 #for m in ['setImage', 'autoRange', 'addItem', 'removeItem', 'blackLevel', 'whiteLevel', 'imageItem']:
88 #setattr(self, m, getattr(self.cw, m))
89 self.win.show()
90
[end of pyqtgraph/graphicsWindows.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/pyqtgraph/graphicsWindows.py b/pyqtgraph/graphicsWindows.py
--- a/pyqtgraph/graphicsWindows.py
+++ b/pyqtgraph/graphicsWindows.py
@@ -45,10 +45,7 @@
self.show()
def __getattr__(self, attr):
- if hasattr(self.cw, attr):
- return getattr(self.cw, attr)
- else:
- raise NameError(attr)
+ return getattr(self.cw, attr)
class PlotWindow(PlotWidget):
| {"golden_diff": "diff --git a/pyqtgraph/graphicsWindows.py b/pyqtgraph/graphicsWindows.py\n--- a/pyqtgraph/graphicsWindows.py\n+++ b/pyqtgraph/graphicsWindows.py\n@@ -45,10 +45,7 @@\n self.show()\n \n def __getattr__(self, attr):\n- if hasattr(self.cw, attr):\n- return getattr(self.cw, attr)\n- else:\n- raise NameError(attr)\n+ return getattr(self.cw, attr)\n \n \n class PlotWindow(PlotWidget):\n", "issue": "PlotWidget.__getattr__ raises wrong exception type - but this has a simple fix\n`hasattr(widget, \"some_non_existing_attribute\")` raises `NameError` instead of returning `False` for instances of `PlotWidget`. I think that `PlotWidget.__getattr__` (in PlotWidget.py) should raise `AttributeError` instead of `NameError`, which would be converted correctly to `False` by `hasattr`. I believe the same holds for `TabWindow.__getattr__` (in graphicsWindows.py).\r\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\"\"\"\nDEPRECATED: The classes below are convenience classes that create a new window\ncontainting a single, specific widget. These classes are now unnecessary because\nit is possible to place any widget into its own window by simply calling its\nshow() method.\n\"\"\"\n\nfrom .Qt import QtCore, QtGui, mkQApp\nfrom .widgets.PlotWidget import *\nfrom .imageview import *\nfrom .widgets.GraphicsLayoutWidget import GraphicsLayoutWidget\nfrom .widgets.GraphicsView import GraphicsView\n\n\nclass GraphicsWindow(GraphicsLayoutWidget):\n \"\"\"\n (deprecated; use GraphicsLayoutWidget instead)\n \n Convenience subclass of :class:`GraphicsLayoutWidget \n <pyqtgraph.GraphicsLayoutWidget>`. This class is intended for use from \n the interactive python prompt.\n \"\"\"\n def __init__(self, title=None, size=(800,600), **kargs):\n mkQApp()\n GraphicsLayoutWidget.__init__(self, **kargs)\n self.resize(*size)\n if title is not None:\n self.setWindowTitle(title)\n self.show()\n \n\nclass TabWindow(QtGui.QMainWindow):\n \"\"\"\n (deprecated)\n \"\"\"\n def __init__(self, title=None, size=(800,600)):\n mkQApp()\n QtGui.QMainWindow.__init__(self)\n self.resize(*size)\n self.cw = QtGui.QTabWidget()\n self.setCentralWidget(self.cw)\n if title is not None:\n self.setWindowTitle(title)\n self.show()\n \n def __getattr__(self, attr):\n if hasattr(self.cw, attr):\n return getattr(self.cw, attr)\n else:\n raise NameError(attr)\n \n\nclass PlotWindow(PlotWidget):\n \"\"\"\n (deprecated; use PlotWidget instead)\n \"\"\"\n def __init__(self, title=None, **kargs):\n mkQApp()\n self.win = QtGui.QMainWindow()\n PlotWidget.__init__(self, **kargs)\n self.win.setCentralWidget(self)\n for m in ['resize']:\n setattr(self, m, getattr(self.win, m))\n if title is not None:\n self.win.setWindowTitle(title)\n self.win.show()\n\n\nclass ImageWindow(ImageView):\n \"\"\"\n (deprecated; use ImageView instead)\n \"\"\"\n def __init__(self, *args, **kargs):\n mkQApp()\n self.win = QtGui.QMainWindow()\n self.win.resize(800,600)\n if 'title' in kargs:\n self.win.setWindowTitle(kargs['title'])\n del kargs['title']\n ImageView.__init__(self, self.win)\n if len(args) > 0 or len(kargs) > 0:\n self.setImage(*args, **kargs)\n self.win.setCentralWidget(self)\n for m in ['resize']:\n setattr(self, m, getattr(self.win, m))\n #for m in ['setImage', 'autoRange', 'addItem', 'removeItem', 'blackLevel', 'whiteLevel', 'imageItem']:\n #setattr(self, m, getattr(self.cw, m))\n self.win.show()\n", "path": "pyqtgraph/graphicsWindows.py"}]} | 1,486 | 118 |
gh_patches_debug_10360 | rasdani/github-patches | git_diff | ckan__ckan-624
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Tag pages still use old templates
/tag
</issue>
<code>
[start of ckan/controllers/tag.py]
1 from pylons.i18n import _
2 from pylons import request, c
3
4 import ckan.logic as logic
5 import ckan.model as model
6 import ckan.lib.base as base
7 import ckan.lib.helpers as h
8
9
10 LIMIT = 25
11
12
13 class TagController(base.BaseController):
14
15 def __before__(self, action, **env):
16 base.BaseController.__before__(self, action, **env)
17 try:
18 context = {'model': model, 'user': c.user or c.author}
19 logic.check_access('site_read', context)
20 except logic.NotAuthorized:
21 base.abort(401, _('Not authorized to see this page'))
22
23 def index(self):
24 c.q = request.params.get('q', '')
25
26 context = {'model': model, 'session': model.Session,
27 'user': c.user or c.author, 'for_view': True}
28
29 data_dict = {'all_fields': True}
30
31 if c.q:
32 page = int(request.params.get('page', 1))
33 data_dict['q'] = c.q
34 data_dict['limit'] = LIMIT
35 data_dict['offset'] = (page - 1) * LIMIT
36 data_dict['return_objects'] = True
37
38 results = logic.get_action('tag_list')(context, data_dict)
39
40 if c.q:
41 c.page = h.Page(
42 collection=results,
43 page=page,
44 item_count=len(results),
45 items_per_page=LIMIT
46 )
47 c.page.items = results
48 else:
49 c.page = h.AlphaPage(
50 collection=results,
51 page=request.params.get('page', 'A'),
52 alpha_attribute='name',
53 other_text=_('Other'),
54 )
55
56 return base.render('tag/index.html')
57
58 def read(self, id):
59 context = {'model': model, 'session': model.Session,
60 'user': c.user or c.author, 'for_view': True}
61
62 data_dict = {'id': id}
63 try:
64 c.tag = logic.get_action('tag_show')(context, data_dict)
65 except logic.NotFound:
66 base.abort(404, _('Tag not found'))
67
68 return base.render('tag/read.html')
69
[end of ckan/controllers/tag.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/ckan/controllers/tag.py b/ckan/controllers/tag.py
--- a/ckan/controllers/tag.py
+++ b/ckan/controllers/tag.py
@@ -1,5 +1,5 @@
from pylons.i18n import _
-from pylons import request, c
+from pylons import request, c, config
import ckan.logic as logic
import ckan.model as model
@@ -65,4 +65,7 @@
except logic.NotFound:
base.abort(404, _('Tag not found'))
- return base.render('tag/read.html')
+ if h.asbool(config.get('ckan.legacy_templates', False)):
+ return base.render('tag/read.html')
+ else:
+ h.redirect_to(controller='package', action='search', tags=c.tag.get('name'))
| {"golden_diff": "diff --git a/ckan/controllers/tag.py b/ckan/controllers/tag.py\n--- a/ckan/controllers/tag.py\n+++ b/ckan/controllers/tag.py\n@@ -1,5 +1,5 @@\n from pylons.i18n import _\n-from pylons import request, c\n+from pylons import request, c, config\n \n import ckan.logic as logic\n import ckan.model as model\n@@ -65,4 +65,7 @@\n except logic.NotFound:\n base.abort(404, _('Tag not found'))\n \n- return base.render('tag/read.html')\n+ if h.asbool(config.get('ckan.legacy_templates', False)):\n+ return base.render('tag/read.html')\n+ else:\n+ h.redirect_to(controller='package', action='search', tags=c.tag.get('name'))\n", "issue": "Tag pages still use old templates\n/tag\n\n", "before_files": [{"content": "from pylons.i18n import _\nfrom pylons import request, c\n\nimport ckan.logic as logic\nimport ckan.model as model\nimport ckan.lib.base as base\nimport ckan.lib.helpers as h\n\n\nLIMIT = 25\n\n\nclass TagController(base.BaseController):\n\n def __before__(self, action, **env):\n base.BaseController.__before__(self, action, **env)\n try:\n context = {'model': model, 'user': c.user or c.author}\n logic.check_access('site_read', context)\n except logic.NotAuthorized:\n base.abort(401, _('Not authorized to see this page'))\n\n def index(self):\n c.q = request.params.get('q', '')\n\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True}\n\n data_dict = {'all_fields': True}\n\n if c.q:\n page = int(request.params.get('page', 1))\n data_dict['q'] = c.q\n data_dict['limit'] = LIMIT\n data_dict['offset'] = (page - 1) * LIMIT\n data_dict['return_objects'] = True\n\n results = logic.get_action('tag_list')(context, data_dict)\n\n if c.q:\n c.page = h.Page(\n collection=results,\n page=page,\n item_count=len(results),\n items_per_page=LIMIT\n )\n c.page.items = results\n else:\n c.page = h.AlphaPage(\n collection=results,\n page=request.params.get('page', 'A'),\n alpha_attribute='name',\n other_text=_('Other'),\n )\n\n return base.render('tag/index.html')\n\n def read(self, id):\n context = {'model': model, 'session': model.Session,\n 'user': c.user or c.author, 'for_view': True}\n\n data_dict = {'id': id}\n try:\n c.tag = logic.get_action('tag_show')(context, data_dict)\n except logic.NotFound:\n base.abort(404, _('Tag not found'))\n\n return base.render('tag/read.html')\n", "path": "ckan/controllers/tag.py"}]} | 1,148 | 181 |
gh_patches_debug_11659 | rasdani/github-patches | git_diff | gratipay__gratipay.com-4464
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
NPM sync is broken
https://gratipay.slack.com/archives/C36LJJF9V/p1494580201702422
</issue>
<code>
[start of gratipay/sync_npm.py]
1 # -*- coding: utf-8 -*-
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import requests
5 from couchdb import Database
6
7 from gratipay.models.package import NPM, Package
8
9
10 REGISTRY_URL = 'https://replicate.npmjs.com/'
11
12
13 def get_last_seq(db):
14 return db.one('SELECT npm_last_seq FROM worker_coordination')
15
16
17 def production_change_stream(seq):
18 """Given a sequence number in the npm registry change stream, start
19 streaming from there!
20 """
21 return Database(REGISTRY_URL).changes(feed='continuous', include_docs=True, since=seq)
22
23
24 def process_doc(doc):
25 """Return a smoothed-out doc, or None if it's not a package doc, meaning
26 there's no name key and it's probably a design doc, per:
27
28 https://github.com/npm/registry/blob/aef8a275/docs/follower.md#clean-up
29
30 """
31 if 'name' not in doc:
32 return None
33 name = doc['name']
34 description = doc.get('description', '')
35 emails = [e for e in [m.get('email') for m in doc.get('maintainers', [])] if e.strip()]
36 return {'name': name, 'description': description, 'emails': sorted(set(emails))}
37
38
39 def consume_change_stream(stream, db):
40 """Given an iterable of CouchDB change notifications and a
41 :py:class:`~GratipayDB`, read from the stream and write to the db.
42
43 The npm registry is a CouchDB app, which means we get a change stream from
44 it that allows us to follow registry updates in near-realtime. Our strategy
45 here is to maintain open connections to both the registry and our own
46 database, and write as we read.
47
48 """
49 with db.get_connection() as connection:
50 for change in stream:
51
52 # Decide what to do.
53 if change.get('deleted'):
54 package = Package.from_names(NPM, change['id'])
55 assert package is not None # right?
56 op, kw = package.delete, {}
57 else:
58 op = Package.upsert
59 kw = process_doc(change['doc'])
60 if not kw:
61 continue
62 kw['package_manager'] = NPM
63
64 # Do it.
65 cursor = connection.cursor()
66 kw['cursor'] = cursor
67 op(**kw)
68 cursor.run('UPDATE worker_coordination SET npm_last_seq=%(seq)s', change)
69 connection.commit()
70
71
72 def check(db, _print=print):
73 ours = db.one('SELECT npm_last_seq FROM worker_coordination')
74 theirs = int(requests.get(REGISTRY_URL).json()['update_seq'])
75 _print("count#npm-sync-lag={}".format(theirs - ours))
76
[end of gratipay/sync_npm.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/gratipay/sync_npm.py b/gratipay/sync_npm.py
--- a/gratipay/sync_npm.py
+++ b/gratipay/sync_npm.py
@@ -52,7 +52,11 @@
# Decide what to do.
if change.get('deleted'):
package = Package.from_names(NPM, change['id'])
- assert package is not None # right?
+ if not package:
+ # As a result of CouchDB's compaction algorithm, we might
+ # receive 'deleted' events for docs even if we haven't seen
+ # the corresponding events for when the doc was created
+ continue
op, kw = package.delete, {}
else:
op = Package.upsert
| {"golden_diff": "diff --git a/gratipay/sync_npm.py b/gratipay/sync_npm.py\n--- a/gratipay/sync_npm.py\n+++ b/gratipay/sync_npm.py\n@@ -52,7 +52,11 @@\n # Decide what to do.\n if change.get('deleted'):\n package = Package.from_names(NPM, change['id'])\n- assert package is not None # right?\n+ if not package:\n+ # As a result of CouchDB's compaction algorithm, we might\n+ # receive 'deleted' events for docs even if we haven't seen\n+ # the corresponding events for when the doc was created\n+ continue\n op, kw = package.delete, {}\n else:\n op = Package.upsert\n", "issue": "NPM sync is broken\nhttps://gratipay.slack.com/archives/C36LJJF9V/p1494580201702422\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport requests\nfrom couchdb import Database\n\nfrom gratipay.models.package import NPM, Package\n\n\nREGISTRY_URL = 'https://replicate.npmjs.com/'\n\n\ndef get_last_seq(db):\n return db.one('SELECT npm_last_seq FROM worker_coordination')\n\n\ndef production_change_stream(seq):\n \"\"\"Given a sequence number in the npm registry change stream, start\n streaming from there!\n \"\"\"\n return Database(REGISTRY_URL).changes(feed='continuous', include_docs=True, since=seq)\n\n\ndef process_doc(doc):\n \"\"\"Return a smoothed-out doc, or None if it's not a package doc, meaning\n there's no name key and it's probably a design doc, per:\n\n https://github.com/npm/registry/blob/aef8a275/docs/follower.md#clean-up\n\n \"\"\"\n if 'name' not in doc:\n return None\n name = doc['name']\n description = doc.get('description', '')\n emails = [e for e in [m.get('email') for m in doc.get('maintainers', [])] if e.strip()]\n return {'name': name, 'description': description, 'emails': sorted(set(emails))}\n\n\ndef consume_change_stream(stream, db):\n \"\"\"Given an iterable of CouchDB change notifications and a\n :py:class:`~GratipayDB`, read from the stream and write to the db.\n\n The npm registry is a CouchDB app, which means we get a change stream from\n it that allows us to follow registry updates in near-realtime. Our strategy\n here is to maintain open connections to both the registry and our own\n database, and write as we read.\n\n \"\"\"\n with db.get_connection() as connection:\n for change in stream:\n\n # Decide what to do.\n if change.get('deleted'):\n package = Package.from_names(NPM, change['id'])\n assert package is not None # right?\n op, kw = package.delete, {}\n else:\n op = Package.upsert\n kw = process_doc(change['doc'])\n if not kw:\n continue\n kw['package_manager'] = NPM\n\n # Do it.\n cursor = connection.cursor()\n kw['cursor'] = cursor\n op(**kw)\n cursor.run('UPDATE worker_coordination SET npm_last_seq=%(seq)s', change)\n connection.commit()\n\n\ndef check(db, _print=print):\n ours = db.one('SELECT npm_last_seq FROM worker_coordination')\n theirs = int(requests.get(REGISTRY_URL).json()['update_seq'])\n _print(\"count#npm-sync-lag={}\".format(theirs - ours))\n", "path": "gratipay/sync_npm.py"}]} | 1,322 | 172 |
gh_patches_debug_26298 | rasdani/github-patches | git_diff | alltheplaces__alltheplaces-3326
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Spider wsp is broken
During the global build at 2021-09-01-14-42-16, spider **wsp** failed with **0 features** and **0 errors**.
Here's [the log](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/logs/wsp.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/wsp.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/wsp.geojson))
</issue>
<code>
[start of locations/spiders/wsp.py]
1 # -*- coding: utf-8 -*-
2 import scrapy
3 import json
4 from locations.items import GeojsonPointItem
5
6
7 class wsp(scrapy.Spider):
8 name = "wsp"
9 item_attributes = {'brand': "wsp"}
10 allowed_domains = ["www.wsp.com"]
11 start_urls = (
12 'https://www.wsp.com',
13 )
14
15 def parse(self, response):
16 url = 'https://www.wsp.com/api/sitecore/Maps/GetMapPoints'
17
18 formdata = {
19 'itemId': '{2F436202-D2B9-4F3D-8ECC-5E0BCA533888}',
20 }
21
22 yield scrapy.http.FormRequest(
23 url,
24 self.parse_store,
25 method='POST',
26 formdata=formdata,
27 )
28
29 def parse_store(self, response):
30 office_data = json.loads(response.body_as_unicode())
31
32 for office in office_data:
33 try:
34 properties = {
35 'ref': office["ID"]["Guid"],
36 'addr_full': office["Address"],
37 'lat': office["Location"].split(",")[0],
38 'lon': office["Location"].split(",")[1],
39 'name': office["Name"],
40 'website': office["MapPointURL"]
41 }
42 except IndexError:
43 continue
44
45 yield GeojsonPointItem(**properties)
[end of locations/spiders/wsp.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/locations/spiders/wsp.py b/locations/spiders/wsp.py
--- a/locations/spiders/wsp.py
+++ b/locations/spiders/wsp.py
@@ -9,7 +9,7 @@
item_attributes = {'brand': "wsp"}
allowed_domains = ["www.wsp.com"]
start_urls = (
- 'https://www.wsp.com',
+ 'https://www.wsp.com/',
)
def parse(self, response):
@@ -24,10 +24,10 @@
self.parse_store,
method='POST',
formdata=formdata,
- )
+ )
def parse_store(self, response):
- office_data = json.loads(response.body_as_unicode())
+ office_data = json.loads(response.text)
for office in office_data:
try:
@@ -37,9 +37,9 @@
'lat': office["Location"].split(",")[0],
'lon': office["Location"].split(",")[1],
'name': office["Name"],
- 'website': office["MapPointURL"]
+ 'website': response.urljoin(office["MapPointURL"]),
}
except IndexError:
continue
- yield GeojsonPointItem(**properties)
\ No newline at end of file
+ yield GeojsonPointItem(**properties)
| {"golden_diff": "diff --git a/locations/spiders/wsp.py b/locations/spiders/wsp.py\n--- a/locations/spiders/wsp.py\n+++ b/locations/spiders/wsp.py\n@@ -9,7 +9,7 @@\n item_attributes = {'brand': \"wsp\"}\n allowed_domains = [\"www.wsp.com\"]\n start_urls = (\n- 'https://www.wsp.com',\n+ 'https://www.wsp.com/',\n )\n \n def parse(self, response):\n@@ -24,10 +24,10 @@\n self.parse_store,\n method='POST',\n formdata=formdata,\n- )\n+ )\n \n def parse_store(self, response):\n- office_data = json.loads(response.body_as_unicode())\n+ office_data = json.loads(response.text)\n \n for office in office_data:\n try:\n@@ -37,9 +37,9 @@\n 'lat': office[\"Location\"].split(\",\")[0],\n 'lon': office[\"Location\"].split(\",\")[1],\n 'name': office[\"Name\"],\n- 'website': office[\"MapPointURL\"]\n+ 'website': response.urljoin(office[\"MapPointURL\"]),\n }\n except IndexError:\n continue\n \n- yield GeojsonPointItem(**properties)\n\\ No newline at end of file\n+ yield GeojsonPointItem(**properties)\n", "issue": "Spider wsp is broken\nDuring the global build at 2021-09-01-14-42-16, spider **wsp** failed with **0 features** and **0 errors**.\n\nHere's [the log](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/logs/wsp.txt) and [the output](https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/wsp.geojson) ([on a map](https://data.alltheplaces.xyz/map.html?show=https://data.alltheplaces.xyz/runs/2021-09-01-14-42-16/output/wsp.geojson))\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\nimport scrapy\nimport json\nfrom locations.items import GeojsonPointItem\n\n\nclass wsp(scrapy.Spider):\n name = \"wsp\"\n item_attributes = {'brand': \"wsp\"}\n allowed_domains = [\"www.wsp.com\"]\n start_urls = (\n 'https://www.wsp.com',\n )\n\n def parse(self, response):\n url = 'https://www.wsp.com/api/sitecore/Maps/GetMapPoints'\n\n formdata = {\n 'itemId': '{2F436202-D2B9-4F3D-8ECC-5E0BCA533888}',\n }\n\n yield scrapy.http.FormRequest(\n url,\n self.parse_store,\n method='POST',\n formdata=formdata,\n )\n\n def parse_store(self, response):\n office_data = json.loads(response.body_as_unicode())\n\n for office in office_data:\n try:\n properties = {\n 'ref': office[\"ID\"][\"Guid\"],\n 'addr_full': office[\"Address\"],\n 'lat': office[\"Location\"].split(\",\")[0],\n 'lon': office[\"Location\"].split(\",\")[1],\n 'name': office[\"Name\"],\n 'website': office[\"MapPointURL\"]\n }\n except IndexError:\n continue\n\n yield GeojsonPointItem(**properties)", "path": "locations/spiders/wsp.py"}]} | 1,094 | 294 |
gh_patches_debug_10115 | rasdani/github-patches | git_diff | Qiskit__qiskit-11782
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Round durations in `GenericBackendV2`
### Summary
This PR makes sure that the conversion of `GenericBackendV2` instruction durations to `dt` is exact to avoid user warnings during scheduling of type:
`UserWarning: Duration is rounded to 616 [dt] = 1.367520e-07 [s] from 1.366887e-07 [s]`
Given that the durations are sampled randomly, and the rounded duration is the one used in the scheduling passes, we might as well make sure in advance that the conversion from seconds to dt will be exact and doesn't raise warnings.
### Details and comments
I am not sure this qualifies as a bugfix but I think it improves the readability of the test logs. For example, for `test_scheduling_backend_v2` in `test/python/compiler/test_transpiler.py`. Before:
```
/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 986 [dt] = 2.188920e-07 [s] from 2.189841e-07 [s]
warnings.warn(
/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 2740 [dt] = 6.082800e-07 [s] from 6.083383e-07 [s]
warnings.warn(
/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 2697 [dt] = 5.987340e-07 [s] from 5.988312e-07 [s]
warnings.warn(
/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 178 [dt] = 3.951600e-08 [s] from 3.956636e-08 [s]
warnings.warn(
.
----------------------------------------------------------------------
Ran 1 test in 0.548s
OK
```
After:
```
.
----------------------------------------------------------------------
Ran 1 test in 0.506s
OK
```
</issue>
<code>
[start of qiskit/circuit/duration.py]
1 # This code is part of Qiskit.
2 #
3 # (C) Copyright IBM 2020.
4 #
5 # This code is licensed under the Apache License, Version 2.0. You may
6 # obtain a copy of this license in the LICENSE.txt file in the root directory
7 # of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.
8 #
9 # Any modifications or derivative works of this code must retain this
10 # copyright notice, and modified files need to carry a notice indicating
11 # that they have been altered from the originals.
12
13 """
14 Utilities for handling duration of a circuit instruction.
15 """
16 import warnings
17
18 from qiskit.circuit import QuantumCircuit
19 from qiskit.circuit.exceptions import CircuitError
20 from qiskit.utils.units import apply_prefix
21
22
23 def duration_in_dt(duration_in_sec: float, dt_in_sec: float) -> int:
24 """
25 Return duration in dt.
26
27 Args:
28 duration_in_sec: duration [s] to be converted.
29 dt_in_sec: duration of dt in seconds used for conversion.
30
31 Returns:
32 Duration in dt.
33 """
34 res = round(duration_in_sec / dt_in_sec)
35 rounding_error = abs(duration_in_sec - res * dt_in_sec)
36 if rounding_error > 1e-15:
37 warnings.warn(
38 "Duration is rounded to %d [dt] = %e [s] from %e [s]"
39 % (res, res * dt_in_sec, duration_in_sec),
40 UserWarning,
41 )
42 return res
43
44
45 def convert_durations_to_dt(qc: QuantumCircuit, dt_in_sec: float, inplace=True):
46 """Convert all the durations in SI (seconds) into those in dt.
47
48 Returns a new circuit if `inplace=False`.
49
50 Parameters:
51 qc (QuantumCircuit): Duration of dt in seconds used for conversion.
52 dt_in_sec (float): Duration of dt in seconds used for conversion.
53 inplace (bool): All durations are converted inplace or return new circuit.
54
55 Returns:
56 QuantumCircuit: Converted circuit if `inplace = False`, otherwise None.
57
58 Raises:
59 CircuitError: if fail to convert durations.
60 """
61 if inplace:
62 circ = qc
63 else:
64 circ = qc.copy()
65
66 for instruction in circ.data:
67 operation = instruction.operation
68 if operation.unit == "dt" or operation.duration is None:
69 continue
70
71 if not operation.unit.endswith("s"):
72 raise CircuitError(f"Invalid time unit: '{operation.unit}'")
73
74 duration = operation.duration
75 if operation.unit != "s":
76 duration = apply_prefix(duration, operation.unit)
77
78 operation.duration = duration_in_dt(duration, dt_in_sec)
79 operation.unit = "dt"
80
81 if circ.duration is not None:
82 circ.duration = duration_in_dt(circ.duration, dt_in_sec)
83 circ.unit = "dt"
84
85 if not inplace:
86 return circ
87 else:
88 return None
89
[end of qiskit/circuit/duration.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/qiskit/circuit/duration.py b/qiskit/circuit/duration.py
--- a/qiskit/circuit/duration.py
+++ b/qiskit/circuit/duration.py
@@ -78,8 +78,15 @@
operation.duration = duration_in_dt(duration, dt_in_sec)
operation.unit = "dt"
- if circ.duration is not None:
- circ.duration = duration_in_dt(circ.duration, dt_in_sec)
+ if circ.duration is not None and circ.unit != "dt":
+ if not circ.unit.endswith("s"):
+ raise CircuitError(f"Invalid time unit: '{circ.unit}'")
+
+ duration = circ.duration
+ if circ.unit != "s":
+ duration = apply_prefix(duration, circ.unit)
+
+ circ.duration = duration_in_dt(duration, dt_in_sec)
circ.unit = "dt"
if not inplace:
| {"golden_diff": "diff --git a/qiskit/circuit/duration.py b/qiskit/circuit/duration.py\n--- a/qiskit/circuit/duration.py\n+++ b/qiskit/circuit/duration.py\n@@ -78,8 +78,15 @@\n operation.duration = duration_in_dt(duration, dt_in_sec)\n operation.unit = \"dt\"\n \n- if circ.duration is not None:\n- circ.duration = duration_in_dt(circ.duration, dt_in_sec)\n+ if circ.duration is not None and circ.unit != \"dt\":\n+ if not circ.unit.endswith(\"s\"):\n+ raise CircuitError(f\"Invalid time unit: '{circ.unit}'\")\n+\n+ duration = circ.duration\n+ if circ.unit != \"s\":\n+ duration = apply_prefix(duration, circ.unit)\n+\n+ circ.duration = duration_in_dt(duration, dt_in_sec)\n circ.unit = \"dt\"\n \n if not inplace:\n", "issue": "Round durations in `GenericBackendV2`\n<!--\r\n\u26a0\ufe0f If you do not respect this template, your pull request will be closed.\r\n\u26a0\ufe0f Your pull request title should be short detailed and understandable for all.\r\n\u26a0\ufe0f Also, please add a release note file using reno if the change needs to be\r\n documented in the release notes.\r\n\u26a0\ufe0f If your pull request fixes an open issue, please link to the issue.\r\n\r\n- [ ] I have added the tests to cover my changes.\r\n- [ ] I have updated the documentation accordingly.\r\n- [ ] I have read the CONTRIBUTING document.\r\n-->\r\n\r\n### Summary\r\nThis PR makes sure that the conversion of `GenericBackendV2` instruction durations to `dt` is exact to avoid user warnings during scheduling of type:\r\n\r\n`UserWarning: Duration is rounded to 616 [dt] = 1.367520e-07 [s] from 1.366887e-07 [s]`\r\n\r\nGiven that the durations are sampled randomly, and the rounded duration is the one used in the scheduling passes, we might as well make sure in advance that the conversion from seconds to dt will be exact and doesn't raise warnings.\r\n\r\n### Details and comments\r\nI am not sure this qualifies as a bugfix but I think it improves the readability of the test logs. For example, for `test_scheduling_backend_v2` in `test/python/compiler/test_transpiler.py`. Before:\r\n\r\n```\r\n/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 986 [dt] = 2.188920e-07 [s] from 2.189841e-07 [s]\r\n warnings.warn(\r\n/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 2740 [dt] = 6.082800e-07 [s] from 6.083383e-07 [s]\r\n warnings.warn(\r\n/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 2697 [dt] = 5.987340e-07 [s] from 5.988312e-07 [s]\r\n warnings.warn(\r\n/Users/ept/qiskit_workspace/qiskit/qiskit/circuit/duration.py:37: UserWarning: Duration is rounded to 178 [dt] = 3.951600e-08 [s] from 3.956636e-08 [s]\r\n warnings.warn(\r\n.\r\n----------------------------------------------------------------------\r\nRan 1 test in 0.548s\r\n\r\nOK\r\n```\r\n\r\nAfter:\r\n\r\n```\r\n.\r\n----------------------------------------------------------------------\r\nRan 1 test in 0.506s\r\n\r\nOK\r\n```\r\n\n", "before_files": [{"content": "# This code is part of Qiskit.\n#\n# (C) Copyright IBM 2020.\n#\n# This code is licensed under the Apache License, Version 2.0. 
You may\n# obtain a copy of this license in the LICENSE.txt file in the root directory\n# of this source tree or at http://www.apache.org/licenses/LICENSE-2.0.\n#\n# Any modifications or derivative works of this code must retain this\n# copyright notice, and modified files need to carry a notice indicating\n# that they have been altered from the originals.\n\n\"\"\"\nUtilities for handling duration of a circuit instruction.\n\"\"\"\nimport warnings\n\nfrom qiskit.circuit import QuantumCircuit\nfrom qiskit.circuit.exceptions import CircuitError\nfrom qiskit.utils.units import apply_prefix\n\n\ndef duration_in_dt(duration_in_sec: float, dt_in_sec: float) -> int:\n \"\"\"\n Return duration in dt.\n\n Args:\n duration_in_sec: duration [s] to be converted.\n dt_in_sec: duration of dt in seconds used for conversion.\n\n Returns:\n Duration in dt.\n \"\"\"\n res = round(duration_in_sec / dt_in_sec)\n rounding_error = abs(duration_in_sec - res * dt_in_sec)\n if rounding_error > 1e-15:\n warnings.warn(\n \"Duration is rounded to %d [dt] = %e [s] from %e [s]\"\n % (res, res * dt_in_sec, duration_in_sec),\n UserWarning,\n )\n return res\n\n\ndef convert_durations_to_dt(qc: QuantumCircuit, dt_in_sec: float, inplace=True):\n \"\"\"Convert all the durations in SI (seconds) into those in dt.\n\n Returns a new circuit if `inplace=False`.\n\n Parameters:\n qc (QuantumCircuit): Duration of dt in seconds used for conversion.\n dt_in_sec (float): Duration of dt in seconds used for conversion.\n inplace (bool): All durations are converted inplace or return new circuit.\n\n Returns:\n QuantumCircuit: Converted circuit if `inplace = False`, otherwise None.\n\n Raises:\n CircuitError: if fail to convert durations.\n \"\"\"\n if inplace:\n circ = qc\n else:\n circ = qc.copy()\n\n for instruction in circ.data:\n operation = instruction.operation\n if operation.unit == \"dt\" or operation.duration is None:\n continue\n\n if not operation.unit.endswith(\"s\"):\n raise CircuitError(f\"Invalid time unit: '{operation.unit}'\")\n\n duration = operation.duration\n if operation.unit != \"s\":\n duration = apply_prefix(duration, operation.unit)\n\n operation.duration = duration_in_dt(duration, dt_in_sec)\n operation.unit = \"dt\"\n\n if circ.duration is not None:\n circ.duration = duration_in_dt(circ.duration, dt_in_sec)\n circ.unit = \"dt\"\n\n if not inplace:\n return circ\n else:\n return None\n", "path": "qiskit/circuit/duration.py"}]} | 1,998 | 201 |
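The warning quoted in the issue comes from the check in `duration_in_dt` above: a duration in seconds that is not an integer multiple of `dt` leaves a residual beyond `1e-15`. Reproducing the exact numbers from the issue (the `dt` value below is inferred from 616 dt = 1.367520e-07 s):

```python
dt = 2.22e-10                      # seconds per dt sample (inferred, see above)
duration = 1.366887e-07            # the sampled duration from the warning

res = round(duration / dt)         # -> 616
error = abs(duration - res * dt)   # ~6.3e-11, far above the 1e-15 tolerance
print(res, error)

# Snapping sampled durations to whole dt ticks up front, as the linked PR
# does for GenericBackendV2, makes the later conversion exact:
snapped = round(duration / dt) * dt
assert abs(snapped - round(snapped / dt) * dt) < 1e-15
```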
gh_patches_debug_7618 | rasdani/github-patches | git_diff | localstack__localstack-8398
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Unable to change ContentBasedDeduplication attribute on existing queue
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
If I create a queue and try to change its `ContentBasedDeduplication` attribute, I see this error:
` An error occurred (InvalidAttributeName) when calling the SetQueueAttributes operation: Unknown Attribute ContentBasedDeduplication.`
### Expected Behavior
I should be able to set `ContentBasedDeduplication` from `true` to `false` on an existing queue. It appears to work on AWS.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
aws sqs create-queue --queue-name test1.fifo --endpoint-url http://localhost:4566/ --attributes FifoQueue=true,ContentBasedDeduplication=true
{
"QueueUrl": "http://localhost:4566/000000000000/test1.fifo"
}
aws sqs get-queue-attributes --endpoint-url http://localhost:4566/ --queue-url http://localhost:4566/000000000000/test1.fifo --attribute-names '["ContentBasedDeduplication"]'
{
"Attributes": {
"FifoQueue": "true,
"ContentBasedDeduplication": "true"
}
}
aws sqs set-queue-attributes --endpoint-url http://localhost:4566/ --queue-url http://localhost:4566/000000000000/test1.fifo --attributes ContentBasedDeduplication=false
An error occurred (InvalidAttributeName) when calling the SetQueueAttributes operation: Unknown Attribute ContentBasedDeduplication.
```
### Environment
```markdown
- OS: MacOs Ventura 13.3.1 (a)
- LocalStack: 2.1.0
```
### Anything else?
_No response_
</issue>
<code>
[start of localstack/services/sqs/constants.py]
1 # Valid unicode values: #x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF
2 # https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html
3 from localstack.aws.api.sqs import QueueAttributeName
4
5 MSG_CONTENT_REGEX = "^[\u0009\u000A\u000D\u0020-\uD7FF\uE000-\uFFFD\U00010000-\U0010FFFF]*$"
6
7 # https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html
8 # While not documented, umlauts seem to be allowed
9 ATTR_NAME_CHAR_REGEX = "^[\u00C0-\u017Fa-zA-Z0-9_.-]*$"
10 ATTR_NAME_PREFIX_SUFFIX_REGEX = r"^(?!(aws\.|amazon\.|\.)).*(?<!\.)$"
11 ATTR_TYPE_REGEX = "^(String|Number|Binary).*$"
12 FIFO_MSG_REGEX = "^[0-9a-zA-z!\"#$%&'()*+,./:;<=>?@[\\]^_`{|}~-]*$"
13
14 DEDUPLICATION_INTERVAL_IN_SEC = 5 * 60
15
16 # When you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.
17 # see https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteQueue.html
18 RECENTLY_DELETED_TIMEOUT = 60
19
20 # the default maximum message size in SQS
21 DEFAULT_MAXIMUM_MESSAGE_SIZE = 262144
22 INTERNAL_QUEUE_ATTRIBUTES = [
23 # these attributes cannot be changed by set_queue_attributes and should
24 # therefore be ignored when comparing queue attributes for create_queue
25 # 'FifoQueue' is handled on a per_queue basis
26 QueueAttributeName.ApproximateNumberOfMessages,
27 QueueAttributeName.ApproximateNumberOfMessagesDelayed,
28 QueueAttributeName.ApproximateNumberOfMessagesNotVisible,
29 QueueAttributeName.ContentBasedDeduplication,
30 QueueAttributeName.CreatedTimestamp,
31 QueueAttributeName.LastModifiedTimestamp,
32 QueueAttributeName.QueueArn,
33 ]
34
[end of localstack/services/sqs/constants.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/localstack/services/sqs/constants.py b/localstack/services/sqs/constants.py
--- a/localstack/services/sqs/constants.py
+++ b/localstack/services/sqs/constants.py
@@ -26,7 +26,6 @@
QueueAttributeName.ApproximateNumberOfMessages,
QueueAttributeName.ApproximateNumberOfMessagesDelayed,
QueueAttributeName.ApproximateNumberOfMessagesNotVisible,
- QueueAttributeName.ContentBasedDeduplication,
QueueAttributeName.CreatedTimestamp,
QueueAttributeName.LastModifiedTimestamp,
QueueAttributeName.QueueArn,
| {"golden_diff": "diff --git a/localstack/services/sqs/constants.py b/localstack/services/sqs/constants.py\n--- a/localstack/services/sqs/constants.py\n+++ b/localstack/services/sqs/constants.py\n@@ -26,7 +26,6 @@\n QueueAttributeName.ApproximateNumberOfMessages,\n QueueAttributeName.ApproximateNumberOfMessagesDelayed,\n QueueAttributeName.ApproximateNumberOfMessagesNotVisible,\n- QueueAttributeName.ContentBasedDeduplication,\n QueueAttributeName.CreatedTimestamp,\n QueueAttributeName.LastModifiedTimestamp,\n QueueAttributeName.QueueArn,\n", "issue": "Unable to change ContentBasedDeduplication attribute on existing queue\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nIf I create a queue and try to change its `ContentDeduplication` attribute, I see this error:\r\n\r\n` An error occurred (InvalidAttributeName) when calling the SetQueueAttributes operation: Unknown Attribute ContentBasedDeduplication.`\r\n\r\n### Expected Behavior\r\n\r\nI should be able to set `ContentBasedDeduplication` from `true` to `false` on an existing queue. It appears to work on AWS.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n docker run localstack/localstack\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n```\r\naws sqs create-queue --queue-name test1.fifo --endpoint-url http://localhost:4566/ --attributes FifoQueue=true,ContentBasedDeduplication=true\r\n{\r\n \"QueueUrl\": \"http://localhost:4566/000000000000/test1.fifo\"\r\n}\r\n\r\n\r\naws sqs get-queue-attributes --endpoint-url http://localhost:4566/ --queue-url http://localhost:4566/000000000000/test1.fifo --attribute-names '[\"ContentBasedDeduplication\"]'\r\n{\r\n \"Attributes\": {\r\n \"FifoQueue\": \"true,\r\n \"ContentBasedDeduplication\": \"true\"\r\n }\r\n}\r\n\r\naws sqs set-queue-attributes --endpoint-url http://localhost:4566/ --queue-url http://localhost:4566/000000000000/test1.fifo --attributes ContentBasedDeduplication=false\r\n\r\nAn error occurred (InvalidAttributeName) when calling the SetQueueAttributes operation: Unknown Attribute ContentBasedDeduplication.\r\n```\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: MacOs Ventura 13.3.1 (a) \r\n- LocalStack: 2.1.0\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_\n", "before_files": [{"content": "# Valid unicode values: #x9 | #xA | #xD | #x20 to #xD7FF | #xE000 to #xFFFD | #x10000 to #x10FFFF\n# https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_SendMessage.html\nfrom localstack.aws.api.sqs import QueueAttributeName\n\nMSG_CONTENT_REGEX = \"^[\\u0009\\u000A\\u000D\\u0020-\\uD7FF\\uE000-\\uFFFD\\U00010000-\\U0010FFFF]*$\"\n\n# https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-message-metadata.html\n# While not documented, umlauts seem to be allowed\nATTR_NAME_CHAR_REGEX = \"^[\\u00C0-\\u017Fa-zA-Z0-9_.-]*$\"\nATTR_NAME_PREFIX_SUFFIX_REGEX = r\"^(?!(aws\\.|amazon\\.|\\.)).*(?<!\\.)$\"\nATTR_TYPE_REGEX = \"^(String|Number|Binary).*$\"\nFIFO_MSG_REGEX = \"^[0-9a-zA-z!\\\"#$%&'()*+,./:;<=>?@[\\\\]^_`{|}~-]*$\"\n\nDEDUPLICATION_INTERVAL_IN_SEC = 5 * 60\n\n# When you delete a queue, you must wait at least 60 seconds before creating a queue with the same name.\n# see 
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/APIReference/API_DeleteQueue.html\nRECENTLY_DELETED_TIMEOUT = 60\n\n# the default maximum message size in SQS\nDEFAULT_MAXIMUM_MESSAGE_SIZE = 262144\nINTERNAL_QUEUE_ATTRIBUTES = [\n # these attributes cannot be changed by set_queue_attributes and should\n # therefore be ignored when comparing queue attributes for create_queue\n # 'FifoQueue' is handled on a per_queue basis\n QueueAttributeName.ApproximateNumberOfMessages,\n QueueAttributeName.ApproximateNumberOfMessagesDelayed,\n QueueAttributeName.ApproximateNumberOfMessagesNotVisible,\n QueueAttributeName.ContentBasedDeduplication,\n QueueAttributeName.CreatedTimestamp,\n QueueAttributeName.LastModifiedTimestamp,\n QueueAttributeName.QueueArn,\n]\n", "path": "localstack/services/sqs/constants.py"}]} | 1,588 | 113 |
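With `ContentBasedDeduplication` no longer listed in `INTERNAL_QUEUE_ATTRIBUTES`, `SetQueueAttributes` should accept it again. A quick end-to-end check with boto3 against a local endpoint (the endpoint, region, and dummy credentials are assumptions about a default LocalStack setup on port 4566):

```python
import boto3

sqs = boto3.client(
    "sqs", endpoint_url="http://localhost:4566", region_name="us-east-1",
    aws_access_key_id="test", aws_secret_access_key="test",
)
url = sqs.create_queue(
    QueueName="test1.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=url, Attributes={"ContentBasedDeduplication": "false"},
)
attrs = sqs.get_queue_attributes(
    QueueUrl=url, AttributeNames=["ContentBasedDeduplication"],
)
print(attrs["Attributes"])  # expected: {'ContentBasedDeduplication': 'false'}
```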
gh_patches_debug_37151 | rasdani/github-patches | git_diff | conan-io__conan-10960
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
[bug] version is not set correctly when using layout
When layout is being used, the recipe version is not set correctly when using the json generator; it seems that the version is not being fetched from the package metadata when running the conan install command.
### Environment Details
* Operating System+version: macos
* Compiler+version: apple-clang 12.0
* Conan version: Conan version 1.47.0
* Python version: 3.9
### Steps to reproduce
* create a conan demo project using `conan new demo/1.0.0 --template=cmake_lib`
* create a local conan package `conan create .`
* generate deps using json generator `conan install demo/1.0.0@ -g json`
* inspect conanbuildinfo.json, version is set to null, however it should be 1.0.0
* remove the layout method from the conanfile.py and try again
* now version is set correctly
btw, it seems to be the same issue for the `description` attribute, and maybe other attributes as well.

</issue>
<code>
[start of conans/client/generators/json_generator.py]
1 import json
2
3 from conans.model import Generator
4
5
6 def serialize_cpp_info(cpp_info):
7 keys = [
8 "version",
9 "description",
10 "rootpath",
11 "sysroot",
12 "include_paths", "lib_paths", "bin_paths", "build_paths", "res_paths",
13 "libs",
14 "system_libs",
15 "defines", "cflags", "cxxflags", "sharedlinkflags", "exelinkflags",
16 "frameworks", "framework_paths", "names", "filenames",
17 "build_modules", "build_modules_paths"
18 ]
19 res = {}
20 for key in keys:
21 res[key] = getattr(cpp_info, key)
22 res["cppflags"] = cpp_info.cxxflags # Backwards compatibility
23 return res
24
25
26 def serialize_user_info(user_info):
27 res = {}
28 for key, value in user_info.items():
29 res[key] = value.vars
30 return res
31
32
33 class JsonGenerator(Generator):
34 @property
35 def filename(self):
36 return "conanbuildinfo.json"
37
38 @property
39 def content(self):
40 info = {}
41 info["deps_env_info"] = self.deps_env_info.vars
42 info["deps_user_info"] = serialize_user_info(self.deps_user_info)
43 info["dependencies"] = self.get_dependencies_info()
44 info["settings"] = self.get_settings()
45 info["options"] = self.get_options()
46 if self._user_info_build:
47 info["user_info_build"] = serialize_user_info(self._user_info_build)
48
49 return json.dumps(info, indent=2)
50
51 def get_dependencies_info(self):
52 res = []
53 for depname, cpp_info in self.deps_build_info.dependencies:
54 serialized_info = serialize_cpp_info(cpp_info)
55 serialized_info["name"] = depname
56 for cfg, cfg_cpp_info in cpp_info.configs.items():
57 serialized_info.setdefault("configs", {})[cfg] = serialize_cpp_info(cfg_cpp_info)
58 res.append(serialized_info)
59 return res
60
61 def get_settings(self):
62 settings = {}
63 for key, value in self.settings.items():
64 settings[key] = value
65 return settings
66
67 def get_options(self):
68 options = {}
69 for req in self.conanfile.requires:
70 options[req] = {}
71 for key, value in self.conanfile.options[req].items():
72 options[req][key] = value
73 return options
74
[end of conans/client/generators/json_generator.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch>
diff --git a/conans/client/generators/json_generator.py b/conans/client/generators/json_generator.py
--- a/conans/client/generators/json_generator.py
+++ b/conans/client/generators/json_generator.py
@@ -3,26 +3,6 @@
from conans.model import Generator
-def serialize_cpp_info(cpp_info):
- keys = [
- "version",
- "description",
- "rootpath",
- "sysroot",
- "include_paths", "lib_paths", "bin_paths", "build_paths", "res_paths",
- "libs",
- "system_libs",
- "defines", "cflags", "cxxflags", "sharedlinkflags", "exelinkflags",
- "frameworks", "framework_paths", "names", "filenames",
- "build_modules", "build_modules_paths"
- ]
- res = {}
- for key in keys:
- res[key] = getattr(cpp_info, key)
- res["cppflags"] = cpp_info.cxxflags # Backwards compatibility
- return res
-
-
def serialize_user_info(user_info):
res = {}
for key, value in user_info.items():
@@ -51,10 +31,10 @@
def get_dependencies_info(self):
res = []
for depname, cpp_info in self.deps_build_info.dependencies:
- serialized_info = serialize_cpp_info(cpp_info)
- serialized_info["name"] = depname
+ serialized_info = self.serialize_cpp_info(depname, cpp_info)
for cfg, cfg_cpp_info in cpp_info.configs.items():
- serialized_info.setdefault("configs", {})[cfg] = serialize_cpp_info(cfg_cpp_info)
+ serialized_info.setdefault("configs", {})[cfg] = self.serialize_cpp_info(depname,
+ cfg_cpp_info)
res.append(serialized_info)
return res
@@ -71,3 +51,31 @@
for key, value in self.conanfile.options[req].items():
options[req][key] = value
return options
+
+ def serialize_cpp_info(self, depname, cpp_info):
+ keys = [
+ "version",
+ "description",
+ "rootpath",
+ "sysroot",
+ "include_paths", "lib_paths", "bin_paths", "build_paths", "res_paths",
+ "libs",
+ "system_libs",
+ "defines", "cflags", "cxxflags", "sharedlinkflags", "exelinkflags",
+ "frameworks", "framework_paths", "names", "filenames",
+ "build_modules", "build_modules_paths"
+ ]
+ res = {}
+ for key in keys:
+ res[key] = getattr(cpp_info, key)
+ res["cppflags"] = cpp_info.cxxflags # Backwards compatibility
+ res["name"] = depname
+
+ # FIXME: trick for NewCppInfo objects when declared layout
+ try:
+ if cpp_info.version is None:
+ res["version"] = self.conanfile.dependencies.get(depname).ref.version
+ except Exception:
+ pass
+
+ return res
| {"golden_diff": "diff --git a/conans/client/generators/json_generator.py b/conans/client/generators/json_generator.py\n--- a/conans/client/generators/json_generator.py\n+++ b/conans/client/generators/json_generator.py\n@@ -3,26 +3,6 @@\n from conans.model import Generator\n \n \n-def serialize_cpp_info(cpp_info):\n- keys = [\n- \"version\",\n- \"description\",\n- \"rootpath\",\n- \"sysroot\",\n- \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\", \"res_paths\",\n- \"libs\",\n- \"system_libs\",\n- \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\", \"exelinkflags\",\n- \"frameworks\", \"framework_paths\", \"names\", \"filenames\",\n- \"build_modules\", \"build_modules_paths\"\n- ]\n- res = {}\n- for key in keys:\n- res[key] = getattr(cpp_info, key)\n- res[\"cppflags\"] = cpp_info.cxxflags # Backwards compatibility\n- return res\n-\n-\n def serialize_user_info(user_info):\n res = {}\n for key, value in user_info.items():\n@@ -51,10 +31,10 @@\n def get_dependencies_info(self):\n res = []\n for depname, cpp_info in self.deps_build_info.dependencies:\n- serialized_info = serialize_cpp_info(cpp_info)\n- serialized_info[\"name\"] = depname\n+ serialized_info = self.serialize_cpp_info(depname, cpp_info)\n for cfg, cfg_cpp_info in cpp_info.configs.items():\n- serialized_info.setdefault(\"configs\", {})[cfg] = serialize_cpp_info(cfg_cpp_info)\n+ serialized_info.setdefault(\"configs\", {})[cfg] = self.serialize_cpp_info(depname,\n+ cfg_cpp_info)\n res.append(serialized_info)\n return res\n \n@@ -71,3 +51,31 @@\n for key, value in self.conanfile.options[req].items():\n options[req][key] = value\n return options\n+\n+ def serialize_cpp_info(self, depname, cpp_info):\n+ keys = [\n+ \"version\",\n+ \"description\",\n+ \"rootpath\",\n+ \"sysroot\",\n+ \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\", \"res_paths\",\n+ \"libs\",\n+ \"system_libs\",\n+ \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\", \"exelinkflags\",\n+ \"frameworks\", \"framework_paths\", \"names\", \"filenames\",\n+ \"build_modules\", \"build_modules_paths\"\n+ ]\n+ res = {}\n+ for key in keys:\n+ res[key] = getattr(cpp_info, key)\n+ res[\"cppflags\"] = cpp_info.cxxflags # Backwards compatibility\n+ res[\"name\"] = depname\n+\n+ # FIXME: trick for NewCppInfo objects when declared layout\n+ try:\n+ if cpp_info.version is None:\n+ res[\"version\"] = self.conanfile.dependencies.get(depname).ref.version\n+ except Exception:\n+ pass\n+\n+ return res\n", "issue": "[bug] version is not set correctly when using layout\nWhen layout is being used, recipe version is not set correctly somehow using json generator, it seems that version is not being fetched from package metadata when running conan install command!\r\n\r\n\r\n### Environment Details\r\n * Operating System+version: macos\r\n * Compiler+version: apple-clang 12.0\r\n * Conan version: Conan version 1.47.0\r\n * Python version: 3.9\r\n\r\n### Steps to reproduce \r\n* create a conan demo project using `conan new demo/1.0.0 --template=cmake_lib` \r\n* create a local conan package `conan create .`\r\n* generate deps using json generator `conan install demo/1.0.0@ -g json`\r\n* inspect conanbuildinfo.json, version is set to null, however it should be 1.0.0\r\n\r\n* remove the layout method from the conanfile.py and try again\r\n* now version is set correctly \r\n\r\nbtw, it seems to be the same issue for the description attribute, maybe other attributes as well\r\n\r\n\r\n\n", "before_files": [{"content": "import json\n\nfrom conans.model 
import Generator\n\n\ndef serialize_cpp_info(cpp_info):\n keys = [\n \"version\",\n \"description\",\n \"rootpath\",\n \"sysroot\",\n \"include_paths\", \"lib_paths\", \"bin_paths\", \"build_paths\", \"res_paths\",\n \"libs\",\n \"system_libs\",\n \"defines\", \"cflags\", \"cxxflags\", \"sharedlinkflags\", \"exelinkflags\",\n \"frameworks\", \"framework_paths\", \"names\", \"filenames\",\n \"build_modules\", \"build_modules_paths\"\n ]\n res = {}\n for key in keys:\n res[key] = getattr(cpp_info, key)\n res[\"cppflags\"] = cpp_info.cxxflags # Backwards compatibility\n return res\n\n\ndef serialize_user_info(user_info):\n res = {}\n for key, value in user_info.items():\n res[key] = value.vars\n return res\n\n\nclass JsonGenerator(Generator):\n @property\n def filename(self):\n return \"conanbuildinfo.json\"\n\n @property\n def content(self):\n info = {}\n info[\"deps_env_info\"] = self.deps_env_info.vars\n info[\"deps_user_info\"] = serialize_user_info(self.deps_user_info)\n info[\"dependencies\"] = self.get_dependencies_info()\n info[\"settings\"] = self.get_settings()\n info[\"options\"] = self.get_options()\n if self._user_info_build:\n info[\"user_info_build\"] = serialize_user_info(self._user_info_build)\n\n return json.dumps(info, indent=2)\n\n def get_dependencies_info(self):\n res = []\n for depname, cpp_info in self.deps_build_info.dependencies:\n serialized_info = serialize_cpp_info(cpp_info)\n serialized_info[\"name\"] = depname\n for cfg, cfg_cpp_info in cpp_info.configs.items():\n serialized_info.setdefault(\"configs\", {})[cfg] = serialize_cpp_info(cfg_cpp_info)\n res.append(serialized_info)\n return res\n\n def get_settings(self):\n settings = {}\n for key, value in self.settings.items():\n settings[key] = value\n return settings\n\n def get_options(self):\n options = {}\n for req in self.conanfile.requires:\n options[req] = {}\n for key, value in self.conanfile.options[req].items():\n options[req][key] = value\n return options\n", "path": "conans/client/generators/json_generator.py"}]} | 1,534 | 705 |
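The heart of the fix is the fallback at the end of the new `serialize_cpp_info` method: when a recipe declares `layout()`, the new-style cpp_info carries no version, so it is recovered from the dependency reference instead. The pattern in isolation (the objects below are stand-ins for Conan's models, not its real API):

```python
def resolve_version(depname, cpp_info, dependencies):
    version = getattr(cpp_info, "version", None)
    if version is not None:
        return version
    try:
        # e.g. "1.0.0" for a requirement on demo/1.0.0
        return dependencies.get(depname).ref.version
    except Exception:
        return None   # keep the old null rather than fail the whole generator
```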
gh_patches_debug_2718 | rasdani/github-patches | git_diff | pyload__pyload-1733
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES'
03.08.2015 20:46:43 INFO Free space: 6.48 TiB
03.08.2015 20:46:43 INFO Activating Accounts...
03.08.2015 20:46:43 INFO Activating Plugins...
03.08.2015 20:46:43 WARNING HOOK AntiStandby: Unable to change system power state | [Errno 2] No such file or directory
03.08.2015 20:46:43 WARNING HOOK AntiStandby: Unable to change display power state | [Errno 2] No such file or directory
03.08.2015 20:46:43 INFO HOOK XFileSharingPro: Handling any hoster I can!
03.08.2015 20:46:43 WARNING HOOK UpdateManager: Unable to retrieve server to get updates
03.08.2015 20:46:43 INFO HOOK XFileSharingPro: Handling any crypter I can!
03.08.2015 20:46:43 INFO pyLoad is up and running
03.08.2015 20:46:45 INFO HOOK LinkdecrypterCom: Reloading supported crypter list
03.08.2015 20:46:45 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
03.08.2015 20:46:53 INFO HOOK ClickAndLoad: Proxy listening on 127.0.0.1:9666
03.08.2015 20:46:53 INFO HOOK LinkdecrypterCom: Reloading supported crypter list
03.08.2015 20:46:53 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
03.08.2015 20:47:45 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
03.08.2015 20:47:53 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry
</issue>
<code>
[start of module/plugins/hooks/LinkdecrypterComHook.py]
1 # -*- coding: utf-8 -*-
2
3 import re
4
5 from module.plugins.internal.MultiHook import MultiHook
6
7
8 class LinkdecrypterComHook(MultiHook):
9 __name__ = "LinkdecrypterComHook"
10 __type__ = "hook"
11 __version__ = "1.07"
12 __status__ = "testing"
13
14 __config__ = [("activated" , "bool" , "Activated" , True ),
15 ("pluginmode" , "all;listed;unlisted", "Use for plugins" , "all"),
16 ("pluginlist" , "str" , "Plugin list (comma separated)", "" ),
17 ("reload" , "bool" , "Reload plugin list" , True ),
18 ("reloadinterval", "int" , "Reload interval in hours" , 12 )]
19
20 __description__ = """Linkdecrypter.com hook plugin"""
21 __license__ = "GPLv3"
22 __authors__ = [("Walter Purcaro", "[email protected]")]
23
24
25 def get_hosters(self):
26 list = re.search(r'>Supported\(\d+\)</b>: <i>(.[\w.\-, ]+)',
27 self.load("http://linkdecrypter.com/").replace("(g)", "")).group(1).split(', ')
28 try:
29 list.remove("download.serienjunkies.org")
30 except ValueError:
31 pass
32
33 return list
34
[end of module/plugins/hooks/LinkdecrypterComHook.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/module/plugins/hooks/LinkdecrypterComHook.py b/module/plugins/hooks/LinkdecrypterComHook.py
--- a/module/plugins/hooks/LinkdecrypterComHook.py
+++ b/module/plugins/hooks/LinkdecrypterComHook.py
@@ -21,6 +21,7 @@
__license__ = "GPLv3"
__authors__ = [("Walter Purcaro", "[email protected]")]
+ COOKIES = False
def get_hosters(self):
list = re.search(r'>Supported\(\d+\)</b>: <i>(.[\w.\-, ]+)',
| {"golden_diff": "diff --git a/module/plugins/hooks/LinkdecrypterComHook.py b/module/plugins/hooks/LinkdecrypterComHook.py\n--- a/module/plugins/hooks/LinkdecrypterComHook.py\n+++ b/module/plugins/hooks/LinkdecrypterComHook.py\n@@ -21,6 +21,7 @@\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n \n+ COOKIES = False\n \n def get_hosters(self):\n list = re.search(r'>Supported\\(\\d+\\)</b>: <i>(.[\\w.\\-, ]+)',\n", "issue": "HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' \n03.08.2015 20:46:43 INFO Free space: 6.48 TiB\n630 03.08.2015 20:46:43 INFO Activating Accounts...\n631 03.08.2015 20:46:43 INFO Activating Plugins...\n632 03.08.2015 20:46:43 WARNING HOOK AntiStandby: Unable to change system power state | [Errno 2] No such file or directory\n633 03.08.2015 20:46:43 WARNING HOOK AntiStandby: Unable to change display power state | [Errno 2] No such file or directory\n634 03.08.2015 20:46:43 INFO HOOK XFileSharingPro: Handling any hoster I can!\n635 03.08.2015 20:46:43 WARNING HOOK UpdateManager: Unable to retrieve server to get updates\n636 03.08.2015 20:46:43 INFO HOOK XFileSharingPro: Handling any crypter I can!\n637 03.08.2015 20:46:43 INFO pyLoad is up and running\n638 03.08.2015 20:46:45 INFO HOOK LinkdecrypterCom: Reloading supported crypter list\n639 03.08.2015 20:46:45 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry\n640 03.08.2015 20:46:53 INFO HOOK ClickAndLoad: Proxy listening on 127.0.0.1:9666\n641 03.08.2015 20:46:53 INFO HOOK LinkdecrypterCom: Reloading supported crypter list\n642 03.08.2015 20:46:53 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry\n643 03.08.2015 20:47:45 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry\n644 03.08.2015 20:47:53 WARNING HOOK LinkdecrypterCom: 'LinkdecrypterComHook' object has no attribute 'COOKIES' | Waiting 1 minute and retry\n\n", "before_files": [{"content": "# -*- coding: utf-8 -*-\n\nimport re\n\nfrom module.plugins.internal.MultiHook import MultiHook\n\n\nclass LinkdecrypterComHook(MultiHook):\n __name__ = \"LinkdecrypterComHook\"\n __type__ = \"hook\"\n __version__ = \"1.07\"\n __status__ = \"testing\"\n\n __config__ = [(\"activated\" , \"bool\" , \"Activated\" , True ),\n (\"pluginmode\" , \"all;listed;unlisted\", \"Use for plugins\" , \"all\"),\n (\"pluginlist\" , \"str\" , \"Plugin list (comma separated)\", \"\" ),\n (\"reload\" , \"bool\" , \"Reload plugin list\" , True ),\n (\"reloadinterval\", \"int\" , \"Reload interval in hours\" , 12 )]\n\n __description__ = \"\"\"Linkdecrypter.com hook plugin\"\"\"\n __license__ = \"GPLv3\"\n __authors__ = [(\"Walter Purcaro\", \"[email protected]\")]\n\n\n def get_hosters(self):\n list = re.search(r'>Supported\\(\\d+\\)</b>: <i>(.[\\w.\\-, ]+)',\n self.load(\"http://linkdecrypter.com/\").replace(\"(g)\", \"\")).group(1).split(', ')\n try:\n list.remove(\"download.serienjunkies.org\")\n except ValueError:\n pass\n\n return list\n", "path": "module/plugins/hooks/LinkdecrypterComHook.py"}]} | 1,635 | 138 |
gh_patches_debug_8093 | rasdani/github-patches | git_diff | scrapy__scrapy-1979 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
empty WARNING message in scrapy.core.downloader.tls (1.1.0rc4/master)
Sometimes I'm getting empty warnings now, on 1.1.0rc4 and master branch.
(at least on rc3 as well)
```
2016-05-07 00:33:46 [scrapy.core.downloader.tls] WARNING:
2016-05-07 00:33:47 [scrapy.core.downloader.tls] WARNING:
2016-05-07 00:33:48 [scrapy.core.downloader.tls] WARNING:
```
It happens in a broad linkcheck crawl, so I couldn't pinpoint which URLs might be responsible at this time. The only other observation so far is that it doesn't happen on a cache-replayed run (which might be obvious, as there is no TLS there).
</issue>
<code>
[start of scrapy/core/downloader/tls.py]
1 import logging
2 from OpenSSL import SSL
3
4
5 logger = logging.getLogger(__name__)
6
7 METHOD_SSLv3 = 'SSLv3'
8 METHOD_TLS = 'TLS'
9 METHOD_TLSv10 = 'TLSv1.0'
10 METHOD_TLSv11 = 'TLSv1.1'
11 METHOD_TLSv12 = 'TLSv1.2'
12
13 openssl_methods = {
14 METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)
15 METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)
16 METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only
17 METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only
18 METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only
19 }
20
21 # ClientTLSOptions requires a recent-enough version of Twisted
22 try:
23
24 # taken from twisted/twisted/internet/_sslverify.py
25 try:
26 from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START
27 except ImportError:
28 SSL_CB_HANDSHAKE_START = 0x10
29 SSL_CB_HANDSHAKE_DONE = 0x20
30
31 from twisted.internet._sslverify import (ClientTLSOptions,
32 _maybeSetHostNameIndication,
33 verifyHostname,
34 VerificationError)
35
36 class ScrapyClientTLSOptions(ClientTLSOptions):
37 # same as Twisted's ClientTLSOptions,
38 # except that VerificationError is caught
39 # and doesn't close the connection
40 def _identityVerifyingInfoCallback(self, connection, where, ret):
41 if where & SSL_CB_HANDSHAKE_START:
42 _maybeSetHostNameIndication(connection, self._hostnameBytes)
43 elif where & SSL_CB_HANDSHAKE_DONE:
44 try:
45 verifyHostname(connection, self._hostnameASCII)
46 except VerificationError as e:
47 logger.warning(e)
48
49 except ImportError:
50 # ImportError should not matter for older Twisted versions
51 # as the above is not used in the fallback ScrapyClientContextFactory
52 pass
53
[end of scrapy/core/downloader/tls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py
--- a/scrapy/core/downloader/tls.py
+++ b/scrapy/core/downloader/tls.py
@@ -44,7 +44,9 @@
try:
verifyHostname(connection, self._hostnameASCII)
except VerificationError as e:
- logger.warning(e)
+ logger.warning(
+ 'Remote certificate is not valid for hostname "{}"; {}'.format(
+ self._hostnameASCII, e))
except ImportError:
# ImportError should not matter for older Twisted versions
| {"golden_diff": "diff --git a/scrapy/core/downloader/tls.py b/scrapy/core/downloader/tls.py\n--- a/scrapy/core/downloader/tls.py\n+++ b/scrapy/core/downloader/tls.py\n@@ -44,7 +44,9 @@\n try:\n verifyHostname(connection, self._hostnameASCII)\n except VerificationError as e:\n- logger.warning(e)\n+ logger.warning(\n+ 'Remote certificate is not valid for hostname \"{}\"; {}'.format(\n+ self._hostnameASCII, e))\n \n except ImportError:\n # ImportError should not matter for older Twisted versions\n", "issue": "empty WARNING message in scrapy.core.downloader.tls (1.1.0rc4/master)\nSometimes I'm getting empty warnings now, on 1.1.0rc4 and master branch.\n(at least on rc3 as well)\n\n```\n2016-05-07 00:33:46 [scrapy.core.downloader.tls] WARNING: \n2016-05-07 00:33:47 [scrapy.core.downloader.tls] WARNING: \n2016-05-07 00:33:48 [scrapy.core.downloader.tls] WARNING: \n```\n\nIt happens in a broad linkcheck crawl; so I couldn't pinpoint what URLs might be responsible for that, at this time. The only other observation so far is, that it doesn't happen on a cache-replayed run (which might be obvious, as there is no TLS there).\n\n", "before_files": [{"content": "import logging\nfrom OpenSSL import SSL\n\n\nlogger = logging.getLogger(__name__)\n\nMETHOD_SSLv3 = 'SSLv3'\nMETHOD_TLS = 'TLS'\nMETHOD_TLSv10 = 'TLSv1.0'\nMETHOD_TLSv11 = 'TLSv1.1'\nMETHOD_TLSv12 = 'TLSv1.2'\n\nopenssl_methods = {\n METHOD_TLS: SSL.SSLv23_METHOD, # protocol negotiation (recommended)\n METHOD_SSLv3: SSL.SSLv3_METHOD, # SSL 3 (NOT recommended)\n METHOD_TLSv10: SSL.TLSv1_METHOD, # TLS 1.0 only\n METHOD_TLSv11: getattr(SSL, 'TLSv1_1_METHOD', 5), # TLS 1.1 only\n METHOD_TLSv12: getattr(SSL, 'TLSv1_2_METHOD', 6), # TLS 1.2 only\n}\n\n# ClientTLSOptions requires a recent-enough version of Twisted\ntry:\n\n # taken from twisted/twisted/internet/_sslverify.py\n try:\n from OpenSSL.SSL import SSL_CB_HANDSHAKE_DONE, SSL_CB_HANDSHAKE_START\n except ImportError:\n SSL_CB_HANDSHAKE_START = 0x10\n SSL_CB_HANDSHAKE_DONE = 0x20\n\n from twisted.internet._sslverify import (ClientTLSOptions,\n _maybeSetHostNameIndication,\n verifyHostname,\n VerificationError)\n\n class ScrapyClientTLSOptions(ClientTLSOptions):\n # same as Twisted's ClientTLSOptions,\n # except that VerificationError is caught\n # and doesn't close the connection\n def _identityVerifyingInfoCallback(self, connection, where, ret):\n if where & SSL_CB_HANDSHAKE_START:\n _maybeSetHostNameIndication(connection, self._hostnameBytes)\n elif where & SSL_CB_HANDSHAKE_DONE:\n try:\n verifyHostname(connection, self._hostnameASCII)\n except VerificationError as e:\n logger.warning(e)\n\nexcept ImportError:\n # ImportError should not matter for older Twisted versions\n # as the above is not used in the fallback ScrapyClientContextFactory\n pass\n", "path": "scrapy/core/downloader/tls.py"}]} | 1,320 | 128 |
gh_patches_debug_16913 | rasdani/github-patches | git_diff | Kinto__kinto-809 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
deadlock on __heartbeat__
If I set up the same PostgreSQL database/user for the storage, cache and permission connectors, I get a thread deadlock on the second call to `__heartbeat__`, leading to a blocking call that makes the heartbeat check fail.
Looks like a DB connector race condition issue
</issue>
<code>
[start of kinto/core/views/heartbeat.py]
1 from concurrent.futures import ThreadPoolExecutor, wait
2 from pyramid.security import NO_PERMISSION_REQUIRED
3
4 from kinto import logger
5 from kinto.core import Service
6
7
8 heartbeat = Service(name="heartbeat", path='/__heartbeat__',
9 description="Server health")
10
11
12 @heartbeat.get(permission=NO_PERMISSION_REQUIRED)
13 def get_heartbeat(request):
14 """Return information about server health."""
15 status = {}
16
17 def heartbeat_check(name, func):
18 status[name] = False
19 status[name] = func(request)
20
21 # Start executing heartbeats concurrently.
22 heartbeats = request.registry.heartbeats
23 pool = ThreadPoolExecutor(max_workers=max(1, len(heartbeats.keys())))
24 futures = []
25 for name, func in heartbeats.items():
26 future = pool.submit(heartbeat_check, name, func)
27 future.__heartbeat_name = name # For logging purposes.
28 futures.append(future)
29
30 # Wait for the results, with timeout.
31 seconds = float(request.registry.settings['heartbeat_timeout_seconds'])
32 done, not_done = wait(futures, timeout=seconds)
33
34 # A heartbeat is supposed to return True or False, and never raise.
35 # Just in case, go though results to spot any potential exception.
36 for future in done:
37 exc = future.exception()
38 if exc is not None:
39 logger.error("%r heartbeat failed." % future.__heartbeat_name)
40 logger.error(exc)
41
42 # Log timed-out heartbeats.
43 for future in not_done:
44 name = future.__heartbeat_name
45 error_msg = "%r heartbeat has exceeded timeout of %s seconds."
46 logger.error(error_msg % (name, seconds))
47
48 # If any has failed, return a 503 error response.
49 has_error = not all([v or v is None for v in status.values()])
50 if has_error:
51 request.response.status = 503
52
53 return status
54
55
56 lbheartbeat = Service(name="lbheartbeat", path='/__lbheartbeat__',
57 description="Web head health")
58
59
60 @lbheartbeat.get(permission=NO_PERMISSION_REQUIRED)
61 def get_lbheartbeat(request):
62 """Return successful healthy response.
63
64 If the load-balancer tries to access this URL and fails, this means the
65 Web head is not operational and should be dropped.
66 """
67 status = {}
68 return status
69
[end of kinto/core/views/heartbeat.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kinto/core/views/heartbeat.py b/kinto/core/views/heartbeat.py
--- a/kinto/core/views/heartbeat.py
+++ b/kinto/core/views/heartbeat.py
@@ -1,4 +1,6 @@
from concurrent.futures import ThreadPoolExecutor, wait
+
+import transaction
from pyramid.security import NO_PERMISSION_REQUIRED
from kinto import logger
@@ -17,6 +19,11 @@
def heartbeat_check(name, func):
status[name] = False
status[name] = func(request)
+ # Since the heartbeat checks run concurrently, their transactions
+ # overlap and might end in shared lock errors. By aborting here
+ # we clean-up the state on each heartbeat call instead of once at the
+ # end of the request. See bug Kinto/kinto#804
+ transaction.abort()
# Start executing heartbeats concurrently.
heartbeats = request.registry.heartbeats
| {"golden_diff": "diff --git a/kinto/core/views/heartbeat.py b/kinto/core/views/heartbeat.py\n--- a/kinto/core/views/heartbeat.py\n+++ b/kinto/core/views/heartbeat.py\n@@ -1,4 +1,6 @@\n from concurrent.futures import ThreadPoolExecutor, wait\n+\n+import transaction\n from pyramid.security import NO_PERMISSION_REQUIRED\n \n from kinto import logger\n@@ -17,6 +19,11 @@\n def heartbeat_check(name, func):\n status[name] = False\n status[name] = func(request)\n+ # Since the heartbeat checks run concurrently, their transactions\n+ # overlap and might end in shared lock errors. By aborting here\n+ # we clean-up the state on each heartbeat call instead of once at the\n+ # end of the request. See bug Kinto/kinto#804\n+ transaction.abort()\n \n # Start executing heartbeats concurrently.\n heartbeats = request.registry.heartbeats\n", "issue": "deadlock on __heartbeat__\nIf I set up the same postgresql database/user for the storage, cache and permission connectors, I get a thread deadlock on the second call to **heartbeat**, leading to a blocking call, that makes the HB fails.\n\nLooks like a DB connector race condition issue\n\n", "before_files": [{"content": "from concurrent.futures import ThreadPoolExecutor, wait\nfrom pyramid.security import NO_PERMISSION_REQUIRED\n\nfrom kinto import logger\nfrom kinto.core import Service\n\n\nheartbeat = Service(name=\"heartbeat\", path='/__heartbeat__',\n description=\"Server health\")\n\n\[email protected](permission=NO_PERMISSION_REQUIRED)\ndef get_heartbeat(request):\n \"\"\"Return information about server health.\"\"\"\n status = {}\n\n def heartbeat_check(name, func):\n status[name] = False\n status[name] = func(request)\n\n # Start executing heartbeats concurrently.\n heartbeats = request.registry.heartbeats\n pool = ThreadPoolExecutor(max_workers=max(1, len(heartbeats.keys())))\n futures = []\n for name, func in heartbeats.items():\n future = pool.submit(heartbeat_check, name, func)\n future.__heartbeat_name = name # For logging purposes.\n futures.append(future)\n\n # Wait for the results, with timeout.\n seconds = float(request.registry.settings['heartbeat_timeout_seconds'])\n done, not_done = wait(futures, timeout=seconds)\n\n # A heartbeat is supposed to return True or False, and never raise.\n # Just in case, go though results to spot any potential exception.\n for future in done:\n exc = future.exception()\n if exc is not None:\n logger.error(\"%r heartbeat failed.\" % future.__heartbeat_name)\n logger.error(exc)\n\n # Log timed-out heartbeats.\n for future in not_done:\n name = future.__heartbeat_name\n error_msg = \"%r heartbeat has exceeded timeout of %s seconds.\"\n logger.error(error_msg % (name, seconds))\n\n # If any has failed, return a 503 error response.\n has_error = not all([v or v is None for v in status.values()])\n if has_error:\n request.response.status = 503\n\n return status\n\n\nlbheartbeat = Service(name=\"lbheartbeat\", path='/__lbheartbeat__',\n description=\"Web head health\")\n\n\[email protected](permission=NO_PERMISSION_REQUIRED)\ndef get_lbheartbeat(request):\n \"\"\"Return successful healthy response.\n\n If the load-balancer tries to access this URL and fails, this means the\n Web head is not operational and should be dropped.\n \"\"\"\n status = {}\n return status\n", "path": "kinto/core/views/heartbeat.py"}]} | 1,224 | 210 |
gh_patches_debug_6517 | rasdani/github-patches | git_diff | ivy-llc__ivy-22309 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
fft2
</issue>
<code>
[start of ivy/functional/frontends/jax/numpy/fft.py]
1 # local
2 import ivy
3 from ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back
4 from ivy.func_wrapper import with_unsupported_dtypes
5
6
7 @to_ivy_arrays_and_back
8 def fft(a, n=None, axis=-1, norm=None):
9 if norm is None:
10 norm = "backward"
11 return ivy.fft(a, axis, norm=norm, n=n)
12
13
14 @to_ivy_arrays_and_back
15 @with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
16 def fftshift(x, axes=None, name=None):
17 shape = x.shape
18
19 if axes is None:
20 axes = tuple(range(x.ndim))
21 shifts = [(dim // 2) for dim in shape]
22 elif isinstance(axes, int):
23 shifts = shape[axes] // 2
24 else:
25 shifts = [shape[ax] // 2 for ax in axes]
26
27 roll = ivy.roll(x, shifts, axis=axes)
28
29 return roll
30
[end of ivy/functional/frontends/jax/numpy/fft.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py
--- a/ivy/functional/frontends/jax/numpy/fft.py
+++ b/ivy/functional/frontends/jax/numpy/fft.py
@@ -11,6 +11,13 @@
return ivy.fft(a, axis, norm=norm, n=n)
+@to_ivy_arrays_and_back
+def fft2(a, s=None, axes=(-2, -1), norm=None):
+ if norm is None:
+ norm = "backward"
+ return ivy.array(ivy.fft2(a, s=s, dim=axes, norm=norm), dtype=ivy.dtype(a))
+
+
@to_ivy_arrays_and_back
@with_unsupported_dtypes({"2.4.2 and below": ("float16", "bfloat16")}, "paddle")
def fftshift(x, axes=None, name=None):
| {"golden_diff": "diff --git a/ivy/functional/frontends/jax/numpy/fft.py b/ivy/functional/frontends/jax/numpy/fft.py\n--- a/ivy/functional/frontends/jax/numpy/fft.py\n+++ b/ivy/functional/frontends/jax/numpy/fft.py\n@@ -11,6 +11,13 @@\n return ivy.fft(a, axis, norm=norm, n=n)\n \n \n+@to_ivy_arrays_and_back\n+def fft2(a, s=None, axes=(-2, -1), norm=None):\n+ if norm is None:\n+ norm = \"backward\"\n+ return ivy.array(ivy.fft2(a, s=s, dim=axes, norm=norm), dtype=ivy.dtype(a))\n+\n+\n @to_ivy_arrays_and_back\n @with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\n def fftshift(x, axes=None, name=None):\n", "issue": "fft2\n\n", "before_files": [{"content": "# local\nimport ivy\nfrom ivy.functional.frontends.jax.func_wrapper import to_ivy_arrays_and_back\nfrom ivy.func_wrapper import with_unsupported_dtypes\n\n\n@to_ivy_arrays_and_back\ndef fft(a, n=None, axis=-1, norm=None):\n if norm is None:\n norm = \"backward\"\n return ivy.fft(a, axis, norm=norm, n=n)\n\n\n@to_ivy_arrays_and_back\n@with_unsupported_dtypes({\"2.4.2 and below\": (\"float16\", \"bfloat16\")}, \"paddle\")\ndef fftshift(x, axes=None, name=None):\n shape = x.shape\n\n if axes is None:\n axes = tuple(range(x.ndim))\n shifts = [(dim // 2) for dim in shape]\n elif isinstance(axes, int):\n shifts = shape[axes] // 2\n else:\n shifts = [shape[ax] // 2 for ax in axes]\n\n roll = ivy.roll(x, shifts, axis=axes)\n\n return roll\n", "path": "ivy/functional/frontends/jax/numpy/fft.py"}]} | 840 | 218 |
gh_patches_debug_49166 | rasdani/github-patches | git_diff | scoutapp__scout_apm_python-489 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installation seems to be broken on python3.6.4
<img width="1125" alt="Screen Shot 2020-02-26 at 12 31 00 PM" src="https://user-images.githubusercontent.com/17484350/75380353-e2224900-58a4-11ea-96b3-2629b94c7107.png">
</issue>
<code>
[start of setup.py]
1 # coding=utf-8
2 from __future__ import absolute_import, division, print_function, unicode_literals
3
4 import sys
5
6 from setuptools import Extension, find_packages, setup
7
8 with open("README.md", "r") as fp:
9 long_description = fp.read()
10
11 packages = find_packages("src")
12 if sys.version_info < (3, 6):
13 packages = [p for p in packages if not p.startswith("scout_apm.async_")]
14
15 compile_extensions = (
16 # Python 3+
17 sys.version_info >= (3,)
18 # Not Jython
19 and not sys.platform.startswith("java")
20 # Not PyPy
21 and "__pypy__" not in sys.builtin_module_names
22 )
23 if compile_extensions:
24 ext_modules = [
25 Extension(
26 str("scout_apm.core._objtrace"), [str("src/scout_apm/core/_objtrace.c")]
27 )
28 ]
29 else:
30 ext_modules = []
31
32 setup(
33 name="scout_apm",
34 version="2.11.0",
35 description="Scout Application Performance Monitoring Agent",
36 long_description=long_description,
37 long_description_content_type="text/markdown",
38 url="https://github.com/scoutapp/scout_apm_python",
39 project_urls={
40 "Documentation": "https://docs.scoutapm.com/#python-agent",
41 "Changelog": (
42 "https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md"
43 ),
44 },
45 author="Scout",
46 author_email="[email protected]",
47 license="MIT",
48 zip_safe=False,
49 python_requires=">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4",
50 packages=packages,
51 package_dir={str(""): str("src")},
52 ext_modules=ext_modules,
53 entry_points={
54 "console_scripts": [
55 "core-agent-manager = scout_apm.core.cli.core_agent_manager:main"
56 ]
57 },
58 install_requires=[
59 'asgiref ; python_version >= "3.5"',
60 'importlib-metadata ; python_version < "3.8"',
61 "psutil>=5,<6",
62 'urllib3[secure] < 1.25 ; python_version < "3.5"',
63 'urllib3[secure] < 2 ; python_version >= "3.5"',
64 "wrapt>=1.10,<2.0",
65 ],
66 keywords="apm performance monitoring development",
67 classifiers=[
68 "Development Status :: 5 - Production/Stable",
69 "Framework :: Bottle",
70 "Framework :: Django",
71 "Framework :: Django :: 1.8",
72 "Framework :: Django :: 1.9",
73 "Framework :: Django :: 1.10",
74 "Framework :: Django :: 1.11",
75 "Framework :: Django :: 2.0",
76 "Framework :: Django :: 2.1",
77 "Framework :: Django :: 2.2",
78 "Framework :: Django :: 3.0",
79 "Framework :: Flask",
80 "Framework :: Pyramid",
81 "Intended Audience :: Developers",
82 "Topic :: System :: Monitoring",
83 "License :: OSI Approved :: MIT License",
84 "Operating System :: MacOS",
85 "Operating System :: POSIX",
86 "Operating System :: POSIX :: Linux",
87 "Programming Language :: Python :: 2",
88 "Programming Language :: Python :: 2.7",
89 "Programming Language :: Python :: 3",
90 "Programming Language :: Python :: 3.4",
91 "Programming Language :: Python :: 3.5",
92 "Programming Language :: Python :: 3.6",
93 "Programming Language :: Python :: 3.7",
94 "Programming Language :: Python :: 3.8",
95 ],
96 )
97
[end of setup.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/setup.py b/setup.py
--- a/setup.py
+++ b/setup.py
@@ -23,7 +23,9 @@
if compile_extensions:
ext_modules = [
Extension(
- str("scout_apm.core._objtrace"), [str("src/scout_apm/core/_objtrace.c")]
+ name=str("scout_apm.core._objtrace"),
+ sources=[str("src/scout_apm/core/_objtrace.c")],
+ optional=True,
)
]
else:
| {"golden_diff": "diff --git a/setup.py b/setup.py\n--- a/setup.py\n+++ b/setup.py\n@@ -23,7 +23,9 @@\n if compile_extensions:\n ext_modules = [\n Extension(\n- str(\"scout_apm.core._objtrace\"), [str(\"src/scout_apm/core/_objtrace.c\")]\n+ name=str(\"scout_apm.core._objtrace\"),\n+ sources=[str(\"src/scout_apm/core/_objtrace.c\")],\n+ optional=True,\n )\n ]\n else:\n", "issue": "Installation seems to be broken on python3.6.4\n<img width=\"1125\" alt=\"Screen Shot 2020-02-26 at 12 31 00 PM\" src=\"https://user-images.githubusercontent.com/17484350/75380353-e2224900-58a4-11ea-96b3-2629b94c7107.png\">\r\n\n", "before_files": [{"content": "# coding=utf-8\nfrom __future__ import absolute_import, division, print_function, unicode_literals\n\nimport sys\n\nfrom setuptools import Extension, find_packages, setup\n\nwith open(\"README.md\", \"r\") as fp:\n long_description = fp.read()\n\npackages = find_packages(\"src\")\nif sys.version_info < (3, 6):\n packages = [p for p in packages if not p.startswith(\"scout_apm.async_\")]\n\ncompile_extensions = (\n # Python 3+\n sys.version_info >= (3,)\n # Not Jython\n and not sys.platform.startswith(\"java\")\n # Not PyPy\n and \"__pypy__\" not in sys.builtin_module_names\n)\nif compile_extensions:\n ext_modules = [\n Extension(\n str(\"scout_apm.core._objtrace\"), [str(\"src/scout_apm/core/_objtrace.c\")]\n )\n ]\nelse:\n ext_modules = []\n\nsetup(\n name=\"scout_apm\",\n version=\"2.11.0\",\n description=\"Scout Application Performance Monitoring Agent\",\n long_description=long_description,\n long_description_content_type=\"text/markdown\",\n url=\"https://github.com/scoutapp/scout_apm_python\",\n project_urls={\n \"Documentation\": \"https://docs.scoutapm.com/#python-agent\",\n \"Changelog\": (\n \"https://github.com/scoutapp/scout_apm_python/blob/master/CHANGELOG.md\"\n ),\n },\n author=\"Scout\",\n author_email=\"[email protected]\",\n license=\"MIT\",\n zip_safe=False,\n python_requires=\">=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*, <4\",\n packages=packages,\n package_dir={str(\"\"): str(\"src\")},\n ext_modules=ext_modules,\n entry_points={\n \"console_scripts\": [\n \"core-agent-manager = scout_apm.core.cli.core_agent_manager:main\"\n ]\n },\n install_requires=[\n 'asgiref ; python_version >= \"3.5\"',\n 'importlib-metadata ; python_version < \"3.8\"',\n \"psutil>=5,<6\",\n 'urllib3[secure] < 1.25 ; python_version < \"3.5\"',\n 'urllib3[secure] < 2 ; python_version >= \"3.5\"',\n \"wrapt>=1.10,<2.0\",\n ],\n keywords=\"apm performance monitoring development\",\n classifiers=[\n \"Development Status :: 5 - Production/Stable\",\n \"Framework :: Bottle\",\n \"Framework :: Django\",\n \"Framework :: Django :: 1.8\",\n \"Framework :: Django :: 1.9\",\n \"Framework :: Django :: 1.10\",\n \"Framework :: Django :: 1.11\",\n \"Framework :: Django :: 2.0\",\n \"Framework :: Django :: 2.1\",\n \"Framework :: Django :: 2.2\",\n \"Framework :: Django :: 3.0\",\n \"Framework :: Flask\",\n \"Framework :: Pyramid\",\n \"Intended Audience :: Developers\",\n \"Topic :: System :: Monitoring\",\n \"License :: OSI Approved :: MIT License\",\n \"Operating System :: MacOS\",\n \"Operating System :: POSIX\",\n \"Operating System :: POSIX :: Linux\",\n \"Programming Language :: Python :: 2\",\n \"Programming Language :: Python :: 2.7\",\n \"Programming Language :: Python :: 3\",\n \"Programming Language :: Python :: 3.4\",\n \"Programming Language :: Python :: 3.5\",\n \"Programming Language :: Python :: 3.6\",\n \"Programming Language :: Python :: 3.7\",\n \"Programming 
Language :: Python :: 3.8\",\n ],\n)\n", "path": "setup.py"}]} | 1,643 | 115 |
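
**Editor's note (scoutapp__scout_apm_python-489):** `optional=True` is a distutils/setuptools `Extension` flag: when compiling the C speedup fails (typically for want of a compiler or Python headers, as in the screenshot), setup logs a warning and completes instead of aborting, leaving the package on its pure-Python fallback. The changed call in isolation:

```python
from setuptools import Extension

ext = Extension(
    name="scout_apm.core._objtrace",
    sources=["src/scout_apm/core/_objtrace.c"],
    optional=True,  # a compile failure becomes a warning, not a fatal error
)
# passed to setup(..., ext_modules=[ext]) exactly as in the project's setup.py
```
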
gh_patches_debug_19929 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-1173 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
clear sandbox command does not remove tables in `_sandbox`
</issue>
<code>
[start of dataedit/management/commands/clear_sandbox.py]
1 from typing import List
2
3 import sqlalchemy as sqla
4 from django.core.management.base import BaseCommand
5
6 from api.connection import _get_engine
7 from dataedit.models import Table
8 from oeplatform.securitysettings import PLAYGROUNDS
9
10 SANDBOX_SCHEMA = "sandbox"
11 assert SANDBOX_SCHEMA in PLAYGROUNDS, f"{SANDBOX_SCHEMA} not in playground schemas"
12
13
14 def get_sandbox_tables_django() -> List[Table]:
15 """
16 Returns:
17 List[Table]: list of table objects in django db in sandbox schema
18 """
19 return Table.objects.filter(schema__name=SANDBOX_SCHEMA).all()
20
21
22 def get_sandbox_table_names_oedb() -> List[str]:
23 """
24 Returns:
25 List[str]: list of table names in oedb in sandbox schema
26 """
27 engine = _get_engine()
28 return sqla.inspect(engine).get_table_names(schema=SANDBOX_SCHEMA)
29
30
31 def clear_sandbox(output: bool = False) -> None:
32 """delete all tables from the sandbox schema.
33
34 Maybe we should use the API (not just django objects)
35 so all the other actions like deleting the meta tables
36 are also performed properly
37
38 For now, we delete tables in oedb and django individually
39
40 !!! DANGER ZONE !!! MAKE SURE YOU KNOW WHAT YOU ARE DOING!
41
42
43 Args:
44 output: if True, print actions
45
46 """
47
48 # delete all from oedb
49 engine = _get_engine()
50 for table_name in get_sandbox_table_names_oedb():
51 sql = f'DROP TABLE "{SANDBOX_SCHEMA}"."{table_name}" CASCADE;'
52 if output:
53 print(f"oedb: {sql}")
54 engine.execute(sql)
55
56 # delete all from django
57 for table in get_sandbox_tables_django():
58 if output:
59 print(f"django: delete {table.schema.name}.{table.name}")
60 table.delete()
61
62
63 class Command(BaseCommand):
64 def handle(self, *args, **options):
65 # ask for confirmation
66 answ = input(f"Delete all tables from {SANDBOX_SCHEMA} [y|n]: ")
67 if not answ == "y":
68 print("Abort")
69 return
70
71 clear_sandbox(output=True)
72
73 print("Done")
74
[end of dataedit/management/commands/clear_sandbox.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/dataedit/management/commands/clear_sandbox.py b/dataedit/management/commands/clear_sandbox.py
--- a/dataedit/management/commands/clear_sandbox.py
+++ b/dataedit/management/commands/clear_sandbox.py
@@ -28,6 +28,15 @@
return sqla.inspect(engine).get_table_names(schema=SANDBOX_SCHEMA)
+def get_sandbox_meta_table_names_oedb() -> List[str]:
+ """
+ Returns:
+ List[str]: list of table names in oedb in sandbox meta schema
+ """
+ engine = _get_engine()
+ return sqla.inspect(engine).get_table_names(schema="_" + SANDBOX_SCHEMA)
+
+
def clear_sandbox(output: bool = False) -> None:
"""delete all tables from the sandbox schema.
@@ -53,6 +62,12 @@
print(f"oedb: {sql}")
engine.execute(sql)
+ for table_name in get_sandbox_meta_table_names_oedb():
+ sql = f'DROP TABLE "_{SANDBOX_SCHEMA}"."{table_name}" CASCADE;'
+ if output:
+ print(f"oedb: {sql}")
+ engine.execute(sql)
+
# delete all from django
for table in get_sandbox_tables_django():
if output:
| {"golden_diff": "diff --git a/dataedit/management/commands/clear_sandbox.py b/dataedit/management/commands/clear_sandbox.py\n--- a/dataedit/management/commands/clear_sandbox.py\n+++ b/dataedit/management/commands/clear_sandbox.py\n@@ -28,6 +28,15 @@\n return sqla.inspect(engine).get_table_names(schema=SANDBOX_SCHEMA)\n \n \n+def get_sandbox_meta_table_names_oedb() -> List[str]:\n+ \"\"\"\n+ Returns:\n+ List[str]: list of table names in oedb in sandbox meta schema\n+ \"\"\"\n+ engine = _get_engine()\n+ return sqla.inspect(engine).get_table_names(schema=\"_\" + SANDBOX_SCHEMA)\n+\n+\n def clear_sandbox(output: bool = False) -> None:\n \"\"\"delete all tables from the sandbox schema.\n \n@@ -53,6 +62,12 @@\n print(f\"oedb: {sql}\")\n engine.execute(sql)\n \n+ for table_name in get_sandbox_meta_table_names_oedb():\n+ sql = f'DROP TABLE \"_{SANDBOX_SCHEMA}\".\"{table_name}\" CASCADE;'\n+ if output:\n+ print(f\"oedb: {sql}\")\n+ engine.execute(sql)\n+\n # delete all from django\n for table in get_sandbox_tables_django():\n if output:\n", "issue": "clear sanbox command doesnot remove tables in `_sandbox`\n\n", "before_files": [{"content": "from typing import List\n\nimport sqlalchemy as sqla\nfrom django.core.management.base import BaseCommand\n\nfrom api.connection import _get_engine\nfrom dataedit.models import Table\nfrom oeplatform.securitysettings import PLAYGROUNDS\n\nSANDBOX_SCHEMA = \"sandbox\"\nassert SANDBOX_SCHEMA in PLAYGROUNDS, f\"{SANDBOX_SCHEMA} not in playground schemas\"\n\n\ndef get_sandbox_tables_django() -> List[Table]:\n \"\"\"\n Returns:\n List[Table]: list of table objects in django db in sandbox schema\n \"\"\"\n return Table.objects.filter(schema__name=SANDBOX_SCHEMA).all()\n\n\ndef get_sandbox_table_names_oedb() -> List[str]:\n \"\"\"\n Returns:\n List[str]: list of table names in oedb in sandbox schema\n \"\"\"\n engine = _get_engine()\n return sqla.inspect(engine).get_table_names(schema=SANDBOX_SCHEMA)\n\n\ndef clear_sandbox(output: bool = False) -> None:\n \"\"\"delete all tables from the sandbox schema.\n\n Maybe we should use the API (not just django objects)\n so all the other actions like deleting the meta tables\n are also performed properly\n\n For now, we delete tables in oedb and django individually\n\n !!! DANGER ZONE !!! MAKE SURE YOU KNOW WHAT YOU ARE DOING!\n\n\n Args:\n output: if True, print actions\n\n \"\"\"\n\n # delete all from oedb\n engine = _get_engine()\n for table_name in get_sandbox_table_names_oedb():\n sql = f'DROP TABLE \"{SANDBOX_SCHEMA}\".\"{table_name}\" CASCADE;'\n if output:\n print(f\"oedb: {sql}\")\n engine.execute(sql)\n\n # delete all from django\n for table in get_sandbox_tables_django():\n if output:\n print(f\"django: delete {table.schema.name}.{table.name}\")\n table.delete()\n\n\nclass Command(BaseCommand):\n def handle(self, *args, **options):\n # ask for confirmation\n answ = input(f\"Delete all tables from {SANDBOX_SCHEMA} [y|n]: \")\n if not answ == \"y\":\n print(\"Abort\")\n return\n\n clear_sandbox(output=True)\n\n print(\"Done\")\n", "path": "dataedit/management/commands/clear_sandbox.py"}]} | 1,193 | 295 |
gh_patches_debug_26574 | rasdani/github-patches | git_diff | streamlink__streamlink-3205 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
cdnbg can't open new BNT links
## Bug Report
- [x] This is a bug report and I have read the contribution guidelines.
### Description
There have been changes to the bnt.bg live channel links, which have made them unrecognizable by the cdnbg plugin.
**Note:** Streamlink can still open these links, which are now hidden away in a small part of the website and are not protected by an SSL certificate:
```
http://tv.bnt.bg/bnt1
http://tv.bnt.bg/bnt2
http://tv.bnt.bg/bnt3
http://tv.bnt.bg/bnt4
```
**Other plugin issues:**
1. https://mmtvmusic.com/live/ has moved away to another service provider and hence can be deleted from cdnbg. Can't be opened with anything else atm.
2. https://chernomore.bg/ can be removed - the owner of the media group closed down the newspaper and television and converted the website into an information agency.
### Expected / Actual behavior
When I input them through CLI, they should open.
### Reproduction steps / Explicit stream URLs to test
```
streamlink https://bnt.bg/live best
streamlink https://bnt.bg/live/bnt1 best
streamlink https://bnt.bg/live/bnt2 best
streamlink https://bnt.bg/live/bnt3 best
streamlink https://bnt.bg/live/bnt4 best
```
### Log output
```
C:\Users\XXXX> streamlink https://bnt.bg/live/bnt1 best --loglevel debug
[cli][debug] OS: Windows 7
[cli][debug] Python: 3.6.6
[cli][debug] Streamlink: 1.6.0
[cli][debug] Requests(2.24.0), Socks(1.7.1), Websocket(0.57.0)
error: No plugin can handle URL: https://bnt.bg/live/bnt1
```
</issue>
<code>
[start of src/streamlink/plugins/cdnbg.py]
1 import logging
2 import re
3
4 from streamlink.compat import urlparse
5 from streamlink.plugin import Plugin
6 from streamlink.plugin.api import useragents
7 from streamlink.plugin.api import validate
8 from streamlink.stream import HLSStream
9 from streamlink.utils import update_scheme
10
11 log = logging.getLogger(__name__)
12
13
14 class CDNBG(Plugin):
15 url_re = re.compile(r"""
16 https?://(?:www\.)?(?:
17 tv\.bnt\.bg/\w+(?:/\w+)?|
18 nova\.bg/live|
19 bgonair\.bg/tvonline|
20 mmtvmusic\.com/live|
21 mu-vi\.tv/LiveStreams/pages/Live\.aspx|
22 live\.bstv\.bg|
23 bloombergtv.bg/video|
24 armymedia.bg|
25 chernomore.bg|
26 i.cdn.bg/live/
27 )/?
28 """, re.VERBOSE)
29 iframe_re = re.compile(r"iframe .*?src=\"((?:https?(?::|:))?//(?:\w+\.)?cdn.bg/live[^\"]+)\"", re.DOTALL)
30 sdata_re = re.compile(r"sdata\.src.*?=.*?(?P<q>[\"'])(?P<url>http.*?)(?P=q)")
31 hls_file_re = re.compile(r"(src|file): (?P<q>[\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)")
32 hls_src_re = re.compile(r"video src=(?P<url>http[^ ]+m3u8[^ ]*)")
33
34 stream_schema = validate.Schema(
35 validate.any(
36 validate.all(validate.transform(sdata_re.search), validate.get("url")),
37 validate.all(validate.transform(hls_file_re.search), validate.get("url")),
38 validate.all(validate.transform(hls_src_re.search), validate.get("url")),
39 )
40 )
41
42 @classmethod
43 def can_handle_url(cls, url):
44 return cls.url_re.match(url) is not None
45
46 def find_iframe(self, url):
47 self.session.http.headers.update({"User-Agent": useragents.CHROME})
48 res = self.session.http.get(self.url)
49 p = urlparse(url)
50 for iframe_url in self.iframe_re.findall(res.text):
51 if "googletagmanager" not in iframe_url:
52 log.debug("Found iframe: {0}", iframe_url)
53 iframe_url = iframe_url.replace(":", ":")
54 if iframe_url.startswith("//"):
55 return "{0}:{1}".format(p.scheme, iframe_url)
56 else:
57 return iframe_url
58
59 def _get_streams(self):
60 if "i.cdn.bg/live/" in self.url:
61 iframe_url = self.url
62 else:
63 iframe_url = self.find_iframe(self.url)
64
65 if iframe_url:
66 res = self.session.http.get(iframe_url, headers={"Referer": self.url})
67 stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))
68 log.warning("SSL Verification disabled.")
69 return HLSStream.parse_variant_playlist(self.session,
70 stream_url,
71 verify=False)
72
73
74 __plugin__ = CDNBG
75
[end of src/streamlink/plugins/cdnbg.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py
--- a/src/streamlink/plugins/cdnbg.py
+++ b/src/streamlink/plugins/cdnbg.py
@@ -14,16 +14,14 @@
class CDNBG(Plugin):
url_re = re.compile(r"""
https?://(?:www\.)?(?:
- tv\.bnt\.bg/\w+(?:/\w+)?|
- nova\.bg/live|
+ armymedia\.bg|
bgonair\.bg/tvonline|
- mmtvmusic\.com/live|
- mu-vi\.tv/LiveStreams/pages/Live\.aspx|
+ bloombergtv\.bg/video|
+ (?:tv\.)?bnt\.bg/\w+(?:/\w+)?|
live\.bstv\.bg|
- bloombergtv.bg/video|
- armymedia.bg|
- chernomore.bg|
- i.cdn.bg/live/
+ i\.cdn\.bg/live/|
+ nova\.bg/live|
+ mu-vi\.tv/LiveStreams/pages/Live\.aspx
)/?
""", re.VERBOSE)
iframe_re = re.compile(r"iframe .*?src=\"((?:https?(?::|:))?//(?:\w+\.)?cdn.bg/live[^\"]+)\"", re.DOTALL)
@@ -52,7 +50,7 @@
log.debug("Found iframe: {0}", iframe_url)
iframe_url = iframe_url.replace(":", ":")
if iframe_url.startswith("//"):
- return "{0}:{1}".format(p.scheme, iframe_url)
+ return update_scheme(p.scheme, iframe_url)
else:
return iframe_url
| {"golden_diff": "diff --git a/src/streamlink/plugins/cdnbg.py b/src/streamlink/plugins/cdnbg.py\n--- a/src/streamlink/plugins/cdnbg.py\n+++ b/src/streamlink/plugins/cdnbg.py\n@@ -14,16 +14,14 @@\n class CDNBG(Plugin):\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n- tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n- nova\\.bg/live|\n+ armymedia\\.bg|\n bgonair\\.bg/tvonline|\n- mmtvmusic\\.com/live|\n- mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n+ bloombergtv\\.bg/video|\n+ (?:tv\\.)?bnt\\.bg/\\w+(?:/\\w+)?|\n live\\.bstv\\.bg|\n- bloombergtv.bg/video|\n- armymedia.bg|\n- chernomore.bg|\n- i.cdn.bg/live/\n+ i\\.cdn\\.bg/live/|\n+ nova\\.bg/live|\n+ mu-vi\\.tv/LiveStreams/pages/Live\\.aspx\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n@@ -52,7 +50,7 @@\n log.debug(\"Found iframe: {0}\", iframe_url)\n iframe_url = iframe_url.replace(\":\", \":\")\n if iframe_url.startswith(\"//\"):\n- return \"{0}:{1}\".format(p.scheme, iframe_url)\n+ return update_scheme(p.scheme, iframe_url)\n else:\n return iframe_url\n", "issue": "cdnbg can't open new BNT links\n## Bug Report\r\n- [x] This is a bug report and I have read the contribution guidelines.\r\n\r\n\r\n### Description\r\nThere have been changes to the bnt.bg live channel links, which have made them unrecognizable by the cdnbg plugin.\r\n**Note:** Streamlink can still open these links, which are now hidden away in a small part of the website and are not protected by an SSL certificate:\r\n```\r\nhttp://tv.bnt.bg/bnt1\r\nhttp://tv.bnt.bg/bnt2\r\nhttp://tv.bnt.bg/bnt3\r\nhttp://tv.bnt.bg/bnt4\r\n```\r\n\r\n**Other plugin issues:**\r\n1. https://mmtvmusic.com/live/ has moved away to another service provider and hence can be deleted from cdnbg. Can't be opened with anything else atm.\r\n2. 
https://chernomore.bg/ can be removed - the owner of the media group closed down the newspaper and television and converted the website into an information agency.\r\n### Expected / Actual behavior\r\nWhen I input them through CLI, they should open.\r\n\r\n\r\n### Reproduction steps / Explicit stream URLs to test\r\n```\r\nstreamlink https://bnt.bg/live best\r\nstreamlink https://bnt.bg/live/bnt1 best\r\nstreamlink https://bnt.bg/live/bnt2 best\r\nstreamlink https://bnt.bg/live/bnt3 best\r\nstreamlink https://bnt.bg/live/bnt4 best\r\n```\r\n\r\n\r\n### Log output\r\n```\r\nC:\\Users\\XXXX> streamlink https://bnt.bg/live/bnt1 best --loglevel debug\r\n[cli][debug] OS: Windows 7\r\n[cli][debug] Python: 3.6.6\r\n[cli][debug] Streamlink: 1.6.0\r\n[cli][debug] Requests(2.24.0), Socks(1.7.1), Websocket(0.57.0)\r\nerror: No plugin can handle URL: https://bnt.bg/live/bnt1\r\n```\n", "before_files": [{"content": "import logging\nimport re\n\nfrom streamlink.compat import urlparse\nfrom streamlink.plugin import Plugin\nfrom streamlink.plugin.api import useragents\nfrom streamlink.plugin.api import validate\nfrom streamlink.stream import HLSStream\nfrom streamlink.utils import update_scheme\n\nlog = logging.getLogger(__name__)\n\n\nclass CDNBG(Plugin):\n url_re = re.compile(r\"\"\"\n https?://(?:www\\.)?(?:\n tv\\.bnt\\.bg/\\w+(?:/\\w+)?|\n nova\\.bg/live|\n bgonair\\.bg/tvonline|\n mmtvmusic\\.com/live|\n mu-vi\\.tv/LiveStreams/pages/Live\\.aspx|\n live\\.bstv\\.bg|\n bloombergtv.bg/video|\n armymedia.bg|\n chernomore.bg|\n i.cdn.bg/live/\n )/?\n \"\"\", re.VERBOSE)\n iframe_re = re.compile(r\"iframe .*?src=\\\"((?:https?(?::|:))?//(?:\\w+\\.)?cdn.bg/live[^\\\"]+)\\\"\", re.DOTALL)\n sdata_re = re.compile(r\"sdata\\.src.*?=.*?(?P<q>[\\\"'])(?P<url>http.*?)(?P=q)\")\n hls_file_re = re.compile(r\"(src|file): (?P<q>[\\\"'])(?P<url>(https?:)?//.+?m3u8.*?)(?P=q)\")\n hls_src_re = re.compile(r\"video src=(?P<url>http[^ ]+m3u8[^ ]*)\")\n\n stream_schema = validate.Schema(\n validate.any(\n validate.all(validate.transform(sdata_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_file_re.search), validate.get(\"url\")),\n validate.all(validate.transform(hls_src_re.search), validate.get(\"url\")),\n )\n )\n\n @classmethod\n def can_handle_url(cls, url):\n return cls.url_re.match(url) is not None\n\n def find_iframe(self, url):\n self.session.http.headers.update({\"User-Agent\": useragents.CHROME})\n res = self.session.http.get(self.url)\n p = urlparse(url)\n for iframe_url in self.iframe_re.findall(res.text):\n if \"googletagmanager\" not in iframe_url:\n log.debug(\"Found iframe: {0}\", iframe_url)\n iframe_url = iframe_url.replace(\":\", \":\")\n if iframe_url.startswith(\"//\"):\n return \"{0}:{1}\".format(p.scheme, iframe_url)\n else:\n return iframe_url\n\n def _get_streams(self):\n if \"i.cdn.bg/live/\" in self.url:\n iframe_url = self.url\n else:\n iframe_url = self.find_iframe(self.url)\n\n if iframe_url:\n res = self.session.http.get(iframe_url, headers={\"Referer\": self.url})\n stream_url = update_scheme(self.url, self.stream_schema.validate(res.text))\n log.warning(\"SSL Verification disabled.\")\n return HLSStream.parse_variant_playlist(self.session,\n stream_url,\n verify=False)\n\n\n__plugin__ = CDNBG\n", "path": "src/streamlink/plugins/cdnbg.py"}]} | 1,797 | 394 |
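
**Editor's note (streamlink__streamlink-3205):** the heart of the fix is widening `tv\.bnt\.bg/...` to `(?:tv\.)?bnt\.bg/...` so the new `bnt.bg/live/...` URLs match, while dropping the dead `mmtvmusic.com` and `chernomore.bg` entries. A quick self-check of the rewritten pattern (copied from the patch) against the URLs in the report:

```python
import re

url_re = re.compile(r"""
    https?://(?:www\.)?(?:
        armymedia\.bg|
        bgonair\.bg/tvonline|
        bloombergtv\.bg/video|
        (?:tv\.)?bnt\.bg/\w+(?:/\w+)?|
        live\.bstv\.bg|
        i\.cdn\.bg/live/|
        nova\.bg/live|
        mu-vi\.tv/LiveStreams/pages/Live\.aspx
    )/?
""", re.VERBOSE)

for url in ("https://bnt.bg/live/bnt1",      # new-style link: now matches
            "http://tv.bnt.bg/bnt1",         # old-style link: still matches
            "https://mmtvmusic.com/live/"):  # removed provider: no match
    print(url, "->", bool(url_re.match(url)))
```
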
gh_patches_debug_12532 | rasdani/github-patches | git_diff | explosion__spaCy-866 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
💫 Lemmatizer should apply rules on OOV words
@juanmirocks points out in #327 that the lemmatizer fails on OOV words:
```python
>>> nlp.vocab.morphology.lemmatizer(u'endosomes', 'noun', morphology={'number': 'plur'})set([u'endosomes'])
>>> nlp.vocab.morphology.lemmatizer(u'chromosomes', 'noun', morphology={'number': 'plur'})
set([u'chromosome'])
```
Suggested patch to lemmatizer.py
```python
oov_forms = []
for old, new in rules:
if string.endswith(old):
form = string[:len(string) - len(old)] + new
if form in index or not form.isalpha():
forms.append(form)
else:
oov_forms.append(form)
if not forms:
forms.extend(oov_forms)
```
## Your Environment
<!-- Include details of your environment -->
* Operating System:
* Python Version Used:
* spaCy Version Used:
* Environment Information:
</issue>
<code>
[start of spacy/lemmatizer.py]
1 from __future__ import unicode_literals, print_function
2 import codecs
3 import pathlib
4
5 import ujson as json
6
7 from .symbols import POS, NOUN, VERB, ADJ, PUNCT
8
9
10 class Lemmatizer(object):
11 @classmethod
12 def load(cls, path, rules=None):
13 index = {}
14 exc = {}
15 for pos in ['adj', 'noun', 'verb']:
16 pos_index_path = path / 'wordnet' / 'index.{pos}'.format(pos=pos)
17 if pos_index_path.exists():
18 with pos_index_path.open() as file_:
19 index[pos] = read_index(file_)
20 else:
21 index[pos] = set()
22 pos_exc_path = path / 'wordnet' / '{pos}.exc'.format(pos=pos)
23 if pos_exc_path.exists():
24 with pos_exc_path.open() as file_:
25 exc[pos] = read_exc(file_)
26 else:
27 exc[pos] = {}
28 if rules is None and (path / 'vocab' / 'lemma_rules.json').exists():
29 with (path / 'vocab' / 'lemma_rules.json').open('r', encoding='utf8') as file_:
30 rules = json.load(file_)
31 elif rules is None:
32 rules = {}
33 return cls(index, exc, rules)
34
35 def __init__(self, index, exceptions, rules):
36 self.index = index
37 self.exc = exceptions
38 self.rules = rules
39
40 def __call__(self, string, univ_pos, morphology=None):
41 if univ_pos == NOUN:
42 univ_pos = 'noun'
43 elif univ_pos == VERB:
44 univ_pos = 'verb'
45 elif univ_pos == ADJ:
46 univ_pos = 'adj'
47 elif univ_pos == PUNCT:
48 univ_pos = 'punct'
49 # See Issue #435 for example of where this logic is requied.
50 if self.is_base_form(univ_pos, morphology):
51 return set([string.lower()])
52 lemmas = lemmatize(string, self.index.get(univ_pos, {}),
53 self.exc.get(univ_pos, {}),
54 self.rules.get(univ_pos, []))
55 return lemmas
56
57 def is_base_form(self, univ_pos, morphology=None):
58 '''Check whether we're dealing with an uninflected paradigm, so we can
59 avoid lemmatization entirely.'''
60 morphology = {} if morphology is None else morphology
61 others = [key for key in morphology if key not in (POS, 'number', 'pos', 'verbform')]
62 if univ_pos == 'noun' and morphology.get('number') == 'sing' and not others:
63 return True
64 elif univ_pos == 'verb' and morphology.get('verbform') == 'inf' and not others:
65 return True
66 else:
67 return False
68
69 def noun(self, string, morphology=None):
70 return self(string, 'noun', morphology)
71
72 def verb(self, string, morphology=None):
73 return self(string, 'verb', morphology)
74
75 def adj(self, string, morphology=None):
76 return self(string, 'adj', morphology)
77
78 def punct(self, string, morphology=None):
79 return self(string, 'punct', morphology)
80
81
82 def lemmatize(string, index, exceptions, rules):
83 string = string.lower()
84 forms = []
85 # TODO: Is this correct? See discussion in Issue #435.
86 #if string in index:
87 # forms.append(string)
88 forms.extend(exceptions.get(string, []))
89 for old, new in rules:
90 if string.endswith(old):
91 form = string[:len(string) - len(old)] + new
92 if form in index or not form.isalpha():
93 forms.append(form)
94 if not forms:
95 forms.append(string)
96 return set(forms)
97
98
99 def read_index(fileobj):
100 index = set()
101 for line in fileobj:
102 if line.startswith(' '):
103 continue
104 pieces = line.split()
105 word = pieces[0]
106 if word.count('_') == 0:
107 index.add(word)
108 return index
109
110
111 def read_exc(fileobj):
112 exceptions = {}
113 for line in fileobj:
114 if line.startswith(' '):
115 continue
116 pieces = line.split()
117 exceptions[pieces[0]] = tuple(pieces[1:])
118 return exceptions
119
[end of spacy/lemmatizer.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/spacy/lemmatizer.py b/spacy/lemmatizer.py
--- a/spacy/lemmatizer.py
+++ b/spacy/lemmatizer.py
@@ -86,13 +86,16 @@
#if string in index:
# forms.append(string)
forms.extend(exceptions.get(string, []))
+ oov_forms = []
for old, new in rules:
if string.endswith(old):
form = string[:len(string) - len(old)] + new
if form in index or not form.isalpha():
forms.append(form)
+ else:
+ oov_forms.append(form)
if not forms:
- forms.append(string)
+ forms.extend(oov_forms)
return set(forms)
| {"golden_diff": "diff --git a/spacy/lemmatizer.py b/spacy/lemmatizer.py\n--- a/spacy/lemmatizer.py\n+++ b/spacy/lemmatizer.py\n@@ -86,13 +86,16 @@\n #if string in index:\n # forms.append(string)\n forms.extend(exceptions.get(string, []))\n+ oov_forms = []\n for old, new in rules:\n if string.endswith(old):\n form = string[:len(string) - len(old)] + new\n if form in index or not form.isalpha():\n forms.append(form)\n+ else:\n+ oov_forms.append(form)\n if not forms:\n- forms.append(string)\n+ forms.extend(oov_forms)\n return set(forms)\n", "issue": "\ud83d\udcab Lemmatizer should apply rules on OOV words\n@juanmirocks points out in #327 that the lemmatizer fails on OOV words:\r\n\r\n```python\r\n\r\n>>> nlp.vocab.morphology.lemmatizer(u'endosomes', 'noun', morphology={'number': 'plur'})set([u'endosomes'])\r\n>>> nlp.vocab.morphology.lemmatizer(u'chromosomes', 'noun', morphology={'number': 'plur'})\r\nset([u'chromosome'])\r\n```\r\n\r\nSuggested patch to lemmatizer.py\r\n\r\n```python\r\n\r\n oov_forms = []\r\n for old, new in rules:\r\n if string.endswith(old):\r\n form = string[:len(string) - len(old)] + new\r\n if form in index or not form.isalpha():\r\n forms.append(form)\r\n else:\r\n oov_forms.append(form)\r\n if not forms:\r\n forms.extend(oov_forms)\r\n```\r\n\r\n\r\n## Your Environment\r\n<!-- Include details of your environment -->\r\n* Operating System: \r\n* Python Version Used: \r\n* spaCy Version Used: \r\n* Environment Information: \r\n\n", "before_files": [{"content": "from __future__ import unicode_literals, print_function\nimport codecs\nimport pathlib\n\nimport ujson as json\n\nfrom .symbols import POS, NOUN, VERB, ADJ, PUNCT\n\n\nclass Lemmatizer(object):\n @classmethod\n def load(cls, path, rules=None):\n index = {}\n exc = {}\n for pos in ['adj', 'noun', 'verb']:\n pos_index_path = path / 'wordnet' / 'index.{pos}'.format(pos=pos)\n if pos_index_path.exists():\n with pos_index_path.open() as file_:\n index[pos] = read_index(file_)\n else:\n index[pos] = set()\n pos_exc_path = path / 'wordnet' / '{pos}.exc'.format(pos=pos)\n if pos_exc_path.exists():\n with pos_exc_path.open() as file_:\n exc[pos] = read_exc(file_)\n else:\n exc[pos] = {}\n if rules is None and (path / 'vocab' / 'lemma_rules.json').exists():\n with (path / 'vocab' / 'lemma_rules.json').open('r', encoding='utf8') as file_:\n rules = json.load(file_)\n elif rules is None:\n rules = {}\n return cls(index, exc, rules)\n\n def __init__(self, index, exceptions, rules):\n self.index = index\n self.exc = exceptions\n self.rules = rules\n\n def __call__(self, string, univ_pos, morphology=None):\n if univ_pos == NOUN:\n univ_pos = 'noun'\n elif univ_pos == VERB:\n univ_pos = 'verb'\n elif univ_pos == ADJ:\n univ_pos = 'adj'\n elif univ_pos == PUNCT:\n univ_pos = 'punct'\n # See Issue #435 for example of where this logic is requied.\n if self.is_base_form(univ_pos, morphology):\n return set([string.lower()])\n lemmas = lemmatize(string, self.index.get(univ_pos, {}),\n self.exc.get(univ_pos, {}),\n self.rules.get(univ_pos, []))\n return lemmas\n\n def is_base_form(self, univ_pos, morphology=None):\n '''Check whether we're dealing with an uninflected paradigm, so we can\n avoid lemmatization entirely.'''\n morphology = {} if morphology is None else morphology\n others = [key for key in morphology if key not in (POS, 'number', 'pos', 'verbform')]\n if univ_pos == 'noun' and morphology.get('number') == 'sing' and not others:\n return True\n elif univ_pos == 'verb' and morphology.get('verbform') == 'inf' and not 
others:\n return True\n else:\n return False\n\n def noun(self, string, morphology=None):\n return self(string, 'noun', morphology)\n\n def verb(self, string, morphology=None):\n return self(string, 'verb', morphology)\n\n def adj(self, string, morphology=None):\n return self(string, 'adj', morphology)\n\n def punct(self, string, morphology=None):\n return self(string, 'punct', morphology)\n\n\ndef lemmatize(string, index, exceptions, rules):\n string = string.lower()\n forms = []\n # TODO: Is this correct? See discussion in Issue #435.\n #if string in index:\n # forms.append(string)\n forms.extend(exceptions.get(string, []))\n for old, new in rules:\n if string.endswith(old):\n form = string[:len(string) - len(old)] + new\n if form in index or not form.isalpha():\n forms.append(form)\n if not forms:\n forms.append(string)\n return set(forms)\n\n\ndef read_index(fileobj):\n index = set()\n for line in fileobj:\n if line.startswith(' '):\n continue\n pieces = line.split()\n word = pieces[0]\n if word.count('_') == 0:\n index.add(word)\n return index\n\n\ndef read_exc(fileobj):\n exceptions = {}\n for line in fileobj:\n if line.startswith(' '):\n continue\n pieces = line.split()\n exceptions[pieces[0]] = tuple(pieces[1:])\n return exceptions\n", "path": "spacy/lemmatizer.py"}]} | 1,960 | 167 |
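
A quick way to see what this spaCy patch changes is to call the module-level `lemmatize` with toy data. The `index` and `rules` below are illustrative stand-ins for the real WordNet resources, not spaCy's shipped data:

```python
from spacy.lemmatizer import lemmatize  # the function patched above

index = {"chromosome"}       # pretend the WordNet index knows this lemma
exceptions = {}
rules = [("s", "")]          # toy noun rule: strip a trailing "s"

# in-vocabulary: the rule-derived form is found in the index and returned
assert lemmatize("chromosomes", index, exceptions, rules) == {"chromosome"}

# out-of-vocabulary: "endosome" is not in the index; before the patch the
# inflected input came back unchanged, afterwards the rule-derived form wins
assert lemmatize("endosomes", index, exceptions, rules) == {"endosome"}
```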
gh_patches_debug_63189 | rasdani/github-patches | git_diff | OpenEnergyPlatform__oeplatform-605 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add OEO Steering Committee Subpage
The OEO Steering Committee needs its own web page, which should be a sub page of the OEP. Please create such a sub page @jh-RLI . I think it makes sense to link it somewhere under ontology.
https://openenergy-platform.org/ontology/
The URL would then be
https://openenergy-platform.org/ontology/oeo-steering-committee
Content for the page is here:
https://github.com/OpenEnergyPlatform/ontology/wiki/OEO-Steering-Committee
An issue to create an English translation is open here: https://github.com/OpenEnergyPlatform/ontology/issues/313
Creating the page and making it look simple, but decent enough are priorities. The final text and location can easily be changed later on. Contact me if you have any questions.
Feel free to give feedback make changes to this issue @Ludee
</issue>
<code>
[start of ontology/urls.py]
1 from django.conf.urls import url
2 from django.conf.urls.static import static
3 from django.views.generic import TemplateView
4
5 from modelview import views
6 from oeplatform import settings
7
8 urlpatterns = [
9 url(r"^$", TemplateView.as_view(template_name="ontology/about.html")),
10 ]
11
[end of ontology/urls.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/ontology/urls.py b/ontology/urls.py
--- a/ontology/urls.py
+++ b/ontology/urls.py
@@ -7,4 +7,7 @@
urlpatterns = [
url(r"^$", TemplateView.as_view(template_name="ontology/about.html")),
+ url(r"^ontology/oeo-steering-committee$",
+ TemplateView.as_view(template_name="ontology/oeo-steering-committee.html"),
+ name="oeo-s-c"),
]
| {"golden_diff": "diff --git a/ontology/urls.py b/ontology/urls.py\n--- a/ontology/urls.py\n+++ b/ontology/urls.py\n@@ -7,4 +7,7 @@\n \n urlpatterns = [\n url(r\"^$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n+ url(r\"^ontology/oeo-steering-committee$\",\n+ TemplateView.as_view(template_name=\"ontology/oeo-steering-committee.html\"),\n+ name=\"oeo-s-c\"),\n ]\n", "issue": "Add OEO Steering Committee Subpage\nThe OEO Steering Committee needs its own web page, which should be a sub page of the OEP. Please create such a sub page @jh-RLI . I think it makes sense to link it somewhere under ontology.\r\n\r\nhttps://openenergy-platform.org/ontology/\r\n\r\nThe URL would then be \r\n\r\nhttps://openenergy-platform.org/ontology/oeo-steering-committee\r\n\r\nContent for the page is here:\r\n\r\nhttps://github.com/OpenEnergyPlatform/ontology/wiki/OEO-Steering-Committee\r\n\r\nAn issue to create an English translation is open here: https://github.com/OpenEnergyPlatform/ontology/issues/313\r\n\r\nCreating the page and making it look simple, but decent enough are priorities. The final text and location can easily be changed later on. Contact me if you have any questions. \r\n\r\nFeel free to give feedback make changes to this issue @Ludee \n", "before_files": [{"content": "from django.conf.urls import url\nfrom django.conf.urls.static import static\nfrom django.views.generic import TemplateView\n\nfrom modelview import views\nfrom oeplatform import settings\n\nurlpatterns = [\n url(r\"^$\", TemplateView.as_view(template_name=\"ontology/about.html\")),\n]\n", "path": "ontology/urls.py"}]} | 792 | 107 |
gh_patches_debug_986 | rasdani/github-patches | git_diff | marshmallow-code__webargs-482 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix simple typo: objec -> object
There is a small typo in src/webargs/flaskparser.py.
Should read `object` rather than `objec`.
</issue>
<code>
[start of src/webargs/flaskparser.py]
1 """Flask request argument parsing module.
2
3 Example: ::
4
5 from flask import Flask
6
7 from webargs import fields
8 from webargs.flaskparser import use_args
9
10 app = Flask(__name__)
11
12 hello_args = {
13 'name': fields.Str(required=True)
14 }
15
16 @app.route('/')
17 @use_args(hello_args)
18 def index(args):
19 return 'Hello ' + args['name']
20 """
21 import flask
22 from werkzeug.exceptions import HTTPException
23
24 from webargs import core
25 from webargs.compat import MARSHMALLOW_VERSION_INFO
26 from webargs.multidictproxy import MultiDictProxy
27
28
29 def abort(http_status_code, exc=None, **kwargs):
30 """Raise a HTTPException for the given http_status_code. Attach any keyword
31 arguments to the exception for later processing.
32
33 From Flask-Restful. See NOTICE file for license information.
34 """
35 try:
36 flask.abort(http_status_code)
37 except HTTPException as err:
38 err.data = kwargs
39 err.exc = exc
40 raise err
41
42
43 def is_json_request(req):
44 return core.is_json(req.mimetype)
45
46
47 class FlaskParser(core.Parser):
48 """Flask request argument parser."""
49
50 __location_map__ = dict(
51 view_args="load_view_args",
52 path="load_view_args",
53 **core.Parser.__location_map__,
54 )
55
56 def _raw_load_json(self, req):
57 """Return a json payload from the request for the core parser's load_json
58
59 Checks the input mimetype and may return 'missing' if the mimetype is
60 non-json, even if the request body is parseable as json."""
61 if not is_json_request(req):
62 return core.missing
63
64 return core.parse_json(req.get_data(cache=True))
65
66 def _handle_invalid_json_error(self, error, req, *args, **kwargs):
67 abort(400, exc=error, messages={"json": ["Invalid JSON body."]})
68
69 def load_view_args(self, req, schema):
70 """Return the request's ``view_args`` or ``missing`` if there are none."""
71 return req.view_args or core.missing
72
73 def load_querystring(self, req, schema):
74 """Return query params from the request as a MultiDictProxy."""
75 return MultiDictProxy(req.args, schema)
76
77 def load_form(self, req, schema):
78 """Return form values from the request as a MultiDictProxy."""
79 return MultiDictProxy(req.form, schema)
80
81 def load_headers(self, req, schema):
82 """Return headers from the request as a MultiDictProxy."""
83 return MultiDictProxy(req.headers, schema)
84
85 def load_cookies(self, req, schema):
86 """Return cookies from the request."""
87 return req.cookies
88
89 def load_files(self, req, schema):
90 """Return files from the request as a MultiDictProxy."""
91 return MultiDictProxy(req.files, schema)
92
93 def handle_error(self, error, req, schema, *, error_status_code, error_headers):
94 """Handles errors during parsing. Aborts the current HTTP request and
95 responds with a 422 error.
96 """
97 status_code = error_status_code or self.DEFAULT_VALIDATION_STATUS
98 # on marshmallow 2, a many schema receiving a non-list value will
99 # produce this specific error back -- reformat it to match the
100 # marshmallow 3 message so that Flask can properly encode it
101 messages = error.messages
102 if (
103 MARSHMALLOW_VERSION_INFO[0] < 3
104 and schema.many
105 and messages == {0: {}, "_schema": ["Invalid input type."]}
106 ):
107 messages.pop(0)
108 abort(
109 status_code,
110 exc=error,
111 messages=error.messages,
112 schema=schema,
113 headers=error_headers,
114 )
115
116 def get_default_request(self):
117 """Override to use Flask's thread-local request objec by default"""
118 return flask.request
119
120
121 parser = FlaskParser()
122 use_args = parser.use_args
123 use_kwargs = parser.use_kwargs
124
[end of src/webargs/flaskparser.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/src/webargs/flaskparser.py b/src/webargs/flaskparser.py
--- a/src/webargs/flaskparser.py
+++ b/src/webargs/flaskparser.py
@@ -114,7 +114,7 @@
)
def get_default_request(self):
- """Override to use Flask's thread-local request objec by default"""
+ """Override to use Flask's thread-local request object by default"""
return flask.request
| {"golden_diff": "diff --git a/src/webargs/flaskparser.py b/src/webargs/flaskparser.py\n--- a/src/webargs/flaskparser.py\n+++ b/src/webargs/flaskparser.py\n@@ -114,7 +114,7 @@\n )\n \n def get_default_request(self):\n- \"\"\"Override to use Flask's thread-local request objec by default\"\"\"\n+ \"\"\"Override to use Flask's thread-local request object by default\"\"\"\n return flask.request\n", "issue": "Fix simple typo: objec -> object\nThere is a small typo in src/webargs/flaskparser.py.\nShould read `object` rather than `objec`.\n\n\n", "before_files": [{"content": "\"\"\"Flask request argument parsing module.\n\nExample: ::\n\n from flask import Flask\n\n from webargs import fields\n from webargs.flaskparser import use_args\n\n app = Flask(__name__)\n\n hello_args = {\n 'name': fields.Str(required=True)\n }\n\n @app.route('/')\n @use_args(hello_args)\n def index(args):\n return 'Hello ' + args['name']\n\"\"\"\nimport flask\nfrom werkzeug.exceptions import HTTPException\n\nfrom webargs import core\nfrom webargs.compat import MARSHMALLOW_VERSION_INFO\nfrom webargs.multidictproxy import MultiDictProxy\n\n\ndef abort(http_status_code, exc=None, **kwargs):\n \"\"\"Raise a HTTPException for the given http_status_code. Attach any keyword\n arguments to the exception for later processing.\n\n From Flask-Restful. See NOTICE file for license information.\n \"\"\"\n try:\n flask.abort(http_status_code)\n except HTTPException as err:\n err.data = kwargs\n err.exc = exc\n raise err\n\n\ndef is_json_request(req):\n return core.is_json(req.mimetype)\n\n\nclass FlaskParser(core.Parser):\n \"\"\"Flask request argument parser.\"\"\"\n\n __location_map__ = dict(\n view_args=\"load_view_args\",\n path=\"load_view_args\",\n **core.Parser.__location_map__,\n )\n\n def _raw_load_json(self, req):\n \"\"\"Return a json payload from the request for the core parser's load_json\n\n Checks the input mimetype and may return 'missing' if the mimetype is\n non-json, even if the request body is parseable as json.\"\"\"\n if not is_json_request(req):\n return core.missing\n\n return core.parse_json(req.get_data(cache=True))\n\n def _handle_invalid_json_error(self, error, req, *args, **kwargs):\n abort(400, exc=error, messages={\"json\": [\"Invalid JSON body.\"]})\n\n def load_view_args(self, req, schema):\n \"\"\"Return the request's ``view_args`` or ``missing`` if there are none.\"\"\"\n return req.view_args or core.missing\n\n def load_querystring(self, req, schema):\n \"\"\"Return query params from the request as a MultiDictProxy.\"\"\"\n return MultiDictProxy(req.args, schema)\n\n def load_form(self, req, schema):\n \"\"\"Return form values from the request as a MultiDictProxy.\"\"\"\n return MultiDictProxy(req.form, schema)\n\n def load_headers(self, req, schema):\n \"\"\"Return headers from the request as a MultiDictProxy.\"\"\"\n return MultiDictProxy(req.headers, schema)\n\n def load_cookies(self, req, schema):\n \"\"\"Return cookies from the request.\"\"\"\n return req.cookies\n\n def load_files(self, req, schema):\n \"\"\"Return files from the request as a MultiDictProxy.\"\"\"\n return MultiDictProxy(req.files, schema)\n\n def handle_error(self, error, req, schema, *, error_status_code, error_headers):\n \"\"\"Handles errors during parsing. 
Aborts the current HTTP request and\n responds with a 422 error.\n \"\"\"\n status_code = error_status_code or self.DEFAULT_VALIDATION_STATUS\n # on marshmallow 2, a many schema receiving a non-list value will\n # produce this specific error back -- reformat it to match the\n # marshmallow 3 message so that Flask can properly encode it\n messages = error.messages\n if (\n MARSHMALLOW_VERSION_INFO[0] < 3\n and schema.many\n and messages == {0: {}, \"_schema\": [\"Invalid input type.\"]}\n ):\n messages.pop(0)\n abort(\n status_code,\n exc=error,\n messages=error.messages,\n schema=schema,\n headers=error_headers,\n )\n\n def get_default_request(self):\n \"\"\"Override to use Flask's thread-local request objec by default\"\"\"\n return flask.request\n\n\nparser = FlaskParser()\nuse_args = parser.use_args\nuse_kwargs = parser.use_kwargs\n", "path": "src/webargs/flaskparser.py"}]} | 1,706 | 101 |
gh_patches_debug_1568 | rasdani/github-patches | git_diff | cobbler__cobbler-1266 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
build_reporting fails if empty string in ignorelist
The default configuration in the ubuntu 12.04 cobbler 2.6.5 package has the following in `/etc/settings`:
```
build_reporting_ignorelist = [""]
```
The code that reads this value is in `install_post_report.py`, and the condition that determines whether to send a build report email is:
```
for prefix in settings.build_reporting_ignorelist:
if name.lower().startswith(prefix) == True:
sendmail = False
```
With the default configuration, this check always succeeds, and **mail is not sent**.
Fix the issue by modifying the condition to:
```
if prefix != '' and name.lower().startswith(prefix):
```
</issue>
<code>
[start of cobbler/modules/install_post_report.py]
1 # (c) 2008-2009
2 # Jeff Schroeder <[email protected]>
3 # Michael DeHaan <michael.dehaan AT gmail>
4 #
5 # License: GPLv2+
6
7 # Post install trigger for cobbler to
8 # send out a pretty email report that
9 # contains target information.
10
11 import distutils.sysconfig
12 import smtplib
13 import sys
14
15 plib = distutils.sysconfig.get_python_lib()
16 mod_path = "%s/cobbler" % plib
17 sys.path.insert(0, mod_path)
18
19 from cobbler.cexceptions import CX
20 import cobbler.templar as templar
21 import utils
22
23
24 def register():
25 # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.
26 # the return of this method indicates the trigger type
27 return "/var/lib/cobbler/triggers/install/post/*"
28
29
30 def run(api, args, logger):
31 # FIXME: make everything use the logger
32
33 settings = api.settings()
34
35 # go no further if this feature is turned off
36 if not str(settings.build_reporting_enabled).lower() in ["1", "yes", "y", "true"]:
37 return 0
38
39 objtype = args[0] # "target" or "profile"
40 name = args[1] # name of target or profile
41 boot_ip = args[2] # ip or "?"
42
43 if objtype == "system":
44 target = api.find_system(name)
45 else:
46 target = api.find_profile(name)
47
48 # collapse the object down to a rendered datastructure
49 target = utils.blender(api, False, target)
50
51 if target == {}:
52 raise CX("failure looking up target")
53
54 to_addr = settings.build_reporting_email
55 if to_addr == "":
56 return 0
57
58 # add the ability to specify an MTA for servers that don't run their own
59 smtp_server = settings.build_reporting_smtp_server
60 if smtp_server == "":
61 smtp_server = "localhost"
62
63 # use a custom from address or fall back to a reasonable default
64 from_addr = settings.build_reporting_sender
65 if from_addr == "":
66 from_addr = "cobbler@%s" % settings.server
67
68 subject = settings.build_reporting_subject
69 if subject == "":
70 subject = '[Cobbler] install complete '
71
72 to_addr = ",".join(to_addr)
73 metadata = {
74 "from_addr": from_addr,
75 "to_addr": to_addr,
76 "subject": subject,
77 "boot_ip": boot_ip
78 }
79 metadata.update(target)
80
81 input_template = open("/etc/cobbler/reporting/build_report_email.template")
82 input_data = input_template.read()
83 input_template.close()
84
85 message = templar.Templar(api._config).render(input_data, metadata, None)
86
87 sendmail = True
88 for prefix in settings.build_reporting_ignorelist:
89 if name.lower().startswith(prefix):
90 sendmail = False
91
92 if sendmail:
93 # Send the mail
94 # FIXME: on error, return non-zero
95 server_handle = smtplib.SMTP(smtp_server)
96 server_handle.sendmail(from_addr, to_addr.split(','), message)
97 server_handle.quit()
98
99 return 0
100
[end of cobbler/modules/install_post_report.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/cobbler/modules/install_post_report.py b/cobbler/modules/install_post_report.py
--- a/cobbler/modules/install_post_report.py
+++ b/cobbler/modules/install_post_report.py
@@ -86,7 +86,7 @@
sendmail = True
for prefix in settings.build_reporting_ignorelist:
- if name.lower().startswith(prefix):
+ if prefix != '' and name.lower().startswith(prefix):
sendmail = False
if sendmail:
| {"golden_diff": "diff --git a/cobbler/modules/install_post_report.py b/cobbler/modules/install_post_report.py\n--- a/cobbler/modules/install_post_report.py\n+++ b/cobbler/modules/install_post_report.py\n@@ -86,7 +86,7 @@\n \n sendmail = True\n for prefix in settings.build_reporting_ignorelist:\n- if name.lower().startswith(prefix):\n+ if prefix != '' and name.lower().startswith(prefix):\n sendmail = False\n \n if sendmail:\n", "issue": "build_reporting fails if empty string in ignorelist\nThe default configuration in the ubuntu 12.04 cobbler 2.6.5 package has the following in `/etc/settings`:\n\n```\nbuild_reporting_ignorelist = [\"\"]\n```\n\nThe code that reads this value is in `install_post_report.py`, and the condition that determines whether to send a build report email is:\n\n```\nfor prefix in settings.build_reporting_ignorelist:\n if name.lower().startswith(prefix) == True:\n sendmail = False\n```\n\nWith the default configuration, this check always succeeds, and **mail is not sent**.\n\nFix the issue by modifying the condition to:\n\n```\n if prefix != '' and name.lower().startswith(prefix):\n```\n\n", "before_files": [{"content": "# (c) 2008-2009\n# Jeff Schroeder <[email protected]>\n# Michael DeHaan <michael.dehaan AT gmail>\n#\n# License: GPLv2+\n\n# Post install trigger for cobbler to\n# send out a pretty email report that\n# contains target information.\n\nimport distutils.sysconfig\nimport smtplib\nimport sys\n\nplib = distutils.sysconfig.get_python_lib()\nmod_path = \"%s/cobbler\" % plib\nsys.path.insert(0, mod_path)\n\nfrom cobbler.cexceptions import CX\nimport cobbler.templar as templar\nimport utils\n\n\ndef register():\n # this pure python trigger acts as if it were a legacy shell-trigger, but is much faster.\n # the return of this method indicates the trigger type\n return \"/var/lib/cobbler/triggers/install/post/*\"\n\n\ndef run(api, args, logger):\n # FIXME: make everything use the logger\n\n settings = api.settings()\n\n # go no further if this feature is turned off\n if not str(settings.build_reporting_enabled).lower() in [\"1\", \"yes\", \"y\", \"true\"]:\n return 0\n\n objtype = args[0] # \"target\" or \"profile\"\n name = args[1] # name of target or profile\n boot_ip = args[2] # ip or \"?\"\n\n if objtype == \"system\":\n target = api.find_system(name)\n else:\n target = api.find_profile(name)\n\n # collapse the object down to a rendered datastructure\n target = utils.blender(api, False, target)\n\n if target == {}:\n raise CX(\"failure looking up target\")\n\n to_addr = settings.build_reporting_email\n if to_addr == \"\":\n return 0\n\n # add the ability to specify an MTA for servers that don't run their own\n smtp_server = settings.build_reporting_smtp_server\n if smtp_server == \"\":\n smtp_server = \"localhost\"\n\n # use a custom from address or fall back to a reasonable default\n from_addr = settings.build_reporting_sender\n if from_addr == \"\":\n from_addr = \"cobbler@%s\" % settings.server\n\n subject = settings.build_reporting_subject\n if subject == \"\":\n subject = '[Cobbler] install complete '\n\n to_addr = \",\".join(to_addr)\n metadata = {\n \"from_addr\": from_addr,\n \"to_addr\": to_addr,\n \"subject\": subject,\n \"boot_ip\": boot_ip\n }\n metadata.update(target)\n\n input_template = open(\"/etc/cobbler/reporting/build_report_email.template\")\n input_data = input_template.read()\n input_template.close()\n\n message = templar.Templar(api._config).render(input_data, metadata, None)\n\n sendmail = True\n for prefix in 
settings.build_reporting_ignorelist:\n if name.lower().startswith(prefix):\n sendmail = False\n\n if sendmail:\n # Send the mail\n # FIXME: on error, return non-zero\n server_handle = smtplib.SMTP(smtp_server)\n server_handle.sendmail(from_addr, to_addr.split(','), message)\n server_handle.quit()\n\n return 0\n", "path": "cobbler/modules/install_post_report.py"}]} | 1,605 | 107 |
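
The cobbler bug comes down to a Python string fact: every string starts with the empty string, so the shipped default `build_reporting_ignorelist = [""]` matched every name and silently suppressed all report mail. The sketch below demonstrates that fact and mirrors the patched guard; `should_send` is a hypothetical helper for illustration, not part of cobbler:

```python
assert "web01".startswith("")  # True for any string, even the empty prefix

def should_send(name, ignorelist):
    # mirrors the patched condition in install_post_report.py
    return not any(p != '' and name.lower().startswith(p) for p in ignorelist)

assert should_send("web01", [""])             # default config: mail goes out again
assert not should_send("test-box", ["test"])  # genuine prefixes still filter
```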
gh_patches_debug_5535 | rasdani/github-patches | git_diff | kornia__kornia-421 | You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Fix simple typo: suports -> supports
There is a small typo in kornia/filters/laplacian.py.
Should read `supports` rather than `suports`.
</issue>
<code>
[start of kornia/filters/laplacian.py]
1 from typing import Tuple
2
3 import torch
4 import torch.nn as nn
5
6 import kornia
7 from kornia.filters.kernels import get_laplacian_kernel2d
8 from kornia.filters.kernels import normalize_kernel2d
9
10
11 class Laplacian(nn.Module):
12 r"""Creates an operator that returns a tensor using a Laplacian filter.
13
14 The operator smooths the given tensor with a laplacian kernel by convolving
15 it to each channel. It suports batched operation.
16
17 Arguments:
18 kernel_size (int): the size of the kernel.
19 border_type (str): the padding mode to be applied before convolving.
20 The expected modes are: ``'constant'``, ``'reflect'``,
21 ``'replicate'`` or ``'circular'``. Default: ``'reflect'``.
22 normalized (bool): if True, L1 norm of the kernel is set to 1.
23
24 Returns:
25 Tensor: the tensor.
26
27 Shape:
28 - Input: :math:`(B, C, H, W)`
29 - Output: :math:`(B, C, H, W)`
30
31 Examples::
32
33 >>> input = torch.rand(2, 4, 5, 5)
34 >>> laplace = kornia.filters.Laplacian(5)
35 >>> output = laplace(input) # 2x4x5x5
36 """
37
38 def __init__(self,
39 kernel_size: int, border_type: str = 'reflect',
40 normalized: bool = True) -> None:
41 super(Laplacian, self).__init__()
42 self.kernel_size: int = kernel_size
43 self.border_type: str = border_type
44 self.normalized: bool = normalized
45 self.kernel: torch.Tensor = torch.unsqueeze(
46 get_laplacian_kernel2d(kernel_size), dim=0)
47 if self.normalized:
48 self.kernel = normalize_kernel2d(self.kernel)
49
50 def __repr__(self) -> str:
51 return self.__class__.__name__ +\
52 '(kernel_size=' + str(self.kernel_size) + ', ' +\
53 'normalized=' + str(self.normalized) + ', ' + \
54 'border_type=' + self.border_type + ')'
55
56 def forward(self, input: torch.Tensor): # type: ignore
57 return kornia.filter2D(input, self.kernel, self.border_type)
58
59
60 ######################
61 # functional interface
62 ######################
63
64
65 def laplacian(
66 input: torch.Tensor,
67 kernel_size: int,
68 border_type: str = 'reflect',
69 normalized: bool = True) -> torch.Tensor:
70 r"""Function that returns a tensor using a Laplacian filter.
71
72 See :class:`~kornia.filters.Laplacian` for details.
73 """
74 return Laplacian(kernel_size, border_type, normalized)(input)
75
[end of kornia/filters/laplacian.py]
</code>
I need you to solve this issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the following format.
<patch>
diff --git a/file.py b/file.py
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
- err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
+ err -= dy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+
+ points.append((x, y))
return points
</patch> | diff --git a/kornia/filters/laplacian.py b/kornia/filters/laplacian.py
--- a/kornia/filters/laplacian.py
+++ b/kornia/filters/laplacian.py
@@ -12,7 +12,7 @@
r"""Creates an operator that returns a tensor using a Laplacian filter.
The operator smooths the given tensor with a laplacian kernel by convolving
- it to each channel. It suports batched operation.
+ it to each channel. It supports batched operation.
Arguments:
kernel_size (int): the size of the kernel.
| {"golden_diff": "diff --git a/kornia/filters/laplacian.py b/kornia/filters/laplacian.py\n--- a/kornia/filters/laplacian.py\n+++ b/kornia/filters/laplacian.py\n@@ -12,7 +12,7 @@\n r\"\"\"Creates an operator that returns a tensor using a Laplacian filter.\n \n The operator smooths the given tensor with a laplacian kernel by convolving\n- it to each channel. It suports batched operation.\n+ it to each channel. It supports batched operation.\n \n Arguments:\n kernel_size (int): the size of the kernel.\n", "issue": "Fix simple typo: suports -> supports\nThere is a small typo in kornia/filters/laplacian.py.\nShould read `supports` rather than `suports`.\n\n\n", "before_files": [{"content": "from typing import Tuple\n\nimport torch\nimport torch.nn as nn\n\nimport kornia\nfrom kornia.filters.kernels import get_laplacian_kernel2d\nfrom kornia.filters.kernels import normalize_kernel2d\n\n\nclass Laplacian(nn.Module):\n r\"\"\"Creates an operator that returns a tensor using a Laplacian filter.\n\n The operator smooths the given tensor with a laplacian kernel by convolving\n it to each channel. It suports batched operation.\n\n Arguments:\n kernel_size (int): the size of the kernel.\n border_type (str): the padding mode to be applied before convolving.\n The expected modes are: ``'constant'``, ``'reflect'``,\n ``'replicate'`` or ``'circular'``. Default: ``'reflect'``.\n normalized (bool): if True, L1 norm of the kernel is set to 1.\n\n Returns:\n Tensor: the tensor.\n\n Shape:\n - Input: :math:`(B, C, H, W)`\n - Output: :math:`(B, C, H, W)`\n\n Examples::\n\n >>> input = torch.rand(2, 4, 5, 5)\n >>> laplace = kornia.filters.Laplacian(5)\n >>> output = laplace(input) # 2x4x5x5\n \"\"\"\n\n def __init__(self,\n kernel_size: int, border_type: str = 'reflect',\n normalized: bool = True) -> None:\n super(Laplacian, self).__init__()\n self.kernel_size: int = kernel_size\n self.border_type: str = border_type\n self.normalized: bool = normalized\n self.kernel: torch.Tensor = torch.unsqueeze(\n get_laplacian_kernel2d(kernel_size), dim=0)\n if self.normalized:\n self.kernel = normalize_kernel2d(self.kernel)\n\n def __repr__(self) -> str:\n return self.__class__.__name__ +\\\n '(kernel_size=' + str(self.kernel_size) + ', ' +\\\n 'normalized=' + str(self.normalized) + ', ' + \\\n 'border_type=' + self.border_type + ')'\n\n def forward(self, input: torch.Tensor): # type: ignore\n return kornia.filter2D(input, self.kernel, self.border_type)\n\n\n######################\n# functional interface\n######################\n\n\ndef laplacian(\n input: torch.Tensor,\n kernel_size: int,\n border_type: str = 'reflect',\n normalized: bool = True) -> torch.Tensor:\n r\"\"\"Function that returns a tensor using a Laplacian filter.\n\n See :class:`~kornia.filters.Laplacian` for details.\n \"\"\"\n return Laplacian(kernel_size, border_type, normalized)(input)\n", "path": "kornia/filters/laplacian.py"}]} | 1,341 | 144 |